Talk:Gambler's fallacy/Archive 1


Opening comment

The example about the winner of a sports event being more likely to win the next belongs in the section on "not even" not the section on "not independent". I have moved it.

How about an explanation for the joke at the end?

Your bomb doesn’t make other terrorists less likely to attack your plane, since no one even knows about it. Similarly, when tossing a coin, it may be unlikely that you will get ten heads in a row, but it doesn’t mean that after you have already got nine heads in a row, another head is less likely than a tail. Rafał Pocztarski 12:29, 4 Dec 2004 (UTC)
Actually, I once ran an experiment to demonstrate it to someone who wouldn’t believe me: we were tossing a coin waiting for two heads in a row, and noting the third result after those two heads, and of course we counted more or less the same number of heads and tails. Rafał Pocztarski 03:03, 12 Dec 2004 (UTC)
I really don't think the joke needs an explanation - it follows the same reasoning explained in the rest of the article. It's also really not that important, and explaining it would just ruin the joke. Oracleoftruth 09:29, May 26, 2005 (UTC)

Remove some repetition?

This article really seems to just repeat the same things in almost the same way, several times, often making it difficult to tell new concepts from repetitions of an old concept... I'll try to clean this up. Oracleoftruth 09:33, May 26, 2005 (UTC)

Reading it over more carefully, I realize that everything included is actually a different point, but it would still be good to separate them more clearly, and perhaps to fuse some more similar points together so it makes more sense. Oracleoftruth 02:05, May 27, 2005 (UTC)

Trouble understanding

Consider a slot machine that is set to a 50% probability of winning or losing. Assuming an infinite pile of money for both the slot machine and the players, if you stand at the slot machine for long enough, you will end up with the same amount of money that you started with. However, one can beat the machine by allowing someone else to quit while down and then playing until you're up, and then repeating the process, basically taking the money from those who lose. If the Gambler's Fallacy were accurate, then this would be impossible, and the machine would make money for the owner. It seems like this is a logical paradox. Even though it is extremely hard for me to accept, I understand that consecutive heads followed by a tails is just as likely as another heads, but since you're comparing heads to tails, it seems that the heads-heads-heads-heads-tails result is irrelevant while five consecutive heads in a row is not. -- Demonesque talk 18:20, 31 October 2005 (UTC)

Slot machines don't fall into the category of "truly random events", as the results of a previous payout do affect future payouts (assuming the slot machine does occasionally award all the money it has collected). The effect you mentioned is real, in that your odds of coming out ahead are slightly better if the machine has been pumped full of money by the previous player. However, the odds are so horribly stacked against you to begin with in slots that this slight advantage is unlikely to overcome that huge disadvantage. Another way of putting it is that you are unlikely to get the big payout before you run out of funds. If you look at the odds of getting the big payout and how much you have to spend, you can figure the chances out for yourself. (If the odds are one in a million and you have a thousand coins to spend, then your chances are one in a thousand, or a bit better, if you "reinvest" small payouts in trying for the larger payout.) StuRat 18:47, 31 October 2005 (UTC)
This is all rubbish. The machines are not (supposed to be) sensitive to how much money happens to be in their cash box. Haven't you ever seen a payout interrupted by there not being enough coins in the box? I certainly have. Similarly, sometimes play is interrupted by a machine needing its coin box emptied. I've played a lot of video poker (not the same as slot machines, I admit), and they've worked out fair in that about the expected number of payoffs comes out of the machine, but you have to average over a very long time. Most people don't have the patience for this; I only recorded certain payoffs (sometimes four of a kind, usually straight flushes, and always royal flushes). One time my wife hit a royal flush after only 20 minutes of play (between the two of us). That's about 1% of the expected number of plays. She's also had a royal flush dealt. These things happen. We've also had a stretch of more than twice the expected number of plays, and they deinstalled the machine before that one paid off. These things also happen. --Mike Van Emmerik 21:25, 31 October 2005 (UTC)
Let's say that any slot machine where the maximum win goes up with every loss will have progressively better payout odds as the losses accumulate. Now, whether a particular slot machine works that way or not, I do not know, ask the casino's operator. And whether the casino would continue to allow the machine to operate after the odds turn against them, I do not know. I suppose they might, in rare cases, for the good publicity it would generate. StuRat 21:44, 31 October 2005 (UTC)
Ah. Perhaps you are talking about a progressive jackpot; with these, with every bet on machines linked to the jackpot's progressive meter, a small contribution is made to a special account that is used for the top prize. (Sometimes there are multiple jackpots, e.g. one for a royal flush in each of the four card suits; I'll assume only one jackpot, and only for the highest prize, let's say that's five cherries). In this case, the game does become more and more favourable for the player, although for the house the expected profit is the same (house edge for the game, less the jackpot contribution). So when the game passes the break-even point (most players will not know when this happens, but for certain games such as video poker, it can be calculated), it becomes profitable for the player, while still being profitable for the house. How is this possible? Because the house is transferring a portion of the losses made by all players to the eventual winner. Of course, if everyone waited for the jackpot to become break-even, then there would be no play, and the jackpot would remain at its reset value. So there is an opportunity for the player that knows the break-even point, and has the discipline not to play when the game is not yet profitable, provided that there are other players who are willing to play when the game is not profitable. The house is quite happy with this situation, because they still make a profit (albeit a reduced one because of the jackpot contribution), but this is considered to be a good investment because a large jackpot attracts customers, most of whom will play other games as well, purchase food, and so on. Note that the casino continues to make the same profit on the machines no matter how high the jackpot rises, since the jackpot payout is paid in advance from a proportion of past players' losses. This is entirely different to the idea that payouts are related to the amount of money in the coin box (or otherwise related to players' past losses or wins). I don't believe that machines ever pay out more after heavier than expected losses. If they did, to maintain their expected profit, they would have to pay out less after lighter than average losses (or a certain sized win). Most players would consider that unfair. --Mike Van Emmerik 06:45, 1 November 2005 (UTC)
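To make the break-even point concrete, here is a minimal expected-value sketch in Python. The $1 bet, 95% base return, and 1-in-40,000 jackpot odds are invented for illustration, not figures from any real machine:

# Expected player profit per play on a machine with a progressive jackpot.
# All numbers here are illustrative assumptions, not real casino figures.
bet = 1.00             # cost per play, in dollars
base_return = 0.95     # fraction of the bet returned by non-jackpot payouts
p_jackpot = 1 / 40000  # assumed chance of hitting the jackpot on one play

def expected_value(jackpot):
    return base_return * bet + p_jackpot * jackpot - bet

# The game breaks even for the player when the expected value is zero:
break_even = (1 - base_return) * bet / p_jackpot
print(break_even)            # 2000.0 dollars
print(expected_value(3000))  # about +0.025 per play once past break-even

Past the computed threshold the player's expected value is positive, yet the house is unaffected, for exactly the reason given above: the jackpot money was already collected from earlier play.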
Yes, the progressive jackpot is what I was referring to. I don't quite follow how this is different than basing jackpot payouts on the number of coins in the coin box, however. In both cases a portion of losses seems to be transferred to the jackpot fund. What is the material difference between the two cases?
For the coin box version, I would expect the casino to take some coins out when it gets full or replenish them if it empties beyond a certain point, so there should be a finite range of jackpot payouts. As for the odds discussion, I agree that the long run odds heavily favor the house in all cases, but think that individual "pulls" may be weighted against the house in rare cases, when the progressive jackpot is quite high. StuRat 16:42, 1 November 2005 (UTC)
Two main differences: 1) with the progressive jackpot, the variation from normal payouts is explicit; there is a (usually large) meter clearly displaying the change of payoff(s). With your cash box sensitive idea, a player unaware of the previous history of the machine could be disadvantaged (or perhaps even advantaged, depending on the details) without knowing it. 2) Your scheme seems to require a change to the probability of certain payouts; with the progressive jackpot, the probability of all payouts remains the same, but the payoff for the jackpot increases in value. (After a jackpot win, of course, it suddenly reduces in value to the "reset" amount). As I've pointed out before, no individual game ("pull") will be weighted against the house, even with a large jackpot value, because the house already has money "in the bank" from previous losses by players. For example, the casino may make 4 cents of profit for every 1 cent contributed to the jackpot. So even for a $10,000 increase in the jackpot value, the casino already has $40,000 of profit. The jackpot will only get to $10,000 more than reset after there have been a million plays, so the house can't be at a long-term disadvantage. (It can still make a short-term loss if some high-payoff combinations come up more frequently than usual, but this will be balanced in the long term by less frequent high-payoff combinations at some time in the future.) The same (pulls will never be weighted against the house) would be true of your cash box idea, if implemented correctly; the higher probability of paying out N coins would only be triggered if at least N more than expected coins are in the box. --Mike Van Emmerik 21:53, 1 November 2005 (UTC)
1) Each slot machine certainly could display the contents of the cash box, if they so desired. The simplest way would be to make the case transparent, perhaps of bullet proof plastic, to discourage theft.
2) No, I was only talking about a change in payouts for certain combos, not a change in the odds of getting any particular combo.
Your statement that no particular pull is weighted against the house is at odds with what you said. Yes, I agree that they have already made enough money to cover that pull through losses in the past, but that doesn't change the fact that, on average, they are going to lose more money than they will gain on certain pulls of the lever. That is, if they stopped the game right then, they would have more money, on average, than if they continued to allow play when the jackpot is that high. Of course, to do so would be bad publicity and also illegal, if they have promised that the jackpot will be awarded, but that's quite irrelevant to the odds and payouts. StuRat 04:31, 2 November 2005 (UTC)
Every payoff, by itself, is negative expected value for the house. But we were talking about the long term expectation for the game, i.e. many plays.
I wasn't talking about the long-term average, but rather each pull, where a certain amount is bet, and a certain amount is won, or not won. Some such pulls may be weighted against the house, but the majority are definitely in their favor. StuRat 21:59, 2 November 2005 (UTC)

It seems my point has been misunderstood. I'm not talking about actual slot machines, I'm talking about a hypothetical slot machine and slot machine players, all with an infinite amount of money. The Gambler's Fallacy must be a logical paradox. To bring it back to the coin example, since it's apparently easier to understand, when you have four heads in a row, tails must be more likely with every heads result, and vice versa, because given enough time, the results even out. -- Demonesque talk 03:27, 2 November 2005 (UTC)

I just tested this, by flipping a coin and recording the results. When heads was significantly higher than tails, I bet on tails, and vice versa. By exploiting the fact that it always breaks even in the end, you win. Betting on the same result every time, of course, breaks even in the end. Demonesque talk 03:49, 2 November 2005 (UTC)

Try testing it 1000 more times, and see if it works out the same each time. It won't! StuRat 04:00, 2 November 2005 (UTC)
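For anyone who would rather not hand-flip 1000 more trials, here is a minimal Python sketch of the strategy described above (bet one unit on whichever side is currently behind); the session length and number of sessions are arbitrary choices:

import random

def play_session(n_flips=1000):
    # Bet 1 unit per flip on whichever side is currently behind;
    # ties default to betting heads.
    heads = tails = profit = 0
    for _ in range(n_flips):
        bet_heads = heads <= tails
        flip_heads = random.random() < 0.5
        profit += 1 if bet_heads == flip_heads else -1
        heads += flip_heads
        tails += not flip_heads
    return profit

results = [play_session() for _ in range(10000)]
print(sum(results) / len(results))  # hovers near 0: past flips give no edge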

Ok, let me explain this in terms of asymptotes and limits. I hope you're familiar with those terms. If not, my apologies in advance...

The graph of y = 1/x will get closer and closer to 0, but never actually reach zero, as x increases. It will approach it from the top side. Similarly, the graph of y = -1/x will approach zero, but never actually reach zero, from the bottom side. So, if you have tossed heads more often in the past, then, on average, the percentage of heads will approach, but never reach, 50%, from the "more heads" side. Similarly, if you have tossed more tails in the past, the percentage of tails will approach, but never reach, 50%, from the "more tails" side. Now, unlike the 1/x graphs, the heads/tails flips are not guaranteed to be evenly distributed, so you may very well get exactly 50% or even go past it. The "approaching but never reaching 50%" is only the average behaviour, not the actual behaviour for any one run.

So, what is happening is that the slight heads or tails advantage initially is becoming less and less significant as a percentage of the total rolls.

Let's look at the case where tails fell initially. After the first flip, the average percentage of heads is, of course, 0%:

T = 0% Ave = 0%

After the 2nd flip there are two equally likely possibilities, which would give us either 50% heads or 0% heads so far. The average of these two possibilities gives us 25% heads on average, which we could expect after the 2nd flip:

TH = 50% TT = 0% Ave = 25%

Here's the same values after the third flip:

THH = 67% THT = 33% TTH = 33% TTT = 0% Ave = 33%

And the fourth flip:

THHH = 75% THHT = 50% THTH = 50% THTT = 25% TTHH = 50% TTHT = 25% TTTH = 25% TTTT = 0% Ave = 37.5%

So, the average number of heads went from 0% to 25% to 33% to 37.5%. We are approaching, but will never quite reach, 50%. Note that the average fraction of heads is 1/4, 2/6, 3/8 for the 1st, 2nd, and 3rd additional flips, after the initial tails toss. We can generalize this as the formula n/(2n+2). So, after 9 flips, you would have 9/20, or 45% heads, on average. After 99 flips you would have 99/200, or 49.5% heads, on average. And, after 999 flips, you would have 999/2000, or 49.95% heads, on average.
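The n/(2n+2) formula is easy to verify by brute-force enumeration; a small Python sketch (the function name is my own):

from itertools import product

def mean_heads_fraction(n):
    # Average heads fraction over all 2**n equally likely continuations
    # of an initial tails toss.
    total = 0.0
    for seq in product("HT", repeat=n):
        total += seq.count("H") / (n + 1)  # n flips plus the initial T
    return total / 2 ** n

for n in (1, 2, 3, 10):
    print(n, mean_heads_fraction(n), n / (2 * n + 2))  # the columns match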

Another way to look at it: the average number of heads is a ratio: h/n, where h is the total number of heads results, and n is the number of tosses. After an unusually high number of heads (h large for the corresponding n), there are two ways to reduce the ratio back to the long term average: decrease h, or increase n. You seem to think that it has to be h that decreases in the short term. But h increases only by 0 or 1 with each toss, and averages an increase of 1/2. n keeps increasing, and eventually (but never completely) "drowns out" the blip from say four heads in a row. There will be other blips along the way, say five heads in a row, or 6 tails in a row. The probability of a more-than-average-heads blip will be the same as a more-than-average-tails blip of the same number of tosses. All these blips will also get drowned out by the underlying fact that in the end, about half the results are heads. There is no necessity to "correct the average" (of single tosses or blips) in the next 10 tosses, or 100 or 1000. The biggest factor is n, or time. Think very long term. If it helps, I struggled with this one for a long time, too. --Mike Van Emmerik 21:33, 2 November 2005 (UTC)
Great attempt to explain, Mike. Unfortunately, to use a word you used a little earlier, your calculations are rubbish (although nicely done!). You are comparing apples and oranges. A sequence of coin tosses is just that: a sequence. Not two sequences, or three sequences or four or more. You are averaging together two different (read independent) sequences after the second throw, four different sequences after the third throw, and eight different sequences after the fourth throw. Any average only belongs to its own sequence, and is only correctly calculated in its own sequence. What you are presenting above is an invalid assessment. Another way to look at it (using your creative math in a genetics application): Let's say I marry a (natural) blond haired woman. We have two blond haired kids and then we divorce. Later, I marry a natural red-haired woman. The second wife and I also have two children, both with flaming red hair. Now using your math method of comparing two completely different, independent-of-each-other, columns and 'averaging' them together we could say that I have a high probability of having grandchildren with strawberry-blond hair. But of course, that is genetic nonsense. Even though the red-head kids are 'children of Joe', and the blond haired kids are 'children of Joe', they consist of two separate sequences of 'children of Joe'. They each have their own 'odds' of producing grandchildren for Joe of any particular hair color. But the odds (averaging the blond hair with the red hair) of producing strawberry-blond grandchildren is near nil (unless they themselves married spouses with the requisite hair color).
Asymptotes huh? That's self-defeating from the start. You're going to "prove" that it never reaches 50% by using a rigged number that (by definition) will not ever allow it to reach 50%? I can "prove" that all of the results are equally divisible by three! (Pssst! Just let me multiply them all by six before you start your division calculations, okay? Shhhhh!) Joe Hepperle (talk) 13:18, 9 April 2009 (UTC)
What exactly are you talking about? Averaging discrete possible future outcomes to make a general prediction is sort of the point of probability. Your analogy is flawed; not his. The probability of you getting at least one strawberry-blond grandchild is one minus the product of the probabilities of each of your children individually not having strawberry-blond children. There is simply no average involved. However, it is certainly true that by having children with two different women, genetically speaking you do have a much higher probability of having strawberry-blond grandchildren.
I don't know where you got the idea that there is some limit on which numbers can be averaged or what that limit is, but frankly the arithmetic mean has a very simple definition for real numbers, and it can be applied to any set. Obviously this isn't always meaningful (if you add dimensionally different quantities, for example, the result is total garbage), but it is still mathematically valid.
But either way, he really didn't take an average; he simply made a graphical argument based on probability tendencies. You clearly can't dispute that n/(2n+2) approaches 1/2 as n grows, which was the crux of his explanation. More importantly, you can't dispute the actual underlying probability, which is independent of this graphical explanation. Before you try to tell people their arguments are rubbish, you should actually try to understand what they are saying. I don't think I can really understand everything you are saying, either, so I'm trying to restate Mike's point and hopefully you can enlighten me here. Eebster the Great (talk) 23:52, 9 April 2009 (UTC)

Help?

"If I tell you that 1 of the 2 flips was heads then I am removing the tails-tails outcome only, leaving: heads-heads, heads-tails, and tails-heads. Among the 3 remaining possibilities, heads-heads happens 1 in 3 or 33% of the time."

Umm... what? Can someone explain why this makes sense? It seems to me it's very wrong...

Assuming the probability that the flip that was heads was the 1st coin is x%, then, obviously, the probability that the head fell second is (100-x)%. If the first coin is heads (x/100 chance), the second coin has a 50/50 chance of being heads, or 1/2 (chance of 2 heads: x/100 * 1/2). Similarly, if the second coin is heads ((100-x)% chance), the first coin has a 1/2 chance of being heads (chance of 2 heads: (100-x)/100 * 1/2).

Adding these together, we get the probability of 2 heads:

x/100 * 1/2 + (100-x)/100 * 1/2 = x/200 + (100 - x)/200 = 100/200 = 1/2

regardless of which coin is the one revealed.
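The two answers correspond to two different readings of the quoted sentence, and a short enumeration in Python makes the difference visible (this sketch is only an illustration of that distinction):

from itertools import product

outcomes = list(product("HT", repeat=2))  # HH, HT, TH, TT

# Reading 1 (the quoted passage): "at least one of the two flips was heads"
# removes only TT, leaving three equally likely outcomes.
at_least_one = [o for o in outcomes if "H" in o]
print(at_least_one.count(("H", "H")) / len(at_least_one))  # 1/3

# Reading 2 (the calculation above): a specific flip, say the first, is
# revealed to be heads; the other flip is then still 50/50.
first_heads = [o for o in outcomes if o[0] == "H"]
print(first_heads.count(("H", "H")) / len(first_heads))  # 1/2

The calculation above implicitly assumes a particular flip is identified as the heads one, which is why it gets 1/2; the quoted passage only assumes "at least one was heads", which gives 1/3.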

Fallacious article.

This is the kind of article and reasoning that will undermine Wikipedia. Whoever wrote it cannot differentiate between probability and degree of certainty.

Let us take the example of a coin toss. The probability is always 50% for either heads or tails. However, if we consider the Fundamental Law of Gambling:

N = log(1-DC)/log(1-p)

N = number of trials
DC = degree of certainty that an event will occur
p = probability that an event will occur

Anyone who can do math will see that, yes, the degree of certainty that you will get tails increases as a streak of heads goes on. After 3 heads, mathematically, the probability of either heads or tails is still 50%. It ALWAYS is. However, the degree of certainty that it will be tails as calculated by the above formula is 95%. The degree of certainty that it will be heads again is 5%.

But wait a minute, you might say. In the article, the author logically argued that the chances of 3 successive heads is an eighth, or 12.5%. Right he is, but he jumped to a different subject. He introduced erroneous logic into his thinking, and went from talking about the probability of one of two events happening, heads (H) or tails (T), to the probability of one of eight events happening (HHH), (HHT), (HTT), etc..

I would like to credit Ion Saliu for bringing these issues to light. I can be reached at ace_kevi at hotmail dot com.


The article is fine as it stands. You need to read the article more carefully, or you need to understand this concept of "degree of certainty". The degree of certainty that you will get at least one heads in N coin flips is, using the above formula (with p = 1/2), DC = 1 - (1/2)^N,
which is correct. There are 2^N possible outcomes, and only one has no heads. If I have not yet flipped the coin - if I am about to flip a coin four times, then the probability that I will come up with at least one heads is 1 - (1/2)^4 = 15/16 = 93.75%. But, if I flip a fair coin 3 times and get 3 successive tails, the probability that the next flip will yield heads is 50%. Anyone who disagrees with this statement is a victim of the gambler's fallacy. That's exactly what the article says. Do you disagree with this statement? If you do not, then there is no problem. If you do, then you are a victim. PAR 15:27, 19 December 2005 (UTC)

Explanation

Of course, I do agree. I previously said that the probability never changes. It's 50%. However, the article is far from correct. In fact, it is misleading in many aspects. I will try to cover one of them briefly.

The author says, "suppose that we are in one of these states where, say, four heads have just come up in a row, and someone argues as follows: "if the next coin flipped were to come up heads, it would generate a run of five successive heads. The probability of a run of five successive heads is 0.5^5 = 0.03125; therefore, the next coin flipped only has a 1 in 32 chance of coming up heads." This is the fallacious step in the argument."

First of all, the "someone" and the author are talking about two different things. The author is talking about the probability of an event which has 32 possible outcomes (HHHHH, HHHHT, etc.) and two possible outcomes after the fourth toss, and the "someone" is talking about the probability of an event that has two possible outcomes (H or T). When the "someone" says "the next coin flipped only has a 1 in 32 chance of coming up heads," if he means probability, he is wrong, but if he means degree of certainty, as should be calculated by the equation I presented above, he is right. The degree of certainty that it will be tails the fifth time, based on the previous tosses, is above 95%. The probability is still 50%, as always. And yes, it DOES make sense. "Chance" is a vague term, and so is "likely/unlikely." The article should only use "probability" or "degree of certainty."

Please take the time to read my comments and understand the difference between probability and degree of certainty. The gambler does not question the probability, that's why the fundamental formula of gambling calculates the degree of certainty.

The concept is hard to grasp. You could call it the non-gambler's fallacy.


I don't understand what your objection is. Can you give an example of a particular statement in the article that you object to and how you would reword it? PAR 04:11, 20 December 2005 (UTC)

detailed explanation.

First of all, thank you for taking the time to read my comments and investigate my argument.

Reminder of the Fundamental Formula of Gambling:

N = log(1-DC)/log(1-p)

where

N = number of trials or events

DC = degree of certainty that an event will happen

p = probability that an event will happen

Say we toss a coin:

The probability of heads (H) is 50%, tails (T) 50%. The degree of certainty of heads is 50%, tails 50%.

It's heads. We toss a second time.

The probability of heads (H) is 50%, tails (T) 50%. The degree of certainty of heads is 25%, tails 75%.

It's heads. We toss a third time.

The probability of heads (H) is 50%, tails (T) 50%. The degree of certainty of heads is 10%, tails 90%.

It's heads. We toss a fourth time.

The probability of heads (H) is 50%, tails (T) 50%. The degree of certainty of heads is 5%, tails 95%.


"But wait!" one may cry out, "how did you get 10% at the third toss? The author logically demonstrated that it's 12.5%! You're wrong!"


Not so fast. That's where the author inadvertently introduced an error into his logic. I am talking about the degree of certainty of one of two events, namely getting HEADS again after getting 2 HEADS. Note that at this point, the probability of getting heads is still 50%. I am sticking to the topic, see. The author drifted from the main course and started talking about the probability of getting 3 HEADS after 3 tosses, which is one of eight possible outcomes for 3 tosses. He started out talking about the probability of one of two outcomes (H or T), and ended up talking about the probability of one of eight outcomes (HHH, HHT..) You cannot have two variables in the equation (p and DC). The probability must stay the same. Please look at the article and re-read my statement.

A little graphic explanation here:


The gambler, or me:

toss 1: H

toss 2: H

toss 3:  ? probability of H 50%; degree of certainty 10%; hmm I better bet on T..


The author:

toss 1: H

toss 2: H

toss 3:  ? probability of HHH 12.5%, HHT 12.5%, etc..


At the third toss, the probability of H is 50%, that of HHH is 12.5%. But he is comparing apples to oranges. I am sticking to H, while he jumped to a different game (HHH).


He says, "The gambler's fallacy can be illustrated by a game in which a coin is tossed over and over again. Suppose that the coin is in fact fair, so that the chances of it coming up heads are exactly 0.5 (a half). Then the chances of it coming up heads twice in succession are 0.5×0.5=0.25 (a quarter); three times in succession, they are 0.125 (an eighth) and so on. Nothing fallacious so far;.."


That's where his fallacy is. Let me explain this. The gambler doesn't care that the probability of HHH is 0.125, because he is betting on ONE TOSS, NOT THREE. The probability of HHH is irrelevant. Only the probability of H (50%) is relevant, because we are betting one toss at a time. H and HHH are two different things, with different probabilities, and if you get them mixed up, things get blurry.


Considering the probability doesn't change:

A random event is more likely to occur because it has not happened for a period of time; - Correct, the degree of certainty for that event goes up.

A random event is less likely to occur because it has not happened for a period of time; - correct, the degree of certainty for that event goes down.


What I am explaining here is VERY subtle, and evades even the most brilliant minds sometimes. I beg you, dear sir, to re-read my comments again if you do not understand at first.


I still cannot make complete sense of what you are saying, but I may be able to if you can answer the following question:
Suppose I have ten billion people. They each flip a fair coin three times. Now I separate out all those people who have flipped three heads in a row, and I ask them to flip one more time. What percentage of those people will flip heads and what percentage will flip tails? PAR 16:08, 20 December 2005 (UTC)

reply.

The Short answer

10% of 10 billion will have HHH. You separate them out. That's one billion. They're going for a fourth flip. The probability of either H or T is 50%. The degree of certainty of a fourth consecutive H is 5%. 5% of the one billion will get H. 95% of them will get T. (WARNING: the other people who stopped at 3 flips do not matter at this point.)

If you thought the answer is 50% of one billion will get HHHH, and the other 50% will get HHHT, you are incorrect. You are following the author's fallacious logic by ignoring degrees of certainty.

How did I arrive at my results?

N = log (1 - DC) / log (1 - p)

At this point, your intuition is telling you I am wrong, but please read on.

The Long Answer

Say we flip a coin twice and get 2 heads. The degree of certainty of H after 2 H is 10%. So we have a 10% chance of getting HHH, versus 90% chance of getting HHT.

If we are playing the H or T (50% probability), HHH is "what is the degree of certainty that H will come up a third successive time?" the answer to that is 10%. We are at trial 3.

If we are playing the HHH or HHT or else (12.5% probability), HHH is "what is the degree of certainty that HHH will come up?" the answer to that is 12.5%. We are at trial 1.


I know exactly what is bothering you about all this. Say we're playing a game where 3 consecutive heads is a win. We got 2 heads in the first 2 tosses. "Why does it matter what game we're playing at the third toss? It's not gonna change the result." It won't, but it will tell you how to calculate DC for upcoming trials.

"why can't we consider every 3 tosses an event by itself? In that case the probability of HHH is 12.5% and the author is right!"

We can. However, in the case of 3 tosses (1/8 probability), if we have:

event1(T,H,H) event2(H,T,H), it DOES NOT count as a win (HHH). In the case of one toss at a time, 1/2 probability, it DOES. The gambler is betting ONE TOSS AT A TIME, NOT THREE. It is therefore irrelevant to talk about the probability of HHH in the article, because it skews the data. It ignores this HHH I just demonstrated. It's a totally different game and cannot be used to disprove the gambler's logic.

The author builds a very solid argument because he blurs the line between probability and degree of certainty by using the word "chance," and jumps from one game to another whenever he sees fit. In other words, he says p=12.5%, so for instance, he is playing a game of three tosses = 1 event, where HHH wins. But when THH HTH comes up, he declares himself a winner! This is an example applied to the meaning of his words; he does not play any games in the article.


He does what I like to call data prostitution.

THE GAME WHERE [ONE EVENT = ONE TOSS] AND THE GAME WHERE [THREE TOSSES = ONE EVENT] ARE TWO DIFFERENT, SEPARATE GAMES.


Remember:

N = log (1 - DC) / log (1 - p)


Thank you for reading my comment. It took me four hours to write this comment, organize it and think of the examples to demonstrate the flaw in the article. Please take some time to read it carefully, and if there still are things you do not understand, please let me know.


Please don't go into long discussions. My question was:
Suppose I have ten billion people. They each flip a fair coin three times. Now I separate out all those people who have flipped three heads in a row, and I ask them to flip one more time. What percentage of those people will flip heads and what percentage will flip tails?
Your answer was:
10% of 10 billion will have HHH. You separate them out. That's one billion. They're going for a fourth flip. The probability of either H or T is 50%. The degree of certainty of a fourth consecutive H is 5%. 5% of the one billion will get H. 95% of them will get T.
This is simply wrong. We cannot continue this discussion until you realize that this is wrong. It will take some work, but you need to start flipping coins.
Start flipping a coin, and every time you get three heads in a row, flip it again and then mark down your answer. After you have 40 entries, you will see that the number of heads is quite a bit larger than the 2 that you predict. (5% of 40=2). If you are a programmer, you can speed the process up by writing a program that uses a random number generator, so that you can get the answer quickly. PAR 03:29, 21 December 2005 (UTC)
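A minimal Python version of the program PAR describes, using only the standard library (the 100,000-flip count is an arbitrary choice):

import random

flips = [random.choice("HT") for _ in range(100000)]

# Every time three heads in a row appear, record the very next flip.
after_hhh = [flips[i + 3] for i in range(len(flips) - 3)
             if flips[i:i + 3] == ["H", "H", "H"]]

print(after_hhh.count("H") / len(after_hhh))  # settles near 0.5, not 0.05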

Read this..

..carefully.

The answer to your question lies here. http://www.saliu.com/Saliu2.htm#Table

If you still do not understand degrees of certainty, please let me know.

You are misusing the "degree of certainty" concept. The oft-quoted 10% number in this table (p=1/16, DC=50%, N=10) means this: you will have to run at least 10 trials of a 1-in-16 event (such as HHHT or HHHH) before you will have a 50% or higher probability of the event happening one or more times (e.g. observing HHHH once, twice, etc but not zero times). The reason that N is not 16 is because there is a small chance of seeing two or more occurrences of the desired event (in other words, you could occasionally actually see four heads in a row twice in 10 trials, even though the chances of seeing it are 1/16 every time). For simplicity, let's talk for a moment about throwing four fair coins at once, so each event is one throw. When you add the chances of the event happening at least once (e.g. happens first throw only, second throw only, ... tenth throw only, first and second throw only, ... first and tenth throws only, second and third throws only, ... second and tenth, first, second and third, first, second and fourth, .... all ten throws) these presumably add up to a little over 50%. I've used permutations here, you could think in terms of combinations also, so add the chance of one event being observed (on the first, second, or tenth throw, it doesn't matter), two events, and so on up to ten events.
So to analyse PAR's problem with 10 billion people: 1 in 8 of them will throw HHH, or 1.25 billion, not 1 billion. Of those 1.25 billion, half will throw another head, and half a tail, so that's 625 million each. To use the 50% degree of certainty figure, after the 10 billion have flipped four coins, divide them up into a billion groups of ten people each. Within each group, there have been 10 trials of a 1-in-16 event, say HHHH. About half of the groups, 500 million of them, will have recorded at least one result of four heads. There are 625 million people with the four-heads results, but some groups have two of them, some groups have three, there are by my calculations about 954 groups on average that have five four-heads results. Overall half the groups (500 million) have no four-heads results. --Mike Van Emmerik 22:46, 21 December 2005 (UTC)
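Mike's arithmetic is straightforward to check numerically; a sketch scaled down to a million people so it runs quickly (the scaling is the only liberty taken):

import random

people = 1000000  # scaled down from 10 billion
hhh = hhhh = 0
for _ in range(people):
    throws = [random.random() < 0.5 for _ in range(4)]
    hhh += all(throws[:3])
    hhhh += all(throws)

print(hhh / people)         # about 1/8: the HHH throwers (the 1.25 billion)
print(hhhh / hhh)           # about 1/2: half of them throw a fourth head
print(1 - (15 / 16) ** 10)  # about 0.476: chance a 10-person group sees
                            # at least one HHHH, the table's ~50% figure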

I do not know who ace_kevi is. He quotes here my webpage on the Fundamental Formula of Gambling.

The key point is that most people, no matter how well educated they are formally, including mathematics, have a static view of probability. They know and want to know one and only one element: The probability p of an event.

The cornerstone of gambling knowledge is the Fundamental Formula of Gambling (FFG):

N = log(1 – DC) / log(1 – p)

Right now, most people still confuse probability for degree of certainty. Randomness is the rule of the Universe. Randomness has three fundamental elements:

Number of trials N,

for an event of probability p

to appear with the degree of certainty DC.

Indeed, the probability for ‘heads’ to appear is always ½ (or ‘1 in 2’, or .5, or 50%) in coin tossing. But the degree of certainty is 75% that ‘heads’ will appear at least once in two consecutive coin tosses. Again, it is worth repeating that ‘heads’ has the same probability p from one toss to the next. But we deal with a totally different parameter when we consider the number of trials N as well.

The argument is…there is no argument. Simply look at real data. Toss a coin in series of two tosses. Record how many times ‘heads’ appeared (including zero times). Repeat the two-toss series 1000 times. In around 750 series, the ‘heads’ appeared at least once. Only around 250 series have recorded zero ‘heads’. What can be more convincing than real data?

The scholastics (strong traditionalists) will make you believe that ‘tails’ will come up forever while you bet on ‘heads’! They argue that ‘tails’ always has a 50% chance to appear, regardless of what happened in the previous toss. But they hide the fact that ‘heads’ also has a 50% chance to come out! Hence, only the degree of certainty parameter is the most accurate in “predicting” based on what happened previously. Why hasn’t anybody reported 100 ‘heads’ (or ‘tails’) in a row? Let alone that many argue that ‘heads’ (or ‘tails’) can come out an infinite number of consecutive times — as a matter of fact!

Can systems be devised on recording the events? You bet! Matter of fact, the casinos ban me, and now ban gamblers who record (write down in a notebook) casino events, such as roulette spins, blackjack hands, even baccarat hands. The casinos even turn off the marquees — the electronic displays at roulette tables! Why? They still have the house edge intact!

Say, you lost a blackjack hand. If you divide the entire session in two-hand series, the approximation is that only 25% of the two-hand series have two losses. Thus, I can increase my bet on the second hand. The bias is even stronger if considering three-hand series. On the other hand (no pun intended!) you can also expect, mathematically, two wins in a row. If you didn’t record two consecutive wins in a number of trials, you can increase your bet again! That’s what frightens the casino bosses!

Ion Saliu http://saliu.com/occult-science-gambling.html “Science Of Gambling - Martingale, Fibonacci Progressions, Double-Attack Blackjack” —Preceding unsigned comment added by Parpaluck (talkcontribs) 19:13, 4 August 2008 (UTC)

O.K., you can claim that nearly all mathematicians in history are wrong, but you still can't argue with actual data. Do the experiments other users have suggested, not the ones you have devised. Consider this one: Flip 100 coins three times each. The coins that have just flipped heads thrice in a row, flip again; the rest, discard. According to your interpretation, you can be 95% certain that a given coin will flip tails. This is patently absurd, and actual execution of said experiment (performed many times, of course) will bear this out. You are indeed confusing degree of certainty and probability. Specifically, you are looking at the degree of certainty retrospectively, which is no better than looking at the probability retrospectively. Of course the degree of certainty of a coin flipping heads four times in a row is small, because one would on average have to flip a coin about ten times for such an event to occur, but GIVEN THAT THE COIN HAS FLIPPED HEADS THRICE ALREADY, the degree of certainty is 50%, as one would expect. Degree of certainty and probability are not such different notions as you make them out to be, and neither should be used retrospectively when considering future actions. Also note that by your logic, objects deemed "lucky" because of fortunate things occurring around them should be avoided because of the certainty that unfortunate things will soon occur around them.

Eebster the Great (talk) 03:27, 2 March 2009 (UTC)

Ion is correct

Eebster, what Saliu is saying is that if you plot the actual flip results on a line, starting at position 1 and reading left to right, moving one increment (one flip) at a time, viewing the results through a window which is 3 flips wide, the results which display in the window will only show all tails approx 25% of the time. I should have read Ion's comments in more detail before starting my own thread below, because frankly he's right and he's articulating exactly what I was trying to say. You can go here to generate a visual which will help you understand his point. I did one for 1,000 flips. If you cut each line and lay them end to end in the order they appear (top line, then next line, etc), you will preserve the results' accuracy and can use that graphic for your test. Cut a window or notch 3 coins wide in an index card and slide it one coin at a time left to right. As you do that, record what appears in the window. Of the 1,000 flips (positions on the line), there will be 250 positions on the line with no heads appearing in the window. I am fully confident of this and suggest you try it if you think it's wrong. You can save the results page (via view source, save as) easily and reformat the page by placing the results inside a fixed-width single-row, single-column HTML table which will print an even number per line, making the printing, cutting and taping easier. FYI: The "degree of certainty" doesn't refer to the next flip, it refers to the likelihood that any sequential three positions, read left to right, in order of flips, one increment at a time, will show no H's. It's true that the possibilities are: HHH, HHT, HTH, THH, TTT, TTH, THT, or HTT, but Ion is saying that of 1,000 positions on the line of flips, a window 3 coins wide will only show no H's 25% of the time and he's right about that. Do the test. Print the paper and look in the window. 216.153.214.89 (talk) 09:31, 3 March 2009 (UTC)

Are you talking about Saliu's blackjack strategy? If not, which website were you on, because this was not my understanding of his blackjack strategy. --67.193.128.233 (talk) 14:35, 3 March 2009 (UTC)

67 rebuts 216's assertions re: Ion

Ok, regardless of where you got this problem from, I can answer it for you. The short answer is that 12.5% of the time your window will show no H's (i.e. it will show TTT). The only way to prove this is with a Markov Chain analysis, so you're going to have to take my word, although I will explain (without all the details) in the next paragraph how I arrived at this. By the way, I did your experiment with 500 flips on the website you mentioned; I got 53 sequences of TTT, that's 10.6% of the time. Just to make sure we're doing the same experiment, if I saw TTHHHTTTTHHT I counted 2 windows where there are no heads (in the sequence of 4 T's).
Here's a rough sketch of what's going on. Let X1, X2, X3, ... be a sequence of coin flips; then for each i, Xi = H with probability 1/2 and Xi = T with probability 1/2 (and the sequence is independent). Now we will construct the sequence as viewed from our window. Let Wi be defined as Wi = (Xi, Xi+1, Xi+2).
So Wi contains the window contents at time i, and when we go from i to i+1, the flip Xi is bumped out of the window and Xi+3 comes in from the right. For any given i, there are 8 possibilities for Wi: either HHH, HHT, HTH, HTT, THH, THT, TTH, or TTT. The sequence of random variables W1, W2, W3, ... forms a Markov Chain over the state space S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}.
Briefly, a Markov Chain is a sequence of random events (random variables) such that the outcome at time i is dependent on the outcome at time i-1 (and ONLY that outcome). So if Wi = HHH then there are only 2 possibilities for Wi+1: either HHT or HHH again, and each will occur with probability 1/2 (because they depend on the outcome of the next coin flip). These are called transition probabilities. A Markov Chain analysis is done by considering each of the 8 states individually and computing all the transition probabilities. Informally, we consider starting in each of the states and look at which states we can get to in one step from that state, and consider the probability of actually making that transition. A lot of these probabilities will be zero. For instance, if you are in HHH at time i you cannot be in TTT at time i+1, so this transition probability is zero.
Ok, so once you have the transition probabilities, you can compute something called a stationary distribution. This tells you what proportion of time the Markov chain will spend in each state (i.e. how often each state will be visited). I have done this and we get a uniform stationary distribution. This means that each state in S will be visited equally often. So 12.5% of the time we will be in HHH, 12.5% of the time we will be in HHT, 12.5% of the time we will be in HTH, etc...
So the conclusion is that 12.5% of the time the sliding window (Markov Chain) will be observing TTT (i.e. no H's). By the way, if anyone has studied Markov chains and wants to see more details, I'd be happy to send a more detailed solution. I should also note that the reason we had to use Markov Chains is that the probability of our window seeing a specific sequence of heads and tails is dependent on what the previous window saw (because 2 of the flips are common). If we instead shifted the window by 3 each time, then the windows would not overlap and the analysis would be much easier (but that is a different problem). --67.193.132.193 (talk) 18:54, 3 March 2009 (UTC)
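The uniform stationary distribution can also be checked empirically with a sliding-window count; a short Python sketch (one million flips is an arbitrary choice):

import random
from collections import Counter

flips = "".join(random.choice("HT") for _ in range(1000000))
windows = Counter(flips[i:i + 3] for i in range(len(flips) - 2))

for state, count in sorted(windows.items()):
    print(state, count / (len(flips) - 2))  # every state sits near 0.125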

Sounds like Ion's formula is off by 50%? Also, rather than a Markov Chain calculation, a physical sliding window across a printed series along with a notepad could also count the instances, right? My point is, if instances can be manually counted, subject to knowing how many spins a roulette wheel does in a day, one could calculate in advance how many 7-runs (or 10-runs) a single wheel could probabilistically experience in 1 week, right? Well let's say for instance that a 7-run happens twice a day, a 10-run once daily, a 15-run once a week and a 20-run once a month, etc.; I am not understanding how you contend that the time gap isn't exploitable by stepping in after seeing a 7-run. It seems that the rarity of the longer runs gives you an advantage in a single instance of play. 216.153.214.89 (talk) 19:37, 3 March 2009 (UTC)

67 offers aid

You are still taken in by the Gambler's fallacy. Future rolls CANNOT be influenced by past events. They are all independent rolls.
And if Ion says it's 25% then he is wrong. I did the experiment as you suggested manually using an online coin flipper (the website you linked to). I used 500 flips and got 10.6%. But please point me to the website where he says this because I have not seen this claim. Maybe the context is different. --67.193.128.233 (talk) 20:41, 3 March 2009 (UTC)
Another thing, if a simulation would help you understand, I can run just about any simulation you want. Tell me the exact game you want to play, and what your strategy is and how many times to play and I can give you the results. --67.193.128.233 (talk) 20:41, 3 March 2009 (UTC)

67 - Thanks. Here's the plan: Run a series of 1,000 flips. By my count, that's about how many spins a roulette wheel spinning every 60-120 seconds can do in a day. Presume we have no 0 or 00 on the wheel (BTW: I don't gamble, drink or smoke - I'm just interested in math). Now, take that string of 1,000 flips and imagine they are in a line, just as if they were generated on the web site I linked for you. When you are done, tell me the count of the three longest length runs. In other words, we might get one run of 7, three runs of 6, five runs of 5, etc. Just the three longest runs. Then, tell me the number of flips between these runs, such as 300 flips, 5-run, 82 flips, 5-run, 205 flips, 7-run, etc. Armed with that information, we'll imagine that the 1,000 series forms the ring of a wheel and we'll spin that wheel. The total number of H and T on the wheel is almost exactly 50/50, right? What are the odds you'll land anywhere on one of the longest length series? In other words, we can visualize that the wheel's perimeter is populated not with H,T,H,T,H,T,H,T, but rather HH,T,H,T,H,TTT,H,T,H,T,HH,T,HHHHH, etc. Based on the random method we used to create our perimeter (or ring), I imagine the long strings will be somewhat spaced out around the circle. And since we have almost a 50/50 count of H/T, any spin has 50/50 odds of landing on H or T. But, my question is, what are the odds that any one of the designated "runs" (of specified length) will be landed on? A breakdown of probability per specified length is preferred. In other words, if there's one 7-run on the 1,000 position wheel, are the odds 7/1000 that we'll land anywhere on that 7-run? What about if we have three 5-runs? Are the odds 15/1000 that we'll land on one of those? Is it harder to land on one of the lesser instances of a longer run than one of the more numerous instances of the shorter run? What about if we track also 3-runs for a more numerous comparison? Now it's very important that you actually execute the 1,000 flips and assemble the hypothetical wheel with the perimeter positions assigned in the exact order of the actual flip results. I need to visualize our hypothetical wheel as it spins. After the wheel is generated and spun and we have the numbers I asked for, I want to revisit the question of waiting for a particular length run to see if my wheel model tells me anything new. BTW: I see the 1,000 series on the perimeter of the wheel as a model of 24 hours of sequential roulette spins (sans the 0 and 00) and when we spin this 1k wheel, it's like me walking into a casino at random and arriving at the regular casino wheel. But if we have 50 of these 1k wheels, it's like walking into a casino on 50 different days or seeing 50 regular R wheels spinning in the same room. Of 50 of these wheels, what's the longest string that occurs (let's say 12)? And if we wait to start 3 steps back from that (at 9), haven't we isolated a rare event? Isn't our 9-run guaranteed to end before 13? I'm talking just from the pools of spins as represented by the 50 custom time-spin wheels we made. That's 50 days of spins and I want to know the longest run which occurred on any 1k wheel out of those 50. I also want to know how many of those runs there were, and I want to emulate starting to bet 3 steps back from that (on a run). Then, I want to measure how much doubling down I had to do to win there, compared with if I start on any 1k wheel at the 1st flip. Sounds like a big task - is it too much work to model? 216.153.214.89 (talk) 22:16, 3 March 2009 (UTC)

This is far too complicated; can't we do something simpler? You were arguing that if you step up to a roulette table right after a 7-run of red, then it should be to your advantage to bet against another 7-run in the near future (since they are relatively rare). But this is exactly the gambler's fallacy. What I propose, to better explain this, is to generate a sequence by flipping a coin until you get 7 heads in a row, then record the next 50 flips, then stop and repeat. We'll have a bunch of strings of 50 flips that come right after a 7-run. Say we do this 20 times; then we'll have 1000 flips, and we'll count the number of 7-runs in these 1000 flips and compare to a sequence of 1000 flips in a row. According to your theory, the 7-runs should be spaced out, so since each of the 50 flips comes right after a 7-run, there should be fewer 7-runs in the first experiment than the second. Would this be satisfactory to you? --67.193.128.233 (talk) 00:41, 4 March 2009 (UTC)
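This experiment is easy to automate; here is a Python sketch, with two caveats of mine: every flip is drawn independently at 50/50 (the physical model everyone in the thread accepts), and a "7-run" is taken to mean a maximal run of seven or more identical results, counted once:

import random

def fair_flips(n):
    return [random.random() < 0.5 for _ in range(n)]  # True = heads

def count_7runs(seq):
    # Count maximal runs of 7 or more identical results, each once.
    runs, length = 0, 1
    for prev, cur in zip(seq, seq[1:]):
        length = length + 1 if cur == prev else 1
        if length == 7:  # fires exactly once per maximal run of 7+
            runs += 1
    return runs

def segment_after_7_heads(n=50):
    # Flip until seven heads in a row appear, then return the next n flips.
    streak = 0
    while streak < 7:
        streak = streak + 1 if random.random() < 0.5 else 0
    return fair_flips(n)

segments = 10000
post = sum(count_7runs(segment_after_7_heads()) for _ in range(segments))
base = sum(count_7runs(fair_flips(50)) for _ in range(segments))
print(post / segments, base / segments)  # the averages agree: a recent
                                         # 7-run does not suppress the next

Counting runs segment by segment (rather than concatenating the segments) matters, since concatenation would manufacture runs across segment boundaries that neither experiment actually produced.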

67 - Perhaps it is too complicated, but I think it's more realistic, and here's why: When you start after a 7-run (or 8, 9, 10 - something very rare), you are virtually assured of not seeing another run like that on the same day. Now here's what we know: In order for any run to continue long enough to ruin you, it has to continue, but if you only enter the game at the point where it's already a very rare run, it's very unlikely that on that same day with this same wheel, any run will end up much longer than the run you are joining. Why? Because long runs are rare. As I see it, if you start on a long enough run, you'll likely be starting on one of, if not the longest run of the day. That being the case, and because long runs fail sooner than short runs, unless the long run you are on goes on to be a monster run, you'll win PDQ. That said, I am not interested in flipping to 7 multiple times, I am only interested in joining at 7 one time. We've already established that most series fail before 8, so if you join at 7, you are already at the point where the attrition is wearing down the aggregate likelihood from any given time span. In any given day, if you start at a run that's close to what's typically the longest for the day, you should win in just a few flips. All things considered, I think that by joining in at a longer series, you could shave a flip or two off what you'd otherwise need for the series to fail. I think my proposed wheel might show that. Anyway, can you program that wheel? BTW: I still agree that ultimately, there's too many things working against any scheme which doubles down on even money. No actual real gamblers worth their salt would choose this as their game. But still the siren song of seeing how iterations play out calls to me and draws me onto the rocks..? Would my time-snapshot wheel show us anything? I don't know. Can you make any observations about my design? A quick PMI maybe (pluses, minuses, interesting points)? Thanks. One more point: if your assertion that we should only pay attention to the next flip and nothing else is correct, and if your assertion that all flips are equal, no matter what time frame they occur in, is true, then one series of HHH, followed immediately by HHH and HHH again, should happen all the time. The flips aren't related, so if I start right after the HHH,HHH there's no reason not to expect HHH right then and there again, or is there? If we put my hypothetical wheel together we could study the 1,000 flip series and the footprints of each series as they appear around the circle. What we would find is that the HHH's, TTTT's, TTTTT's and HHHHH's are hardly ever grouped together, are they? What's in between them? Short series and series interruptions, yes? By my reasoning, since the wheel I envision is filled with random entries and a hodge-podge mish-mosh of series sizes and non-series flips, when I'm not on a short series or non-series, I must be on a long series, but if the random pull of 50/50 is always there, the longer I am on a long series, the sooner I must exit that series - due to the inevitable pull of returning towards 50/50. Of the total spaces on my wheel, many more of them are occupied by the elements of short/non series than long. With 50/50 pulling at me, it will be rare indeed if I don't fall off my long series back onto the opposite. And you know, if a longer series than the one I am on shows up, there's only a 50% chance it will be the same side (H or T) I am on.
In other words, let's say I start at HHHHHHH and, as chance would have it, a 25-series shows up that night - long enough to bust me. There's only a 50% chance that the 25-series is H. It could very well appear as TTTTTTTTTTTTTTTTTTTTTTTTT. Remember, if I'm on HHHHHHH and even one T comes in, I win. So 50% of the threats against me (that of longer series) is removed. Whereas in your probability calculations, you are incorporating both H and T and giving yourself a pool of twice as many possible strings to continue. As I see it, when you are already on a long single-side streak, the 50/50 nature works for you, not against you, since any opposite wins. But in the aggregate universal pool of flips, if I'm at my table facing TTTTTTT, the guy at the table next to me, sharing the same math, could be getting killed by TTTTTTTTTTTTTTTTTTTTTTTTT, with me only needing T to win. If you are adding up aggregate fails and factoring them into the odds, you can't count an 8T towards the odds for a 7H streak, right? All T+ must be excluded from the aggregate odds when counting the chances that any H+ streak would continue - because any T makes H win, right? I think it's a simple subtraction. You've got to remove the values of the incidents of "continue" that are created by T+ and recalc using only the H+ and T (single T) slices of the data, because T and T+ count both as a defeat for anyone on T and as a win for H+, because T and T+ are both a stop for H and a win for H. So even though T alone is not a continue for T, T+ is, so our formula must not allow the T+ continues to be counted as losses for H, because they are not, they are wins. In other words, any series "losses" (continues) which occur on the opposite side are not losses for us on our side, they're wins. Because we are on H, any T, whether T or the first T of TT, TTT, TTTT etc, lets us win, but.. All the strings of T which are TT or longer are losses for T people. Anyway, I'm probably talking out my butt here, but this is how it seems to me after reading Ion's postings on this page, and I was hoping to get that virtual wheel to test it. 216.153.214.89 (talk) 02:34, 4 March 2009 (UTC)

While such runs are rare, they are not rarer after having occurred. Remember that the wheel spins according to laws of physics, and where it lands is determined by various preconditions. How does having previously spun this wheel affect these preconditions? It would seem the wheel looks identical before and after a long, rare streak, and thus should spin in exactly the same way. If it spins in exactly the same way, the probability of landing on various spaces should be exactly the same.
Let me take this a step further. Consider a coin right after minting, and flipping it ten times. You do this with many coins, and eventually you get very lucky and one coin lands on heads ten times in a row. Firstly, you flip it ten more times. It may at least at first seem reasonable that most of these flips will be tails, to "even things out." However, compare it to another experiment where one remelts and mints the coin such that what was before tails is now heads, and vice-versa. Will most be tails, to even out the heads, or will most be heads, because that side of the coin landed up less often before? How about events around it that aren't even related to coin-flipping? Radioactive decay is also very rare for a given radioactive isotope, so will these slow down after a hot streak of flipping the coin? What if you get ten heads in a row, as stated, but then put the coin in a chest and wait ten years, reopen it, and take the coin out and flip again. Will this result in more tails than heads, or will the effect have "worn off" by then? What if, without your knowledge, some tricky gnomes in the chest secretly flipped the coin while you weren't looking until it landed on tails ten times, "evening out" the streaks, then put it back down. You, unaware, come back ten years later thinking the coin's last flips were ten heads in a row. How would this affect things?
The key, of course, is that all these situations assume using the history of a given coin one can predict future flips. We know this to be impossible; while we can calculate future probabilities by simply multiplying individual probabilities and such, once a given action has occurred, this can't affect what will happen in the future.
If none of this helped, look at it this way. The reason long strings of heads are rare isn't because after the penultimate head, the coin is bound to turn up tails; they are rare because in order for them to happen, not only will the coin have to turn up heads, it will have to do so again, and again, and again, and again. The exact same probability can be assigned to HHHHHHHHHH as to, for example, HTHTHTHTHT. Once you have gotten nine heads (HHHHHHHHH), you've already done something very rare. All that's left for it to be as rare as a ten-streak is one more heads, a 50/50 chance. If getting heads were progressively rarer with each flip in a series, these series would be much LESS common: after H you would probably expect T, so HH would be slightly less common than HT, which we know to be 25%. HHH would be even less; instead of 12.5%, maybe it would be just 5%. Hopefully by now you can see what I'm trying to explain.
If none of these help, maybe you could state some more specific, briefly stated misunderstandings, misgivings, or complaints. Sometimes probability can be difficult to grasp at first, but once it's understood, it should become clear why it is true. At least, that would be ideal. Eebster the Great (talk) 02:58, 4 March 2009 (UTC)
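For anyone who prefers evidence to argument, the point is easy to check empirically. The following is a minimal Python sketch (my own illustrative code, not anything from the posts above): it looks at what follows nine heads in a row, and counts how often the "special-looking" and "ordinary-looking" ten-flip strings each occur.

    import random

    random.seed(1)
    # One long sequence of fair-coin flips.
    flips = "".join(random.choice("HT") for _ in range(2_000_000))

    # What follows nine heads in a row? (Should hover around 0.5.)
    after_nine = [flips[i + 9] for i in range(len(flips) - 9)
                  if flips[i:i + 9] == "H" * 9]
    print("P(H after 9 heads) ~", after_nine.count("H") / len(after_nine))

    # HHHHHHHHHH vs HTHTHTHTHT: both ten-flip strings occur about equally often.
    print(sum(flips[i:i + 10] == "H" * 10 for i in range(len(flips) - 9)),
          sum(flips[i:i + 10] == "HT" * 5 for i in range(len(flips) - 9)))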
Well said Eebster there's not much I would add. 216: Think about what you said for a second:

" We've already established that most series fail before 8, so if you join at 7, you are already at the point where the attrition is wearing down the aggregate likelyhood from any given time span."

I offered to run a simulation to show you that this type of thinking is wrong. You seem to want a simulation to validate your gambler's fallacy, and that's really a waste of my time. My proposal is still on the table though. If the above statement is really true (which it is not), then it should show in my proposed simulation. --67.193.128.233 (talk) 14:59, 4 March 2009 (UTC)
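For concreteness, such a simulation can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions about the scheme being discussed (wait for a 7-run, then place one bet against it continuing), not 67's actual program:

    import random

    def bet_after_run(run_length=7, bets=20_000):
        """Wait for run_length identical flips, then bet the run breaks."""
        wins = 0
        for _ in range(bets):
            side, streak = None, 0
            while streak < run_length:       # wait to "join at 7"
                flip = random.choice("HT")
                if flip == side:
                    streak += 1
                else:
                    side, streak = flip, 1
            if random.choice("HT") != side:  # the bet: next flip breaks the run
                wins += 1
        return wins / bets

    print(bet_after_run())  # stays around 0.5, no matter the run length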

67 - Thanks for your patience. If you read back to my original post and follow my others, you can see that I've been thinking this through, even though I already agree that it's a dumb idea to double down on even-money 50/50 betting. My interest was in arriving at a succinct way I could verbalize a correct understanding even without the superior math vernacular you possess. The simple fact is that without your level of math knowledge, it's hard to relate, in an articulate manner, any agreement with what you say, as I have no valid starting point from which to connect. That said, here's my way of explaining what I've learned here. Tell me what you think:

1) Starting on Flip-win#1+?#wins is no different from starting on flip #1. There's no advantage to waiting until "?#wins".
2) That's because even though long streaks are unlikely, the next flip is always 50/50.
3) Therefore, if the next flip is always 50/50, it's sort of like fighting an endless series of death matches with an evenly matched opponent. No matter how many you've killed before, until you beat the last one (i.e., the streak breaks), every upcoming match gives you even odds.
4) This "man" fighting the matches for you (your proxy) is exactly as good as his opponents, so chances are he'll win eventually.
5) But since in every match he's on the leading edge (the next flip), facing an even-steven death match, if you sponsor him in a series of fights, you'd better have a huge cash reserve, because he might lose a goodly number before he wins.
6) It's the automatic 50/50 even-steven nature of the upcoming opponents which governs, because at any time, one of those two 50's can beat your guy - no matter how many matches he's won before.
7) Odds are a tough thing to visualize and take some thinking (my wheel design helped me think it through) for the non-math mind.
216.153.214.89 (talk) 18:16, 4 March 2009 (UTC)


Everything above is pretty much correct. The analogy between flipping a coin and a series of death matches is a bit strange to say the least, but if it helps you understand then that's good. I guess you need to assume that your fighter never gets tired or else his past performance could affect future fights :) --67.193.128.233 (talk) 19:40, 4 March 2009 (UTC)

Basically, what it boils down to is that the reason you can't ever catch the Chimera of "increasing odds of failure" is that those odds never do increase and the Chimera doesn't exist - it only appears on the horizon, sort of like a heat mirage. On the other hand, maybe you could exploit this by convincing someone else to try it - and getting them to bet against you, the house (he he). Also, for "death match" info, see Mortal Kombat 216.153.214.89 (talk) 20:42, 4 March 2009 (UTC)

    That's exactly it. I actually like the analogy to a fighter, because you can kind of see how in a match of skill (rather than probability), if the two are evenly matched it's still a 50/50 fight regardless of past fights. The fact that this applies to odds is for a somewhat different reason, but it's still certainly true.
    The only other important note is that this applies for all probabilities, not just 50/50. If the house has a slight edge at first, it will still have a slight edge ten years later. If you're buying lottery tickets that only have tiny odds, but that lottery hasn't been won in a very long time, it isn't any more "bound" to pay out than before (although the jackpot is much bigger now, so it is actually more worthwhile to play). That is the crux of the gambler's fallacy. Eebster the Great (talk) 01:30, 5 March 2009 (UTC)

Here's another point which I am able to better articulate now: if a person thinks "I'll wait until 7 flips in a row to bet, because then I'll be within 3 flips of 10, and 10 in a row is rare, so my chances rise," what's left out is that 7 is only three flip increments away from 10, forwards or backwards. In other words, a 7-string is already 70% of the way towards a rare 10-string, which reduces the distance between 7 and 10 to a non-rare 3, so the imagined rare "advantage" is already lost, leaving us with the fact that the remaining 3 flip increments don't favor winning or losing. It's like having a boat anchor tied to your ankle - you think you are leaping ahead by waiting to xFlips, but in reality, you aren't out-running anything. There's no jumping the line on flips, because the starting point is always being dragged forward and reset after every flip. 216.153.214.89 (talk) 04:45, 5 March 2009 (UTC)

That is sort of the long-term view of the gambler's fallacy, yes. I actually find that view fairly satisfying, personally, but I don't often hear other people articulate it that way. Eebster the Great (talk) 04:11, 6 March 2009 (UTC)

Basically what it boils down to is that one flip always decides everything, and one flip isn't a rare event, even if it does follow 500 heads in a row. I suppose you could bet that a 10-run won't happen in the next hour, and if you got good odds, benefit from "rarity" that way, but that's not the same thing. Rarity of events is a value point in betting, but it's hard to see at first that long runs of flips are a centipede, not a snake, and are not contiguous as a monolithic thing. They are no more than abutments of no consequence. 216.153.214.89 (talk) 06:58, 6 March 2009 (UTC)

Degree of Certainty

Let's make sure we all understand degree of certainty, since there are some wild claims about degree of certainty being different than probability. Everything here is taken from Ion's website. Say we have an event A with probability p (e.g. a coin toss, A = heads, p = 1/2) and we are considering independent trials of this event. We want to answer the question: "In N consecutive trials, what is the probability that event A occurs at least once?". To answer this question, we'll pose the opposite one: "What is the probability that the event A does not occur in N consecutive trials?". Let this event be B. Then:

P(B) = (1 - p)^N

This comes from the fact that for one trial, the probability that A does not occur is 1 - p, and the trials are independent, so the probabilities multiply. Now let C be the event that A occurs at least once in N trials. This is clearly the opposite of B, so:

P(C) = 1 - P(B) = 1 - (1 - p)^N

This is what Ion calls "Degree of Certainty", but there is nothing special about this quantity. It is just the probability that our event A occurs at least once in N trials.

Now to get to his formula involving logarithms, write DC for this probability (the "degree of certainty") and just rearrange this equation:

(1 - p)^N = 1 - DC

And take logarithms, using the fact that log(x^N) = N log(x):

N = log(1 - DC) / log(1 - p)

This formula tells me how many trials, N, I need to perform before the probability (degree of certainty) of A occurring at least once is DC. Please note that degree of certainty is just a probability. Ion gave it an intuitive name, but this should not cause us to confuse its meaning.

How about an example. Let's say we're flipping a coin and the event A is that the coin turns up heads. We want to find N such that over N consecutive coin tosses, the probability of at least one head is 0.99 (or greater). So plug DC = 0.99 and p = 1/2 into the above equation. We get

N = log(1 - 0.99) / log(1 - 0.5) = log(0.01) / log(0.5) ≈ 6.64

Thus we need at least 7 flips before the probability of at least one flip being a head is 0.99.
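In code, the same computation looks like this (a minimal Python sketch; the function name is just for illustration, for anyone who wants to plug in other values of p and DC):

    import math

    def trials_needed(p, dc):
        """Smallest N with P(at least one success in N trials) >= dc."""
        return math.ceil(math.log(1 - dc) / math.log(1 - p))

    print(trials_needed(0.5, 0.99))  # 7, as computed above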

Now here's where people are commonly confused. We know now that the probability of at least one head in 7 flips is 0.99. So say I flip a coin 6 times and get tails each time; then you might think that the 7th flip must have a 0.99 probability of being heads to agree with our above statement. However this is wrong. It does not make proper use of conditional probability. Let's go through this slowly. Let A be the event that out of 7 flips, at least one is heads. We found P(A) = 1 - (1/2)^7 ≈ 0.99 above. Now let B be the event that the first 6 coin tosses are tails. When we condition on the event B, the probability of A changes and is given by the formula

P(A|B) = P(A and B) / P(B)

We know that P(B) = (1/2)^6, but what is P(A and B)? This is the event that both A and B occur, so the first 6 are tails and there is at least one head; therefore, "A and B" contains only the sequence T T T T T T H. The probability of this sequence (which is made up of independent tosses) is the product of the probability of each toss, thus

P(A and B) = (1/2)^7

So plugging these into the above formula for P(A|B) we get

P(A|B) = (1/2)^7 / (1/2)^6 = 1/2

So conditioned on the fact that the first 6 flips were tails, the probability of at least one head in those 7 flips is 0.5. Note that we could also define a "conditional degree of certainty" too. In this case, the degree of certainty that there will be at least one head out of 7 flips, conditioned on the fact that the first 6 flips are tails, is 0.5. The important thing to understand is that conditioning changes probabilities and degrees of certainty (since they are just probabilities too). I hope this helped clear things up. --67.193.128.233 (talk) 14:15, 3 March 2009 (UTC)
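The conditional claim above is also easy to confirm by simulation; here is a minimal Python sketch of my own (not part of the original post): among 7-flip sequences whose first six flips are all tails, the only possible head is flip 7, so "at least one head" happens half the time, not 99% of the time.

    import random

    matched = with_head = 0
    while matched < 20_000:
        seq = [random.choice("HT") for _ in range(7)]
        if seq[:6] == ["T"] * 6:        # condition on six tails first
            matched += 1
            with_head += "H" in seq      # at least one head in the 7 flips?
    print(with_head / matched)           # ~0.5, not 0.99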

67 - your math is excellent, but I think you are missing out on the "instances" aspect here. No one is disputing you on the 50/50 odds. Rather, what Ion and I are saying is that in any given set of flips, there will be a limited (and he says calculable) number of instances where continuous runs occur of any given length. In his example of 1,000 flips, there will be 250 instances where 3 flips in a row will result in tails. Please focus on that and disprove it if you can. 216.153.214.89 (talk) 15:19, 3 March 2009 (UTC)

This was not meant to prove or disprove your claim. It was meant as a response to Parpaluck's misuse of the notion of degree of certainty. I think everyone should understand degree of certainty before trying to use it to back up their claims. For example the statement by Parpaluck:

"Right now, most people still confuse probability for degree of certainty"

is completely absurd. As I described above, degree of certainty IS a probability. It's the probability that an event A occurs at least once out of a set of N trials. Statisticians have been calculating this "degree of certainty" for hundreds of years. It is only now that Ion Saliu gave it a special name, but it is hardly an important enough result to be given a name. --67.193.132.193 (talk) 18:16, 3 March 2009 (UTC)

Problem

I have a problem with this quote from the article: "Similarly, if I flip a coin twice and tell you that at least one of the two flips was heads, and ask what the probability is that they both came up heads, you might answer that it is 50/50 (or 50%). This is incorrect: if I tell you that one of the two flips was heads then I am removing the tails-tails outcome only, leaving the following possible outcomes: heads-heads, heads-tails, and tails-heads. These are equally likely, so heads-heads happens 1 time in 3 or 33% of the time. If I had specified that the first flip was heads, then the chances the second flip was heads too is 50%."

As far as I can see, the chance of this is in reality 50% and NOT 33%. If you say you have one head, and ask for the probability of both heads, you are eliminating two choices: either tails/tails and tails/heads (if the heads is the first one), or tails/tails and heads/tails (if the heads is the second one). Both of these leave only two possibilities, each of which has a 50% chance of occurring. As a result, I am removing the aforementioned text until someone presents a better argument for the quote.

This is easy to resolve - get two coins, a piece of paper and a pencil. Flip the two coins.

If one of them is heads, write down on the paper what the other one was. In other words, when you get TT, don't write down anything. When you get HT or TH, write down T. When you get HH, write down H. After a while you will start to see that there are about twice as many T's as there are H's. (PS - please restore text.) PAR 03:29, 18 January 2006 (UTC)

I agree, please restore the text. Only the TT choice should be eliminated by the statement that "at least one of the flips was heads". The only justification for removing the TH or HT possibilities would be if they specifically said which toss, the first or last, was heads, which they did not. StuRat 07:30, 18 January 2006 (UTC)
The absolute absurdity of this example actually has inspired me to create a Wiki account and correct this. Each coin flip is considered an independent event in mathematics, and from the article "the coin does not have a memory," therefore order does not matter. The probability of scoring two heads, given that one of the tosses already is a head, can be represented by the equation:

P = n / N

where n is equal to the number of ways to score two "heads" given that one head has already been turned up, and N is the total number of possible outcomes. n must equal 1, because the only way to get two heads is if the unknown coin is a head, and N is equal to 2 because the unknown coin can be either a tail or a head.

P = 1/2, or 50%. http://www.mathpages.com/home/kmath309.htm I am removing this example. Alex W 15:46, 10 March 2006 (UTC)

PLEASE PERFORM THIS EXPERIMENT: Get two coins, a piece of paper and a pencil. Flip the two coins. If one of them is heads, write down on the paper what the other one was. In other words, when you get TT, don't write down anything. When you get HT or TH, write down T. When you get HH, write down H. After a while you will start to see that the number of T's approaches two thirds of all entries. In other words, there will be twice as many T's on the paper as H's.

Please, do not talk about what the results will be by analyzing the situation. DO THE EXPERIMENT. We can go around and around forever arguing about what the outcome will be based on logical analysis of the situation, but if you do the experiment, you will see that the example is correct, and it should be restored. Then we can talk about how to analyze the situation. PAR 16:18, 10 March 2006 (UTC)
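If flipping real coins is too slow, the experiment takes a few lines of Python. This is my own sketch of the procedure described above, not anyone's posted code:

    import random

    recorded = []
    for _ in range(100_000):
        a, b = random.choice("HT"), random.choice("HT")
        if a == "H" or b == "H":                   # TT: write nothing down
            recorded.append(b if a == "H" else a)  # note the *other* coin
    print(recorded.count("T") / len(recorded))     # ~2/3: twice as many T as H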

Your experiment is correct. When coin flipping, you will find that 50% of the time, the result will be one head and one tail. This is represented by the following formula:

P = n / N

where n is the number of combinations with one tail and one head, and N is the total number of possible outcomes. n is equal to 2, because the following outcomes with one head and one tail are possible: either H-T or T-H. The total number of possible outcomes N is 4: T-T, H-H, H-T, T-H.

Therefore, you are correct in stating the H-T / T-H results will outnumber the H-H results by a ratio of 2:1. You are twice as likely to have an outcome of H-T or T-H than a result of H-H. Unfortunately, this experiment cannot be applied to this example because it fails to take into account the fact that the outcome of one of the flips is already known.
Some of you seem to be ignoring the fact that "at least one heads" also includes both heads. So at least one heads means HH, HT, or TH, all equally likely. Saying that the ordering doesn't matter doesn't help much; you have to remember that one head and one tail has twice the chance of two heads. Here is a new way to calculate it, using Bayes theorem:
            Pr(B|A).Pr(A) = Pr(A & B)
Pr(A | B) = -------------   ---------  where | means "given" and & means "and".
                Pr(B)         Pr(B)

Let A = two heads; B = at least one head.

Pr(B|A) = Pr(at least one head, given 2 heads) = 1; Pr(A) = 1/4. Or use Pr(A & B) = Pr(A) here (whenever you have two heads, you always have at least one head) = 1/4. Pr(B) = 3/4 (all 4 equally likely possibilities except TT).

So Pr(A | B) = (1/4) / (3/4) = 1/3.

You can also use an alternative form of Bayes theorem:

                    Pr(B|A).Pr(A)
Pr(A | B) = -------------------------------   where | and & are as above, . is multiply,
            Pr(B|A).Pr(A) + Pr(B|~A).Pr(~A)   and ~ means "not"

Pr(B|A) = 1; Pr(A) = 1/4; Pr(B|~A) = Pr(at least 1 head given not 2 heads) = (all but 1 of three equally likely events) = 2/3; Pr(~A) = Pr(not 2 heads) = 3/4.

                 1.(1/4)           1/4         1/4
Pr(A|B) = ------------------- = ----------- = ----- = 1/3. 
          1.(1/4)+(2/3).(3/4)   (1/4)+(1/2)    3/4

--Mike Van Emmerik 23:50, 2 May 2006 (UTC)


This is how the theory of probability started. The wise philosopher and mathematician Blaise Pascal was asked by the Chevalier de Méré to solve a dispute regarding a match of backgammon. The match was interrupted after one game. Thus, one of the players was leading 1-0. The player who was trailing wanted a fresh start when the match was set to resume.

We must consider here two players of equal strength. Therefore, the probability is 50% for each player. It is equivalent to a coin toss. Let’s name the leading player as 1 and the trailing opponent as 2. We deal in this case with exponents or Ion Saliu’s sets. Such sets can have duplicate elements.

The backgammon match was set for 3 games. The total number of possible outcomes is 2 ^ 3 = 8. Here are the 8 possible outcomes:

1,1,1
1,1,2
1,2,1
1,2,2
2,1,1
2,1,2
2,2,1
2,2,2


We know that the result of the first game was 1. Thus, we must discard all the cases that start with 2. There are 4 such situations. What remains is the following set of outcomes:

1,1,1
1,1,2
1,2,1
1,2,2


There is only 1 case in 4 in which Player 2 is going to win the match: 1,2,2. That is, he must win the last two games in order to win the match: ½ * ½ = ¼ = 25%.
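The same argument can be checked by brute enumeration; a short Python sketch of my own, not from the original post:

    from itertools import product

    outcomes = list(product("12", repeat=3))      # the 8 possible 3-game matches
    given = [o for o in outcomes if o[0] == "1"]  # Player 1 has won game #1
    p2 = [o for o in given if o[1:] == ("2", "2")]
    print(len(p2), "of", len(given))              # 1 of 4, i.e. 25%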

Pascal expressed his opinion in strong terms: it would not be fair to Player 1 if game #1 were discarded and the match started afresh.

If you watch pro sporting events in the U.S. such as the playoffs, the winner of the first game in a series shows a clear statistical advantage in winning the series. Moreover, the winner of the first two games has an even stronger statistical advantage.

You can read a lot more on such topics here:

http://saliu.com/probability-caveats.html

“Probability Caveats in Lotto, Lottery, Gambling, Life, Sports”.

Ion Saliu


—Preceding unsigned comment added by Parpaluck (talkcontribs) 19:40, 4 August 2008 (UTC)

"Moreover, the winner of the first two games has an even stronger statistical advantage" Tell that to the 2004 Boston Red Sox 216.153.214.89 (talk) 09:42, 4 January 2009 (UTC)

Though this seems to be resolved, I would like to weigh in that this fact of probability has immense importance when considering distinguishable and indistinguishable particles; specifically, the result of flipping two indistinguishable fair coins is 1/3 HH, 1/3 HT, and 1/3 TT, whereas the result of flipping two distinguishable fair coins is 1/4 HH, 1/2 HT, 1/4 TT, because there are two separate HT scenarios (TH and HT being different since the coins are distinguishable). Eebster the Great (talk) 03:35, 2 March 2009 (UTC)
Can you elaborate on what you mean by indistinguishable coins? I would think that it should not matter if the coins are distinguishable or not, you should still see the sequence HT half of the time (you just don't know which coin was which?). --67.193.128.233 (talk) 00:50, 4 March 2009 (UTC)
This interesting result corresponds to the key fact that for two indistinguishable particles, HT is indeed EXACTLY the same thing as TH; they are not just identical in appearance, these two particles cannot even be distinguished by location (while they may not be in the same place, you cannot simply observe them over time and track their trajectories, as this violates the Heisenberg uncertainty principle). Another way of looking at this is that rather than a collection of two particles, we have a single "macroparticle." This macroparticle has a normal distribution, as I described, which means HH, HT, and TT are simply three random states which are all assigned an equal (1/3) probability. In quantum mechanics, this would mean looking at two (for example) photons and instead considering them a single EM wave of magnitude 2. I'm not really suggesting we add this to the article, though, merely that this is an important probabilistic consequence of indistinguishability and the result you discussed above.
For a better explanation of this, you can look at various Wikipedia articles such as Identical particles or skim any book on quantum physics (something like The Elegant Universe). Eebster the Great (talk) 03:13, 4 March 2009 (UTC)
EDIT: OK, I am terrible at explaining this. I just realized that by saying they have a normal distribution, it still sounds like HT should be twice as likely as HH or TT. Maybe this would sound better if we renamed the states (1), (2), and (3). Keeping in mind that all these states can be defined by the behavior of a single macroparticle rather than two individual constituent particles, it might be a bit more plausible that they are equally likely? Regardless, if you hit that link you'll find people more apt at explaining this phenomenon. Eebster the Great (talk) 03:16, 4 March 2009 (UTC)
Thanks for the explanation. So as I understand it, this is a quantum phenomenon, and by coins you mean particles, so could this experiment ever be reproduced with 2 actual coins as I think you suggested? The reason I ask is that I'm imagining this scenario: Person A has 2 coins which he flips and records the outcomes. Then person A tells person B the number of heads. If person B were informed of 2 heads he would know HH occurred, and if he were informed of zero heads he would know that TT occurred, but if he were informed of 1 head he would know HT or TH, but would never know which one occurred. I would think that the coins are "indistinguishable" to person B. So is this equivalent to person B flipping 2 indistinguishable coins? (or maybe I'm missing something). If not, can you describe an experiment whereby a person flips 2 indistinguishable coins? --67.193.128.233 (talk) 15:41, 4 March 2009 (UTC)
Of course, no two coins are really indistinguishable, nor do they truly flip randomly. Yes, by coins I mean particles, because in reality everything indistinguishable is considered a particle (or wave), although not necessarily an elementary one. Obviously you are correct in your analysis regarding real coins, as any test would show. What matters in this situation is whether the two scenarios are actually different, and in the case of indistinguishable particles, not only do the two situations look the same, they ARE exactly the same. Perhaps a more concrete example is if two (say) electrons of opposite spin collide head on. Do they bounce off each other and go back, or do they go straight through each other and flip spins? There is literally NO difference, even in principle. All we can say is that two electrons of opposite spins entered and two left with the spins on opposite ends (and of course, there are photons, etc.). In the case of probability, having an electron spin up and another spin down in very close proximity is perhaps better described as having an electron wave of magnitude 2 and total spin 0, rather than two electrons, one of spin 1/2 and one of spin -1/2. Eebster the Great (talk) 01:23, 5 March 2009 (UTC)
Very interesting, thanks. —Preceding unsigned comment added by 67.193.128.233 (talk) 00:32, 6 March 2009 (UTC)

Possible Problem

In the article, it says:

Sometimes, gamblers argue, "I just lost four times. Since the coin is fair and therefore in the long run everything has to even out, if I just keep playing, I will eventually win my money back." However, it is irrational to look at things "in the long run" starting from before he started playing; he ought to consider that in the long run from where he is now, he could expect everything to even out to his current point, which is four losses down.

Is this a typo or something? Shouldn't it be:

Sometimes, gamblers argue, "I just lost four times. Since the coin is fair and therefore in the long run everything has to even out, if I just keep playing, I will eventually win my money back." However, it is irrational to look at things "in the long run" starting from after four tosses; he ought to consider that in the long run from where he is now, he could expect everything to even out to his current point, which is four losses down.

Just wondering, because it doesn't seem right. 203.122.192.233 12:29, 22 May 2006 (UTC)

It sounds right to me. That is, you can't consider past tosses when expecting everything to even out in the long run. StuRat 12:54, 22 May 2006 (UTC)
Also, can you guys replace the "even out" expression with something else? What does "even out" mean? If it means that the total number of heads (or tails) will converge to half the number of flips (and thus his money will converge to 0), then that's wrong. If it means that the percentage of total flips that are heads (or tails) will converge to 50%, then that's correct, but I doubt this is relevant for the gambler. --Spoon! 07:53, 1 September 2006 (UTC)
The gambler's fallacy is a rather vague feeling that "things should eventually even out", nothing more specific than that. StuRat 01:54, 3 September 2006 (UTC)

Probability and real life

"The most important questions of life are, for the most part, really only problems of probability." (Pierre Simon de Laplace, "Théorie Analytique des Probabilités")

Finally, I got here. There have been quite a few referrals from this URL. My name is Ion Saliu, the author of the Fundamental Formula of Gambling (FFG). I mean the special approach to gambling relying on a formula started by de Moivre some twenty and a half decades ago. I believe that de Moivre did not finalize his formula out of fear. FFG proves the absurdity of the God concept. It was a very dangerous thought back then. It is still dangerous today, depending on what side of the desert you live on.

I wrote the FFG article in 1996. My English is now better. We all evolve and thus prove Darwin's theory. Darwin is another human who was absolutely frightened by his ideas. He had bad dreams very often. He dreamed of being hanged because of his (r)evolutionary theory.

I developed my probability theory to significantly deeper levels. Please read at least two of my newer articles:

Theory of Probability: Best introduction, formulae, software, algorithms

Caveats in Theory of Probability

Yes, the constant p = n/N generates a variety of real-life outcomes. The constant p constantly changes everything. It even generates paradoxes, such as 'Ion Saliu's paradox of N trials'. We were taught that if p = 1/N, the probability would be 1 (or 100%) if we perform N trials. In reality (or paradoxically) the degree of certainty is not 1 but 1 - 1/e (approximately 0.632).
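The 0.632 figure is just the familiar limit of 1 - (1 - 1/N)^N as N grows; a quick check in Python (my own sketch, not from the original post):

    import math

    # Degree of certainty after N trials of an event with p = 1/N:
    for n in (10, 100, 10_000, 1_000_000):
        print(n, 1 - (1 - 1 / n) ** n)
    print("limit:", 1 - 1 / math.e)  # ~0.6321, not 1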


Ion Saliu, Probably At-Large

That is not paradoxical, and nobody thinks that repeating a trial with p = 1/N N times will guarantee a success; everybody knows that sometimes there could be an unlucky streak. Degree of certainty is not a revolutionary concept. Eebster the Great (talk) 03:39, 2 March 2009 (UTC)

Paragraphs deleted from the article

I don't see a reason why they were deleted, but I don't want to put them back either. I just want to save them on this talk page for easy access (I don't want to look through the article's history for them!), in case I ever need them.

  1. Although the gambler's fallacy can apply to any form of gambling, it is easiest to illustrate by considering coin-tossing; its rebuttal can be summarised with the phrase "the coin doesn't have a memory".
  2. Sporting events and races are also not even, in that some entrants have better odds of winning than others. Presumably, the winner of one such event is more likely to win the next event than the loser.
  3. Mathematically, the probability that gains will eventually equal losses is equal to one, and a gambler will return to his starting point; however, the expected number of times he has to play is infinite, and so is the expected amount of capital he will need!

-- Jokes Free4Me 09:28, 20 June 2006 (UTC)

The only one I deleted was number 3, because it's misleading and/or wrong. Maybe it could be reinstated if written more clearly. I mean, you could say that the probability that the gambler will eventually win a billion times the single-game bet is equal to one. That's true too. Same for losing a billion times the bet. Same for any amount you wish to name. Moreover, the expected number of times he has to play to do it is not infinite, it's finite. The number of times he has to play in order for it to be a certainty is infinite. PAR 15:53, 20 June 2006 (UTC)

Traditional logic won't solve this debate

The Gambler's Fallacy is in direct argument with the Law of Averages or the Law of Large Numbers. It seems reasonable to say that the probability of any individual coin toss is 50%, but it is also reasonable to assume that as you get more heads, SOME TIME DOWN THE ROAD, you will HAVE to start seeing TAILS. As time goes on, therefore, it is logical to assume that the probability of getting a head must progressively fall. There is illusion going on in this discussion. LOGIC prescribes a continued 50/50 chance of a head, while EXPERIENCE demonstrates that heads become less & less likely as you continue to flip successive heads. This is an example where the idea of LOGIC doesn't work in real life.

Another example of LOGIC not working is on the topic of astrology and metaphysical influences. LOGIC cannot explain nor conceive the idea of an astrological influence, and it also cannot assimilate the illogical discrepancy between the law of probability and our real-life experience of probability. Therefore, the theory of probability fails to explain reality, similar to how logic fails to explain an astrological influence. Logic works wonders in other areas, but it is the Law of Large Numbers which it must concede to, because this has been proven in real life and more aptly predicts the future. The criterion for a theory's truth has to do with predictive validity, and I believe the law of large numbers predicts the future of a head more reliably than the theory of probability. Any idiot will SEE that the chance of a head decreases as you flip, despite the fact that it is not logical based upon the THEORY of probability.

In theory, a lot of things work, but in REALITY, they end up not working after all. What's most important is that you pay attention to the outside world and real-life results/experiments as opposed to what may seem LOGICAL in one's mind. A lot of things in the world are counterintuitive. Like, for example, why are the pedal threads reversed on the left side of a bicycle? It may not seem logical, but that's the way it is. Logic can work sometimes and can fail other times; it is not a perfect philosophy... and this discussion is a good example of this. Again, what ultimately matters is one's empirical findings, not whether or not those findings appear logical. It's like Fritz Perls once said, "You need to lose your mind and come to your SENSES!" Our theories must be formulated by our EXPERIENCE, not the other way around.

The only philosophical problem regarding the increased chance of a head as you continue to get successive heads has to do with the birth of the coin and ignorance of its history. Theoretically speaking, the second that coin comes off the press, its history begins. This, of course, is only in theory; it has not been verified by experience. Human beings have very little experience of the true life of a coin from its birth, so we can only try to use reason in order to theorize about coin flipping behavior. Since we are all ignorant about the sum total of heads that were flipped since its birth, our predictions for a future flip are like trying to come into a conversation in the middle and start arguing. You can't scan back and view its history, so, for all you know, that coin had just flipped 1000 consecutive tails... then you happen upon it in your dining room, flip it and get 10 consecutive heads and believe that it MUST mean the next is a tail. Obviously, since it had an abnormal amount of tails before you picked it up, the actual probabilities are quite unknown to you. The 50/50 theory is just a generalization, and, in fact, the chance of a head may be a lot higher than you believed once you consider a coin's entire history.

Another philosophical problem relating to the history of a coin has to do with the logic behind how history can affect a coin's future. Is it really logical to believe that throwing a coin up & down in space and having it hit the floor and fall to rest has ANY effect on its future behavior? This is completely illogical. I agree, but I also suggested above that logic is not the be-all and end-all of human wisdom. What ultimately matters is experience and observational consensus. I agree that not many human beings are going to devote their lives to following the behavior of a coin from its birth (so we have very little data), but this is really the only SURE way of validating a theory of coin flipping. Of course, for all humans know, all the pennies in the world could be metaphysically connected in some way, to the point where even following individual coins might not help a lot. In other words, the fact that my sister flipped heads in one room might be illogically affecting the outcome of my brother flipping his own coin in the next room. Again, illogical and difficult to conceive of how there would be a connection, but the idea of astrology is just as illogical and appears to have compelling enough evidence to warrant a statistical investigation. Remember, it didn't make sense to anyone that the world was round. That was viewed as illogical at one time, so logic can be a highly subjective way of trying to predict reality. Logic is also merely a tool to assess reality, not a religion to try to protect. If logic doesn't seem to be predicting reality very well, one needs to begin to reevaluate logic itself, or at the very least one's own logic. Dave925 22:46, 22 July 2006 (UTC)

Sorry, but that's all pretty much BS. You just accept things like the accuracy of astrology as a given, then use that to "prove" that logic is wrong (astrology fails miserably when put to any real test; it's just a psychological issue that people tend to see themselves in any vague statement, like "there has been a crisis in your life"). The only way a coin toss will not have a 50-50 chance is if it isn't a fair coin (weighted unevenly, for example). And I don't think the learned people ever thought the world was flat, only the same common folk who now believe in the gambler's fallacy. StuRat 21:52, 10 August 2006 (UTC)

Real-life Evidence exists for Gambler's Fallacy with a NORMAL fair coin

By the way, I just programmed my computer to bet on its microsecond digit, in order to disprove the Gambler's Fallacy. In doing so, I essentially proved that it does exist. The table below shows the percentage of time it was able to predict the digit based upon having thousands of trials of history to learn from. Every 100 trials, I programmed the computer to stop, evaluate the data and bet on the underdog. Then I just programmed it to pick one number and always bet on that. For all practical purposes, 15,000 trials is enough for the average person to realize it doesn't work (at least in the short run).

It's still possible/reasonable that you could have an advantage in the extreme long run, but, then again, the "history" idea associated with a coin could (for all you know) be infinite, which would be like saying it does not exist. To have an infinite history would make history a nonissue in betting, because it would be impossible to know enough of the history to make any difference whatsoever in your coin flipping predictions. So, either coins have infinite history, no history, or they have such a large history that, for all practical purposes, the average Joe would not benefit in his lifetime betting on the underdog. This is only for FAIR coins, of course: a completely random event which is not unfairly influenced by another phenomenon which can be learned. The short history that one has access to when flipping a fair coin (compared to its infinite history) is essentially like no history at all... which brings every coin flip back down to a 50/50 chance.

Again, a normal fair coin is the operative word. In cases where you are flipping a coin for 15 minutes and keep getting heads... this will never happen for a normal fair coin. Or, the chances of that happening are so astronomical that it's not worth talking about. In a world where you could flip a coin and get heads for 15 minutes straight, yes, I would imagine that you could have an advantage by betting on a tail. Of course, if you already flipped it for 15 minutes and didn't get a tail, you would already assume that there must be something wrong with the coin or you are high on drugs. If, in a strange new world, a fair normal coin could be flipped for 15 minutes without getting a tail, the history idea could become a pertinent advantage... but, again, with normal fair coins, you will never get far enough off the bell curve to use your distribution history as a predictive tool. The results of the experiment are below. Dave925 08:44, 23 July 2006 (UTC)

15,000 trials = 11.69% bet on 4
15,000 trials = 11.39% bet on 1
15,000 trials = 11.39% bet on 9
15,000 trials = 11.30% bet on 2
15,000 trials = 11.16% bet on 4
15,000 trials = 11.12% bet on 4
15,000 trials = 11.11% bet on 4
15,000 trials = 10.90% bet on 4
15,000 trials = 10.79% bet on 0
15,000 trials = 10.62% bet on underdog
15,000 trials = 10.60% bet on 5
15,000 trials = 10.59% bet on 5
15,000 trials = 10.49% bet on underdog
15,000 trials = 10.48% bet on 3
15,000 trials = 10.43% bet on underdog
15,000 trials = 10.41% bet on underdog
15,000 trials = 10.39% bet on underdog
15,000 trials = 10.35% bet on 5
15,000 trials = 10.15% bet on underdog
15,000 trials = 10.01% bet on 1
15,000 trials = 10.00% bet on 5

The problem here is that you are using a computer to generate the numbers, which are pseudo-random numbers, not truly random. Some digits may very well come up more often in computer generated numbers. However, if you used something truly random, like radioactive decay events, this defect would be eliminated. StuRat 21:39, 10 August 2006 (UTC)
While there are issues with some naive RNGs, it would not be likely that using one would cause the results seen. I prefer to use algorithmic ones rather than hardware ones like that because I have access to their properties and weaknesses. But the microsecond hardware one here appears to be fine for this application. Baccyak4H 03:28, 22 August 2006 (UTC)
I don't find it unlikely that a pseudorandom variable crept into his program. Without seeing the coding (and knowing the workings of the computer's RNG), it is hard to determine if that's the problem. But there are definitely many cases when pseudorandom distributions are used rather than random ones, and these are designed specifically to make the results seen in this experiment show up. Eebster the Great (talk) 02:30, 9 September 2009 (UTC)

The Proof is in the Slope

Theoretically speaking, however, the chance of getting a tail should begin to increase after a certain point (say, after 10 heads). Yet, the chance of a tail may only be a micropercent, which wouldn't help much with prediction. If you talk in terms of 'strings,' a string of heads with a tail at the end will always be THEORETICALLY more likely than a string of the same number of heads with another head added. Again, the percentage between these two distributions may be such an astronomically small amount that it doesn't make sense to make a distinction. It's like splitting hairs here talking about the theoretical influence of ONE coin flip on a string of flips which could reach from here to the moon. The fact is, there are two sides to the coin, so no matter how infinitesimal the percentage is, there has got to be a difference in the distribution. What I'm saying is that under the normal distribution of a coin flip, the string 50 heads + 1 tail will happen more than 51 heads. If this weren't the case, the distribution CURVE wouldn't be a CURVE, right? Think of the distribution of coin flip strings and what the curve looks like. It should look like a normal bell curve and slope downward. It is the 51 heads which makes it slope downward slightly. And with each head you add to the string, the curve slopes down further and further into virtual impossibility. So, one could redefine the Gambler's Fallacy as attributing a SIGNIFICANTLY greater chance to the outcome of a random event due to the history of that event. It's not significant. In fact, it may even be grossly insignificant to the point where it wouldn't make any practical difference to make a distinction. But, the fact is, the curve SLOPES. And it slopes DOWNWARD. If successive head flips don't cause it to slope, what does? Dave925 19:11, 10 August 2006 (UTC)

I'm not following you: "... the string 50 heads + 1 tail will happen more than 51 heads..." do you mean "50 heads followed by one tail, compared to 50 heads followed by one head"? If so, I fear you're wrong, because those two will happen exactly as often as each other with a fair coin; no "theoretically" or "infinitesimally" about it. Maybe I'm misreading you, though. - DavidWBrooks 20:47, 10 August 2006 (UTC)
I agree. StuRat 21:40, 10 August 2006 (UTC)
Looking at your comments again, I think I understand where you're wrong; I apologize if I've misread it.
If I am about to flip a coin 51 times in a row, it is slightly more likely that I'll get one tail among the flips than no tails - that's the probability curve. But if I have already flipped the coin 50 times and am preparing to flip it a 51st time, then a tail is no more likely than a head. (In this case the probability "curve" is, as you correctly said, flat.)
The gambler's fallacy is to think that the probability curve which existed before any coins were flipped still holds sway after some of the flips have happened. In fact, the curve is "re-calculated" (so to speak) after every single flip, and it only applies to events that have not yet happened.
Not sure if that helps ... - DavidWBrooks 12:02, 15 August 2006 (UTC)
You're right. History can be used to even out odds, but not gain odds. For example, if I asked you to wager on the odds of flipping 4 heads in a row versus flipping 3 heads and 1 tail, you would always bet on the latter. However, if you already knew that I flipped 3 heads, the chance that the next flip is a head is the same, because there are only two possible outcomes. The reason why you bet on the latter in the previous wager is because you have 4 chances of success versus 1: THHH, HTHH, HHTH, HHHT. Once you already know the outcomes of the first 3 events, the odds of one single flip are always the same. You can always gain chances when you group two flips together, because your chance of getting a head always increases with the number of times you're able to try. However, if you flip the coin 10 times and get 10 tails, you have already lost all your chances... and the next flip is just a 50/50 shot. You can use the knowledge of previous events to even out odds, but not to gain odds. So, yes, history can be used for something. It can be used to show how unlucky you are when you see your odds of a string of flips reduce back down to a single 50/50 shot. This is ever apparent on the show "Deal or No Deal." The banker recalculates the odds after each case is opened, for good reason. Contestants have a 1 in 26 chance of winning a million dollars, but once they're down to 2 cases and one has the million in it, the banker knows it's a 50/50 shot now... not 1 in 26 like when the game started. Each case which is opened brings them closer to predicting the amount in their case. It's similar to playing the game "Clue." You don't solve the crime on your 1st move; you eliminate the other possibilities first, and once done, you have a lot better chance of making a successful accusation. Dave925 2/25/2008
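The 4-flip example above can be checked by enumerating all 16 sequences; a minimal Python sketch of my own, not part of the original comment:

    from itertools import product

    seqs = list(product("HT", repeat=4))
    print(sum(s.count("H") == 3 for s in seqs))               # 4 ways: THHH HTHH HHTH HHHT
    print(sum(s == ("H",) * 4 for s in seqs))                 # 1 way: HHHH
    given = [s for s in seqs if s[:3] == ("H",) * 3]          # first three flips were heads
    print(sum(s[3] == "H" for s in given), "of", len(given))  # 1 of 2: a plain 50/50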

Gender ratios

"A couple has nine daughters. What is the probability that the next child is another daughter? (Answer: 0.5, assuming that children's genders are independent.)" This is a poor example because various factors can make one couple more likely than another to produce girls (see e.g. Kanazawa: Big and tall parents have more sons:Further generalizations of the Trivers–Willard hypothesis, Journal of Theoretical Biology 235 (2005) 583–590) and so a couple that's already produced a long run of daughters is more likely to produce another one (although the effect is probably not as strong as the gambler's fallacy might lead people to believe." Modifying accordingly. --Calair 01:15, 31 December 2006 (UTC)


The statement "Many people lose money while gambling due to their erroneous belief in this fallacy" seems false. A win or loose in gambling is random and is not effected by what a player believes? Marvinciccone 22:13, 9 January 2007 (UTC)

People who believe in this fallacy are more likely to gamble (or keep gambling), and the more you gamble the more likely you are to lose money. But I'll see if I can make the wording a little less ambiguous. --Calair 01:01, 10 January 2007 (UTC)

The Mathematicians' Fallacy

As a practical gambler, I came to the conclusion that any talk about any length of a trial procedure which is outside of the humanly performable is but metaphysics. I spent many years on finding the Rosetta Stone of Gambling (unsuccessfully), but found the following relation in Binary Gambling which I was never able to exploit: "THEORETICALLY" (which notion, in these inverted commas, is outside of mathematical rigor), if I have 'N' trials, there will be 'N/2' outcomes of EVENT1 and of EVENT2. Those outcomes will form GROUPS. A GROUP consists only of similar EVENTs. It can have one member (E1, for argument's sake), or two members (E1,E1), or three members (E1,E1,E1), or ... 'n' members (E1,E1, ... E1 - a total of 'n' members).

Through practical observation only, I found that their (i.e., the GROUPS') numerosity corresponds to the following relations:

If "N" trials then "N/2" EVENT1 - (the following is true to EVENT2, with its own deviations)

The generated GROUPS in an EVENT FIELD: GROUPNUMBERS = (EVENTi/2). The number of single and multiple GROUPs will be: GROUPNUMBERS = (GROUPS/2). This means that in "N" trials we shall have:

SINGLEGROUPNUMBERS = (N/2/2/2)
MULTIPLEGROUPNUMBERS = (N/2/2/2)

from which follows: SINGLEGROUPNUMBERS = MULTIPLEGROUPNUMBERS

Further, it follows that we shall have:

TWOLONGGROUPS = (MULTIPLEGROUPNUMBERS / 2)
THREELONGGROUPS = (TWOLONGGROUPS / 2)
...
nLONGGROUPS = ([n-1]LONGGROUPS / 2)

In a numeric representation, from

                    1000 trials will produce
                    500  EVENT1 which forms
                    250  GROUPS in the formation of
                    125  ONELONGGROUP  and
                    125  MULTIPLELONGGROUP;
                    in the 125 MULTIPLELONGGROUP there are
                    {say} 62 TWOLONGGROUP
                    {say} 31 THREELONGGROUP
                    {say} 15 4LONGGROUP
                    {say} 8  5LONGGROUP
                    {say} 4  6LONGGROUP
                    {say} 2  7LONGGROUP
                    {say} 1  8LONGGROUP
                     .
                    {say} 1  otherLONGGROUP  also happen ... (as will);

Naturally, nothing like that exists in this form, but all of my experiences tend to bring this same "theoretical" result.
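As it happens, these run counts are easy to reproduce. Here is a minimal Python sketch of my own (not Tamaslevy's code) that tallies the GROUPs (runs of one side) in 1000 fair flips:

    import random
    from collections import Counter

    random.seed(0)
    flips = [random.choice("HT") for _ in range(1000)]

    groups = Counter()          # run length -> number of EVENT1 (heads) runs
    side, length = flips[0], 1
    for f in flips[1:]:
        if f == side:
            length += 1
        else:
            if side == "H":
                groups[length] += 1
            side, length = f, 1
    if side == "H":
        groups[length] += 1
    print(sorted(groups.items()))
    # Roughly 125 runs of length 1, ~62 of length 2, ~31 of length 3, ...
    # halving each time, in line with the table above.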

My arguments with the Coin Toss example - which I call BINARY GAME - are as follows:

0./ All arguments shall relate to fixed-length structured play of hazard; e.g. RATIONAL PLAY
1./ There is no such EXPERIENCE as an infinite number of trials;
2./ There are only two results in a series of BINARY TRIALs: i.) a single outcome of an EVENT, or ii.) a multiple outcome of an EVENT;
3./ However the results are unpredictable, there is no probability whatsoever that any character of a GROUP (single, multiple) will continue after a certain number of occurrences - like HTHTHTHTHTHT... ad infinitum;
4./ While it cannot be predicted with any certainty, the 'longer' events occur with statistical regularity in a series of structured plays, and in certain types of hazard plays they could even be exploited;
5./ Because the length of the occurrences is dependent on the number of trials in consideration, the maximum length experienced in a BINARY GAME (like Roulette, where 43 blacks came out in a row, to my knowledge) is but an example of the importance of human limitations in practical gambling. (We, as homo ludens, might not have enough collective trials to have another result.)

Tamaslevy 03:24, 12 February 2007 (UTC)

Clearing up intro

"A truly random event is by definition one whose outcome cannot be predicted - on the basis of previous outcomes or anything else."

I removed this because it's inaccurate. For one thing, an 'event' is an outcome - 'throwing a die' is a random process, 'rolling a 6' is an event.

For another, under this definition there would be no such thing as a 'truly random' event or process. To see why, note that "rolling a 6" is one possible event when rolling a fair die... but so is "rolling a 5 or 6". Quite clearly the former predicts the latter, and the latter greatly improves the odds of the former.

Alternately, take two fair dice (one red, one black), and define three random variables:

R is the result when the red die is thrown. B is the result when the black die is thrown. RB is the sum of R and B.

Each of these is random (although RB has a different probability distribution from the other two). But if you know R, you have a much better idea what RB will be. This is a situation where you can use one random event to predict another. The important thing in the gambler's fallacy is not just that the events are random, but that they're independent. (Indeed, part of the reason humans are susceptible to the gambler's fallacy is that they're used to dealing with non-independent random events.) --Calair 02:13, 4 March 2007 (UTC)

Lottery

Are you more likely to win the lottery jackpot by choosing the same numbers every time or by choosing different numbers every time? (Answer: Either strategy is equally likely to win.)

Of course, choosing random numbers is the better option. If you always play the same numbers but miss a draw and your numbers come up... R'win 12:12, 22 September 2007 (UTC)

No set of numbers is any more or any less likely to match those drawn than any other set. HOWEVER you can to a very small degree minimise your risk of having to share a jackpot by picking a unique sequence of numbers. Human nature means that the numbers 1-31 are commonly picked (birthdays etc) and strings of consecutive numbers often avoided as people irrationally believe they're unlikely to win. For example 32-33-34-45-46-47 has no special "powers" but if it came up you MAY be less likely to share your jackpot than if you'd picked 1-7-15-22-25-30. It's still a mug's game, however! 77.96.239.229 (talk) 15:46, 6 February 2008 (UTC)

Your best bet is to take a quick pick type number, and here's why: if there is any flaw in the number generation system used to generate your quick pick, it will exist in the software of the lottery vendor. That same vendor is more than likely the one whose software picks the winner. More likely than not, the same programmers worked on both parts of the code. Therefore, whatever flaws they've left in will more likely than not exist in both code bases. If they have num-gen flaws, the randomizer will be weak and more likely to drop numbers in certain ranges. If and only if there are programming flaws of the same type on both sides, "quick pick" is better. 216.153.214.89 (talk) 22:30, 24 November 2008 (UTC)

I would dispute this. What you said is probably not the case, because the number-pickers for lotteries don't use the same software as the number-generators that vendors use - in fact, the lotteries use cool machines with big balls that bounce up and down and fall into tubes. Trust me, there is no relation. However, if there is a num-gen flaw, other people who use the quick pick are slightly more likely to share your number, so if you win you are slightly more likely to share the prize by using quick pick than by picking your own numbers, if there is indeed some flaw. In addition, you should avoid other commonly picked combinations like 2-3-7-11-22-33, again because of the risk of sharing the prize. Of course, all these effects are extremely small, and the odds in all lotteries are heavily stacked against the player, so the best option by far is not to play. Eebster the Great (talk) 03:54, 2 March 2009 (UTC)

Formal fallacy

The article introduction labels this as a formal fallacy, while the box in the bottom of the article places this as an informal fallacy. Either the current box layout is misleading or there is a contradiction here, I think.

I'm also adding a cleanup (inappropriate tone) tag to the An Example section, which seems completely non-encyclopedic. - Roma_emu (talk) 00:53, 18 December 2007 (UTC)

I'm going to edit the top part to say Informal instead of Formal. Regarding my second note, it seems that the inappropriate part was in fact a copyright violation, which was removed by McGeddon. Unfortunately an anonymous user re-added it, and when I came to this page today that's where it stood, so I edited it to give it a more encyclopedic tone. I thought nothing had been done since my post here; I only looked at the history and noticed the section was a copyvio after the edit. I then reverted to McGeddon's version. -Roma_emu (talk) 01:43, 22 December 2007 (UTC)

Related: sample-size fallacy, human-generated sequences?

I've read that the gambler's fallacy is a special case of the sample-size fallacy (aka small-sample fallacy), which seems to jibe with assertions here on the talk page that yes, you might reasonably expect the coin flips to regress to the mean, but you'd have to consider the infinite past of that coin (or your coin, or you, or whatever) and the infinite future to expect the regression to happen with certainty. Or something like that. Intuitively, that sounds right, but given the counter-intuitive nature of fallacies, I'm not expert enough to be bold and edit. It's not like I learned math and logic at school or anything!

Also, many have written about how bad humans are at generating strings of "random" numbers; we avoid even numbers, we avoid sequences, etc. (Naturally, at the moment, I can't find a single web page on the topic to cite.) Is this a corollary of the gambler's fallacy and/or sample-size fallacy, or does it have its own name? Either way, something here should link to it, as they seem connected. --JayLevitt (talk) 15:25, 30 December 2007 (UTC)

Would this also relate to genetics?

Let's say for example, that the genotype of two heterozygous parents were Tt. They produce three offspring, and they are all tall (dominant T). Would there be a 100% chance of the 4th being short (recessive t)? Or would it still be the 3:1 dominant : recessive ratio in every single offspring, including the 4th? So the 4th will still have a 1/4 chance of adopting the recessive trait? In genetics, does this Fallacy still hold false? 64.131.253.168 (talk) 03:20, 18 April 2008 (UTC) Havoc

In all of probability, the fallacy is a fallacy. That is, it is a mathematical fallacy, not a physical one. The calculation is wrong, not the practical implications. However, in genetics, you will find that if you have three tall (TT or Tt) children, the fourth may actually have MORE than a 75% chance of being TT or Tt, because certain alleles may be more likely to be passed on to children, or children with such alleles may be more likely to survive in utero. These effects are not always small. However, this only applies because the event is not truly random. Eebster the Great (talk) 02:37, 9 September 2009 (UTC)
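For what it's worth, the textbook calculation under the independence assumption goes like this: each Tt parent passes t with probability 1/2, so P(tt) = 1/2 * 1/2 = 1/4 for the fourth child exactly as for the first, regardless of its siblings' phenotypes. The only "surprise" lives in the prior: before any children are observed, P(at least one of four is tt) = 1 - (3/4)^4, about 68%, but that prior is no longer relevant once three tall children are already on the table.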

How does this relate to the Infinite Monkey Theorem?

From my understanding, if the "Infinite monkey theorem" is true, then this fallacy must be false. It appears the two Wikipedia articles are fundamentally at odds with each other. Anyone care to explain this discrepancy here, or include a brief mention of the phenomenon in either article? —Preceding unsigned comment added by 216.249.16.5 (talk) 22:29, 10 July 2008 (UTC)

Neither article mentions the other. If you can quote an academic who's pointed out some contradiction between the two, then feel free to mention and reference it, but if this is your own personal theory or (mis)understanding, I'm afraid there's no place for it here. --McGeddon (talk) 22:33, 10 July 2008 (UTC)


The Martingale article says:

Originally, martingale referred to a class of betting strategies that was popular in 18th century France.[1] The simplest of these strategies was designed for a game in which the gambler wins his stake if a coin comes up heads and loses it if the coin comes up tails. The strategy had the gambler double his bet after every loss so that the first win would recover all previous losses plus win a profit equal to the original stake. As the gambler's wealth and available time jointly approach infinity, his probability of eventually flipping heads approaches 1, which makes the martingale betting strategy seem like a sure thing. However, the exponential growth of the bets eventually bankrupts its users.
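For concreteness (a worked example with a $1 initial stake, not part of the quoted article): after k straight losses the gambler has lost 1 + 2 + 4 + ... + 2^(k-1) = 2^k - 1 dollars, and the next bet of 2^k, if it wins, recovers all of that plus exactly $1. After only 10 straight losses the required bet is already $1,024, with $1,023 already gone, all to chase a $1 profit.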

This contradicts the premise of the "Gambler's Fallacy". As evidenced by the above quote, the limiting factor is not that betting against the continuation of an event series will fail. Rather, the limiting factor is that one cannot be assured of being able to stay in the game long enough, because one might run out of $ before the series fails and the 50/50 odds revert to the mean. The simple fact is that all odds anomalies eventually fail and revert to the mean in a 50/50 betting environment. The "fallacy" is not that the series will eventually fail (it will). Rather, the fallacy is thinking that you can exploit the anomaly well enough to make $ betting. Other than seeing heads come up 20 times in a row and beginning to bet on tails at that point, it's unlikely that one is witnessing a sufficiently rare anomaly that attempting to exploit it has value. 216.153.214.89 (talk) 22:21, 24 November 2008 (UTC)

It's clear to me now that 20-in-a-row is not a single event, but a series of individual events that only abut by chance, with nothing connecting them to the next flip. The "next" flip is connected to the "preceding" flip only, and the odds measure that connection point only. 216.153.214.89 (talk) 07:18, 6 March 2009 (UTC)
That thing you described, my friend, is exactly what we call the gambler's fallacy. Kurulananfok (talk) 19:51, 20 January 2009 (UTC)

Kurulananfok: Yes or No, do you admit that when flipping a coin, you will only very rarely see 100 heads in a row? And do you also admit that when flipping a coin, 1,000 heads in a row happens less often than 100 in a row? If you admit both of these - and they are undeniably true - then you are admitting that the longer the series, the greater the odds it will break via a reversion. This entire article is essentially a straw-man argument. Nobody denies that some gamblers misunderstand odds so badly that they hype themselves up on foolish thinking. But even so, this article proves nothing other than that some people who know advanced math like to gloat over their ability to confuse simple logic with complex formulas. The very fact that a coin has two sides and the flip is random dictates that every streak will eventually end. It is not a fallacy to know this, nor is it fallacious to think that a rare, long streak will break sooner than a common short streak. The longer the streak, the more rare it is and the sooner it will break. While short streaks have a modest chance to continue for a while, long streaks must fail due to mean reversion decay. Anyone who denies this is either lying or confused. 216.153.214.89 (talk) 06:52, 3 February 2009 (UTC)

Answer this simple question. If I'm flipping a fair coin and I happen to flip 100 heads in a row, what is the probability that the next flip is also a head? Please don't answer with a paragraph, just a number between 0 and 1. —Preceding unsigned comment added by 67.193.128.233 (talk) 00:58, 4 February 2009 (UTC)

Your question is misframed - ask yourself this question instead: Which happens more often, 3 in a row or 500 in a row? The answer is obviously 3. If you can't understand this, you are just not thinking right. Please go read this article: Martingale (probability theory) and pay particular attention to this sentence: "As the gambler's wealth and available time jointly approach infinity, his probability of eventually flipping heads approaches 1...". The sentence which I point out refers to the mounting reversion odds as strings of sameness lengthen. The place to deny that sentence is on the other talk page, not here. This section of this talk page was started by me to point out one thing - that the premise of this article and the math from that article are in conflict. The fact that the "probability of eventually flipping heads approaches 1" is what I am referring to, and the other article confirms I am correct about this. Now to answer your question: after 100 flips of the same in a row, it is more likely that you will not see 101 in a row than that you will. Why? Because 101 in a row happens less often than 100. If you don't understand this, I'm not sure what to tell you. I will say though that "approaches 1" is not the same as 1. Rather, it's more like the Dichotomy paradox in that the more you get the same in a row, the closer you get to a certain reversion, but it's never 100% certain. 216.153.214.89 (talk) 07:04, 8 February 2009 (UTC)

You are mistaken. Yes, 3 in a row happens more often than 500 in a row. This is because the probability of getting 3 heads in a row is (1/2)^3 = 1/8, while the probability of getting 500 heads in a row is (1/2)^500. However, your whole "mounting reversion odds" is a load of garbage. After 100 flips of the same in a row, the odds of seeing a head and of seeing a tail are exactly the same. Just stop and think about it. Is there some magical force that's going to make your coin land one way or the other? No. The probability of heads and tails is equal: 1/2 each. History does not affect the outcome in this case. You are correct when you say that the probability of flipping 2 heads in a row is smaller than the probability of flipping one head in a row. However, your comparison is silly. You should be comparing the probability of flipping 2 heads in a row vs the probability of flipping a head followed by a tails (hint: they're the same). PaievDiscuss! 08:43, 8 February 2009 (UTC)

Paiev: It's amazing to me that posters on this page are simply unable to grasp the truth from the Martingale (probability theory) article which says: "As the gambler's wealth and available time jointly approach infinity, his probability of eventually flipping heads approaches 1". Either that statement is false or this article needs to take that into account. Now to answer your question of "Is there some magical force?", the answer is yes and it's called reversion to mean. Please read this prior to jumping on my sh1t again (ie; "load of garbage"). Once again, as I said above, the only "fallacy" in the gambler's fallacy is that various mathematicians like to speculate and gloat about how stupid gamblers might be. The simple fact is that if one were to wager on a reversion to mean circumstance, if you waited until an outlier event (such as 20 heads in a row), the likelihood of witnessing a reversion is higher than on an initial flip. Your problem is that you are focusing on the absolute odds of all flips which might ever occur, instead of considering that flip-series anomalies (long continuous runs) do occasionally occur and when they do, they are detectable. That's the only point I am making. I am not suggesting one try to win an even money bet on such a thing. As I have already stated, the risk/reward involved makes it a poor betting opportunity. But even so, it is still true that reversion to the mean applies here. And please, stay off the personal attacks - if you don't like what I say and disagree with my sources (I've cited my views to pages that back me up - have you?) then you are free to disagree... but I feel that calling my posting "garbage" is bad wiki-manners. 216.153.214.89 (talk) 20:50, 22 February 2009 (UTC)

I was perhaps too inflammatory, which I apologize for; it was not my intent. To respond to different pieces of this:
It's amazing to me that posters on this page are simply unable to grasp the truth from the Martingale (probability theory) article which says: "As the gambler's wealth and available time jointly approach infinity, his probability of eventually flipping heads approaches 1". Either that statement is false or this article needs to take that into account.
That statement is correct, but I believe you are misinterpreting it. As a gambler's money and time increase, so does the number of losses he can afford in a row without going bankrupt, yes? Let's call this number x. The probability of getting x losses in a row is (1/2)^x. From this, it is clear that the more money he has, the less likely he is to lose it all. Now, suppose he loses ten times in a row. The probability of him going bankrupt has changed, because he has less money. It's now (1/2)^(x-10), since he can afford ten fewer losses.
Now to answer your question of "Is there some magical force?", the answer is yes and it's called reversion to mean. Please read this prior to jumping on my sh1t again (ie; "load of garbage").
Please, stop and think about what you've just said. You just said that there is a physical force that will actually push the coin to one direction more than the other. This is a bit silly. If you flip a fair coin, it will come up heads with a probability of 1/2. Always. Past results for the coin don't affect future results. The coin doesn't know that it just came up heads or tails. There is nothing to guide it to one outcome or the other.
And yes, I read your link. It isn't relevant. It states, in essence, that the more extreme an event is, the more likely it is to be followed by a less extreme event. This is because the more rare something is, the lower the probability that it happens. The lower this probability, the greater the probability that it doesn't happen. All it boils down to is that the less likely something is to happen, the more likely it is to not happen. You are misinterpreting it. It says that if a certain number (let's call it n) of heads occurs in a row, then if you stop and begin counting again, the chance that you will get n heads in a row again decreases as n increases. I don't see how that is a source that "backs you up".

The simple fact is that if one were to wager on a reversion to mean circumstance, if you waited until an outlier event (such as 20 heads in a row), the likelihood of witnessing a reversion is higher than on an initial flip. Your problem is that you are focusing on the absolute odds of all flips which might ever occur, instead of considering that flip-series anomalies (long continuous runs) do occasionally occur and when they do, they are detectable. That's the only point I am making.

That is not a "simple fact" and I would like to see you back that up, because it is illogical and has no mathematical basis. PaievDiscuss! 00:53, 25 February 2009 (UTC)

(self-delete my long-winded blather) 216.153.214.89 (talk) 07:18, 6 March 2009 (UTC)

(start: interloper butts-in)
Regardless of whether it's right or not, this is original research and doesn't belong in this article. There are lots and lots of Web pages where you can have this argument to your heart's content - just not here. - DavidWBrooks 13:24, 25 February 2009 (UTC)

David, your reversions, combined with your above comment, are in essence committing a fraud on this article. The edits of mine which you keep reverting have nothing whatsoever to do with the above discussion. Please confine your criticisms of my edits of the article to the merits of those edits themselves. I am going to bifurcate my changes so as to isolate your complaints and re-insert my edits after you and I discuss them in detail. If you choose not to discuss them, then you waive your prerogative to revert me. Please see the article talk page tomorrow for that thread, which I will start then. Thanks 216.153.214.89 (talk) 17:36, 25 February 2009 (UTC)

"Fraud" ... sigh. Anyway, you're right that this comment is not related to your revisions, but my comment is what matters: Original research doesn't belong in a wikipedia article, a principle that is a bedrock of wikipedia, for better or worse. That is the relevant discussion in all detail needed (more words do not equal more insight). - DavidWBrooks 11:49, 26 February 2009 (UTC)

Fraud indeed... if you conflate your comments here with an implied justification for reverting my article edits, then yes FRAUD. Your duty before reverting is to discuss the rationale for your revert itself, that is, what is at fault with the particularized article edit you reverted. It's not enough to just complain about a dialog that's on the talk page... a dialog which puts me in the right as I am meeting my duty to discuss and reach consensus. Perhaps you should try it sometime, rather than just condemning me with broad-based slams. PS: Your comment does not "matter" and it is not "what matters" because it doesn't meet the test of mattering for a talk page in that a) it doesn't specifically address an article edit and b) it doesn't make any attempt to reach any consensus. If you are not trying to do either of those things, you are butting in where you don't belong - so please butt out! But thanks anyway for (poorly) defining importance for us! 216.153.214.89 (talk) 14:42, 26 February 2009 (UTC)

(end: interloper butts-in)
There is a simple way to sort this out. Write a short program that flips a coin one million times, watches for strings of heads, say 10 in a row, and records the next flip. Your logic would say that we should record more tails than heads. The mathematics says each draw is independent and you should get 50/50 heads and tails. Obviously, the mathematical argument is not swaying you, so maybe this will. In fact, I'll do this and report the results shortly. —Preceding unsigned comment added by 67.193.128.233 (talk) 15:11, 27 February 2009 (UTC)
Ok here are the results. It only took a fraction of a second to run one million flips so I ran 100 million just to make sure the law of large numbers is really kicking in! What I got was
Overall Statistics
Percentage of heads = 49.990862
Percentage of tails = 50.009138


Statistics of flips after a string of 10 heads (or more)
Number of occurrences of strings of 10 heads = 97653
Proportion of heads = 0.499145
Proportion of tails = 0.500855
Please tell me this settles the debate.
On another note, I think it's odd that you're making this argument in a section entitled Martingales. If you knew anything about martingales, you would understand why your argument is wrong. Your betting condition is to wait until a string of N heads and then bet tails. If you let Ti be the indicator random variable of whether or not you bet at time i (i.e. whether or not there was a previous string of heads) then it's not hard to see that Ti is measurable with respect to the sigma-field of information available (i.e. the previous tosses). Thus Ti is a stopping condition and your betting times are stopping times (see the Martingale section for definitions). And the Martingale optional sampling theorem says that the expected value of the Martingale at any stopping time is the same as the expected value of the first trial (i.e. the coin toss is 50/50 at any stopping time regardless of the past). —Preceding unsigned comment added by 67.193.128.233 (talk) 15:45, 27 February 2009 (UTC)
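(For reference, the optional sampling/stopping theorem being invoked can be stated as: if (X_n) is a martingale and T is a bounded stopping time with respect to the same sequence of information, then E[X_T] = E[X_0]. In betting terms: any rule for deciding when to bet that looks only at past flips leaves the expected value of the next bet unchanged.)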

You didn't do the test I asked you to do. You only told me that the number of occurrences of 10 in a row was 97653. Redo from the start with 1 million flips and give me the following 3 numbers: the number of 10 heads in a row, the number of 20 heads in a row and the number of 35 heads in a row. If you give me that information, I assure you I can prove my point about reversion to the mean and the lesser frequency of longer runs. Also, would you mind sharing your program code so I can do my own runs too? BTW, I am thinking your code is in error. Your numbers indicate that almost 10% of all strings last till 10 in a row. I just did 100 manual flips in a row and the longest run I got was 4. 216.153.214.89 (talk) 22:24, 27 February 2009 (UTC)

No one doubts that there will be fewer runs of 20 heads in a row versus 10 and even fewer for 35 and I can even tell you approximately how many, but that won't prove your point. If you don't understand why the results above prove you wrong then there is no point arguing with you. However, if you want my code to try for yourself, then give me your email or something or tell me how to post a file here.
And, on another note, I assure you my program is fine. Your calculations are wrong. I ran 100 million flips, that's 100,000,000. Thus there are 10,000,000 strings of 10 flips. I recorded approximately 10,000 which is 0.1%, NOT 10%. This agrees with the mathematics as the probability of a string of 10 heads is 1/2^10 which is approximately 1/1000=0.001 (=0.1%). Thus the expected number of strings of 10 heads out of 10,000,000 strings is 0.001*10,000,000=10,000 which agrees with my program. —Preceding unsigned comment added by 67.193.128.233 (talk) 18:20, 1 March 2009 (UTC)
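Since the code never made it onto the page, here is a minimal sketch of the kind of program being described (Python; the thresholds come from the discussion above, but this is an illustration, not the original poster's program):

import random

N = 1_000_000   # flips (the run above used 100 million)
STREAK = 10     # wait for runs of at least this many heads

run = 0                          # current run of consecutive heads
after_heads = after_tails = 0    # flips recorded right after a 10+ run

for _ in range(N):
    heads = random.random() < 0.5
    if run >= STREAK:            # this flip follows 10 or more heads
        if heads:
            after_heads += 1
        else:
            after_tails += 1
    run = run + 1 if heads else 0

total = after_heads + after_tails
if total:
    print("flips following a run of 10+ heads:", total)
    print("fraction heads:", after_heads / total)
    print("fraction tails:", after_tails / total)

Both fractions should come out near 0.5, matching the figures reported above.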

Proof of reversion to mean

I found a heads/tails generator and was able to generate 10,000 flips. Of those flips, 5007 were heads and 4993 were tails. I imported the results into a database and summarized the results by run length. Here are the numbers:

run length--occurrences--total flips
1--1252--1252
2--642--1284
3--324--972
4--153--612
5--70--350
6--37--222
7--26--182
8--9--72
9--3--27
10--2--20
(total tails: 4993)

This clearly illustrates what I've been saying all along. Due to the principle of Reversion to the Mean, and as proven by 4993 tails flips, longer runs are much less frequent than shorter runs. For every 26 "7 tails in a row", you only get 9 "8 tails in a row". The simple fact is, if you were watching a roulette wheel (disregarding 0 and 00 for a moment), it would take you all day or several days, perhaps a week, to see 5,000 spins. 24 hours a day, 1 spin every 2 minutes, 30 spins an hour, 30x24 = 720 spins a day x 7 = 5,040 spins a week (pretty good guess eh?). Well, I did 10,000 test flips and half of those were tails, so we can say the heads results were about the same. That means if a roulette wheel were only black/red (no 0 or 00), then you would have to watch 1 table 24 hours a day for a week to see 2 series of 10 in a row. This is what I've been telling you all along. Due to the anomalous nature of a long run, it's very rare, and when you see one, you can indeed bet against it. It's just that you never see them, and the bet is only an even money bet, so it's not worth the time or risk. But it's true that this scenario proves reversion to the mean does apply here. I think other editors of this article have a duty to help me get this information clear enough and corroborated enough to incorporate it into the article, as it's quite clearly true. Also, due to the fact that you'd be seeing red runs and black runs both during the week, and because the lesser runs eat more time, you'd likely not even see 10-in-a-row once a week, as the numbers here scale sideways with time when you add in the heads, and frankly it's too much bother to add in the heads now. The bottom line is that 7-in-a-row might happen 2-3 times a day at most and 8 maybe once. If you see 7 in a row, it's unlikely that you'll continue to 8. 216.153.214.89 (talk) 01:13, 28 February 2009 (UTC)
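A run-length tally like the one above takes only a few lines to reproduce (a sketch in Python, assuming a fair coin and counting tails runs the way the table does):

import random
from collections import Counter

N = 10_000
runs = Counter()    # run length -> number of maximal tails runs of that length
length = 0

for _ in range(N):
    if random.random() < 0.5:   # tails
        length += 1
    elif length:                # heads ends a tails run
        runs[length] += 1
        length = 0
if length:                      # close a run left open at the end
    runs[length] += 1

for k in sorted(runs):
    print(k, runs[k], k * runs[k])   # run, occurrences, total flips in such runs

The expected count roughly halves with each extra flip of run length, which is the pattern visible in the table.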

That doesn't prove your point, it proves the opposite. The number of runs of x heads is roughly double the number of x + 1 heads, which is what everyone else agrees upon but you. Now obviously it's not exact, you didn't run enough tests. But if you did more, you'd see. You keep linking to Wolfram, but you clearly don't actually understand the article you're citing, because it doesn't back you up, as I explained above. You are committing the Gambler's fallacy and you cannot seem to understand it. PaievDiscuss! 08:33, 28 February 2009 (UTC)
Paiev, your x + 1 counting is missing my point. Think about this: Using the statistics which my flipper generated, we see that only approx 1/3 of 7-runs continued on to be 8-runs. It's a 2/3 fail chance and a 1/3 continue chance. Why you can't see that is amazing to me. And it gets more profound if you start at 5-run and keep going to 10-run. Of seventy (70) 5-runs, only two (2) of them continued on to become a 10-run - all the rest of those 5-runs (some of which continue to 6-runs, 7-runs, etc) fail prior to 10. By definition, a 10-run must have previously been a 9-run before continuing on to 10. You can walk the two (2) successful 10-runs all the way back to 5's, but you can't advance the other sixty-eight (68) failed 5-runs to 10. I'm sure you think I sound like Ma and Pa Kettle doing special math but I'm not. I am only concerned with isolating a rare event. Basically it comes down to this: when faced with a known rare event, what should one rely on - abstract absolutist math or the facts on the ground as witnessed? In military parlance, acting on a risky but advantageous witnessed opportunity is called taking the initiative. It's a method of exploiting anomalies and it certainly applies here. Paiev, if you don't think that there's any advantage to waiting until seven same-flips in a row before undertaking an even money bet, then you are entitled to think that. So then, unless you want to add something new, it appears you and I are at an impasse. Also, regarding Wolfram, perhaps I don't understand the math as well as you, but I do understand English, and that article says "[A]n extreme event is likely to be followed by a less extreme event." and you've not rebutted this point. 7, 8, 9, 10 in a row - these are all extreme events and are all likely to be followed by less extreme events such as 2, 3, 4 in a row. When the series fails, it's a step towards reversion and the ultimate ending at aggregate odds of 50/50 - but along the way, there will be a limited few anomalies where outlier long series occur. Wolfram says that reversion is more likely after an extreme event than before, which is the same thing I am saying. 216.153.214.89 (talk) 17:43, 28 February 2009 (UTC)
Any of your results past 5-runs mean nothing. Please research the law of large numbers before posting statistical results like this and drawing conclusions. 10,000 flips is nowhere near enough to see the actual probability that a 7-run will turn into an 8-run. You need to run at least 1 million. But anyways, we can look at the lower numbers. Approximately half of the 1-runs became 2-runs, and half of the 2-runs became 3-runs, and half of the 3-runs became 4-runs, and half of the 4-runs became 5-runs, and half of the 5-runs became 6-runs. Then actually more than half of the 6-runs became 7-runs. How do you explain this with your misconception of the reversion to mean principle??? At this point, these numbers mean nothing. You only had 9 8-runs, so looking at the number that became 9-runs is equivalent to flipping a coin 9 times and seeing how many are heads. This is clearly not enough to get a good statistic, please try to understand this. —Preceding unsigned comment added by 67.193.128.233 (talk) 18:34, 1 March 2009 (UTC)
Ok, you are severely misunderstanding reversion to mean. The page states, "Reversion to mean...is the statistical phenomenon stating that the greater the deviation of a random variate from its mean, the greater the probability that the next measured variate will deviate less far." I accept this to be true as do you, but to use it here, you have to tell me what the random variable is and what the mean is (or equivalently just the distribution P). You have to answer this before we can move on with this discussion. —Preceding unsigned comment added by 67.193.128.233 (talk) 18:41, 1 March 2009 (UTC)

Sample space size is the key

Answering 67.193:

Why don't you answer this instead: In any given time period which is experienceable by a single human on a real-world basis, would 10-runs happen less frequently than 5-runs, yes or no?

And keep in mind, I'm not asking you to answer based on an infinite Sample space - which is what you are doing now; I am asking you to answer based on a real-world sized sample space such as the 1 week at the roulette table which I refer to above. Or let's be even more precise: In 10,000 flips, which will happen more often, 10-runs or 5-runs? I'll answer for you: it's 5-runs. Ah, but that means 10-runs happen less often. In fact, if we extrapolate my 5k data to 10k, we see that there is a 140/10,000 = 14/1,000 = 1.4/100, or 1.4%, chance of getting a tails 5-run, but only a 4/10,000 = 0.4/1,000 = 0.04/100, or 0.04%, chance of getting a 10-run (double all these numbers if counting heads runs too). In any given series of 10,000 flips, the odds of "series continue" reverting to "series fail" grow as the run lengthens. It's as plain as the nose on your face.

I'm not concerned with the theoretical BS of every flip that might ever occur to infinity, I am only concerned with the rarity of an Event which I am witnessing. In a less than infinite sample space, long runs are a rare event and longer ones, even rarer. If you won't admit that, there's nothing to discuss.

An infinite sample space size violates the premise of the puzzle, because the puzzle explains that a "gambler" is involved, but a single human being has a limited time span (one life to live). You can't honestly say that one gambler will experience all the flips of an infinite sample space. You've got to limit your sample space, and when you do, the rarity of the event becomes clear.

216.153.214.89 (talk) 19:59, 1 March 2009 (UTC)

Why should I answer your question when you ignore mine? You claim your logic is based on reversion to mean. Please explain to us how you use the theorem: what is the random variable X, what is its mean, and what is a "rare" event for the random variable? If you don't answer this, I must conclude that you don't understand the mathematics you're basing your claims on.--67.193.128.233 (talk) 20:29, 1 March 2009 (UTC)

I've already conceded that my understanding of the math is less than yours - big spit. I do however understand the difference between an infinite sample space and a limited one, which is what this question rises and falls on. Whether or not my efforts to use my poor grasp of the principle of Reversion to the Mean make my point is irrelevant to my being right on my contention, which is: Rare odds events can be detected in real time as they occur. You seem to think this is impossible. Both you and this so-called "Gambler's fallacy" can only be right here if you contradict the premise of a hypothetical gambler's available time by injecting the math for an infinite sample space. Your math is only correct if you calculate the odds of a theoretically infinite number of gamblers across an infinite number of flips, and the use of those as criteria is an absurd deviation from the stated premise of the problem, which is a single gambler. The sample space must be limited based on the premise of the problem. Your solution cheats by allocating too large a sample space. Re-do your math based on a single episode of 10,000 flips, which is about the maximum that can reasonably be attributed to a real gambler's attention span. In that limited slice of time, it's more likely than not that any given series will fail rather than continue. If my invocation of Reversion to the Mean doesn't help make that clear, I apologize. But even so, what I am saying is true: The size of the sample space is the key to detecting the trick of this question. 216.153.214.89 (talk) 21:13, 1 March 2009 (UTC)

You said,

"In that limited slice of time, it's more likely than not that any given series will fail rather than continue."

Please look at your own results!! Over 10,000 flips, any run between 1-6 was equally likely to stop or continue (i.e. as you increase the run length by one, the number of runs is reduced by half). Above that, you do not have enough trials to get an accurate probability. It has nothing to do with an infinite versus finite sample space. It has to do with the law of large numbers. Please run your 10,000 flips several times and see what percentage of 8-runs turn into 9-runs each time. In your above post, it was 1/3, but I guarantee you it will be different every time and on average it will be 1/2.
Saying that you observe something for a small number of trials but not for a large number doesn't make any sense in probability theory. All you will observe is that the percentage of heads in a small number of trials will have a larger variance than for a large number of trials, but it is not a statistic you can utilize as a gambler. I'm sorry you don't like talking about infinite numbers of trials, but this is the ONLY link between probability theory and the real world. The moment you start talking about probability you are no longer in the real world (in a sense). You need to understand how probability theory models the real world (model is the key word) in order to use its results. When someone says the probability of flipping heads is 1/2 it means that
(X_1 + X_2 + ... + X_n)/n → 1/2 as n → ∞,
where the above convergence is in probability and X_i is defined to be 1 if the i-th trial is heads and 0 otherwise. This is the ONLY thing (in the real world) that is meant by the probability of an event and it always involves a limit of trials like this. Remember that probability theory is just a model, you have to understand how it relates to the real world so that the results have meaning. Please pick up a textbook (perhaps online) on probability theory. The first few chapters will be very enlightening. --67.193.128.233 (talk) 23:45, 1 March 2009 (UTC)
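That convergence is easy to watch happen (a quick sketch, assuming a fair coin; the sample sizes are arbitrary):

import random

# running fraction of heads over ever-larger samples
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)   # drifts toward 0.5 as n grows

The fraction wanders for small n and settles near 0.5 for large n, which is all the law of large numbers promises; it says nothing about any individual flip.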

67 - So you contend that runs of 100's in a row are routine and happen on a frequent basis? Do you even comprehend what I am saying? I am saying that in order to experience enough flips to witness multiple instances of a particular length series, say 10 - you would have to have a huge sample space - much larger than any human being could experience in a lifetime. You are conflating the odds of all flips everyone could experience into one experience - that of our gambler. If our gambler sees 100 series of 10 in a row, yes 50 will continue to 11 and 50 will not - there's your 50/50. But the total number of runs which continue is 50% less at each step. Only by increasing the sample size can you defeat this reduction in results. Of 100 10's, only 50 become 11's. Now we only have 50 11's, not 100. Of 50 11's only 25 become 12's and 12.5 become 13's, etc. You are increasing the sample space if you keep increasing the presumption to 100. It's like a funnel shaped canyon - eventually the exit becomes too narrow to pass out of, unless you increase the width of the entrance - the total number of flips (sample space) - to infinity. The 50/50 odds act as an attrition rate which compounds against "continue" on each iteration (each flip) - that's what I meant by reversion to the mean. And it is attrition, because every series which doesn't continue is removed from the pool of possible continues. Once the number of runs which hasn't yet failed gets down to 1, eventually that series fails also and we are at 0 for continue or 1 for not. The only way to change that is to refill the pool with more iterations - that is, scale up the number of flips to infinity. And that's why this is a trick question - our gambler isn't around that long. 216.153.214.89 (talk) 00:01, 2 March 2009 (UTC)

"67 - So you contend that runs of 100's in a row are routine and happen on a frequent basis? "

Please don't make stuff up, I never said that. Of course runs of 100 heads in a row are very unlikely, but they are possible and can occur. But this is beside the point. I've been saying that, given that our gambler has seen 100 heads in a row, the probability of flip 101 being heads is 1/2. From what I understand, you've been trying to argue that on the 101st flip, tails will be more likely than heads, and this is simply not true. --67.193.128.233 (talk) 00:36, 2 March 2009 (UTC)

attrition

"From what I understand, you've been trying to argue that on the 101st flip, tails will be more likely than heads..."

No that's not what I've been saying. What I've been saying speaks for itself. So let's clear things up. Tell me if this is true:

Presuming we have 100 series of 10 in a row, 50 will continue to 11 and 50 will not.
Now we have 50 11's, of which 25 continue to 12.
Now we have 25 12's, of which 12.5 continue to 13.
Now we have 12.5 13's, of which 6.25 continue to 14.
Now we have 6.25 14's, of which 3.125 continue to 15.
Now we have 3.125 15's, of which 1.5625 continue to 16.
Now we have 1.5625 16's, of which 0.78125 continue to 17, etc. etc.

Yes or no, does this "attrition in continue" show that of the fixed set of 100 series, the odds of "continue" decline each flip? 216.153.214.89 (talk) 01:26, 2 March 2009 (UTC)

It's exactly what you've been saying, you must have amnesia. Let me recall a quote FROM YOU:

"The simple fact is that if one were to wager on a reversion to mean circumstance, if you waited until an outlier event (such as 20 heads in a row), the likelihood of witnessing a reversion is higher than on an initial flip."

To me this means that you think that after an outlier event of 20 heads, the 21st flip is more likely to be tails than heads. This is simply wrong. --67.193.128.233 (talk) 01:41, 2 March 2009 (UTC)
By the way, in your example, the odds of "continue" do NOT decline after each flip.
25/50=0.5
12.5/25=0.5
6.25/12.5=0.5
3.125/6.25=0.5
etc...
Each time, the odds of the streak continuing are 0.5 which is exactly the same as the probability of the next coin flip. --67.193.128.233 (talk) 01:47, 2 March 2009 (UTC)

Of the displayed set of 100 series of 10-runs, I've clearly shown that less than 1 series continues on to 17 in a row. That means that of these 100 series, 99 failed to make it to 17. Why you won't concede this doesn't concern me, but I find your attempts to shift the focal point amusing. And your sneaky "gotcha" quotation of "the likelihood of witnessing a reversion is higher than on an initial flip" is bunk. The premise in the article is to keep betting until you witness a series fail. You know that's the premise, and yet you try to suggest I am saying the test stops after one bet. The attrition chart I've posted clearly shows that of 100 series of 10-runs, only one will not have failed by the 17th flip. That's 99-1 odds in favor of the gambler. Of 100 10-runs, 99 fail before 7 additional flips. A gambler that entered the game at 10-runs only would win 99 out of 100 series before we reach 7 additional flips - provided that he stayed in the game until each series failed, which 99 out of 100 do before 17 flips. 216.153.214.89 (talk) 02:00, 2 March 2009 (UTC)

Listen I agree with everything you just wrote and I have never denied it. What I disagree with is:

"The simple fact is that if one were to wager on a reversion to mean circumstance, if you waited until an outlier event (such as 20 heads in a row), the likelihood of witnessing a reversion is higher than on an initial flip."

These are YOUR words and they are completely wrong. That is the only point I want to make. —Preceding unsigned comment added by 67.193.128.233 (talk) 02:34, 2 March 2009 (UTC)

Ok, then I take the blame for our previous disagreement. My lack of refinement in the proper usage regarding the particularized vernacular of math as it relates to this type of problem, limits my ability to convey my meaning accurately. That said, are you agreeing that of 100 series of equal length (of same flips), 99 of them will fail before 7 additional same-flips in a row? 216.153.214.89 (talk) 02:55, 2 March 2009 (UTC)

In general, if you have a streak of heads, say, then the probability that it will continue for N more flips is (1/2)^N. Thus if you have 100 streaks, as you said, then we should talk about expectation. If Y is the number of streaks out of 100 that continue on for N more heads, then E[Y] = 100 * (1/2)^N.
In the case of N = 7, this is a bit less than 1 (100/128, about 0.78), so, as you said, 99.22 will fail on average. But this only says that if you flip a coin 7 times, then 99.22% of the time you will NOT get 7 heads in a row. It doesn't mean that the streak is any more likely to end after the gambler has seen 7 heads than it was after he saw 2. --67.193.128.233 (talk) 03:59, 2 March 2009 (UTC)

67 - you said "if you flip a coin 7 times, then 99.22% of the time you will get 7 heads in a row". Is this what you mean? There's no typo here? 216.153.214.89 (talk) 04:04, 2 March 2009 (UTC)

Sorry, I made a typo. I meant that 99.22% of the time you will NOT get 7 heads in a row. My point here is this: consider our gambler. He sees a streak of 2 heads in a row. At this point, the probability that it continues to 3 is 50%. Say he sees it continue to 7; then at this point, the probability that it continues to 8 is 50%. And this never changes no matter how long he observes the streak. Do you agree with this? --67.193.128.233 (talk) 18:04, 2 March 2009 (UTC)

67- My answer is yes, but... The "but" is that since longer streaks are definitely rarer, the longer our gambler keeps doubling down, the better chance (I wrongly muddied the waters by calling his aggregate likelihood of eventual victory "odds") he has to outlast the streak - because from the attrition chart, it seems that 99% of all streaks (even starting on the 1st flip) fail by the 7th flip. That said, it's still a crappy bet, because the doubling raises your exposure ever higher, but you can never end up ahead more than the initial bet. Even so, if our gambler finally outlasts the streak and then quits, he wins. What our guy is banking on is that 99% of the time, he'll only have to stay in the game for 7 flips at the most. So... if he has enough capital to last for 5 million flips (doubling his bet on each loss), he can, for a virtual certainty, last until his odds of eventual victory approach 1, which in its own words is what the Martingale (probability theory) article is saying and why I cited it. Suffice it to say, I think I misunderstood what people say this fallacy is. I thought they were contending that you couldn't prepare yourself to outlast streaks, which you clearly can. Rather, they are saying that you can't guess the next flip in advance, and on that I agree. But I am saying that you could realistically expect to outlast a streak, if you start with a small enough wager and have a big enough reserve. My concern is only: how rare are long 50/50 streaks? This dialog helped me find out that 99/100 fail before 7. It also helped me see that it really doesn't matter when you start, 1st or 6th in a row; you must calculate for 7 in a row going forward for the 99/100 to apply to you. This being the case, I concede that my comments did to a degree implicate a fallacy of logic. At the same time though, I think that we raised a point that should be made clear in this article - the "attrition" numbers I posted. Thanks for your feedback. 216.153.214.89 (talk) 19:18, 2 March 2009 (UTC)

Well everything in that paragraph seems basically correct, as far as I can tell. The only comment I have is that, while 99.2% of the time the streak will last only 7 flips, and the gambler will win one unit, 0.8% of the time it will lose again, and he won't have the money to continue. In effect, he will have lost all of his 256 units in a failed attempt to win just one. As everybody knows, like every betting strategy, the Martingale strategy ultimately confers no benefit to the bettor. Eebster the Great (talk) 02:44, 3 March 2009 (UTC)
I'm glad we agree. I also agree with your statement "But I am saying that you could realistically expect to outlast a streak, if you start with a small enough wager and have a big enough reserve". In reality, if you had enough money to outlast, say, a streak of 20 losses in a roughly 50/50 game, I think you could win in the short run. But too bad you could never do this in a casino! Next time you're at the roulette table, take a look at the minimum and maximum bets; usually something like $5 and $200 is typical, which means you could only last a streak of 5 losses before you can no longer double your bet. --67.193.128.233 (talk) 03:38, 3 March 2009 (UTC)
Actually, maximum bets are more often countermeasures against cheating, since in theory one could place just one or two extremely high bets and get out of the casino without leaving much evidence. Sometimes under controlled conditions, casinos will waive maximums, but I don't think that's common anymore. After all, while most people may indeed profit off of such a Martingale strategy, the rare person who doesn't still pays back the casino and more. In games such as blackjack, maximum and minimum bets also discourage most card-counting strategies that rely on enormous bets when decks are slightly in the player's favor, and very tiny bets the rest of the time (which is some 99% of the time) when the deck is not. Minimum bets have an obvious practical purpose: the bigger the bet, the bigger the profit! Eebster the Great (talk) 02:36, 4 March 2009 (UTC)

67 - Thank you. I can tell you, it's very frustrating to note something but be unable to explain it per the precise vernacular required in a particular math discipline. Simply put, I do not know enough about math to speak your language clearly. As for your comment regarding table limits, I am pretty sure you are right. Also, the 0 and 00 must count as streak continue if hit, so one has less than 50/50 odds to start. Your sentence "In reality, if you had enough money to outlast say a streak of 20 losses in a roughly 50/50 game, I think you could win in the short run." is the type of seat-of-the-pants comment I was trying to make and be accepted in making. Because we here are confident you know the math, there's no objection when you say such a thing. But because others like myself use a sloppy vernacular, even when we have the same gut feeling, our comments raise red flags. I agree that you can't 'out-guess' 50/50, but thank you for confirming that you could (highly likely) 'out-last' it in a narrow context, with proper preparation. One last question: You are walking past the roulette table during a company convention and you hear the teetotaling (he's always sober, alert and quick-witted) company accountant exclaim "Uncanny! That's 7 reds in a row"; if you sidled up to the table only at that point and started on black in an attempt to outlast, provided that you have your $10,000 annual cash bonus in your pocket and the 1st bet is $5, would the streak of 7 be likely to fail while you were playing? In other words, does the knowledge you caught about a 7-streak underway before you start help you? Or better yet, what if you overheard him say "20 reds in a row" (provided he's a reliable witness) - wouldn't that be a point of maximum advantage to enter the game? 216.153.214.89 (talk) 06:19, 3 March 2009 (UTC)

This is exactly the gambler's fallacy, my friend: thinking that the past affects the future in independent events. Even though you may have just witnessed 200 reds in a row, the probability of outlasting a streak of 7 more reds is exactly the same as if you sat down for the first spin of the day.
Also, note that when I said you could probably outlast a streak of 20 "in the real world", I said "in the short term", which saves me from contradicting the mathematics. Say you play for only 1000 flips; then the probability of witnessing a streak of 20 heads is something less than 0.1%. So in the short term, it's extremely likely that your strategy will pay off. But if you play for an arbitrarily long time, then with probability 1 you will experience an arbitrarily long losing streak and lose all your money. It would be interesting to run a simulation of this doubling-your-bet strategy in a coin flip scenario and make a graph of your money after each flip. I imagine it would be linearly increasing except with random downward spikes corresponding to losing streaks. --67.193.128.233 (talk) 13:08, 3 March 2009 (UTC)
Actually, that would be interesting to see (provided no real money is on the line!). However, the money would actually be linearly increasing during winning streaks, exponentially decreasing during losing streaks, and massively increasing on the single, final winning bet that breaks a losing streak. Eventually the exponential decrease will become too long and the gambler will go bankrupt, at which point the graph would be constant at 0. Eebster the Great (talk) 02:36, 4 March 2009 (UTC)
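A sketch of that simulation (Python; assumes a fair coin, a $1 base bet, and unlimited credit, as in the charts linked below):

import random

money, bet = 0, 1
history = []

for _ in range(500):
    if random.random() < 0.5:   # win: recoup the streak's losses plus $1
        money += bet
        bet = 1
    else:                       # loss: double up and try again
        money -= bet
        bet *= 2
    history.append(money)

print("final:", history[-1], "worst drawdown:", min(history))

Plotting history flip by flip should give roughly the shape described above: a slow linear climb punctuated by rare, deep downward spikes.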

so doubling down does work?...

Quite interesting indeed. I ran 500 flips with this strategy (3 separate times) starting with 0 dollars and a bet of $1 (assuming our gambler can go into infinite debt). Check it out http://67.193.128.233/money.pdf --67.193.128.233 (talk) 19:42, 7 March 2009 (UTC)

So it would seem that if you have a huge reserve, doubling down does work. The mistake is made in thinking that waiting for a flip-series of #X to start helps at all. Waiting to start does nothing, but doubling down with a big reserve does do something. Also, the chart seems to show that bad set-backs are rare, so rarity of events does come into play here...? 216.153.214.89 (talk) 20:19, 7 March 2009 (UTC)

If you play a doubling strategy over a finite period of time with a finite amount of money, there is always a probability that you will go bankrupt. Say M is your cash reserves and you decide to play for N coin tosses. If you fix N, you could increase M until the probability of losing is as small as you would like (but it can never be zero!). Then it would work in the probabilistic sense (i.e. you could choose your reserves so that 99% of the time you win) but you can never be absolutely certain (if your reserves are finite) that you won't experience a losing streak and go bankrupt within the first N flips. You can see this in the first graph; by flip 100 we experienced a long losing streak and ended up way down at -$1000 (with $1 bets this means it was a 10-streak). If our reserve was not at least $1000 the game would have been over.
On another interesting note, if you're going to play for N flips, the worst losing streak possible is N, so if your reserves were 2^N - 1 then you would be guaranteed to win in the first N flips (unless of course you lose all N!). Of course, the amount of reserves you would need is orders of magnitude greater than the amount of money you stand to win, so it's hardly a practical betting strategy. --67.193.128.233 (talk) 22:20, 7 March 2009 (UTC)
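To put rough numbers on that last point (assuming the $1 base bets used above): surviving a losing streak of length k requires 1 + 2 + ... + 2^(k-1) = 2^k - 1 dollars in reserve. So guaranteeing survival of any possible streak within N = 20 flips takes 2^20 - 1 = $1,048,575 in reserves, all to protect winnings of at most $20 - which is the "orders of magnitude" mismatch in question.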
Also, since N really isn't fixed (you can always choose to keep betting), the limit really is M. By extending N, you can play until you are in a position to "quit while you are ahead"; with enough M, there's nothing forcing you to quit while you are behind. Your control over the elasticity of N works to your benefit. 216.153.214.89 (talk) 02:04, 8 March 2009 (UTC)
Quite right, we could modify the game so that you play for N flips and then quit if you are ahead/bankrupt or keep playing out the losing streak until it breaks (or you go bankrupt) and then quit. Still with finite reserves M, there is always a positive probability of going bankrupt (which is the point I was making). --67.193.128.233 (talk) 13:58, 8 March 2009 (UTC)

Indeed - the available gain is small and the risk of big loss is high enough to make the possible gain pale. I suppose that's why people think "how can I shortcut this..." and, well, here we are. Would you agree that the "bad luck" of the big busts as shown by your chart are rare in frequency and are sporadic (though still of severe magnitude)? Perhaps people sense this sporadic "rarity" and have that in their mind as they formulate thoughts which lead to the fallacy? 216.153.214.89 (talk) 23:27, 7 March 2009 (UTC)

Indeed the runs of "bad luck" are rare and sporadic, since they occur with very small probability. I think this is the kind of result that gets commonly misinterpreted. By looking at one or two of these graphs, one might very well conclude (wrongly) that it is advantageous to enter the game after a streak of bad luck thinking it is now less likely to occur in the immediate future. --67.193.128.233 (talk) 23:52, 7 March 2009 (UTC)

Now, having already agreed you are right, please think about this: Can you do a number of these charts and inject a hypothetical starting point after the losing streak? You'd have two lines on the chart. Line #1, the original starting point line, and #2, the "jump in" starting point line. If you could, we could have a visual and numbers which would together show that although that would seem to work, it doesn't. If the 2nd line doesn't rise at a higher angle than the 1st line, it's visually evident to all that the time of start is no advantage. And I don't think your charts would be a WP:OR violation, because they are simply a visual of already accepted and proven math. What do you think? 216.153.214.89 (talk) 01:48, 8 March 2009 (UTC)

I'm not sure what you intend to show here. The second line would be exactly the same as the first (just shifted down) since both graphs would be observing the same sequence of heads and tails after the "jumping in" point...
Also, there are billions of different ways this experiment could have played out (called sample paths). Showing just one or two sample paths can be interesting to look at, but they don't say much about the process we are observing. I think it would be difficult to conclude (based on just the graphs) that this strategy confers no advantage to the gambler. --67.193.128.233 (talk) 13:45, 8 March 2009 (UTC)
What the 2nd line would show is that the results don't vary regardless of where you enter the process. The lines would be parallel, and that's what people would benefit from being able to visually see. Once they see the lines, they would ask themselves "why is that?" and be more readily able to grasp that the entry point makes no difference. 216.153.214.89 (talk) 19:25, 8 March 2009 (UTC)
One more point: If you did a series of trials, say 100 of 1,000 flips, I'd be interested to know what the maximum $ amount is which you are behind at any point on any of the 100 charts. Here's why: Let's say for a minute that after 100 charts, there's no bigger down spike on any chart than 12 in a row and only a few 10-in-a-rows (or whatever the real numbers are). If by some means you were always able to enter the game and start betting only right at the bottom of a big down-spike, while you wouldn't net any additional gains by this method, you would avoid the capital strain of having that session's bad run go against you. Your gain doesn't increase by entering after a long run, but your capital strain risk is reduced... for that session anyway?... Comments? 216.153.214.89 (talk) 16:50, 9 March 2009 (UTC)
Well, no, because you have the same chance of getting a set losing streak no matter when you enter the game (assuming the rules and such remain constant). Otherwise the game would have to have some sort of memory. Eebster the Great (talk) 22:54, 9 March 2009 (UTC)

Eebster, take a look at the 3 charts on the .pdf that 67 made. I'd like to see 100 of those charts and see if there are often large down spikes more than once per chart. It's clear that the large down spikes are rare, but it's also clear that no matter when you join, your upward path angle is the same. But it does seem that if you join right after a large downspike, the rarity of the spike means you won't see another one soon. Look at the charts - we need more of them to see this, but it seems correct. Waiting until after a large down spike to start won't make you win faster, nor will it make you win more, but you will have avoided a current-time-frame rare large down-spike. What does that avoidance do for you? It allows you to avoid having to double up huge right out of the starting gate - and that weighs against you going bust fast from exceeding your grubstake right out of the gate. In the long run, with enough plays, that would smooth out too, but those charts do show that since large spikes are rare, entering the chart right after a big downspike would typically weigh towards not seeing another big downspike on that chart. Now if each chart equals a fixed number of spins at the wheel - say 10 hours @ 50 an hour - it seems that entering after a big down spike allows you to manage the size of your doubling up better; it helps keep down the total $$ in play, so your at-risk amount for the same eventual $$ won is less. And again, it doesn't help you win any more or lose any less in the long run, but it helps you avoid a killer blow right when you start. Large killer blows don't closely follow large killer blows on any of those three charts. I'd like to see 100 of them and see if that's the case. What those large downspikes represent is the doubling of your bet to huge amounts as you try to outlast a streak. Avoiding being forced towards your max bet limit helps your doubling stamina. Anything that helps you avoid getting forced out helps you. Because force-out streaks are rare, joining right after a force-out streak helps you not get forced out too fast, as the next large streak isn't expected right away. What you face will still be 50/50, but it typically won't result from that 50/50 into another large down spike right away. We need more charts and a "start after downspike" line to check this. 216.153.214.89 (talk) 02:26, 10 March 2009 (UTC)

I'm afraid you're misinterpreting the charts I made. Although losing streaks (the down spikes) are rare, they are NOT any rarer immediately after one has occurred (since the game has no memory). The fact that you see on average only 1 spike per 500 flips in those charts just means that the probability of those extended losing streaks is very small. But no matter when you start betting, the probability of experiencing an extended losing streak within the next 500 flips is exactly the same (even if you entered right after an extended losing streak). --67.193.128.233 (talk) 13:01, 10 March 2009 (UTC)
If you're having trouble understanding, try to be more precise with your hypothesis; you might find that you answer your own question. Let's say a losing streak of 7 would bankrupt you. Now say a roulette table spins 1,000 times per day (this could be way off) and you decide to bet on 50 of those spins (the 50 could come at any time during the day and don't have to be consecutive). Do you propose that the probability of experiencing a losing streak of at least 7 depends on which 50 spins you bet on? --67.193.128.233 (talk) 13:15, 10 March 2009 (UTC)
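That question is easy to put to a test. Here is a minimal C++ sketch (my own, not part of the coin-flip code discussed later; it treats each even-money bet as a fair 50/50, as this whole discussion does, and the seed is arbitrary). It estimates the chance of suffering 7 straight lost bets when the 50 bets are placed consecutively versus scattered randomly across the day's 1,000 spins; the two estimates should agree up to simulation noise.

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Longest run of consecutive lost bets among the bets actually placed
// (a doubling bettor is bankrupted by consecutive lost bets, wherever
// those bets sit in the day). 0 = lost bet, 1 = won bet.
int maxLossRun(const std::vector<int>& bets) {
    int best = 0, run = 0;
    for (int b : bets) {
        run = (b == 0) ? run + 1 : 0;
        best = std::max(best, run);
    }
    return best;
}

int main() {
    std::mt19937 rng(12345);
    std::bernoulli_distribution win(0.5);
    const int trials = 20000, spinsPerDay = 1000, betsPlaced = 50;
    int streaksA = 0, streaksB = 0;

    for (int t = 0; t < trials; ++t) {
        std::vector<int> day(spinsPerDay);
        for (int& s : day) s = win(rng);

        // Strategy A: bet the first 50 spins in a row.
        std::vector<int> a(day.begin(), day.begin() + betsPlaced);

        // Strategy B: bet 50 spins scattered randomly across the day.
        std::vector<int> idx(spinsPerDay);
        for (int i = 0; i < spinsPerDay; ++i) idx[i] = i;
        std::shuffle(idx.begin(), idx.end(), rng);
        idx.resize(betsPlaced);
        std::sort(idx.begin(), idx.end());
        std::vector<int> b;
        for (int i : idx) b.push_back(day[i]);

        if (maxLossRun(a) >= 7) ++streaksA;
        if (maxLossRun(b) >= 7) ++streaksB;
    }
    std::cout << "P(7+ straight losses), consecutive bets: "
              << double(streaksA) / trials << "\n"
              << "P(7+ straight losses), scattered bets:   "
              << double(streaksB) / trials << "\n";
}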

67 - I am at a loss for words here. Unless I saw a large sample of 500-flip charts, I can't see with my own eyes what actually happens regarding the frequency of 7-streaks. Let's say for a minute that in 100 sessions of 500 flips, there's only one occasion where a 7-series started again within 43 flips of a 7-series. Wouldn't that show the odds are only 1 in 100 sessions that, if you start right after a 7-series, another will start within 43 flips? I know you can answer me with math, but I want to visualize the answer as well. The "game space" for our blended scenario would be: a) session length of 500 flips; b) start only after a 7-series; c) play for 50 flips in a row; d) if entering a particular session, enter no later than flip 450 so as to finish for sure by flip 500; e) if no 7-series appears, do not enter at all. 216.153.214.89 (talk) 16:11, 10 March 2009 (UTC)

Isn't this what I suggested earlier? I'll run a simulation where you wait until you see a 7-series, then bet a doubling strategy for the next 50 flips in a row, then stop (whether you're up or down) and wait for another 7-series, picking up right where you left off with the doubling strategy. I'll do this until we've observed ten 7-series, so we will have bet on a total of 10*50=500 flips, and I'll plot it. --67.193.128.233 (talk) 20:56, 10 March 2009 (UTC)
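For concreteness, here is a minimal C++ reconstruction of that procedure (mine, not 67's actual code; assumed rules: the gambler always backs heads, bets $1, doubles the stake after each loss, and resets to $1 after a win or at the end of a session). It prints one "flip bankroll" pair per line, so the output can be plotted, flat waiting periods and all.

#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::bernoulli_distribution heads(0.5);

    long long bankroll = 0, stake = 1;
    int tableRun = 0;      // current run of tails observed while waiting
    int flipsLeft = 0;     // flips remaining in the current 50-flip session
    int sessions = 0;

    for (long long flip = 0; sessions < 10 || flipsLeft > 0; ++flip) {
        bool win = heads(rng);               // gambler always backs heads

        if (flipsLeft > 0) {                 // inside a 50-flip session
            if (win) { bankroll += stake; stake = 1; }
            else     { bankroll -= stake; stake *= 2; }
            if (--flipsLeft == 0) stake = 1; // stop, up or down
        } else {                             // waiting for a 7-run of tails
            tableRun = win ? 0 : tableRun + 1;
            if (tableRun >= 7) { tableRun = 0; flipsLeft = 50; ++sessions; }
        }
        std::cout << flip << " " << bankroll << "\n";  // plottable output
    }
}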

interesting charts

http://67.193.128.233/betting-strategy.bmp // The flat portions on the chart are the periods when our gambler is waiting for a 7-run, after which he enters the game, plays for 50 flips, and then stops playing until the next 7-run. In total he plays 12 sessions in the first 3,000 flips, so he bets on 50*12=600 coin flips. Notice there is one HUGE losing streak before he has bet 500 times, which is pretty much the same as what we observed on the other charts. --67.193.128.233 (talk) 21:31, 10 March 2009 (UTC)

67 - Your charts are very interesting. From the current .bmp chart, it does look like several of the sessions avoided bad spike-downs completely. But 10 sessions is not many (500 plays); you did 3x that many (1,500 plays) in the .pdf test. In fact, it might even be better to upsize the plays beyond 1,500. Could you do a chart with 30 sessions, but instead wait for a 10-series before starting the 50 flips of play? Also, I note that the .bmp player never drops below -50, but our .pdf player dropped much further into negative territory in the previous examples. Could there be anything to this? Can you play our .bmp guy for 1,500 flips? That's what the .pdf guy played. Also, if you are willing to play .bmp man again, please start him only after a 10-series this time - I'm more interested to see that than the 7. I think 1,500 plays (or more) starting only after a 10-series would be interesting to see. I'd like to see some more sample play charts. We have only one 500-play chart for .bmp man at a 7-start, but even with that, .bmp man appears to be farther ahead in $$ at the end than any of the three .pdf play sessions. 216.153.214.89 (talk) 04:01, 11 March 2009 (UTC)

"We have only 1x 500 play chart for .bmp man at a 7-start, but even with that, .bmp appears to be farther ahead in $$ at the end than any of the three .pdf play sessions."

Are you looking at the same graphs that I am? The gamblers who played 500 flips in a row ended up around $250, which is the same as our second gambler after 10 plays of 50 flips (=500). Plus you say

"From the current .bmp chart, it does look like several of the sessions avoided bad spike-downs completely."

I count 4 of the first 10 sessions with down spikes. So 6 out of 10 avoided down spikes, but you have to ask yourself if this is any better than just playing 500 flips in a row. So you have to do the same for the first charts (money.pdf). Take chart one, for instance. If you split it up into 50-flip sessions, then 8 out of 10 avoided down spikes. So you're no better off with this strategy (actually worse in this case). The point here is that it doesn't matter when you choose to bet on those 500 flips; it's the EXACT same experiment as long as the number of flips is the same. --67.193.128.233 (talk) 12:34, 11 March 2009 (UTC)

They are only the same in the sense that the rate of climb is about equal. However, .pdf man fell farther behind financially all three times compared to .bmp man, though they do end up about the same in the end. It appears from the charts that starting after the 7 mitigated going deeply into the hole...? 216.153.214.89 (talk) 14:13, 11 March 2009 (UTC)

Again, that's not something you can conclude from looking at just 3 charts. They look pretty much the same to me. Remember, the cost of a losing streak grows exponentially: the difference between losing 250 and 1,000 is just 2 flips. --67.193.128.233 (talk) 20:05, 11 March 2009 (UTC)
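To make "just 2 flips" concrete, assume a $1 base bet doubled after every loss. Then n straight losses cost

\sum_{k=0}^{n-1} 2^k = 2^n - 1,

so 8 straight losses put you down $255 while 10 put you down $1,023: two more flips roughly quadruple the hole.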
Check out this one: http://67.193.128.233/betting-strategy2.bmp. Our gambler experienced a horrible losing streak the first time he played! I would say that overall, they look very similar to the charts where our gambler played 500 flips in a row, and I don't know how you could conclude otherwise. --67.193.128.233 (talk) 20:27, 11 March 2009 (UTC)
By the way, the theory of Martingales was developed partly to solve this type of problem. See the first application here: http://en.wikipedia.org/wiki/Optional_stopping_theorem . Basically you can prove that there are no successful betting strategies in a fair game (such as a coin toss). --67.193.128.233 (talk) 20:14, 11 March 2009 (UTC)
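In symbols (a standard statement, not anything specific to these charts): if X_n denotes the bankroll after n flips of a fair game, then X_n is a martingale,

\mathbb{E}[X_{n+1} \mid X_0, \ldots, X_n] = X_n,

and the optional stopping theorem says that for any entry/exit rule \tau meeting its conditions (roughly, bounded bets and a bounded horizon),

\mathbb{E}[X_\tau] = \mathbb{E}[X_0].

So "wait for a 7-run, then play 50 flips" has expected profit zero, like every other rule.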

67- So I don't continue to trouble you with inane questions, could you set me up with the software and method you use to make these charts? I'd like to generate some various ones and look at them, posting a few for comment before I abandon this line of inquiry. 216.153.214.89 (talk) 00:59, 12 March 2009 (UTC)

Code: http://67.193.128.233/coin-flip.cpp. It's written in C++, if you need help compiling and running it let me know. --67.193.128.233 (talk) 04:02, 12 March 2009 (UTC)
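In case it helps: with GCC, something like "g++ coin-flip.cpp -o coin-flip" should compile it, and "./coin-flip > data.txt" would capture the output for plotting (in gnuplot: plot "data.txt" with lines). That assumes the program has no external dependencies and writes its series to standard output, which I haven't verified.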

I'm a crummy programmer, but I read your code in WordPad - it's clean and compact. I'm going to see if I can find a C++ compiler sitting around or on the web. I've got MS C 6.0 around here somewhere... I'll get back to you with questions in a day or two. 216.153.214.89 (talk) 04:50, 12 March 2009 (UTC)

nothing but a fluke

My suggested wheel (see above) shows visually that there's no "rarity of runs" exploitation factor involved. Here's how to see it: my wheel took a series of 1,000 flip results and put them in order around the circle of a wheel. Because it's about 500H and 500T, the wheel has 50/50 odds, but only "sort of". Around the rim you'd see various instances of HHH, TT, TTTT, HHH, HHHHH, etc. This is the sequential record of the series that occurred, plotted on the wheel rim. However, if you spun my wheel, those spins would not be a true model of a flip, because the rim already contains intact series, whereas new flips never arrive as an intact series. They might, as a fluke, produce something that looks like HHHHHH, but that's not how we should see it; rather, we should see that result as H,H,H,H,H,H. Each of those flips was and remains discrete. We novices can't see that because we lack the math skills, so we think H,H,H,H,H is HHHHH, but it's not. New flips always issue as .50/.50,.50/.50,.50/.50,.50/.50. I think the comma might be needed in our representations to convey the meaning of discrete steps. When the comma is included, it's much easier to see that a series of discrete results occurred in a manner that appears to be a single event, but never was one. Without the comma, novices can't sense the interjection of equal odds at the point of each new flip. 216.153.214.89 (talk) 07:48, 6 March 2009 (UTC)
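The comma point is easy to check by simulation: scan a long sequence of fair flips, and every time the last five flips were all heads, tally the very next flip. A minimal C++ sketch (mine, unrelated to the wheel or to any code linked above; seed arbitrary):

#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(7);
    std::bernoulli_distribution head(0.5);

    long long headsNext = 0, tailsNext = 0;
    int run = 0;                                      // current run of heads
    for (long long i = 0; i < 10000000; ++i) {
        bool h = head(rng);
        if (run >= 5) (h ? headsNext : tailsNext)++;  // flip after 5+ heads in a row
        run = h ? run + 1 : 0;
    }
    std::cout << "flips following a run of 5 heads: "
              << headsNext << " heads vs " << tailsNext << " tails\n";
}

Both tallies come out essentially equal: H,H,H,H,H followed by H is exactly as common as H,H,H,H,H followed by T.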

Wikipedia cited in court decision

Gagliardi v. Commissioner of Internal Revenue

Respondent’s reliance on a definition of “gambler’s fallacy” found in Wikipedia[18] is not persuasive.
[18] Although we conclude that the information respondent obtained from Wikipedia was not wholly reliable and not persuasive in the instant case, we make no findings regarding the reliability, persuasiveness, or use of Wikipedia in general.
http://www.ustaxcourt.gov/InOpHistoric/GAGLIARDI.TCM.WPD.pdf

—WWoods (talk) 00:55, 12 February 2009 (UTC)

Removal of sentence from introduction

Please see this edit [1]. I removed that sentence because it is not supported by any WP:RS material at its point of insertion or anywhere else in the article. If the event being described is indeed "often referred to" as something, then multiple sources confirming this frequency of reference should be easy to find. I searched and found none, hence the deletion. 216.153.214.89 (talk) 18:00, 25 February 2009 (UTC)

You found none? A quick Google search for "number is due" gambling turns up plenty of examples, including some books and papers. --McGeddon (talk) 00:09, 2 March 2009 (UTC)

None of those comments record actual instances of actual gamblers saying that, which is what the article contended. It's merely a straw-man conjecture that gamblers say this, made so as to give a real-world context for this math puzzle. The puzzle is interesting enough without inventing hypothetical comments from hypothetical gamblers. In any case, I have since rephrased the sentence to be less smelly and will not be deleting it again in its current form. 216.153.214.89 (talk) 07:25, 6 March 2009 (UTC)

  1. ^ N. J. Balsara, Money Management Strategies for Futures Traders, Wiley Finance, 1992, ISBN 0-471-52215-5, p. 122.