Wikipedia:Reference desk/Archives/Mathematics/2011 December 28

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 28
Optional stopping
Say I want to convince people that I can predict the result of a coin flip to better than 50% accuracy. I set up an experiment where I call heads or tails, flip the coin to determine whether I'm right, and repeat the process. However, I have the right to stop the experiment whenever I want, and if I stop the experiment while I'm ahead, I could get an accuracy higher than 50%.

If I want to get the highest expected percentage accuracy, what is the optimal strategy? Using this strategy, what accuracy can I obtain? What happens if I'm only allowed to stop the experiment after at least 10 coin flips, or after at least N coin flips, where N approaches infinity? --99.237.252.228 (talk) 00:12, 28 December 2011 (UTC)[reply]

I would guess that the best approach would be to stop flipping whenever more than 50% of the flips are in your favor. Half the time the first flip should go your way, and, of the half of the time it doesn't, in 1/4 of those cases you will get the next two flips your way. So, that's 5/8 of the time in your favor with just 3 flips. StuRat (talk) 00:17, 28 December 2011 (UTC)[reply]
Don't guess. If you've identified the dynamic, then why not give a general expression and its subsequent conclusion/s? Fly by Night (talk) 01:22, 28 December 2011 (UTC)[reply]
I expect a rapid regression to the mean. Others can do the math. StuRat (talk) 01:31, 28 December 2011 (UTC)[reply]
One has to be careful about how to phrase such a question. The expectation value of the accuracy will be 50%, regardless of what optional stopping protocol is used. This is the optional stopping theorem. By implementing a protocol such as the one Stu suggests, you will increase the likelihood of having slightly above-average runs at the expense of having some very bad runs as well. In other words, you should not "expect" to look like you are able to predict the outcome of coin tosses. Sławomir Biały (talk) 01:35, 28 December 2011 (UTC)[reply]
It doesn't work as a betting strategy, due to the assumption that the bet must be doubled each time you continue. Thus, the few losses cost you as much as the far greater number of wins. However, in this setup, there's no monetary bet to double, so the few losses are not weighted more heavily than the many wins. StuRat (talk) 03:47, 28 December 2011 (UTC)[reply]
Again, you missed the nuance in my reply. I indicated that it is true that you can increase the chances of slightly-above-50% runs, but in doing so you both kill some very good runs while not eliminating some very poor runs. This tradeoff ensures that the expectation value of your accuracy remains 50%. Note that the original poster's question was specifically about expected value. Now, the optional stopping theorem states that a martingale stopped at a stopping time is a martingale. In this case, the original martingale is the discrete random walk S_n = X_1 + ... + X_n, where X_i = +1 if the ith toss is heads and X_i = -1 if it's tails. Let T be the stopping time, and let S_min(n,T) be the stopped process. Since the stopped process is a martingale, its expectation value at any time n is S_0 = 0. So regardless of what protocol you use, you can only have even chances. (A basic meta-axiom of probability is that you can't get something for nothing.) Sławomir Biały (talk) 13:30, 28 December 2011 (UTC)[reply]
It's simple enough to show that you can increase the average percentage of heads. Let's say you stop after 1 flip if you get heads, and flip once more if you get tails. That would give you 100% heads 50% of the time, 50% heads 25% of the time, and 0% heads 25% of the time. This works out to an average of { 100(50) + 50(25) + 0(25) } / 100 = 62.5% heads. Your argument is only correct if getting one head does not count as much as getting two tails, and the OP said this is NOT the case here. StuRat (talk) 00:55, 29 December 2011 (UTC)[reply]
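(A quick check of this arithmetic, as a minimal Python sketch; no code appears in the thread, so this is purely illustrative. It enumerates the equally likely ways a run can end under the stop-after-one-head strategy and averages the per-run accuracy.)

from fractions import Fraction

# Strategy: stop after 1 flip if it's heads; otherwise flip exactly once more.
# Each (probability, accuracy) pair below is one way the run can end.
outcomes = [
    (Fraction(1, 2), Fraction(1)),      # H: stopped at 1 flip, 100% heads
    (Fraction(1, 4), Fraction(1, 2)),   # TH: 1 head in 2 flips, 50%
    (Fraction(1, 4), Fraction(0)),      # TT: 0 heads in 2 flips, 0%
]
print(sum(p * acc for p, acc in outcomes))  # 5/8, i.e. 62.5%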
Two observations.
  • There's no strategy that guarantees a win, because there's an infinitesimal but nonzero probability that you'll go below 50% with the first toss and remain below 50% as long as you keep playing.
  • The only strategy that maximizes the likelihood of a win is to stop playing as soon as more than 50% of the flips are in your favor, because, if you don't stop, there's an infinitesimal but nonzero chance of losing, as before.--Itinerant1 (talk) 01:45, 28 December 2011 (UTC)[reply]
That argument doesn't seem too rigorous. There's no strategy that guarantees a win, but the probability of losing is 0%, so I'm almost certain to win. I'm also not convinced that I should stop after getting at least 50%. There's an infinitesimal chance of losing, but there's a non-infinitesimal chance of increasing my accuracy, so why not continue? --99.237.252.228 (talk) 04:11, 28 December 2011 (UTC)[reply]
Is correctly predicting 1 toss out of 1 (100%) deemed better than, for example, correctly predicting 99 tosses out of 100 (99%)? That doesn't seem a very sensible scoring method... 81.159.105.243 (talk) 03:20, 28 December 2011 (UTC)[reply]
Yes, it is. That's why I introduced the condition that I can only stop after at least 10 tosses. Also, if my strategy is to always stop after the 1st flip, my expected accuracy would only be 50%, so that doesn't work. --99.237.252.228 (talk) 04:11, 28 December 2011 (UTC)[reply]
I wasn't suggesting that the strategy should be to always stop after the first flip. 81.159.105.243 (talk) 13:35, 28 December 2011 (UTC)[reply]
I ran some simulations. Here are the results of a run where I stop whenever more than 50% of the tosses have been heads, with no minimum number of flips and the maximum number allowed as listed (I skipped even numbers, since those would have lots of ties):
 MAX      WIN 
FLIPS      %
=====  =========  
  1    50.000000
  3    62.500000
  5    68.750000
  7    72.656250
  9    75.390625
 11    77.441406
 13    79.052734
 15    80.361938
 17    81.452942
 19    82.380295
 21    83.181190
 23    83.881973
Here's another run, but this time the minimum number of flips is 11:
 MAX      WIN 
FLIPS      %
=====  =========  
  11   50.000000
  13   55.639648
  15   59.466553
  17   62.361908
  19   64.676094
  21   66.590591
  23   68.213211
So, not only does setting a minimum number of flips mean more flips are required to do better than 50%, it also means that the rate at which the winning percentage grows is reduced from then on. StuRat (talk) 06:22, 28 December 2011 (UTC)[reply]
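(StuRat's simulation code isn't shown; the exact figures in both tables can, however, be reproduced without random simulation by a small dynamic program over the heads-minus-tails surplus. The following Python sketch is one hypothetical way to do it; win_prob and its parameters are inventions for illustration.)

from fractions import Fraction

def win_prob(max_flips, min_flips=1):
    """Probability of stopping with more heads than tails, when stopping
    is allowed at any flip from min_flips through max_flips."""
    alive = {0: Fraction(1)}   # surplus (heads - tails) -> probability mass
    won = Fraction(0)
    for step in range(1, max_flips + 1):
        nxt = {}
        for s, p in alive.items():
            for d in (1, -1):  # heads or tails, each with probability 1/2
                nxt[s + d] = nxt.get(s + d, Fraction(0)) + p / 2
        if step >= min_flips:
            # Stop, and count a win, the first time heads exceed tails.
            won += sum(p for s, p in nxt.items() if s > 0)
            nxt = {s: p for s, p in nxt.items() if s <= 0}
        alive = nxt
    return won

print(float(win_prob(3)))       # 0.625, matching the first table
print(float(win_prob(13, 11)))  # 0.55639648..., matching the second table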
Next I repeated the above runs, but set the goal as winning at least 55% of the flips instead of 50%. Here it is with no minimum number of flips:
 MAX    OVER 55% 
FLIPS  PERCENTAGE
=====  ==========
   1   50.000000
   3   62.500000
   5   68.750000
   7   72.656250
   9   75.390625
  11   75.390625
  13   76.416016
  15   77.478027
  17   78.462219
  19   79.352188
  21   79.752640
  23   80.213654
And here it is with the minimum number of flips set to 11:
 MAX    OVER 55% 
FLIPS  PERCENTAGE
=====  ========== 
  11   27.441406
  13   38.720703
  15   44.360352
  17   48.388672
  19   51.548386
  21   52.846050
  23   54.267372
As you can see, if you want a higher percentage of heads, it requires more flips. If you actually wanted to do this for real, the time it takes to do all these flips would soon become a serious constraint. You could probably get 99% of the flips to be heads every time, except that the coin flipping would be interrupted by the death of the universe. In other words, the optimal strategy is entirely dependent on how much time you have. StuRat (talk) 06:35, 28 December 2011 (UTC)[reply]
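(The 55% runs fit the same scheme if the state tracks the head count rather than the surplus, since the stopping test now depends on the proportion of heads. A hypothetical sketch along the lines of the one above; it uses a strict more-than-55% test, which matches the figures shown, though whether a run hitting exactly 55% should count is not stated in the thread.)

from fractions import Fraction

def threshold_win_prob(max_flips, min_flips=1, thresh=Fraction(55, 100)):
    alive = {0: Fraction(1)}   # number of heads so far -> probability mass
    won = Fraction(0)
    for n in range(1, max_flips + 1):
        nxt = {}
        for h, p in alive.items():
            nxt[h + 1] = nxt.get(h + 1, Fraction(0)) + p / 2   # heads
            nxt[h] = nxt.get(h, Fraction(0)) + p / 2           # tails
        if n >= min_flips:
            # Stop, and count a win, once heads strictly exceed 55% of flips.
            won += sum(p for h, p in nxt.items() if h > thresh * n)
            nxt = {h: p for h, p in nxt.items() if h <= thresh * n}
        alive = nxt
    return won

print(float(threshold_win_prob(3)))       # 0.625, matching the third table
print(float(threshold_win_prob(11, 11)))  # 0.27441406..., matching the fourth table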
(i) Is your "WIN %" measuring the probability that 50% heads was exceeded at any point? Is that what the question asked? I thought it was asking about the expected proportion of correct predictions in the run. (ii) I'm not sure about "You could probably get 99% of the flips to be heads every time", if I'm understanding it correctly. Any given surplus of heads over tails (or vice versa), however large, is certain to be achieved (in the "probability tends to one" sense) if we continue long enough, but is the same true for proportions? If we continue long enough, are we guaranteed to always eventually achieve 99% heads? It sounds very unlikely to me. In fact, it is not at all obvious to me that we are always guaranteed to even achieve 50.000001%. (iii) It seems plausible to me that the optimum strategy is to always stop if the number of heads is one more than the number of tails, but I don't think that has been anything like proved so far in this thread. (Note: in case not obvious, I'm using "heads" synonymously with "successful predictions" because we may assume that you always predict heads, since it makes no difference what you predict.) 81.159.105.243 (talk) 13:34, 28 December 2011 (UTC)[reply]
(i) Yes. The original question is unanswerable, because you could get any proportion you wished, if you had an infinite amount of time. StuRat (talk) 20:36, 28 December 2011 (UTC)[reply]
Let me make this very black and white: There is no optimal strategy that will maximize the expected proportion of heads to tails. This is a mathematical theorem. If you don't believe mathematics, trust casinos: go play roulette and bet $1 on black each time. You think you can expect to beat the house? Sławomir Biały (talk) 13:52, 28 December 2011 (UTC)[reply]
It's always possible to get ahead if you go on for long enough, in the sense that the probability of doing so tends to one. It may not be practical because you may have to go on playing for years (potentially even millions of years). 81.159.105.243 (talk) 14:02, 28 December 2011 (UTC)[reply]
But your question is not about the probability of exceeding 50%. It's about expected value. The probability of exceeding 50% can approach 1 while the expected value remains the same. This has nothing to do with practicality. There are some very poor runs in the tail of the distribution that average out with the modest slightly-above-50% runs. See my replies above. Sławomir Biały (talk) 14:06, 28 December 2011 (UTC)[reply]
See also Gambler's ruin. Sławomir Biały (talk) 14:14, 28 December 2011 (UTC)[reply]
Actually it's not my question (I'm not the OP). But, as I understand it, the question is about when to stop so as to maximise one's proportion of successful predictions. If we go on long enough, we are "certain", in the appropriate probabilistic sense, to reach the point where the number of successes exceeds the number of failures by one. Clearly it is advantageous* to continue to that point if we have not yet reached it. We next need to show (assuming it's true) that it is never advantageous to continue beyond that -- even though, if we continue, we are "certain" to eventually reach the point where our successes outnumber our failures by any stated fixed (i.e. not proportional) amount. 81.159.105.243 (talk) 14:24, 28 December 2011 (UTC) *advantageous in principle, but not necessarily in real life, since we don't have an indefinite amount of time...[reply]
In the original post, the game is stopped after a large number N. The expected proportion of heads (regardless of the stopping protocol) is 50%, even if the probability of being less than this is very small. (There is a difference between expected value and probability.) Now let N tend to infinity. Sławomir Biały (talk) 14:47, 28 December 2011 (UTC)[reply]
I'm envisaging that we do not stop tossing until we are one ahead, so your "poor runs in the tail of the distribution" simply never happen to even out the expectation. Perhaps the legitimacy of that is more of a philosophical question? Actually, though, there are possibly some more prosaic quirks thrown up by this "highest expected percentage accuracy" measurement. Suppose our strategy is to (always guess heads and) stop if the first toss is heads, otherwise continue for two more tosses. Unless I have just made some silly mistake, this gives 1/2 chance of 100% accuracy, 1/8 chance of 2/3 accuracy, 1/4 chance of 1/3 accuracy and 1/8 chance of 0 accuracy, for an "expected" accuracy of 2/3 -- even though obviously we cannot make money at roulette this way! Is that how we're meant to calculate it? 81.159.105.243 (talk) 15:22, 28 December 2011 (UTC)[reply]
Good point. Sławomir Biały (talk) 15:33, 28 December 2011 (UTC)[reply]
This seems like a very strange way of counting, though. (As you note, it counts one round of 100% accuracy as the same as 1 million rounds of 99% accuracy.) I was thinking that the right way to compute the expected proportion was to add up all the heads and then divide by the total number of coin flips, among all the sample paths. This will give 50%, from the theorem I quoted. I suppose context is important. Sławomir Biały (talk) 21:31, 28 December 2011 (UTC)[reply]
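(The two ways of counting can be made concrete with 81.159's three-flip example: averaging the accuracy per run gives 2/3, while pooling all heads over all flips, as in the preceding comment, gives exactly 50%. A minimal Python sketch, assuming the strategy exactly as described:)

from fractions import Fraction
from itertools import product

# Always call heads; stop if the first toss is heads, else toss two more times.
paths = [(Fraction(1, 2), 1, 1)]                 # H: 1 head in 1 flip
for a, b in product((0, 1), repeat=2):           # T followed by two tosses
    paths.append((Fraction(1, 8), a + b, 3))     # a+b heads in 3 flips

per_run = sum(p * Fraction(h, n) for p, h, n in paths)
pooled = sum(p * h for p, h, n in paths) / sum(p * n for p, h, n in paths)
print(per_run)  # 2/3: the "expected accuracy" discussed above
print(pooled)   # 1/2: total heads over total flips, as the theorem predicts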

In an infinite amount of time you will get any possible percentage, so you could stop whenever you have some arbitrarily high percentage. — Preceding unsigned comment added by 86.174.173.187 (talk) 15:35, 28 December 2011 (UTC)[reply]

StuRat said the same thing above, I believe. Is everyone certain that this really is true? As I mentioned above, I know it is true for surpluses of heads over tails (or vice versa), but is it true for percentages? Generally, we need to know the limit of the probability of getting at least x% heads anywhere (i.e. at any intermediate point) in a run of N tosses, as N -> infinity. If that number is 1 for any x then you guys are correct. It would surprise me though, on the basis that once you get to huge N it becomes hopelessly unlikely to get even 50.00001% heads (if you haven't already), and (I speculate) these increasingly hopeless odds overwhelm even the fact that N can keep getting bigger and bigger without limit. Any further thoughts about this? Am I wrong? 86.179.2.210 (talk) 18:21, 29 December 2011 (UTC)[reply]
Well, let's look at the probability of anywhere on a run getting >=75% heads. The probability of achieving this on the first toss is 1/2. Thereafter, we can only pass from <75% to >=75% (actually, always to exactly 75%) on turns that are a multiple of 4, say turn 4n. The probability of this happening on turn 4n is, according to my calculation, C(4n-1, n)/2^(4n). If we sum all these probabilities, we get 1/2 + sum(n = 1,2,3...) C(4n-1, n)/2^(4n). This is an overestimate of the required probability because we are counting some runs multiple times: we might pass from <75% to >=75% more than once on a run. Even though it is an overestimate, it still looks very unlikely to me that the sum will converge to 1. It looks to me as if it converges to about 0.85526. Of course, it is possible I made a mistake, but for now I remain unconvinced that the claim is correct. 86.179.2.210 (talk) 00:13, 30 December 2011 (UTC)[reply]
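(The series converges geometrically; the ratio of successive terms tends to 4^4/(3^3·2^4) ≈ 0.593, so it is easy to evaluate numerically. A small Python sketch, purely to check the figure quoted above:)

from math import comb

# 1/2 + sum over n >= 1 of C(4n-1, n) / 2^(4n): an overestimate of the
# probability of ever reaching >= 75% heads, as described above.
total = 0.5
for n in range(1, 400):
    total += comb(4 * n - 1, n) / 2 ** (4 * n)
print(total)  # about 0.855, consistent with the 0.85526 estimate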
I've extended my simulation:
 MAX      WIN 
FLIPS      %
=====  =========  
  1    50.000000
  3    62.500000
  5    68.750000
  7    72.656250
  9    75.390625
 11    77.441406
 13    79.052734
 15    80.361938
 17    81.452942
 19    82.380295
 21    83.181190
 23    83.881973
 25    84.501900
 27    85.055405
 29    85.553558
 31    86.005005
 33    86.4166   (Switched to floating point here, so accuracy is reduced slightly.)
 35    86.7939
It's not converging very quickly, so I'd expect the max value to be quite a bit higher, perhaps even 100% with an infinite number of flips allowed. StuRat (talk) 08:28, 1 January 2012 (UTC)[reply]
If I'm understanding correctly, your simulations are measuring the probability of getting more than 50% heads anywhere in the run. That probability does indeed tend to 1 as the number of tosses tends to infinity. What I am disputing is the claim that any percentage of heads (say 99%, or 75%, or even 51%) will eventually be reached with probability tending to one. That is quite a different matter. 86.171.174.74 (talk) 20:29, 1 January 2012 (UTC)[reply]
I agree that 99% HEADS may not be achievable every time. However, I'm not so sure 51% HEADS wouldn't be. Are you sure about that ? StuRat (talk) 22:47, 1 January 2012 (UTC)[reply]
No. My intuition says that the probability of attaining any fixed percentage of heads above 50% does not tend to 1 as the number of tosses tends to infinity, but I am not certain. As I mentioned above, you need to bear in mind that, as the number of tosses becomes huge, the probability of attaining even, say, 50.000001% heads (if not already) becomes so increasingly vanishingly small that even the ever-increasing number of tosses may not be able to overcome it. Another consideration is that, if you allow that 99% heads isn't eventually "certain", but 51% is, then there will be some number between 51% and 99% at which there is some, if you like, "discontinuity". Intuitively it seems unlikely to me that this "discontinuity" could happen anywhere other than at 50%. 86.171.174.74 (talk) 23:15, 1 January 2012 (UTC)[reply]

Is there an algorithm which - for every first-order proposition (with identity and predicate symbols) - determines whether that proposition is consistent?
77.124.12.169 (talk) 10:22, 28 December 2011 (UTC)[reply]

Consistent with what?
But, basically, no, whatever you might mean, even if you just mean "consistent with itself". A proposition "inconsistent with itself" would be the negation of a logical validity, and if there were an algorithm to determine whether the negation of a sentence is a validity, then there would also be one to determine if a sentence is a validity, and there isn't. --Trovatore (talk) 10:27, 28 December 2011 (UTC)[reply]
Yes, consistent with itself. 77.124.12.169 (talk) 11:17, 28 December 2011 (UTC)[reply]

Is there an algorithm which - for every consistent first-order proposition (with identity and predicate symbols) - determines that the proposition is consistent?
77.124.12.169 (talk) 11:16, 28 December 2011 (UTC)[reply]

If every input is consistent, then the algorithm that always returns "is consistent" would work. The same would apply to your question below. Maybe I'm misreading, though. Phoenixia1177 (talk) 05:34, 2 January 2012 (UTC)[reply]
Yes, you were misreading my question. I didn't ask whether there is an algorithm such that - if every given proposition is consistent - then the algorithm determines that its given proposition is consistent, but rather whether there is an algorithm such that - for every given consistent proposition - the algorithm determines that its given proposition is consistent; i.e. the algorithm determines that its given proposition is consistent - if that given proposition is really consistent.
For example, let's assume that the algorithm has two inputs, one of which is a consistent proposition, the other one being an inconsistent proposition; then the algorithm should determine that the first proposition is consistent. The algorithm is not expected to determine also whether the second input is a consistent proposition. You know, not every input must have an output: there are inputs for which a few algorithms don't halt...
84.228.187.129 (talk) 00:53, 3 January 2012 (UTC)[reply]
Validity is semidecidable from Gödel's Completeness Theorem, so for your inconsistent sentences, you could run this on their negations. Phoenixia1177 (talk) 13:00, 3 January 2012 (UTC)[reply]
Also, as an aside, you should tighten up your second part to say that it halts exactly when the input is consistent; currently, it sounds like you're saying only that it halts if the input is consistent. But if it also halts for some inconsistent proposition, then halting doesn't tell you that the proposition is consistent, just that it halted. Not trying to be an ass, just thought it was worth mentioning. Phoenixia1177 (talk) 13:34, 3 January 2012 (UTC)[reply]
Anyways, I haven't got an answer: Is there an algorithm which - for every consistent first-order proposition (with identity and predicate symbols) - determines that the proposition is consistent? 77.127.135.82 (talk) 00:24, 4 January 2012 (UTC)[reply]
What are you talking about? You have an answer: the inconsistent propositions cannot be satisfied, thus their negations are valid, and the valid formulas are, essentially, an RE set. So your consistent propositions would be, essentially, a coRE set. So, in short, no, there is no such algorithm for the consistent case. On an aside, you come off as kind of rude, perhaps, oddly, condescending. Phoenixia1177 (talk) 08:05, 4 January 2012 (UTC)[reply]

Is there an algorithm which - for every inconsistent first-order proposition (with identity and predicate symbols) - determines that the proposition is inconsistent?
77.124.12.169 (talk) 11:16, 28 December 2011 (UTC)[reply]

betting game
We throw a dice. If I get a 1, I lose all my money, but if I get any other number, I double my money. The laws of probability suggest that I should always continue playing, as it is more likely I will gain than lose. Yet obviously I will eventually get a 1 and lose all my money. I assume this is a case of gambler's ruin. So what's the best strategy? — Preceding unsigned comment added by 86.174.173.187 (talk) 15:39, 28 December 2011 (UTC)[reply]

This is a variant of the St. Petersburg paradox. The game has infinite expectation value, yet at some point it is absurd to continue playing. Sławomir Biały (talk) 16:14, 28 December 2011 (UTC)[reply]
Mathematically, though, the St. Petersburg paradox makes sense to pay any value for. Mathematically, here it is stupid to continue playing indefinitely. Anyway, the solutions to the St. Petersburg paradox don't help here. — Preceding unsigned comment added by 86.174.173.187 (talk) 17:50, 28 December 2011 (UTC)[reply]
Why is it "mathematically" stupid to continue playing indefinitely? You have a 5/6 chance of doubling your money with no risk but your initial investment. Surely "mathematically" the most rational thing to do is to continue playing the game. Actually, you would probably want to sell shares in this game. This hedges some of your own risk. Sławomir Biały (talk) 19:30, 28 December 2011 (UTC)[reply]
Note that there's a chance you could win more than all the money in the world. For example, the chance of getting over a trillion times your initial bet is around 0.07%. StuRat (talk) 20:47, 28 December 2011 (UTC)[reply]
This illustrates the same phenomenon as the St. Petersburg paradox. When the risk of losing the pot outweighs the benefit of doubling down, we would stop playing. But that depends on our individual utility functions. If we had linear utility functions, there would be no incentive ever to stop playing (which leads to a preposterous conclusion, of course). Sławomir Biały (talk) 21:21, 28 December 2011 (UTC)[reply]

There is a 100% chance you will eventually get a one and lose everything. — Preceding unsigned comment added by 86.174.173.187 (talk) 11:16, 30 December 2011 (UTC)[reply]

...and the game has infinite expected value. Sławomir Biały (talk) 01:42, 31 December 2011 (UTC)[reply]
Well, it would, except that you can't throw "a dice" anyway, there being no such thing. --Trovatore (talk) 01:47, 31 December 2011 (UTC) [reply]
On the contrary, my friend :): The OP geolocates to the United Kingdom, and in Commonwealth English the correct singular of 'dice' is in fact 'dice'; 'die' is an archaism outside of the States. Source: OED. 24.92.85.35 (talk) 17:47, 1 January 2012 (UTC)[reply]
I'm gonna go full-bore prescriptivist on this one: The Brits were wrong to make that change. A dice is a barbarism; makes my teeth itch. It's as bad as a criteria. --Trovatore (talk) 19:43, 1 January 2012 (UTC) [reply]
More mathematically, when you say the game has infinite expected value, presumably you mean that the strategy "play the game for n rounds or until you bust" has an expected value that goes to infinity as n goes to infinity.
However, it's probably more natural to read your statement as being the expected value of the strategy "play the game forever, or until you bust", and the expected value of that strategy is zero. The payoff is infinite if you're allowed to play forever, but the probability of that happening is zero, and in measure theory zero times infinity is generally taken to be zero. --Trovatore (talk) 04:47, 31 December 2011 (UTC)[reply]
Yes, I mean the first statement. The expected value tends to infinity as the number of rounds tends to infinity. The paradox here is that this would seem to imply that a risk-neutral party would play the game indefinitely, and go bust almost surely. Sławomir Biały (talk) 12:24, 31 December 2011 (UTC)[reply]
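(Numerically: each round multiplies the stake by 2 with probability 5/6 and wipes it out with probability 1/6, so after n rounds the expected bankroll is (5/3)^n while the probability of still being solvent is (5/6)^n. A minimal Python sketch of the trade-off:)

# Expected bankroll vs. survival probability after n rounds of the game:
# double with probability 5/6, lose everything with probability 1/6.
for n in (1, 10, 40, 100):
    expected = (5 / 3) ** n   # (5/6) * 2 per round; grows without bound
    survive = (5 / 6) ** n    # probability of never having rolled a 1
    print(n, expected, survive)

# At n = 40 the payout on survival is 2**40 (over a trillion times the
# stake) while the survival probability is about 0.00068 -- StuRat's 0.07%.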
Suppose someone offers you the opportunity to do this for a single throw. Unless you are extremely risk averse, you would probably play the game for that one throw: it represents an extremely good, but risky, investment. Your expected return is 166%, although there is some volatility to worry about. Now, if we're in for more throws, the volatility grows much faster than the return the more tosses you're in for, since it's always "all or nothing": this is the 100% chance that you refer to. One needs to come up with a reasonable model of a rational risk-averse person or market (capital asset pricing model or Harold Markowitz#Choosing the best Portfolio), and work out the variance of the process to maximize your indifference curve. (Also note that if you were allowed to reinvest the cash between turns, that would substantially reduce the risk, although it would likely be a poorer expected payout.) Sławomir Biały (talk) 03:02, 31 December 2011 (UTC)[reply]

Now, what is the conclusion of all this? That the mean value is only relevant when you can repeat the experiment? That gamblers and statisticians and economists are insane? That gambling is more fun than winning? Bo Jacoby (talk) 11:02, 5 January 2012 (UTC).[reply]