Wikipedia:Reference desk/Archives/Mathematics/2009 October 14

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 14

Constructor

A constructor has a 0.80 probability of making $70000, a probability of losing $20000 and a probability of breaking even. What is the expected value? —Preceding unsigned comment added by 24.0.239.50 (talk) 02:00, 14 October 2009 (UTC)[reply]

You have to know all three probabilities. See expected value#Examples for some examples of how to compute expected value. 66.127.53.204 (talk) 02:38, 14 October 2009 (UTC)[reply]
Although you can find a range for the expected value. At the low end, breaking even has a probability of 0, and at the high end, it has a probability of 0.20. Rckrone (talk) 04:52, 14 October 2009 (UTC)[reply]
If you assume that the three cases you give are all the possibilities, you can use the fact that the probabilities must then sum to 1 to give the expected value as a function of one of the two probabilities that are not given. Taemyr (talk) 05:05, 14 October 2009 (UTC)[reply]
Probably a bit much for this, but for a constructor doing one project at a time it can make more sense to use the geometric mean, rather than the usual arithmetic one, to give the expected net worth. This is explained in Rate of return. Dmcq (talk) 09:19, 14 October 2009 (UTC)[reply]
I'm not sure how the geometric mean applies there. I could imagine wanting to find the expected value of a joint probability distribution of the profit/loss across a bunch of different projects, but that's more complicated than the geometric mean. Alternatively, see Kelly criterion for an approach to treating each project as a straight gamble and deciding how much to bet on it given a specified bankroll. 66.127.53.204 (talk) 10:13, 14 October 2009 (UTC)[reply]
Yes, that was what I was saying: if the constructor does one project at a time, that gives a more useful estimate of the value. I don't know why you say that and then say you're not sure how the geometric mean applies there. Dmcq (talk) 21:50, 14 October 2009 (UTC)[reply]
There is some missing data here. What is the probability of tightening $20000? -- Meni Rosenfeld (talk) 21:05, 14 October 2009 (UTC)[reply]

If x, (0≤x≤0.20), is the unstated probability of losing $20000, then the expected value is μ = 0.80·$70000−x·$20000. So $52000≤μ≤$56000. The geometric mean exp(0.80·log(70000)+x·log(−20000)+(0.20−x)·log(0)) makes no sense to me. Bo Jacoby (talk) 08:09, 16 October 2009 (UTC).[reply]
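Bo Jacoby's range above can be checked with a short sketch (plain Python; the 0.80 win probability and the $70000/$20000 figures are taken from the question):

```python
# Expected value as a function of the unknown probability x of losing $20000,
# with the remaining probability 0.20 - x assigned to breaking even.
def expected_value(x):
    return 0.80 * 70000 - x * 20000

low = expected_value(0.20)   # all remaining probability on the loss
high = expected_value(0.0)   # all remaining probability on breaking even
print(low, high)  # → 52000.0 56000.0
```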

The constructor would start off with some amount of money like $60000 and end up with, say, $130000 or $40000. If the constructor started with $20000 or less then bankruptcy is a possible outcome. Since bankruptcy stops any gain for years and years, the value of a deal where that is a possible outcome should be counted as zero; in fact one would count it as some low amount, as there is always a way back from it. Dmcq (talk) 01:53, 17 October 2009 (UTC)[reply]
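Dmcq's point can be illustrated with a small sketch. The probability of the losing outcome is not stated in the question, so 0.2 below is purely an assumed value for illustration:

```python
import math

# Start with $60000; end with $130000 (p = 0.8) or $40000 (assumed p = 0.2).
outcomes = {130000.0: 0.8, 40000.0: 0.2}

arithmetic = sum(w * p for w, p in outcomes.items())
geometric = math.exp(sum(p * math.log(w) for w, p in outcomes.items()))

# The geometric mean penalizes the downside more heavily than the arithmetic
# mean, which is why it can be the more useful figure for a business running
# one project at a time.
print(arithmetic, geometric)
```

Note that if an outcome of $0 (bankruptcy) had positive probability, the geometric mean would be driven to zero, which matches the remark above.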

Polynomial ideals

Hi. I've got this problem from an Algebraic Geometry book:

Consider a polynomial f in C[x,y] and the ideal I = (f, ∂f/∂x, ∂f/∂y). Compute the dimension of C[x,y]/I as a complex vector space.

So, I've done the problem, and I'm pretty sure the answer is 3, but I think there's something I'm missing. I just used the 3 polynomials to compute a Gröbner basis, and although the process is slightly arduous, the basis turns out to be quite simple. From that, I can see that the only 3 monomials that aren't in the ideal of leading terms are 1, x and y.

Here's my question: Is there any simpler way to do this? In particular, is it possible to get any mileage from the fact that the polynomials generating the ideal are one polynomial and its two first-order partial derivatives? I don't know why the book author would pose the question that way unless there's some way to use that information. I hope this question makes sense. -GTBacchus(talk) 03:02, 14 October 2009 (UTC)[reply]

Isn't xy also not in the ideal of leading terms, making the vector-space dimension 4, or am I getting confused here?
I can't imagine any easier way to tackle this problem. It feels like exactly the kind of computational problem Gröbner bases are meant for. When f is a polynomial in one variable and I see the ideal (f, f′), my instinct is to look for multiple roots of f, but I don't see that helping you any here. What strikes me as odd is that this problem seems to have hardly any connection with algebraic geometry. Since the ideal I isn't radical, the space C[x,y]/I is not isomorphic to the affine coordinate ring (or whatever it's called) of Z(I), as Z(I) is the single point (0, 0), which has affine coordinate ring C.
Out of curiosity, what book are you using?
(Disclaimer: I am very much a novice at algebraic geometry and could easily be overlooking a deeper connection.) Eric. 131.215.159.109 (talk) 05:04, 14 October 2009 (UTC)[reply]
Thanks for catching my silly error. There are 3 quadratic monomials in C[x,y], as you say. (There are also 3 types of people: those who can count, and those who can't. D'oh!)

The book is Introduction to Algebraic Geometry, by Brendan Hassett (ISBN 978-0-521-69141-3). I think I see what you mean about the lack of interesting geometry in the problem. We're in chapter 2, which is just introducing Gröbner bases; varieties and coordinate rings are defined in chapter 3. -GTBacchus(talk) 05:45, 14 October 2009 (UTC)[reply]
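The Gröbner-basis computation discussed above can be scripted. The book's actual polynomial did not survive in the archived text, so f = x³ + y³ below is only a hypothetical stand-in chosen to give a small worked example; a sketch with sympy:

```python
from itertools import product
from sympy import symbols, diff, groebner, Poly

x, y = symbols('x y')

# Hypothetical stand-in polynomial (the original f was lost from the archive).
f = x**3 + y**3

# Groebner basis of I = (f, df/dx, df/dy), the ideal from the question.
G = groebner([f, diff(f, x), diff(f, y)], x, y, order='grevlex')

# Leading exponent tuple of each basis element (monoms() sorts leading-first).
leads = [Poly(g, x, y).monoms(order='grevlex')[0] for g in G.exprs]

def is_standard(m):
    """A monomial is standard iff no leading monomial of G divides it."""
    return not any(all(mi >= li for mi, li in zip(m, l)) for l in leads)

# The standard monomials form a C-basis of C[x,y]/I; the ideal here is
# zero-dimensional, so a small degree box suffices for the search.
std = [m for m in product(range(8), repeat=2) if is_standard(m)]
dim = len(std)
print(dim)  # → 4 for this choice of f (standard monomials 1, x, y, xy)
```

For this f the basis reduces to {x², y²}, so the standard monomials are 1, x, y, xy, mirroring the dimension-4 answer reached in the thread.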

Ah, that explains it. Skip ahead to the interesting stuff. My own book (Robin Hartshorne's Algebraic Geometry) has the opposite problem -- it's ridiculously dense, I can scarcely make any headway at all. Otherwise a good book though. Eric. 131.215.159.109 (talk) 05:51, 14 October 2009 (UTC)[reply]
moreover he can play shakuhachi... ;-) --pma (talk) 20:35, 16 October 2009 (UTC) [reply]

Newton's Method

If I'm trying to use Newton's method to find the zero of a function over an interval, is there any way I can get an upper bound on the number of iterations I'll need to use to get to within a fixed accuracy? Like, if I were to use the bisection method on an interval [a,b], and I wanted to get the zero to within 10^-5, I could just solve |b - a|/2^n <= 10^-5, and that would tell me the maximum number of iterations n I'd need to use; I want something like that for Newton's method. I've looked at the Wikipedia page, but don't see anything like that. Deadlyhair (talk) 04:21, 14 October 2009 (UTC)[reply]
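The bisection count the question describes can be computed directly; a quick sketch, with [0, 1] and 10^-5 as sample values of my own choosing:

```python
import math

# Solve |b - a| / 2**n <= tol for the smallest integer n.
a, b, tol = 0.0, 1.0, 1e-5
n = math.ceil(math.log2((b - a) / tol))
print(n)  # → 17, since 2**17 = 131072 >= 1e5
```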

It depends on the behavior of the function and its derivatives. You can even concoct examples where it oscillates forever between two values or stays stuck on one value: see Newton's_method#Counterexamples. But for reasonable functions where the first derivative doesn't change sign between guesses, it converges much faster than bisection. There's tons of analysis of its rate of convergence in numerical analysis texts. 66.127.53.204 (talk) 04:48, 14 October 2009 (UTC)[reply]
If you really need an upper bound on the number of steps then it's best to just stick with the bisection method. Even in cases where you know Newton's method converges, it can converge arbitrarily slowly.--RDBury (talk) 13:13, 14 October 2009 (UTC)[reply]
The usual assumption for the Newton method on an interval is that the function be smooth and convex (or concave) in the interval. In that case one gets a quadratic rate of convergence (so n iterations give an error less than exp(−C·2^n)). Usually one starts with the bisection method (which has linear convergence, i.e. error less than exp(−Cn)), until one gets close enough to the zero that the conditions for the quadratic convergence of the Newton iteration hold, and then one switches to it. --pma (talk) 19:40, 14 October 2009 (UTC)[reply]
I'm fairly certain this is actually exp(−C·2^n). See also my response below. -- Meni Rosenfeld (talk) 20:59, 14 October 2009 (UTC)[reply]
Yes sorry you (and 195) are right! --pma (talk) 21:24, 14 October 2009 (UTC)[reply]
Just to give you an idea of what the bound may look like: if f is sufficiently smooth and x_0 is sufficiently close to the root, then ε_{n+1} ≈ C·ε_n², so ε_n ≈ (C·ε_0)^(2^n)/C. Solve for n to get your bound; the important part of the result will be n ≈ log₂ log(1/ε), where ε is your error goal. -- Meni Rosenfeld (talk) 20:59, 14 October 2009 (UTC)[reply]
Newton's method has local quadratic convergence. Quadratic means that (roughly) the number of correct decimal places doubles with every iteration, and local means that this only holds once you are close enough to the solution. What "close enough" means is unfortunately not quite as easy to determine. You do *not* need concavity or convexity for Newton's method, but a zero slope at the intersection with the x-axis will harm you (giving only linear convergence). There are many ways to make Newton's method globally convergent, but they will all result in a linear convergence rate away from the solution and a quadratic rate close to it. So an estimate based on the hybrid bisection/Newton method suggested above is likely about the best you can expect. 195.128.250.90 (talk) 21:08, 14 October 2009 (UTC)[reply]
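The "doubling of correct decimal places" described above is easy to watch numerically; a sketch on f(x) = x² − 2 (a sample function of my own choosing, not from the thread), started close enough to √2:

```python
import math

f = lambda x: x * x - 2.0
fprime = lambda x: 2.0 * x

x = 1.5  # close enough to sqrt(2) for local quadratic convergence
errors = []
for _ in range(5):
    x = x - f(x) / fprime(x)
    errors.append(abs(x - math.sqrt(2)))

# Each error is roughly the square of the previous one, i.e. the number of
# correct decimal places roughly doubles per iteration until machine
# precision is reached.
print(errors)
```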

The short answer to the OP's question is 'no'. If he is in the lucky situation where bisection works (continuity and a change of sign over an interval), and he wants an upper bound on the number of iterations needed, then why use Newton's method? Bisection has better worst-case behavior than Newton's method. Bo Jacoby (talk) 16:13, 15 October 2009 (UTC).[reply]

Thank you all for your help. Deadlyhair (talk) 17:23, 16 October 2009 (UTC)[reply]

Russell's paradox

I am browsing some articles about paradoxes. Can Arthur Prior's resolution of the liar paradox "This statement is false", namely that every statement includes an implicit assertion of its own truth, be applied to Russell's and other related paradoxes, such as the barber paradox and the Grelling–Nelson paradox?

I think it would be "Every set silently includes itself as a member," or something, for Russell's paradox. And if so, what are the weak points of this way of resolution? Like sushi (talk) 12:42, 14 October 2009 (UTC)[reply]

Your proposal can't help: Just think about whether Russell's set contains itself as a member. If it does - as you propose for every set (hence for Russell's set as well) - then that contradicts Russell's definition of the set, as a set which doesn't contain any set containing itself as a member. HOOTmag (talk) 13:08, 14 October 2009 (UTC)[reply]
I think what the article "Russell's paradox" has is "a set containing exactly the sets that are not members of themselves" (Is it the same?). What I think about it is (as in the case of the liar paradox): if every set silently contains itself, "the sets that are not members of themselves" should be rewritten as "the sets that are not members of themselves and are members of themselves", which do not exist. So the set does not contain anything (But as every set silently contains itself, it does contain itself; or if "exactly" means not even itself, it does not exist).
Like sushi (talk) 02:02, 15 October 2009 (UTC)[reply]
While such an inclusion would resolve the problem of Russell's paradox, i.e. it would directly tell us that the set in question does not exist, it will not resolve the implications of Russell's paradox, since we already knew that Russell's set doesn't exist. The problem is that there isn't any direct way to go from the description of a set to determining whether that description corresponds to a set that can exist. While your resolution would resolve Russell's set, it would not resolve all such paradoxical sets, or if it does, it would dramatically reduce the expressive power of our language. This is because of Gödel's incompleteness theorems. Taemyr (talk) 04:56, 15 October 2009 (UTC)[reply]
Also, I think that either your proposed resolution would be unsound, or it would break stuff, depending on what you mean by the inclusion being silent. Either you end up saying that the empty set contains a member, i.e. itself, or you say that "every set is a member of itself" is an assertion ∀S. S∈S without actually including the relevant element. Taemyr (talk) 05:04, 15 October 2009 (UTC)[reply]
I think the first of Gödel's incompleteness theorems may be dealt with in a similar way.
The part that seems to be the explanation of it is:
"For each consistent formal theory T having the required small amount of number theory, the corresponding Gödel sentence G asserts: “G cannot be proved to be true within the theory T”. If G were provable under the axioms and rules of inference of T, then T would have a theorem, G, which effectively contradicts itself, and thus the theory T would be inconsistent. This means that if the theory T is consistent then G cannot be proved within it. This means that G's claim about its own unprovability is correct; in this sense G is not only unprovable but true. Thus provability-within-the-theory-T is not the same as truth; the theory T is incomplete.
If G is true: G cannot be proved within the theory, and the theory is incomplete. If G is false: then G can be proved within the theory and then the theory is inconsistent, since G is both provable and refutable from T."
If we assume that every sentence silently asserts itself to be provable to be true,
G should be rewritten as “G can be proved to be true and cannot be proved to be true within the theory T”.
Then it says "A and not A", and is false. That means it is provable to be false. And G is false for the italic part in "G can be proved to be true and cannot be proved to be true within the theory T" (though it is false in the "silent part").
Like sushi (talk) 10:06, 15 October 2009 (UTC)[reply]
When you say "That means it is provable to be false", you are assuming that T is complete - that T can prove all true statements such as the statement "G is false". But this is the whole point of Gödel's first incompleteness theorem - if T were complete then we can prove that G is true and at the same time we can prove that G is false, therefore T is inconsistent. Conversely, if T is consistent, then T cannot be complete. Gandalf61 (talk) 10:22, 15 October 2009 (UTC)[reply]
Not so; we do not need a complete system in order to prove that "not(A and not A)" is a theorem, even without any information on what A is. The problem is that the statement "G and G is provable in T" is a stronger statement than just "G". So by assuming that every statement includes the addendum "and this is provable in T", we get into a situation where we don't just have to prove the statement, but we also have to prove the provability of the statement. Which is easy in the meta-language: you just produce the proof. But that is the meta-language, which is a different system. Taemyr (talk) 10:36, 15 October 2009 (UTC)[reply]
Also note that the statement Gödel produces is a statement that is equivalent to G; it is not G itself. The common thought at the time was to avoid paradox by having systems that were incapable of referring to themselves. One of the things Gödel notes is that this is impossible if the system is capable of stating any truths about arithmetic. The actual statement G is purely an arithmetic statement. It just happens to be constructed in such a way that G is true if and only if G cannot be proven. Taemyr (talk) 10:49, 15 October 2009 (UTC)[reply]
@Like Sushi, look: Russell's paradox assumes the following intuitive premise: For every definition, there exists the set containing just the elements satisfying that definition. Having assumed that, Russell found a definition which doesn't let the corresponding set exist! This is Russell's paradox: assuming the existence of such a set is a contradiction! HOOTmag (talk) 11:51, 15 October 2009 (UTC)[reply]
The empty set does not include itself as a member, silently or otherwise. — Carl (CBM · talk) 11:44, 15 October 2009 (UTC)[reply]
I included that as an example of how the statement "Every set includes itself as a member" would break stuff. Taemyr (talk) 11:50, 15 October 2009 (UTC)[reply]
You're right, I missed it. Actually, this thread is making me think I should look into proposed "resolutions" of Russell's paradox. — Carl (CBM · talk) 12:15, 15 October 2009 (UTC)[reply]
This is a different way from what I proposed before. Can the first of Gödel's incompleteness theorems be considered in this way?
Once again, the part of explanation seems to be
"For each consistent formal theory T having the required small amount of number theory, the corresponding Gödel sentence G asserts: “G cannot be proved to be true within the theory T”. If G were provable under the axioms and rules of inference of T, then T would have a theorem, G, which effectively contradicts itself, and thus the theory T would be inconsistent. This means that if the theory T is consistent then G cannot be proved within it. This means that G's claim about its own unprovability is correct; in this sense G is not only unprovable but true. Thus provability-within-the-theory-T is not the same as truth; the theory T is incomplete.
If G is true: G cannot be proved within the theory, and the theory is incomplete. If G is false: then G can be proved within the theory and then the theory is inconsistent, since G is both provable and refutable from T."
Gödel sentence G asserts: “G cannot be proved to be true within the theory T”.
If G were provable (to be true), G would effectively contradict itself. So G cannot be proved (to be true). Then it can be true but unprovable, or is false. G can be "false but is provable to be true".
Like sushi (talk) 08:08, 16 October 2009 (UTC)[reply]
First of all, be very careful about what it means when you say that something is false within a formal system. Often the system is defined by what it means for something to be true or false. Gödel's incompleteness theorems can be resolved by allowing something to be true but unprovable. This means that the system is incomplete, which is a bad thing, but many feel it is a price we have to pay. It can also be false but provable, which usually means that the system is inconsistent. This can be very bad: for example, classical logic breaks down when faced with an inconsistency; having one, you can prove any statement. However, some feel that logical systems that can reason in a good manner even in the presence of inconsistency are useful when we are reasoning from assumptions that might be faulty. See paraconsistent logic for further treatment. However, even when we are operating in a paraconsistent logic, we do not actually want inconsistency. Taemyr (talk) 08:35, 16 October 2009 (UTC)[reply]
The variations of Russell's paradox seem to be of the form
The <V>er that <V>s all (and only those) who don't <V> themselves
The result is: if the <V>er <V>s itself, it does not <V> itself, and if the <V>er does not <V> itself, it does <V> itself. So in either case, whether it <V>s or does not <V>, it may be taken to <V> and not <V>. It is of the form "A and not A", so false. (It is already assumed that A, and the result is not A; this seems to be the way.)
If an assumption has some false statement as a result, it seems to be right to judge it false.
The <V>er that <V>s all (and only those) who don't <V> themselves
can be a false assumption in that the <V>er does not <V> all (and only those) who don't <V> themselves.
Like sushi (talk) 08:50, 16 October 2009 (UTC)[reply]
Yes, the barber described in the barber paradox does not exist. Russell's paradox is a paradox only if you assume that every set that can be defined must also exist. If you accept that it's possible to describe sets such that the description corresponds to no existing set, then there is no paradox. However, it raises the obvious question: when you describe a set, how do you know that a set satisfying your description exists? Taemyr (talk) 09:30, 16 October 2009 (UTC)[reply]
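The claim that no such barber can exist can even be checked exhaustively in a toy universe; a sketch (the two-person town is my own framing, purely for illustration):

```python
from itertools import product

# Try every possible "shaves" relation on a tiny town and check whether any
# of them satisfies the barber rule: the barber shaves x iff x does not
# shave x. The case x = barber forces shaves(b, b) == not shaves(b, b),
# which no truth value satisfies.
people = ["barber", "other"]
consistent_found = False
for rel in product([False, True], repeat=len(people) * len(people)):
    shaves = {(a, b): rel[i * len(people) + j]
              for i, a in enumerate(people) for j, b in enumerate(people)}
    if all(shaves[("barber", x)] == (not shaves[(x, x)]) for x in people):
        consistent_found = True
print(consistent_found)  # → False: no relation satisfies the barber rule
```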
Sorry. About the first of Gödel's incompleteness theorems, in saying "false but is provable to be true", I had an illusion that "provable to be true" includes "true". It must be otherwise, that "true" includes "provable to be true".
And I have no clue about how to answer "the obvious question", "how can we know that a described set exists?". Could you give me any hint about it, or examples of sets that cannot exist? (Please don't just link to ZFC or something; it is too complicated for me to understand the reason)
Like sushi (talk) 10:36, 16 October 2009 (UTC)[reply]
Provable usually implies true; otherwise it's usually seen as a problem with the proof system. However, if you want something to be false but provable, then either you must get rid of that implication, so you must accept that provable does not necessarily imply true, or you must accept that something can be both true and false.
Well, Russell's set, the set of all sets that are not members of themselves, is an example of a set that cannot possibly exist. Unless I am mistaken, it's not possible to create a general method to determine whether or not a proposed set exists. Taemyr (talk) 11:16, 16 October 2009 (UTC)[reply]
I have a suspicion about one of the things written in paraconsistent logic.
Having a single inconsistency, can we really prove any statement?
"if P and its negation are both assumed to be true, then P is assumed to be true, from which it follows that at least one of the claims P and some other (arbitrary) claim A is true. However, if we know that either P or A is true, and also that P is not true (that ¬P is true) we can conclude that A, which could be anything, is true"
My suspicion is about "if we know that either P or A is true, and also that P is not true (that is true) we can conclude that A,... is true".
P is not true, but is also true. so "P or A" is already true just with P, so A can be either true or false.
Like sushi (talk) 12:18, 16 October 2009 (UTC)[reply]
Disjunctive syllogism, that is, going from (A or B) and not B, to A, is valid in classical logic. So the inference you quote is valid in classical logic.
To make a system paraconsistent, it's fairly common to get rid of disjunctive syllogism, for precisely the reasons you quote. See Paraconsistent_logic#Tradeoff. The problem is that when you do so you obtain a weaker system, so you can prove less in it. Taemyr (talk) 12:31, 16 October 2009 (UTC)[reply]
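That disjunctive syllogism is classically valid can be verified by truth table; a minimal sketch:

```python
from itertools import product

# Disjunctive syllogism: from (A or B) and (not B), infer A.
# Classically valid iff no valuation makes both premises true and A false,
# i.e. A holds in every row where the premises hold.
valid = all(a
            for a, b in product([True, False], repeat=2)
            if (a or b) and not b)
print(valid)  # → True
```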
By the way, about Gödel's incompleteness theorems, I think saying "Being true does not guarantee provability" is the easiest way to present it to non-specialists.
Like sushi (talk) 12:45, 16 October 2009 (UTC)[reply]
Given that the theory T is consistent, G is unprovable and true. Does that not mean G has been proven? If it means that G has been proven, and T is consistent, there is an inconsistency.
Or if so, G would be provable to be true and unprovable to be true, "A and not A", so it is false. Then it must be false but provable to be true. —Preceding unsigned comment added by Like sushi (talkcontribs) 01:19, 17 October 2009 (UTC)[reply]
Like sushi (talk) 01:07, 17 October 2009 (UTC)[reply]
What you can prove, without assuming very much at all, is that if T is consistent, then G_T is true. This is not the same as proving that G_T is true, without the assumption that T is consistent.
Nevertheless, we know that for all consistent T, G_T is indeed true, and we know this precisely because we can prove the implication that if T is consistent then G_T is true.
This is more confusing to state than it is to understand — once you see it, it's completely straightforward, but it's still hard to say in a way that doesn't sound like you're pulling a fast one. --Trovatore (talk) 01:28, 17 October 2009 (UTC)[reply]
Is it that we don't know if T is consistent, so cannot prove G to be true?
Like sushi (talk) 02:16, 17 October 2009 (UTC)[reply]
That's a fair way of putting it for the general case, sure.
In specific cases, we may well "know" that T is consistent, for whatever value of "know" is being considered, and we will therefore also know that G_T is true. But we still won't be able to prove it from T. We may be able to formalize a proof of G_T, but it won't be a proof in T; it will be a proof in some theory of higher consistency strength. --Trovatore (talk) 02:21, 17 October 2009 (UTC)[reply]
"Given that the theory T is consistent, G is unprovable and true" is not a proof in T that G is true? (I think it may be so, for "T is consistent" is not in T.)
Like sushi (talk) 02:46, 17 October 2009 (UTC)[reply]
Exactly: this is not a proof in T that G_T is true, because given that T is in fact consistent, T cannot prove that T is consistent. That's the content of Gödel's second incompleteness theorem, and this exchange here is essentially the proof of the second incompleteness theorem. --Trovatore (talk) 03:09, 17 October 2009 (UTC)[reply]
Gödel's incompleteness theorem#Second incompleteness theorem does not have enough explanation of why "given that T is in fact consistent, T cannot prove that T is consistent".
Why cannot it?
Like sushi (talk) 06:29, 17 October 2009 (UTC)[reply]
For exactly the reason you intuited in your message of 02:46! If T could prove T is consistent, then since T can also prove that, if T is consistent, then G_T is true, it would follow (by modus ponens) that T could prove G_T is true. But we know that, if T is in fact consistent, then T cannot prove G_T. --Trovatore (talk) 06:40, 17 October 2009 (UTC)[reply]
Well, thank you. I think I understand it now.
Like sushi (talk) 07:19, 17 October 2009 (UTC)[reply]
About "Having a single inconsistency, can we really prove any statement?"
Again, in paraconsistent logic:
"if P and its negation are both assumed to be true, then P is assumed to be true, from which it follows that at least one of the claims P and some other (arbitrary) claim A is true. However, if we know that either P or A is true, and also that P is not true (that ¬P is true) we can conclude that A, which could be anything, is true"
My suspicion this time is about "at least one of the claims P and some other (arbitrary) claim A is true". Can A not be either true or false, independently of P?
Like sushi (talk) 02:16, 17 October 2009 (UTC)[reply]
I would strongly advise you to forget about the concepts of "true" and "false" when thinking about paraconsistent logics and Gödel's theorem, at least at first. In a classical logic, every proposition P has an opposite ¬P. Ideally, for every proposition P, you want the formal system to prove either P or ¬P, but not both—i.e. you want it to "settle every question". Such a system is said to be consistent and complete. If neither P nor ¬P is provable for some P, the system is incomplete. If both are provable, it's inconsistent. What Gödel's theorem shows is that in any formal system that contains enough arithmetic, there's a proposition P that's provable if and only if ¬P is provable, so the system is either inconsistent or incomplete. That's all. The stuff about P being true or false is interpretation on top of that, it's not something that Gödel proved. In nonclassical logics this binary pairing of P and ¬P doesn't work any more because there's no dualizing operation that exchanges them. Instead of going P ⇒ ¬P ⇒ P ⇒ ¬P ⇒ ⋯, negation goes P ⇒ ¬P ⇒ ¬¬P ⇒ ¬¬¬P ⇒ ⋯. So you no longer have a black-and-white true-and-false world of yes-no questions. Just because you have P and ¬P doesn't mean you've necessarily covered all the bases. -- BenRG (talk) 10:55, 17 October 2009 (UTC)[reply]
For paraconsistent logics, which really have no interpretation, it might make sense to forget about truth. If for no other reason than that, once you start thinking about truth, you'll realize that paraconsistent logic is bogus. (Oh, not completely bogus — it probably does model certain aspects of commonsense reasoning that deserve to be modeled — but bogus in the sense that it's not a useful view of mathematical reality.)
But I completely disagree with you that you shouldn't think about truth in order to understand the Gödel theorems. On the contrary, keeping track of what's actually true, separate from what's provable in T, is the most effective way to avoid getting confused. As to what "Gödel actually proved", you have to keep in mind the times — philosophers in the thirties tended to view truth with some suspicion, and it's quite likely that Gödel wanted to avoid that fight. --Trovatore (talk) 19:03, 17 October 2009 (UTC)[reply]
Sorry, the explanation was not sufficient. In saying "in paraconsistent logic" I just wanted to show the source. I wanted to link to the Definition section, which shows the "principle of explosion" (the explanation of which I quoted) in classical logic. The question in the post above is about that.
Like sushi (talk) 12:14, 17 October 2009 (UTC)[reply]

(Deindenting.) A logic is a set of rules for what constitutes a valid inference within that logic. In classical logic we are allowed to go from (A or P) and ¬P, to A. There really isn't anything more to say; if you create a system where disjunctive syllogism is an invalid inference, then you create a system that can prove less. This might be a tradeoff you are willing to make if you can't know that your assumptions are not contradictory. But it is a different system. Taemyr (talk) 14:24, 17 October 2009 (UTC)[reply]

I see now that it is: from P and not P, then P; and by disjunction introduction, from P, then P or A. So it is that whatever A is, it is true.
Thank you.

Like sushi (talk) 03:38, 18 October 2009 (UTC)[reply]

Lottery: 6 numbers out of 49, expected values in case of large jackpots

Let's assume that one lottery ticket costs $1. I can calculate the probabilities of matching 6, 5, 4 or 3 numbers (the cases in which prizes are paid). 40% of the money paid into the lottery is for prizes. 44% of the total prize fund goes to the main prize (for matching 6 numbers), 8% to the second prize (matching 5 numbers). There is a rollover/jackpot (if no one wins the main prize, the money goes to the next drawing). Is there a way to calculate when (in terms of the size of the jackpot) buying a lottery ticket gives an expected value greater than zero? —Preceding unsigned comment added by 213.158.197.100 (talk) 13:28, 14 October 2009 (UTC)[reply]

Sorry, I don't understand. Does 44% of the jackpot go to the first prize (i.e. all six numbers) and 8% of the jackpot go to the second prize (i.e. five numbers)? Why do you say that 40% goes to prizes and what are the percentages for four and three numbers? If you know the percentage of the jackpot awarded to three, four, five and six numbers then just calculate the size of the jackpot that makes the expected payoff more than $1. Zain Ebrahim (talk) 14:09, 14 October 2009 (UTC)[reply]

$100 - money paid for lottery tickets; $40 - total prize fund; $17.60 - money for the main prize (44% × $40); percentages for four and three numbers are not stated —Preceding unsigned comment added by 213.158.197.100 (talk) 14:44, 14 October 2009 (UTC)[reply]

Since the expected winning is a function of the relative amounts paid to the third and fourth prizes, the most I can see is a range of jackpots. Use the hypergeometric distribution to calculate the probabilities. The upper bound of the range will be when 48% of the prize fund goes to the third place (i.e. the less likely) and the lower bound when 48% goes to fourth place. Zain Ebrahim (talk) 15:16, 14 October 2009 (UTC)[reply]
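The hypergeometric match probabilities for a 6-of-49 draw can be computed directly; a sketch:

```python
from math import comb

# P(exactly k of your 6 numbers are among the 6 drawn) for a 6-of-49 lottery:
# choose k matches from the 6 drawn and 6-k non-matches from the other 43.
total = comb(49, 6)
prob = {k: comb(6, k) * comb(43, 6 - k) / total for k in range(7)}

for k in (3, 4, 5, 6):  # the prize-paying cases from the question
    print(k, prob[k])
```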
Ignoring the possibility of sharing the jackpot, it should be this easy: For each $1 you invest, you can expect to win back $.40, if there's no jackpot. So the expected jackpot winning must exceed $.60, which means that the jackpot must be at least $.60/P(6), where P(6) is the probability of matching 6 numbers. To take jackpot sharing into account, we would also need to know how many people are playing. —JAOTC 09:54, 15 October 2009 (UTC)[reply]
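JAO's break-even estimate works out as follows (a sketch; it ignores jackpot sharing and takes the question's $0.40 expected return per $1 from the non-jackpot pools at face value):

```python
from math import comb

# Each $1 ticket returns an expected $0.40 from the regular prize pools, so
# the jackpot's expected contribution must cover the remaining $0.60:
# jackpot * P(6) >= 0.60, with P(6) = 1 / C(49, 6).
p6 = 1 / comb(49, 6)          # probability of matching all 6 of 49 numbers
jackpot_needed = 0.60 / p6
print(round(jackpot_needed))  # → 8390290, i.e. about $8.4 million
```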
Also, the cumulative effect would seem to make your odds better when the bonus is built up to a higher level, but remember that this also causes many more people to play (and for each of them to buy more tickets), such that the prize is more likely to be split multiple ways (especially if you bet any type of common pattern that many others will bet, too). StuRat (talk) 14:55, 18 October 2009 (UTC)[reply]