Talk:Almost surely


"almost surely" != "probability one"[edit]

Pacerier (talk) 09:36, 9 March 2016 (UTC): ❝[reply]

Re (emphases verbatim): §

an event happens almost surely (sometimes abbreviated as a.s.) if it happens with probability one

The assertion cannot be found in the cited source:

Stroock, D. W. (2011). Probability Theory: An Analytic View (Second ed.). Cambridge University Press. p. 186. ISBN 978-0-521-76158-1.

The closest reference I found is at page , paragraph . However, the quote "with probability 1" applies only when  goes to infinity.
The assertion (emphases verbatim):

an event happens almost surely (sometimes abbreviated as a.s.) if it happens with probability one

—⁠is unsupported. It is also false, for "probability one" refers to "surely"[1]; not "almost surely". ("surely" does not equal "almost surely".[1])
(Hooking:—⁠.)

References

  1. ^ a b per application of common logic

Just to clarify: The probability1 redirect means Rudolf Carnap's notion "Probability1", which isn't related to the issue Pacerier raised. In contrast, probability 1 (with a space) redirects here, and is related. Per application of common logic, "an event of probability 1" (no index) refers to an event such that its probability is 1, in the first place. The latter property may be different from "an event that will happen in every case", as the picture at almost everywhere shows. Maybe, a similar picture would be helpful here, too. - Jochen Burghardt (talk) 10:27, 9 March 2016 (UTC)[reply]

asymptotically almost surely[edit]

The given definition is just nonsense. Citation: "p_n > \tfrac{(1+\epsilon) \ln n}{n}", where the right-hand side tends to zero as n \to \infty (did I get this right, asymptotically refers to n \to \infty?). A probability greater than zero is something we always have, hence the nonsense. But I do not know how to improve this section. — Preceding unsigned comment added by 192.41.132.103 (talk) 09:13, 13 August 2014 (UTC)[reply]
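For reference, a minimal statement of the usual definition, assuming the Erdős–Rényi random-graph setting that the quoted inequality appears to come from: a property Q holds asymptotically almost surely if

\lim_{n \to \infty} \Pr[\, G(n, p_n) \text{ has } Q \,] = 1.

The inequality p_n > \tfrac{(1+\epsilon) \ln n}{n} constrains the edge probability p_n, not the probability of the event itself; above that threshold \Pr[\, G(n, p_n) \text{ is connected} \,] \to 1 even though p_n itself tends to 0, so the right-hand side going to zero is not a contradiction.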

Probability 1 Redirect[edit]

What is that Probability 1 redirect at the top. That's pretty confusing. I thought it was vandalism at first, but the user seems like a proper Wikipedia contributor. Monsterman222 (talk) 10:41, 14 February 2014 (UTC)[reply]

I really didn't intend to vandalize. Rudolf Carnap coined the notion "probability1" and "probability2", and e.g. Solomonoff (cf. article "Solomonoff's theory of inductive inference#References", citation "A Formal Theory of Inductive Inference Part I", sect.2, p.2) used it. If someone read Solomonoff's article, (s)he might wish to look up in Wikipedia what "probability1" means, and "Probability 1" links to "Almost surely" (maybe that should be mentioned in the hatnote), so I made the hatnote in question. It redirects to Probability interpretations#References, where I put a corresponding remark in footnote 12 (maybe it is too difficult to find). - Jochen Burghardt (talk) 22:54, 14 February 2014 (UTC)[reply]
I changed the hatnote and the redirect target and hope it is less confusing now. - Jochen Burghardt (talk) 14:43, 16 February 2014 (UTC)[reply]
Pacerier (talk) 10:50, 9 March 2016 (UTC): ❝[reply]
Hooking #"almost surely" != "probability one".

Dart example[edit]

I just came across this article and I like it. I think it would be helpful for the example of hitting the diagonal of a square with a dart to point out that the setting described is very idealized in that it depends entirely on an idealized notion of space and spatial measurement: the width of the diagonal is assumed to be 0, and the width of the tip of the dart is likewise assumed to be 0, and the precision of determining where exactly the dart hit is assumed to be arbitrarily precise (i.e. infinite or zero); of course this is not the situation anybody could realize with a physical dart thrown at a physical square with a physically materialized diagonal. Almost surely the authors of this article are very much aware of this situation, but almost as surely some of the more unwary readers of the article are not. — Nol Aders 23:19, 26 December 2005 (UTC)[reply]

I tried a parenthetical comment there. If you can think of a better way to say it, or if you have anything else to add, feel free to edit the article yourself! -Grick(talk to me!) 08:00, 27 December 2005 (UTC)[reply]
Perhaps we should change the dart example entirely, to something more obviously numerical in nature? For example, how about picking an arbitrary number between 1 and 10, and the odds of it being exactly pi (rather than, say, any other irrational or rational number)? Scott Ritchie 21:41, 15 January 2006 (UTC)[reply]


I would just like to agree that this is a good article. A lot of the mathematics articles are hard to understand, but this one is interesting and accessible. Cheers —The preceding unsigned comment was added by 134.10.121.32 (talkcontribs).

I agree! Paul Haymon 10:24, 12 April 2007 (UTC)[reply]
I agree as well - very informative and enlightening.
"No other alternative is imaginable." Wrong, all it's protons could simultaneously decay, this event has non-zero probability. Therefore the probability of impact is below one. Ot that could happen to the square itself. — Preceding unsigned comment added by 11cookeaw1 (talkcontribs) 09:50, 9 June 2011 (UTC)[reply]


I think there is a flaw here. The article assumes that space is not quantized, thereby allowing the diagonal to have a zero area, and at the same time it assumes that space is quantized since it allows the possibility of a zero-point dart landing exactly on it. I think I can prove that if space can be divided infinitely, then the dart can, in fact, never land right on the diagonal. I have only a rude training in Mathematics, so if any mathematician has a comment on my statement, it would be very enlightening. 59.144.147.210 17:43, 15 November 2006 (UTC) Bhagwad[reply]

The dart will certainly land somewhere, right? Well, what is so special about the diagonal that sets it apart from the points you assume the dart can hit? There is no point in the square that is off limits or impossible to hit. You have to keep in mind that "impossible" and "probability 0" are not the same thing. "Probability 0" things happen all the time (like if you mapped your exact route to work today with infinite precision) but will almost surely never happen again. On the other hand, "impossible" events cannot happen at all. (Now, you could argue that some physicists have found evidence that space and time could in fact be discrete; and then you could argue that anything we can physically create is finite in every sense, and that the only infinite things are abstract mathematical notions.... then "almost sure" and "sure" would be equivalent... but that's not nearly as fun!) - grubber 19:48, 15 November 2006 (UTC)[reply]
Hmm. You're right. 59.144.147.210 04:26, 16 November 2006 (UTC) Bhagwad[reply]
I added a paragraph in an attempt to really drive this point home. Floorsheim (talk) 00:12, 4 March 2008 (UTC)[reply]
"There is no point in the square that is off limits or impossible to hit." Yes, there are. Every point is impossible for the hypothetical, 0-dimensional dart: it cannot "hit" any of the points. The article argues "Any such point P will contain zero area and so will have zero probability of being hit by the dart. However, the dart clearly must hit the square somewhere. Therefore, in this case, it is not only possible or imaginable that an event with zero probability will occur; one must occur.", but that's not true. There are an infinite number of points; the dart cannot land on any single point. This may seem ridiculous when you consider "the dart clearly must hit the square somewhere", but this argument is fallacious because it attempts to treat the scenario as something that might really occur. This can't be done; we are talking about a 0-dimensional dart that lands on a single 0-dimensional point. This scenario is exactly like attempting to randomly choose a single real number out of the set of all real numbers: you can't. The probability of randomly choosing any particular real number in this scenario is 0. You could say "but one number clearly has to be chosen, therefore an event with 0 probability must occur", but that assumes that you can randomly choose any real number, which you can't do. This is exactly why the "dart" cannot hit any point, and why this entire article section is incorrect. Jamesa7171 (talk) 04:02, 30 September 2010 (UTC)[reply]
It is a little confusing, but I think the article gets close to explaining the concept. The probability that the dart will land on any (given) point in the square is zero, but the probability that it lands on _a_ point in the square is one. Might it be an idea to include a statement to this effect? Tevildo (talk) 21:39, 14 November 2010 (UTC)[reply]
"The probability that it lands on _a_ point in the square is one." <--- it isn't, though, and that's the thing. The dart cannot actually hit any of the points, because the scenario is impossible. We are talking about a 0-dimensional dart tip striking a single 0-dimensional region of space, all inside a 3-dimensional world...compare with my "choosing a random real number" example above: these situations are just not possible.
"Almost surely" is a bad term, really. It implies that something will happen almost always, but not 100% of the time. However, almost surely = surely = 100% of the time. To see the distinction, you can think of "almost surely" as %. Jamesa7171 (talk) 22:25, 8 December 2010 (UTC)[reply]
That the dart will hit the square is given, which is to say the scenario defined it to be the case. If it helps, consider a perfect (idealized) sphere rolled on a flat but elastic surface with friction, such that it will stop. When it stops, one point will be at the top, but the odds of any given point out of the infinite number of points is "almost none". If you deny the axiom of choice, then you cannot create the theoretical starting conditions (throwing the dart or rolling the d∞). If you accept the axiom of choice, then you can simply pick one number out of the any of the sets, including the set of all real numbers. If the physical world does not take the axiom of choice, then predestination applies and quantum uncertainty is false, since mutable futures and inexact positions require that one possibility out of many be selected in some manner. That is not to say that knowing the exact future or measuring the exact position is possible. Treedel (talk) 05:58, 25 January 2011 (UTC)[reply]
Yes, we can always pick one real number out of the entire set, or a subset. But such a choice cannot be made randomly. In the dart scenario, we are assuming that all points have an equal probability of being hit. That scenario is just not possible, and saying it's possible because the scenario defines it to be thus is fallacious. The result of this is that "the dart must land somewhere" is a flawed statement. Your sphere example is not possible either, for the same reason. There are various geometric issues with these scenarios as well, as briefly mentioned in a previous post. To get past these, we can take a look at the other example given on this page, which ostensibly avoids these issues:
"The infinite sequence of all heads (H-H-H-H-H-H-...), ad infinitum, is possible in some sense (it does not violate any physical or mathematical laws to suppose that tails never appear), but it is very, very improbable. In fact, the probability of tail never being flipped in an infinite series is zero. Thus, though we cannot definitely say tail will be flipped at least once, we can say there will almost surely be at least one tail in an infinite sequence of flips."
This can be rewritten. Suppose we are playing a game. We flip a coin: if it lands on tails, you win; if it lands on heads, we re-flip the coin. What is the probability that you win? It is not "very, very probable, but indefinite", as the article says. It is definite, and it is 100%.
I am extending this argument and claiming that it is impossible to find a valid example for this article at all, because its notions of "almost surely" are incorrect. Again, almost surely is like 99.999…%, which is identical to 100%. Consider the two questions "If we indefinitely and repeatedly flip a regular die, what is the probability of us rolling at least one 6?" and "If we indefinitely and repeatedly flip a regular die, what is the probability of us rolling at least one 7?". The distinction between the two terms comes in that the first event would happen almost surely, whereas the second event would happen surely, yet they both have identical probabilities (100%). Jamesa7171 (talk) 08:31, 15 May 2011 (UTC) (written late at night, so likely worded poorly)[reply]
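For reference, a minimal calculation for the restated coin game, assuming a fair coin and independent flips (the win happening on the first tail):

\Pr[\text{win}] = \sum_{k=1}^{\infty} \Pr[\text{first tail on flip } k] = \sum_{k=1}^{\infty} 2^{-k} = 1.

The value 1 is exact; the single excluded outcome (heads forever) is still an element of the sample space, which is the distinction the article's terminology is trying to record.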
The dart must land somewhere! "This can be rewritten. Suppose we are playing a game. We flip a coin: if it lands on tails, you win; if it lands on heads, we re-flip the coin. What is the probability that you win? It is not "very, very probable, but indefinite", as the article says. It is definite, and it is 100%." Let's say you just choose a random infinite sequence: what's the probability of it not being that sequence? What's the probability of it being ANY sequence? — Preceding unsigned comment added by 11cookeaw1 (talkcontribs) 10:21, 9 June 2011 (UTC)[reply]
If you can produce a random infinite sequence, then I'll have to admit that its probability is zero. But until then, you must agree that the probability of any particular random sequence is, however small, not zero. The probability of hitting any particular "point" (which doesn't exist - its location is an infinite sequence - whether it's, e.g., ".500...", or "pi", or any random infinite sequence) with a "dart with a point tip" (which also can't exist) is absolutely impossible - since the so-called "point" has no area, and anyhow such a location can't be specified. "Infinite" means no end can be specified - it doesn't mean that some unending series exists. -lifeform (talk) 00:33, 8 July 2016 (UTC)[reply]
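For reference, a minimal measure-theoretic sketch of the model under discussion, assuming the dart position is described by the uniform (Lebesgue) measure P on the unit square S = [0,1]^2:

P(\{x\}) = 0 \text{ for every point } x \in S, \qquad P(\text{diagonal}) = 0, \qquad P(S) = 1.

Countable additivity applies only to countably many disjoint events, so P(S) = 1 neither follows from nor contradicts the uncountably many zero point-probabilities; whether this idealized model describes a physical dart is a separate question.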

No trouble at all with finite sets[edit]

I think it should be mentioned on this page that the need for perplexing terminology only arises if probability is defined as the limit of frequency. However, there is no need to define probability in terms of infinite sets (cf. Cox's derivation of probability theory, in his book "The Algebra of Probable Inference"). Given a finite set of propositions, probability 0 always implies a false proposition ("an impossible event" in your terms) and vice versa, and probability 1 always implies a true proposition ("a certain event"). If you wish to consider what happens with probabilities when a set of propositions (events) becomes infinite, you should pass to the limit in a well-defined fashion. "Well-defined fashion" requires specifying the operation by which you extend the originally finite set to approach infinity. Better yet, restrict yourself to finite sets of propositions in your applications and avoid the need for metaphysical terminology altogether.

For a thorough (but unfortunately difficult to understand) discussion of paradoxes which arise from the overeager introduction of infinite sets into considerations of probability, I refer you to Chapter 15 of Jaynes's book [1]. Jploski 14:39, 11 February 2007 (UTC)[reply]

I'm not quite sure what your point is. But in any case the standard, infinite set axiomatics of probability are quite good if you want foundations for things like Brownian motion. Charles Matthews 16:01, 11 February 2007 (UTC)[reply]
Two comments: Almost sure is a concept that is valid whether you define probabilities based on "limits of frequency" or from a purely mathematical/topological viewpoint. Second, "probability 0" and "impossible" are synonymous in countably infinite sets as well as finite. The issue only arises when we have a space that is larger. - grubber 17:22, 11 February 2007 (UTC)[reply]
I think the source of the confusion is mathematical rigour. This article, and many others on Wikipedia, is an intuitive explanation of very rigorous mathematical constructs; in this case, of something called a probability space. As such, this article provides intuitive insights to aid in the understanding of the concept, but ironically this article is quite immune from intuition. Every phrase in any mathematical article has an absolutely precise, non-vague meaning (at least they can be translated to predicate sentences in ZFC). This includes the phrase "almost sure." Unfortunately, "almost sure" already has an English, "normal" meaning, and so discussions can turn metaphysical without warning. At this point, those who are using the precise mathematical definition are talking about something entirely different than those who are using the "real-life," English meaning. What's worse, phrases like "infinitely thin" and "infinity" have no mathematical meaning until, well, we define them. And there are many pre-existing definitions that vary wildly from context to context (but every one of these definitions is precise and non-vague). Though we can give them "real-life" meanings, it would defeat the entire point of mathematical rigour and, unfortunately, the aim of this and all other mathematics articles. Though it may be argued that these articles bring mathematics to the wider public, and hence "real-life" meanings are therefore to be encouraged, it is wrong to conclude that these meanings are the mathematical definitions. It is dangerous to give the wider public the impression that mathematics is a vague subject and statements can fall into grey areas. Mathematics is an exact and absolutely precise language, discipline, and form of creativity. Even the undecidability results are rigorously proven. And come to think of it, this is absolutely a good thing. It's counterintuitive results like these that make mathematics beautiful---things that we don't see when we use our intuition. Perhaps these events that "almost surely" occur are some of those things. - weixifan 23:21, 29 March 2007 (UTC)[reply]
I think the article does a decent job of offering both intuition and rigor. Can you give an example of something rigorous that this article is missing? - grubber 04:28, 30 March 2007 (UTC)[reply]
Personally, I'd like it if the article mentioned that the probability of the dart hitting any specific point is zero, yet the dart obviously does land on a specific point. Eoseth 15:33, 14 April 2007 (UTC)[reply]

Preferred version[edit]

I like http://en.wikipedia.org/w/index.php?title=Almost_surely&oldid=101537841 better than the current version; I don't care for the huge change. Lilgeopch81 20:30, 12 February 2007 (UTC)[reply]

Almost surely there are no primes[edit]

"Almost surely" may produce misleading statements. For example, the fraction of prime numbers tends to 0 as the number of numbers under consideration goes to infinity. When considering all possible numbers then, the fraction prime is almost surely zero while initially the fraction of primes is nearly unity. The set of integers is not necessarily a good measure for the number of primes.

Reply to above: it's not quite valid to estimate the total number of primes by estimating their "proportion" in the set of integers, and by furthermore seeing what that fraction tends to as the total set tends to infinity. A similar process would lead to the erroneous conclusion that the set of integers is twice as large as the set of even integers, when in fact their cardinality is the same. Or, for that matter, that the set of integers smaller than 10 (or smaller than a googol, or whatever) has a cardinality of zero. These sorts of seeming paradoxes aren't the result of "almost surely" anyway. 72.72.210.38 (talk) 17:19, 12 October 2011 (UTC)[reply]
"Almost surely, a given number is not prime." is the true statement that I think was intended. Treedel (talk) 23:55, 12 October 2011 (UTC)[reply]

"Almost sure" versus "sure"[edit]

This section currently reads as follows:

The difference between an event being almost sure and sure is the same as the subtle difference between something happening with probability 1 and happening always. If an event is sure, then it will always happen. No other event (even events with probability 0) can possibly occur. If an event is almost sure, then there are other events that could happen, but they happen almost never, that is with probability 0.

This wording seems to suggest a sure event is only found in a sigma-algebra which consists only of the empty set and the sample space. Is that what's intended here? --Mark H Wilkinson (t, c) 17:46, 13 September 2007 (UTC)[reply]
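For reference, a minimal sketch of the two definitions as usually stated for a probability space (\Omega, \mathcal{F}, P):

E \text{ is sure if } E = \Omega; \qquad E \text{ is almost sure if } P(E) = 1, \text{ equivalently } P(\Omega \setminus E) = 0.

Nothing here restricts which \sigma-algebra \mathcal{F} is used; with Lebesgue measure on [0,1], for instance, the event [0,1] \setminus \{1/2\} has probability 1 without being the whole sample space.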

Pacerier (talk) 10:50, 9 March 2016 (UTC): ❝[reply]
Hooking #"almost surely" != "probability one".

On 'Tossing a coin', a coin can land on its edge. Heads or tails is far from a sure event, maybe not even 'almost sure', if one considers this article: [2]. —Preceding unsigned comment added by 75.15.205.162 (talk) 11:40, 25 September 2008 (UTC)[reply]

I believe darts sometimes miss dartboards as well. I'm afraid I don't have a link to an Ivy League website to back that up. H.G. 12:06, 25 September 2008 (UTC)[reply]

I am afraid that in the literature, even probabilists use the word "sure" when what they mean is only "almost surely". I think I will look for a reference and add that caveat. 98.109.241.146 (talk) 17:54, 5 June 2013 (UTC)[reply]

No mention of limits[edit]

There is a lot of talk of things being zero, rather than limiting to zero, which seems to be the crux of this issue; is there something that I am missing? "Sure" seems to be the concept of a zero probability, whereas "almost sure" seems to be the concept of a probability limiting to zero. 129.78.64.101 12:07, 24 September 2007 (UTC)[reply]

Not really. Probability zero doesn't mean 'cannot happen': that's the essential. Charles Matthews 12:35, 24 September 2007 (UTC)[reply]
Actually, the first poster is correct. FMT&A (the only citation covering a significant amount of this lengthy article) is very clear (starting p. 232) that "almost never" is used for asymptotic limits for a p(n)<<p0(n), and absolutely NOT for p(n)=0. The whole premise of this poorly cited article seems incorrect. — Preceding unsigned comment added by Gnassar (talkcontribs) 18:36, 10 September 2011 (UTC)[reply]
No, the first poster and you are incorrect. In the frequency interpretation, if f(n) is the frequency of success after n independent trials, then lim f(n) being zero means that p, the probability of success, exactly equals zero. The literature tends to be sloppy on this point, partly due to the fact that for practical scientific purposes it makes no difference. One well-known statistics teacher at Princeton told his class, something rejected at the 95% confidence level is a rare event, and "rare events never happen to me." I will look for a reference: there are plenty of professional probabilists, academically respectable, who say exactly the same thing as in this article. BTW, what is FMTandA? I never even heard of it. 98.109.241.146 (talk) 18:00, 5 June 2013 (UTC)[reply]
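For reference, a minimal statement of the limit fact being invoked, assuming independent trials with success probability p and indicator variables X_1, X_2, \dots:

\hat{f}_n = \frac{1}{n} \sum_{i=1}^{n} X_i \xrightarrow{\text{a.s.}} p \quad \text{(strong law of large numbers)},

so in the frequency picture the limiting frequency coincides with p itself (with probability 1): \lim_n \hat{f}_n = 0 corresponds to p being exactly 0, not to a probability that merely tends to zero.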
Pacerier (talk) 10:50, 9 March 2016 (UTC): ❝[reply]
Hooking #"almost surely" != "probability one".

Is this a correct explanation?[edit]

The dart one is fine and good, but I've been thinking of an explanation for the difference between "surely" and "almost surely" that will be even clearer, and I think I've got one. Can someone tell me if this is mathematically correct?

Consider an operation, F that has only one possible outcome, A. Operation F will surely result in A, no other result is possible. Now consider an alternative operation, G, that has two possible outcomes, A and B, but that outcome A occurs with 100% probability. In this case, G will almost surely result in A (or almost surely not result in B), because although it happens with 100% probability, another result is possible.

My questions are a) is this correct and b) can it be made even clearer? Sloverlord (talk) 14:52, 12 May 2008 (UTC)[reply]

It is definitely correct. The best interpretation of what it means for an event E in a sample space X (if you like, you can assume X is a measure space) to happen almost surely is that the measure of the complement of E (or the probability of the complementary event X-E) is 0 but E is a proper subset of X. Or in other words, the probability that E will not happen is 0, but it is possible that E can 'not happen'. Your example is perfectly fine and I think that it is quite clear. However, I think that the example given in the article is the best illustration of the definition, but your one is good too.

Topology Expert (talk) 08:43, 15 September 2008 (UTC)[reply]
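A minimal formalization of Sloverlord's example (the names \Omega_F and \Omega_G are just illustrative):

\Omega_F = \{A\}, \quad P_F(\{A\}) = 1 \qquad \text{(A is sure: no other outcome exists);}
\Omega_G = \{A, B\}, \quad P_G(\{A\}) = 1, \; P_G(\{B\}) = 0 \qquad \text{(A is almost sure: B is a possible outcome with probability 0).}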

Could it be the explanation of the all-heads possibility is simply wrong? Straddle1985 (talk) 10:38, 5 July 2010 (UTC)[reply]

However, if instead of an infinite number of flips we stop flipping after some finite time, say a million flips, then the all-heads sequence has non-zero probability. The all-heads sequence has probability 2^{−1,000,000}, thus the probability of getting a tails is 1 − 2^{−1,000,000} < 1, and the event is no longer almost sure.

I'd say the probability of getting a million heads is just as likely as getting 500,000 times heads and 500,000 times tails ... That probability doesn't change one inch depending on how many times you'd throw it. I think you can safely assume it is likely you'll get a head 1/2 of the times you'd throw the coin, but you can't state the probability is definitely lower. In numbers: P(all heads) = P(all tails) = P(any other combination | the same number of tosses). The probability is just 2^-n (with "n" being the number of throws). Source: Evidence Based Technical Analysis, David Aronson (professor)

The probability of getting 500000 times heads and 500000 times tails is definitely way larger than 2^-1000000. The probability of getting first 500000 times heads and then 500000 times tails is 2^-1000000 (and the same, of course, for any other specified order). The total probability of getting 500000 of each, in any order, is \tbinom{1000000}{500000} \cdot 2^{-1000000} (see binomial distribution). 85.226.204.72 (talk) 10:55, 7 July 2010 (UTC)[reply]
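A short numerical check of the reply above, working on a log10 scale so the huge numbers never underflow (a sketch; the helper name log10_prob_exact_heads is just illustrative):

import math

def log10_prob_exact_heads(n, k):
    # log10 of C(n, k) * 2^(-n): probability of exactly k heads in n fair flips,
    # computed via log-gamma so the binomial coefficient never overflows
    log_binom = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return (log_binom - n * math.log(2)) / math.log(10)

n = 1_000_000
print(log10_prob_exact_heads(n, n // 2))  # about -3.1, i.e. probability roughly 8e-4
print(-n * math.log10(2))                 # about -301030, the log10 probability of all heads

So 500000 heads in some order is roughly 301,027 orders of magnitude more likely than the one specific all-heads sequence.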

Meaning in practice[edit]

The practical meaning of almost surely is: If E is a potential future event and it will almost surely happen and one has explicitly asked the question whether E will occur, then it will occur. JRSpriggs (talk) 05:39, 26 May 2008 (UTC)[reply]

Can you (or someone else) please elaborate on how this meaning arises? It's not clear in the article why this distinction is necessary, in practice. In what cases would it have any practical implications (other than some math teacher getting upset) to state that an event happens "surely" when its probability is one? --217.157.165.109 (talk) 15:34, 31 August 2011 (UTC)[reply]

Possible mistake?[edit]

In the section "Tossing a coin" it says that "1/infinity = 0". I'm not a mathematician, but isn't division by infinity impossible? Shouldn't it be \lim_{n \to \infty} \tfrac{1}{n} = 0? --Jak86 (talk) 04:20, 6 December 2008 (UTC)[reply]

Whether or not it's impossible is really a question of definitions. Defining 1/infinity to be 0 typically creates no problems, and it's done in the extended real number line, the real projective line or the Riemann sphere. (Defining it to be anything but 0 would be begging for trouble.) -- Jao (talk) 18:43, 12 January 2009 (UTC)[reply]
I agree with Jao. Another thing, and I don't know how correct this is. In physics, when I see an \infty in some equation, I often assume that it is implying a limit. So, reading \tfrac{1}{\infty} = 0, I would have read that as being what you suggested it ought to be.

Monsterman222 (talk) 10:45, 14 February 2014 (UTC)[reply]

What you really can't do is subtract one infinity from another, or divide it by another, or divide 0 by 0 (except as a value of a function within a limit expression, where there's the possibility of applying l'Hôpital). Otherwise, using 1/0 = \infty and 1/\infty = 0 is unproblematic and often done.--131.159.0.47 (talk) 19:45, 10 July 2014 (UTC)[reply]

So infty * 0 = 1? 92.23.184.194 (talk) 13:58, 28 July 2017 (UTC)[reply]


"(even events with probability 1)"[edit]

Regarding

"If an event is sure, then it will always happen. No other event (even events with probability 1) can possibly occur." (my emphasis),

This sounds to me like it's possible to have two mutually exclusive events A and B such that A is sure to happen and P(B) = 1. I don't have to be a measure theory expert to see that that's impossible. Should we simply remove that confusing parenthesis? —JAOTC 10:02, 13 February 2009 (UTC)[reply]

It may seem implausible, but the fact of having mutually exclusive events each of which have probability 1 is a direct consequence of the convention of assigning probability 1 to both events that are sure to happen and events which will almost surely happen. For example, suppose there is an infinite lottery to take place, which will be done by selling an infinite number of tickets to different people, such that there is one lottery ticket for each positive integer. After distributing all those tickets, one ticket will surely be drawn and declared the winner. For any particular ticket, the probability of that ticket being the unique winner chosen once a selection is made at random from a uniform distribution is 0. Thus the probability of at least one ticket being picked is equal to the limit as x goes to infinity of the summation from i = 1 to x of 0, which sums to zero. The complement gives the probability that no ticket will be picked as 1. So a ticket is sure to be picked, but almost surely no ticket will be picked. So if E is the event of at least one ticket being a winner, P(E) = 1 and P(not E) = 1. —Dromioofephesus (talk) 19:41, 5 April 2018 (UTC)[reply]

Back to the flight of the dart[edit]

I'm no mathematician, and I'm not sure about the throwing of the dart and the almost sure result that it hits the target. The target cannot be the only thing in that universe, as if it were, there would be no dart, no-one to throw the dart, and no oche for the thrower to stand at (if he/she existed anyway). Given the necessity for these additions to the target's universe, one finds that there is a possibility that when the dart is thrown, it is impelled with insufficient force to reach the board. (Even happens in championships - but almost never...) I realise that this is an example for the masses, but it worries me (a little). Peridon (talk) 17:03, 26 September 2011 (UTC)[reply]

The premise that the dart must hit the board is no more unreal than the premise that the dart has a one-dimensional head, and thus hits the board at exactly one point. If we permit the dart to have a cross-sectional area, even one as large as an iron atom, then the chance of including a specific point becomes ε, with σ being the cross-sectional area of the dart. Treedel (talk) 00:04, 13 October 2011 (UTC)[reply]
I also take issue with the dart analogy. What's to stop the thrower directing the dart /away/ from the board? What's to stop the thrower throwing the dart into his foot? — Preceding unsigned comment added by 61.14.115.68 (talk) 00:31, 30 January 2013 (UTC)[reply]

Contradiction[edit]

What is the numerical probability for "Almost surely"? If it is not exactly one, then this article contradicts the 0.999... article. Mr. Anon515 01:29, 13 October 2011 (UTC)[reply]

Then you might be thankful that the numerical probability for "Almost surely" is exactly one, as this article makes clear in its opening sentence. 85.226.206.18 (talk) 10:26, 5 November 2011 (UTC)[reply]
Pacerier (talk) 10:50, 9 March 2016 (UTC): ❝[reply]
Hooking #"almost surely" != "probability one".

"Rigorous definition using measure theory"[edit]

I have recently removed the following text from the article, as I think it is very unclear what it is about. In fact, it seems like it has been deliberately obfuscated. On Wikipedia, our goal isn't to make things unreadable. On the contrary, it is to make them as accessible as possible to a general audience (WP:MTAA). Recently a source was added, but the page number was wrong (it was to the index of that book). I modified the text and page reference so that it at least agrees with the cited source. Sławomir Biały (talk) 12:45, 10 December 2012 (UTC)[reply]


A rigorous definition from the measure-theoretic perspective is that since  is a measure over

Where . i.e. the set of all null sets relative to probability measure over the Algebra , and some index [1]

In other words, there may be outcomes in , which are not in . Yet despite the existence of such outcomes, the events they belong to are in some sense negligible, and have measure 0, as far as  is concerned (if unmeasurable, assign measure 0). I.e., events in  containing these null sets occur almost never, whilst the event  occurs almost surely.

References

  1. ^ Jacod, Jean; Protter, Philip (2004). Probability Essentials. Springer. p. 254. ISBN 978-3-540-43871-7.

Low-importance statistics articles?[edit]

https://en.wikipedia.org/wiki/Category:Low-importance_Statistics_articles This is an astounding way (a bad way) to classify this article. Anytime that one defines a continuous probability density (a distribution), this instantly brings about the possibility of events of zero probability. Among other things, it follows directly from the properties of the Riemann integral. 47.215.211.115 (talk) 11:14, 12 January 2017 (UTC)[reply]
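For reference, the fact alluded to, in symbols, assuming a random variable X with a continuous density f:

P(X = a) = \int_{a}^{a} f(x)\, dx = 0 \quad \text{for every value } a, \qquad P(X \in \mathbb{R}) = \int_{-\infty}^{\infty} f(x)\, dx = 1,

so every individual outcome is a probability-zero yet not impossible event, which is exactly the situation this article is about.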

Almost surely vs surely[edit]

A significant portion of the current article is dedicated, essentially, to defending the historical choice of the terminology "almost surely" against the possible view that "surely" would have been better choice of terminology for the same concept.

This issue has reasonable arguments to be made on both sides, but it doesn't seem like a suitable task for the wiki. I think examples like the square diagonal should be given, but only to illustrate the mathematical definition, as opposed to (as is the case currently) being used to argue that these scenarios are better described in English by the words "almost surely" instead of just "surely".

In any case, as it currently stands, I believe the sections constitute original research, since they do not cite sources discussing the merits of using "almost surely" or "surely" as the name for this concept. The article as it currently stands may also mislead readers into thinking that the word "surely" has a separate commonly used mathematical definition (which supposedly means about an event that it is equal to the sample space). Discrete (talk) 14:09, 24 April 2017 (UTC)[reply]


I have changed the title for this section and cleaned it up a bit. Discrete (talk) 18:22, 24 April 2017 (UTC)[reply]

Selecting a random number[edit]

I propose to delete this section. A probability of 0 cannot be assigned, as has been done in this section, to every positive integer n, because then \sum_n P(n) = 0. But, by the axiom of unitarity, \sum_n P(n) = 1. Hence, P(n) cannot be 0 for all n.--Jt512 (talk) 06:28, 14 November 2017 (UTC)[reply]

I agree. Another reason is that there is a finite upper bound beyond which a number is too long to be denoted, in any system, during a human's life time. (I naively assume that "guessing" includes denoting in some form, while e.g. dart throwing does not.) - Jochen Burghardt (talk) 10:57, 14 November 2017 (UTC)[reply]
I have deleted the section accordingly. --Jt512 (talk) 23:42, 14 November 2017 (UTC)[reply]

Remark on finite sets incorrect[edit]

The remark "In probability experiments on a finite sample space, there is no difference between almost surely and surely." is misleading. It suggests (incorrectly) that in a probability space with finite \Omega a probability of 1 means that the event equals \Omega (or a probability of 0 means that the event equals \emptyset). There is nothing in the axioms of a probability space which requires this. One can come up with this misleading sentence when viewing probability spaces only via the special case of Laplace experiments, which is way too limiting. — Preceding unsigned comment added by 139.30.119.241 (talk) 21:52, 2 November 2018 (UTC)[reply]
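A minimal example of the point being made (the two-point space is just illustrative):

\Omega = \{a, b\}, \quad P(\{a\}) = 1, \quad P(\{b\}) = 0.

Here \Omega is finite, yet the event \{a\} has probability 1 without being equal to \Omega; the quoted remark only holds under the additional assumption that every outcome has positive probability.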

Noun form[edit]

Is there a noun form of the phrase "almost surely"? It's tempting to use the phrase "almost certainty", contrasting with "absolute certainty". But "almost certainty" isn't exactly grammatical.... — Smjg (talk) 18:26, 11 August 2019 (UTC)[reply]

I never saw such a noun form, neither in German (my native language), nor in English. - Jochen Burghardt (talk) 08:22, 14 August 2019 (UTC)[reply]
I've seen no such usage attested. If you're asking whether there could be such a form, yes of course there could -- such as in the variant 'near-certainty'...near and almost are both casual ways to explain the concept in a way that appeals to intuition while being technically inaccurate: you wouldn't expect someone who says they are "almost 50 years old" to be exactly 50 minus an infinitesimal fraction of a second, but you can certainly imagine such a situation. Thus it is with the name of this concept. Arlo James Barnes 01:25, 10 February 2022 (UTC)[reply]

Almost surely?[edit]

If an event will almost surely happen, does it mean it will eventually happen? Regardless of how close you get to a probability of 1, there’s always the possibility that it doesn’t happen when it can happen. --Heymid (contribs) 20:59, 16 April 2021 (UTC)[reply]

You grasp the crux of the term, I think. It's delinking probability and inevitability. An event that has probability one (not just close to one but all the way there) can be infinitely unlikely to not happen, but that doesn't mean it's inevitable; only occurrence / nonoccurrence over time can 'reify' the probability. Arlo James Barnes 01:28, 10 February 2022 (UTC)[reply]

An editor has identified a potential problem with the redirect Wp1 and has thus listed it for discussion. This discussion will occur at Wikipedia:Redirects for discussion/Log/2022 February 9#Wp1 until a consensus is reached, and readers of this page are welcome to contribute to the discussion. signed, Rosguill talk 19:20, 9 February 2022 (UTC)[reply]

Embarrassingly stupid article[edit]

I'm sorry, but this is an embarrassingly stupid article, and unfortunately an illustration of the key flaw of wikipedia, where self-proclaimed "experts" write nonsense articles. Most real experts find it frustrating and a waste of time trying to stem the tide of nonsense, and are too busy doing real work to engage in the needed edit wars against the false consensus. So I will leave this comment and perhaps the editors of this article will correct the article, preferably to consist of a brief definition with a few brief examples.

Now, I believe the editors of the article would agree that "surely" (same as "certainly") means 100% probability, means something we are 100% sure about. If you think "almost surely" means 100% then I would hope you would agree that "surely" means 100%. Right?

Moving on to "almost surely", the key word "almost" means that the probability is *almost* (same as "not quite") 100%, but *not* 100%, such as 99.9% probability. If it was 100%, as the article falsely claims, the "almost" would be omitted. Look in any dictionary: "The work is almost done" does *not* mean the work is done, and "We are almost home" does *not* mean we are home. If you want to change the meaning of the word "almost", as the article does, then there is no discussion, because you are not free to define commonly used words as you choose. The upshot: The Emperor has no clothes. The article is nonsense.

The pitiful long-winded "examples" in the article fall flat upon examination. In contrast, there are countless things we can be sure (meaning 100% sure) about, and countless things we can be "almost sure" about. For example, I am "sure" that two goats plus two goats is four goats. I am "almost sure" my car is in the driveway, even though I have not looked for 10 minutes. Who knows, maybe it got towed away while I was not looking. So there is a real and definite difference between "sure" and "almost sure", regardless of the nonsense spouted in the article. 97.113.102.52 (talk) 21:07, 2 December 2022 (UTC)[reply]

Like it or not, "almost surely" is mathematical jargon in probability and measure theory; the article does describe its meaning adequately. Your arguments about goats and cars apparently refer to the common-language meaning of "almost surely", which is (here I agree with you) quite different. However, the latter meaning hardly deserves its own article. - Jochen Burghardt (talk) 21:27, 2 December 2022 (UTC)[reply]