
Wikipedia:Reference desk/Archives/Mathematics/2013 December 14

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 14


(-2) to the x

[Image: plot of (-2)^x]

After learning about exponential functions and their graphs, I was curious as to what a function like (-2)^x would look like, considering that the function is undefined when x is a fraction with an even denominator. I tried graphing it in different graphing calculators online, but the only one that gave me an actual graph was this. Is this Wolfram Alpha graph accurate, or would a more accurate graph show only the individual points where the function exists, without a smooth curve? Also, what would the domain for this kind of graph be, considering it can only contain integers and fractions with odd denominators? Thanks. 50.101.203.177 (talk) 05:00, 14 December 2013 (UTC)[reply]

I think it is accurate, but realize that the value of the function is a complex number. Bubba73 You talkin' to me? 05:22, 14 December 2013 (UTC)[reply]

You can plot the graph; here is a plot I have generated myself of (-2)^x from x = 1.001 to x = 2.2.

Perhaps it is easier to look at it in a table format
n      (-2)^n
1.0    -2
1.1    -2.03863 - 0.662392 i
1.2    -1.85863 - 1.35038 i
1.3    -1.4473  - 1.99203 i
1.4    -0.815501 - 2.50985 i
1.5    -5.19574*10^-16 - 2.82843 i
1.6     0.936764 - 2.88306 i
1.7     1.90972 - 2.6285 i
1.8     2.81716 - 2.04679 i
1.9     3.54947 - 1.15329 i
2.0     4
2.1     4.07727 + 1.32478 i
2.2     3.71727 + 2.70075 i

Ohanian (talk) 03:15, 15 December 2013 (UTC)[reply]
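These are the principal-branch values $(-2)^x = e^{x(\ln 2 + i\pi)}$, and they are easy to reproduce. Here is a minimal Python sketch (not part of the original thread; the helper name neg2_pow is made up) that regenerates the table:

import cmath

def neg2_pow(x):
    # Principal-branch power: (-2)**x = exp(x * Log(-2)),
    # where the principal logarithm is Log(-2) = ln(2) + i*pi.
    return cmath.exp(x * cmath.log(-2))

for i in range(10, 23):
    x = i / 10
    print(f"{x:3.1f}  {neg2_pow(x):.6f}")
    # e.g. x = 1.1 prints (-2.038633-0.662392j), matching the table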

$(-2)^x = (2\cdot(-1))^x = 2^x(-1)^x = 2^x(e^{i\pi})^x = 2^x e^{i\pi x}$. Bo Jacoby (talk) 08:30, 17 December 2013 (UTC).[reply]

Since you mention the even fractions, you're presumably referring to the elementary (non-complex) definition of $x^{a/b} = \sqrt[b]{x^a}$, defined only for b odd (when a and b are in lowest terms and $b > 0$). You can see that $(-2)^x$ differs from $2^x$ only in the sign (which is $(-1)^a$, and so fluctuates at arbitrarily high frequencies). The graph (which appears continuous because fractions with odd denominator are a dense set in the reals) is therefore both $2^x$ and its negative.
In a complex context, this selection of powers is highly arbitrary: we define $z^w = e^{w \log z}$, which is in general multivalued because log is. Each point in the "real power" graph corresponds to a different $n$ in $\log(-2) = \ln 2 + (2n+1)i\pi$. --Tardis (talk) 15:18, 19 December 2013 (UTC)[reply]
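To make the elementary definition above concrete, here is a small sketch (illustrative only; real_neg2_pow is a made-up helper) that computes the real-valued $(-2)^{a/b}$ with sign $(-1)^a$, refusing even denominators:

from fractions import Fraction

def real_neg2_pow(q):
    # Elementary real value of (-2)**(a/b): defined only when the
    # reduced denominator b is odd; the sign is (-1)**a.
    a, b = q.numerator, q.denominator
    if b % 2 == 0:
        raise ValueError("undefined for even denominators")
    return (-1) ** a * 2.0 ** float(q)

print(real_neg2_pow(Fraction(1, 3)))  # -1.2599..., i.e. -(2**(1/3))
print(real_neg2_pow(Fraction(2, 3)))  #  1.5874..., i.e. +(2**(2/3))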

easiest way to answer by heart how many years are between 1944 and 2014


What is the easiest way to answer by heart how many years there are between 1944 and 2014? For example, I meet someone and he tells me that he was born in 1944, and I want to know his age; what is the easiest way to know that? (My way today is not easy: I compute 2013 - 1944 = 69, but that's not easy to do by heart, so I'm looking for other methods.) Thank you. 213.57.113.25 (talk) 09:54, 14 December 2013 (UTC)[reply]

I'm not quite sure what you're asking, but here are two possibilities:
1) How do you quickly subtract 1944 from 2013? The way I do it is to subtract 1944 from 2000 to get 56, then add 13 to that to get 69.
2) How do you calculate ages given the two years, if the birth day isn't known? Well, find the difference of the years, then you might have to subtract a year if they haven't hit their birthday yet this year (this becomes less likely as it gets later in the current year, so that by December 31, 2013, you can be certain that everyone born in 1944 is now 69). StuRat (talk) 16:42, 14 December 2013 (UTC)[reply]
Except David Briggs, Ene Ergma, Paolo Serpieri and Phyllis Frelich. Tevildo (talk) 11:57, 15 December 2013 (UTC)[reply]
Briggs is dead, I'm afraid. Tevildo (talk) 12:00, 15 December 2013 (UTC)[reply]
Even for people born on leap day, my method still works to determine their age, in years. You can argue it doesn't calculate their number of birthdays correctly, though, if you claim they only have a birthday every 4 years. StuRat (talk) 13:04, 17 December 2013 (UTC)[reply]
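For what it's worth, StuRat's second rule is easy to mechanize. A minimal sketch (the age helper is made up for illustration):

from datetime import date

def age(birth, today):
    # Difference of the years, minus one if the birthday
    # hasn't happened yet this year.
    before_birthday = (today.month, today.day) < (birth.month, birth.day)
    return today.year - birth.year - (1 if before_birthday else 0)

print(age(date(1944, 6, 15), date(2013, 12, 14)))   # 69
print(age(date(1944, 12, 31), date(2013, 12, 14)))  # 68 (birthday pending)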

An orbit in which every point on the planet gets the same amount of radiation


Hi,
I've asked this question on the science desk before, and they advised me to look for the answer here. Anyway, my question is whether there is an orbit (of a planet) such that, when you sum up the amount of radiation from its sun at each point on the planet, you get the same result everywhere.
Somebody has mentioned that it might be possible with wobbling of the planet's axis of rotation, so I would like to know: could this exist without any external force from moons or planets around our imaginary planet?
Thank you.
Exx8 (talk) 10:27, 14 December 2013 (UTC)[reply]

It depends on what you mean by the amount of radiation. With a circular orbit every point on the planet would have 50% day and 50% night when you average over a full orbit. (Actually slightly more night than day if you factor in parallax, but I'm assuming you can ignore that.) But the solar radiation per unit area is proportional to the sine of the angle of elevation; the poles are colder than the tropics because when the sun is visible there it's never far from the horizon. If you use that as your definition then I'm guessing that the amounts would at least be close to equal if the axis of rotation lay in the orbital plane, so the sun would be directly overhead each pole once a year. But verifying that seems like more computation than I'm willing to do. --RDBury (talk) 19:39, 14 December 2013 (UTC)[reply]
Surely slightly more day than night due to the sun having a greater diameter than the planet. But very small. -- SGBailey (talk) 06:38, 16 December 2013 (UTC)[reply]
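For anyone willing to do the computation RDBury describes, here is a rough numerical sketch (assumptions: circular orbit, uniform motion, flux proportional to the sine of the solar elevation; mean_insolation is a made-up helper). It averages the insolation over a full orbit and day:

import numpy as np

def mean_insolation(lat_deg, obliquity_deg=90.0, steps=2000):
    # Orbit- and day-averaged insolation (arbitrary units) at a given
    # latitude, for a circular orbit with the given axial tilt.
    lat = np.radians(lat_deg)
    obl = np.radians(obliquity_deg)
    t = np.linspace(0, 2 * np.pi, steps, endpoint=False)  # phase of year
    h = np.linspace(0, 2 * np.pi, steps, endpoint=False)  # hour angle
    decl = np.arcsin(np.sin(obl) * np.sin(t))             # solar declination
    D, H = np.meshgrid(decl, h)
    sin_elev = np.sin(lat) * np.sin(D) + np.cos(lat) * np.cos(D) * np.cos(H)
    return float(np.maximum(sin_elev, 0).mean())

for lat in (0, 30, 60, 90):
    print(lat, round(mean_insolation(lat), 4))
# With a 90-degree tilt this gives about 0.20 at the equator and 0.32 at
# the poles, suggesting the averages are not equal in that configuration.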

is the risk of ruin after infinite time always 1?


So, let's say you have "risk of ruin" defined as the percent chance you will go to 0, given iterated bets with expected value (EV) e and principal (balance) b, in discrete bets of 1 (risk 1: lose 1 or gain e); fractions are allowed in the results, and the bets are decided by weighted heads-or-tails flips.

Let's say e is 1.5 -- you could start with $100 and then end up with $99 or $101.50, and so on.

The risk of ruin is the chance that you go to zero before you climb out of the 'variance pit' that could steal all your money. For example, if you start with $1 then the risk of ruin is 50%+, since if the first flip comes up tails (if that is the losing value) then you immediately die. If you live, you have $2.50, but 3 tails in a row still kill you, etc.

Now here is my question. Isn't the following a PROOF that risk of ruin always equals 1 in the long term, even if you start with $1,000,000, only have to bet $1 each turn, and have a payout of $1000 if you win versus a $1 loss if you lose?

This is my argument. Suppose that you have made it to turn "t". You now have balance b. Then, immediately, there is a 1/2^ceiling(b) chance of you losing all your money (a run of ceiling(b) tails in a row). If you live, you can just repeat the same formula. Therefore, there are infinitely many chances to lose all your money.

- Given infinite nonzero chances of a bad event happening - the event will happen sooner or later!!!

In effect I'm modeling the "result" of the 1/2^ceiling(b) event happening as a "tails" and of it not happening as a "heads". So even if the coin is weighted at 1 - 1/2^(b+1) in favor of heads (hugely in favor of heads), the tails still ALWAYS eventually has to come up in the infinite series. The chance that tails comes up "sooner or later" is 1 - isn't it?

What do you think of that argument? Is it correct? 212.96.61.236 (talk) 13:26, 14 December 2013 (UTC)[reply]

No, the probability of ruin is not always equal to one. You can see this with the following example, simpler than what you propose. Suppose you start a game with 1 dollar and each round you toss a fair coin to see if you win 2 dollars or lose 1 dollar. Let the probability of ruin within n rounds be p(n). (So that p(1) = 1/2.) By conditioning on the first toss, we see that p(n) satisfies the recurrence relation
$p(n+1) = \tfrac{1}{2} + \tfrac{1}{2}p(n)^3.$
We want to compute $\lim_{n\to\infty} p(n)$. We claim that in fact this limit lies within the interval $[1/2, 2/3]$. Indeed, consider the iteration $x_{k+1} = f(x_k)$ with $f(x) = \tfrac{1}{2} + \tfrac{1}{2}x^3$. We have $f(1/2) = 9/16$ and $f(2/3) = 35/54$, and on this interval $0 \le f'(x) = \tfrac{3}{2}x^2 \le 2/3 < 1$. So f is a strict contraction of the interval $[1/2, 2/3]$ and so its iterates tend to a fixed point of the iteration. We can find this fixed point explicitly by finding the solution of the equation $x = \tfrac{1}{2} + \tfrac{1}{2}x^3$ in $[1/2, 2/3]$ to be $x = \tfrac{\sqrt{5}-1}{2} \approx 0.62$. So the probability of ruin in this game (if played infinitely) is only $\approx 0.62$. Sławomir Biały (talk) 15:51, 14 December 2013 (UTC)[reply]
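The contraction is easy to watch numerically. A quick sketch (illustrative only):

# Iterate p(n+1) = 1/2 + p(n)**3 / 2 from p(1) = 1/2 and compare
# the limit with the root (sqrt(5) - 1) / 2 found above.
p = 0.5
for _ in range(100):
    p = 0.5 + p ** 3 / 2
print(p)                   # 0.6180339887...
print((5 ** 0.5 - 1) / 2)  # 0.6180339887...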
But that's crazy. It means the answer to the question "if I give you infinite chances to take a 1/2^t shot, is there a good chance you'll miss all infinitely many of them?" is yes - in fact, in your example there's a greater than 1/3 chance you'll miss all infinitely many of these shots! How can that be true? Even if the odds are vanishingly thin, doesn't the definition of 'infinite' mean that it is 'impossible' to miss all of them? It strikes me that my argument makes a lot more sense. How can you not hit a vanishingly small target given infinite shots? Where does my intuition lead me astray? 212.96.61.236 (talk) 17:04, 14 December 2013 (UTC)[reply]
Because t is not constant. If you've made it to balance b, it would take a string of losses to ruin you. If you avoid ruin, you now have a larger balance, which requires an even longer string to wipe out.--80.109.106.3 (talk) 17:18, 14 December 2013 (UTC)[reply]
I don't follow your recurrence relation, but I have an alternate argument. Let q(b) be the probability of eventual ruin when beginning with balance b. Arguing from the law of the iterated logarithm, q(b) goes to 0 as b goes to infinity. Note that $q(b) = q(1)^b$ (to bust from b you must drop one dollar b times in succession), so in particular, q(1) must be less than 1. Then q(1) = .5 + .5 q(3), which becomes $q(1) = \tfrac{1}{2} + \tfrac{1}{2}q(1)^3$. Since $q(1) \neq 1$, we can divide $q(1)^3 - 2q(1) + 1$ by $q(1) - 1$, which gets us $q(1)^2 + q(1) - 1 = 0$, whose only positive root is $\approx .62$.--80.109.106.3 (talk) 17:18, 14 December 2013 (UTC)[reply]
You've assumed that q(1) < 1, which is specifically what needs to be shown. My recurrence follows from the following. Condition on the outcome of the first toss. The probability of losing the dollar is 1/2. Otherwise, with probability 1/2, you will have three dollars, and now you must lose each of them; the probability of losing each dollar in n rounds or fewer is p(n). Losses of the three dollars are independent events. Sławomir Biały (talk) 19:38, 14 December 2013 (UTC)[reply]
I did not assume that. I said it followed from the law of the iterated logarithm, which I later fleshed out below.
I don't see why those would be independent events. In fact, consider the probability of losing 3 dollars in 1 round; it's 0, not p(1)^3.--80.109.80.78 (talk) 20:41, 14 December 2013 (UTC)[reply]
Yes, you're right. They aren't independent. I wrote that without thinking it through. Sławomir Biały (talk) 02:02, 15 December 2013 (UTC)[reply]
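The 0.62 figure itself can be checked by direct simulation. A rough Monte Carlo sketch (the truncation at 10,000 rounds slightly undercounts very late ruins):

import random

def ruined(start=1, rounds=10_000):
    # Fair coin each round: +2 on a win, -1 on a loss.
    # True if the balance ever hits 0 within the horizon.
    balance = start
    for _ in range(rounds):
        balance += 2 if random.random() < 0.5 else -1
        if balance <= 0:
            return True
    return False

trials = 100_000
print(sum(ruined() for _ in range(trials)) / trials)  # ~0.618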
Here's a more general argument that if your expected value for one flip is positive, the probability of ruin is less than 1. Let $\mu > 0$ be the expected value of a single flip, and $\sigma$ be the standard deviation. From the law of the iterated logarithm (using the notation defined on that page), we can conclude
$\limsup_{n\to\infty} \frac{|S_n - \mu n|}{\sqrt{2\sigma^2 n \log \log n}} = 1$ almost surely.
So almost surely, $S_n > \mu n - 2\sqrt{2\sigma^2 n \log \log n}$ for all sufficiently large $n$. Let $P_N$ be the probability that $S_n > \mu n - 2\sqrt{2\sigma^2 n \log \log n}$ for all $n \geq N$. By continuity of measure, $P_N > 0$ for all sufficiently large $N$. Choose $N$ with $P_N$ positive and, for all $n \geq N$, $\mu n - 2\sqrt{2\sigma^2 n \log \log n} > 0$. Consider the situation where our starting balance is so large, it is impossible to bust within the first $N$ flips. Then we will have probability at least $P_N$ of never busting at all, since busting would require $S_n < 0$ for some $n \geq N$.
Now consider the game where we start with 1 dollar. There is some positive probability $p$ of reaching such a large balance, after which our chance of never busting is at least $P_N$. So the chance of never busting from 1 dollar is at least $p \cdot P_N$.--80.109.106.3 (talk) 17:55, 14 December 2013 (UTC)[reply]

Could someone elaborate on the outcome of the example by Sławomir Biały ("So the probability of ruin in this game (if played infinitely) is only 0.62") by putting it into layman's terms? I'm looking for this kind of statement: if you play long enough, out of 100 candidates, 62 will lose and 38 will win. The 38 must be wrong: only losers are declared losers, the rest must play again. No one will ever be able to say he's a winner. The chance of winning a game where you cannot become a winner is 0, and I wonder if that doesn't actually mean that 100% must lose. Probably, taking "100 candidates" as an example doesn't work when infinity is involved, but if that's the case I don't understand what 0.62 actually means. Joepnl (talk) 00:23, 15 December 2013 (UTC)[reply]

You can have winners; a winner is someone who is never ruined. But okay, here's the statement: if 100 people play the game, you expect 62 to eventually run out of money, while the remaining 38 will play forever.--80.109.80.78 (talk) 03:52, 15 December 2013 (UTC)[reply]
Thanks. It's hard to stay away from thinking "so just wait for 62 to be ruined, now the 38 left must have somehow 0 chance of being ruined ever" but I think I get it :) Joepnl (talk) 16:58, 15 December 2013 (UTC)[reply]
Keep in mind that "expect" is a technical term. It doesn't mean it will necessarily happen; you might be waiting forever for the 62nd person to be ruined.--80.109.80.78 (talk) 20:07, 15 December 2013 (UTC)[reply]