Wikipedia:Reference desk/Archives/Mathematics/2018 June 18

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 18

Difference between powers question.

2^3 + 1 = 3^2. Is there another pair of powers, larger than these, that differ by only one? Or in other words, do there exist a, b, c and d such that a^b + 1 = c^d, where a, c > 0, b, d > 1 and a^b > 8? Naraht (talk) 01:50, 18 June 2018 (UTC)

No – see Catalan's conjecture, which is actually a theorem. Loraof (talk) 02:20, 18 June 2018 (UTC)
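For anyone who wants to poke at this numerically, here is a minimal Python sketch (a finite search only, not a proof; the bound 10^7 is an arbitrary choice) that lists all pairs of perfect powers differing by exactly 1:

```python
# Finite search for perfect powers a^b (b >= 2) that differ by exactly 1.
# This is only an empirical check up to a bound; Mihailescu's theorem
# (Catalan's conjecture) is what guarantees (8, 9) is the only such pair.

LIMIT = 10**7

powers = set()
base = 2
while base * base <= LIMIT:
    value = base * base
    while value <= LIMIT:
        powers.add(value)
        value *= base
    base += 1

pairs = [(p, p + 1) for p in sorted(powers) if p + 1 in powers]
print(pairs)   # expected: [(8, 9)]
```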

Is entropy in the mind of the beholder?

的我国我 might be completely random for a westerner. I don't know to what degree the next character is unexpected, but a Chinese speaker might recognize some order here. And 0112358132134... might seem random to some, until you expand it to 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ... and realize the rule behind it.

So, could one reasonably argue that a string does not have a fixed, unique entropy? Or at least that its entropy is a range? --Doroletho (talk) 13:05, 18 June 2018 (UTC)

Courtesy link to Entropy (information theory). Loraof (talk) 14:40, 18 June 2018 (UTC)
There's a lot that can be said about this, but the bottom line is that you are absolutely correct. There is very deep confusion in the industry about password strength (which I assume is what you mean by entropy), even among professionals.
The term "entropy" is misused; strings don't have entropy. Entropy is a property of distributions and random variables, and in particular, processes to generate strings. All those online calculators where you put in a string and they give you "its entropy", what they're really doing is trying to guess what the process that generated the string was, and tell you its entropy. This can lead to absurd results. For example, if you put in 0112358132134, they'll assume the process was "choose 13 random digits" and give high entropy, when in reality your process was "concatenate the first 10 terms of the Fibonacci sequence", which arguably has an entropy of 0 since no randomness was involved. Likewise, the displayed entropy will jump through the roof if you put ! in the end, since it will now assume that all characters where chosen randomly from an alphabet that includes special characters, where in fact all you did was add a character at the end.
Another issue is that, even when talking about the underlying process, "entropy" is just the wrong concept to use. There are processes with huge entropies that generate very weak passwords. For example, suppose I choose passwords this way: I roll a die. If I get 1, I choose a password of 1000 random digits. If I get anything else, I choose the password "0". This has a whopping entropy of about 554 bits. And yet, when I generate passwords this way, 83% of the time I will get a password that can be cracked in 1 attempt. Not very strong.
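Assuming "digits" means uniformly chosen decimal digits, the arithmetic for that toy process can be checked in closed form (a sketch):

```python
import math

# Process: roll a fair die; on a 1, output 1000 uniform decimal digits,
# otherwise output the single-character password "0".
p_weak = 5 / 6   # probability that the password is just "0"

# Shannon entropy of the resulting distribution:
# H = p_weak*log2(1/p_weak) + (1/6)*(log2(6) + 1000*log2(10)),
# since each of the 10^1000 long passwords has probability (1/6)*10^-1000.
H = p_weak * math.log2(1 / p_weak) + (1 / 6) * (math.log2(6) + 1000 * math.log2(10))

print(f"Shannon entropy: {H:.1f} bits")               # about 554 bits
print(f"P(cracked on first guess) >= {p_weak:.0%}")   # about 83%
```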
So, strings don't have entropy - what they do have is Kolmogorov complexity, which is the length of the shortest program that can generate them. But this depends on what programming language we are talking about. For long enough strings, it doesn't matter much, because the complexities relative to different languages only differ by a constant number of characters. But for shorter strings, there is more leeway. A string like 0112358132134 will have higher complexity in a language which has no addition operation. A string like 的我国我 will have lower complexity in a language that has a dictionary of Chinese words available.
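Kolmogorov complexity itself is not computable, but compressed length gives a crude, language-dependent stand-in for the same idea. The following sketch uses zlib purely as one arbitrary "description language" to illustrate the dependence on structure:

```python
import random
import zlib

def description_length(s: str) -> int:
    """Compressed size in bytes: a crude, language-dependent stand-in for
    Kolmogorov complexity (an upper-bound-like measure relative to DEFLATE)."""
    return len(zlib.compress(s.encode("utf-8"), 9))

random.seed(0)
structured = "0123456789" * 100                                        # highly regular
scrambled = "".join(random.choice("0123456789") for _ in range(1000))  # same length, no pattern

print(description_length(structured))   # small: zlib exploits the repetition
print(description_length(scrambled))    # much larger: no structure to exploit
```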
That might be a problem for calculators, but it's not really a problem for the person choosing a password. He just needs to choose a process that generates a password uniformly at random from among a large number of options, and he gets a theoretical (in the good sense of the word) guarantee on the difficulty of cracking it. -- Meni Rosenfeld (talk) 10:30, 19 June 2018 (UTC)
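As an illustration of that last point, here is a minimal sketch using Python's standard secrets module (the alphabet and length are arbitrary choices for the example, not a recommendation):

```python
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits   # 62 symbols; any fixed set works
LENGTH = 20

# Each character is drawn independently and uniformly, so the entropy of the
# *process* is exactly LENGTH * log2(len(ALPHABET)), regardless of which
# particular string happens to come out.
password = "".join(secrets.choice(ALPHABET) for _ in range(LENGTH))
entropy_bits = LENGTH * math.log2(len(ALPHABET))

print(password)
print(f"{entropy_bits:.1f} bits")   # 20 * log2(62), roughly 119 bits
```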
的我国我 (Chinese) = My country I (English). DroneB (talk) 11:25, 23 June 2018 (UTC)

Regarding "[t]here are processes with huge entropies that genereate very weak passwords": that refers to the Shannon entropy. You instead want passwords with high min-entropy to deal with this problem, and you can choose your generation process accordingly. I see that the article about min-entropy is confusing but it's basically -log2(probability of the most likely element of the distribution). So that gives a suitably low entropy to a distribution that makes one possible password being much more likely than the rest. 173.228.123.166 (talk) 07:37, 22 June 2018 (UTC)[reply]

Thanks, I wasn't familiar with that term. But it's a bit of a stretch as a defense for the way the term "entropy" is used for password strength, since unqualified "entropy" pretty much universally refers to Shannon entropy. -- Meni Rosenfeld (talk) 09:06, 22 June 2018 (UTC)
Carlo Rovelli agrees that entropy depends on the observer. In The Order of Time he says "The entropy of the world does not depend only on the configuration of the world; it also depends on the way in which we are blurring the world, and that depends on what the variables of the world are that we interact with". Gandalf61 (talk) 09:59, 22 June 2018 (UTC)
Meni Rosenfeld, when one discusses this topic mathematically (say in the theory of randomness extractors), it is usual to use min-entropy as the entropy measure. Shannon entropy is more relevant to topics like thermodynamics or data compression. 173.228.123.166 (talk) 20:04, 22 June 2018 (UTC)
This comment makes no sense to me, both because it appears you somehow consider the topic of data compression and related issues in statistics & probability theory to be non-mathematical, and because, AFAICT, Shannon entropy is of great importance in randomness extraction, which you mentioned, as well. If you have a sequence of iid random variables X_1, X_2, ..., and you want to extract from it, say, a sequence of iid uniform bits, then I'm fairly sure the number of bits you can asymptotically get per input variable is the Shannon entropy of the distribution. Also, the Wikipedia article you linked specifically mentions "min-entropy", rather than saying just "entropy" and assuming it will be understood as min-entropy. -- Meni Rosenfeld (talk) 18:25, 23 June 2018 (UTC)
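As a concrete (if deliberately suboptimal) illustration of extracting uniform bits from an iid source, here is the classic von Neumann trick for a biased coin; its rate is only p(1-p) output bits per input bit, well below the Shannon entropy H(p) that more elaborate extractors approach asymptotically:

```python
import random

def von_neumann_extract(bits):
    """Turn iid biased coin flips into exactly uniform bits:
    read the input in pairs, map 01 -> 0 and 10 -> 1, discard 00 and 11."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

random.seed(1)
p = 0.9                                              # heavily biased source
source = [1 if random.random() < p else 0 for _ in range(100_000)]
extracted = von_neumann_extract(source)

print(len(extracted) / len(source))      # ~0.09 output bits per input bit = p*(1-p)
print(sum(extracted) / len(extracted))   # ~0.5: the output is unbiased
```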
I don't mean data compression is non-mathematical, but by "this topic" I specifically meant measuring the entropy of distributions of things like passwords or encryption keys (like in the context of cryptography or extractors). Shannon entropy is not too useful for that for the reason you've already described: a skewed enough distribution (low min-entropy) can still have high Shannon entropy. For an extractor it seems to me that the asymptotic case is not very interesting: you want to get good random output from a fixed amount of input, and if you have no better measure than the Shannon entropy of the input, you could need an awful lot.

E.g. if your X_i's are 128-bit numbers and the distribution is all-zeros 99% of the time and uniform 1% of the time, the Shannon entropy is 1.36 bits per sample if I did that right. But if you input 94 samples expecting ~128 bits of input entropy, you have a 39% chance of having put nothing but zeros into the extractor. 173.228.123.166 (talk) 20:46, 23 June 2018 (UTC)
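Those figures appear to check out; here is a quick sanity check of the arithmetic (the min-entropy line is an extra comparison, not part of the original calculation):

```python
import math

p_zero = 0.99      # all 128 bits are zero
p_uniform = 0.01   # otherwise a uniform 128-bit value
n_samples = 94

# Shannon entropy per sample (ignoring the negligible chance that the uniform
# branch also produces the all-zero value):
H = p_zero * math.log2(1 / p_zero) + p_uniform * (math.log2(1 / p_uniform) + 128)
print(f"{H:.2f} bits per sample")            # ~1.36
print(f"{n_samples * H:.0f} bits in total")  # ~128

# Probability that every one of the 94 samples came from the all-zeros branch:
print(f"{p_zero ** n_samples:.0%}")          # ~39%

# Min-entropy per sample, by contrast:
print(f"{-math.log2(p_zero + p_uniform * 2**-128):.4f} bits")  # ~0.0145
```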