Wikipedia:Reference desk/Archives/Mathematics/2020 July 18

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


July 18

Um... waiting for a reply

Well, I haven't received a reply yet (see the July 17 section). Therefore, I am including the Python code I wrote for my idea:

Code
"Category.py" - file containing the 'Category' object used in my main code
import statistics as stat
from math import fsum

class MachineError(Exception):
    '''Standard exception of the machine.'''
    def __init__(self, stmt):
        super().__init__(stmt)  # so str(exc) actually shows the message
        self.stmt = stmt
        
class Category:
    '''Basic class to represent a category for classification.

    Some extra convenience methods are included to make dealing with the
    statistics used on this class a bit easier.
    '''
    def __init__(self, x, y, name = None):
        self.category_name = name
        if len(x) != len(y):
            raise MachineError("ALERT ! : Unequal sample sizes for x and y !") 
        self.x = x
        self.y = y
    def __repr__(self):
        '''returns the category name'''
        return str(self.category_name)  # str() in case no name was given
    def mean_of(self, att):
        ''' returns mean of the specified value'''
        return stat.mean(getattr(self, att.lower()))
    def mode_of(self, att):
        '''returns mode of the specified value'''
        return stat.mode(getattr(self, att.lower()))
    def median_of(self, att):
        '''returns median of the specified value'''
        return stat.median(getattr(self, att.lower()))
    def stdev_of(self, att):
        '''returns standard deviation of the specified value'''
        return stat.stdev(getattr(self, att.lower()))
    def pstdev_of(self, att):
        '''returns population standard deviation of the specified value'''
        return stat.pstdev(getattr(self, att.lower()))
    def variance_of(self, att):
        '''returns variance of the specified value'''
        return stat.variance(getattr(self, att.lower()))
    def pvariance_of(self, att):
        '''return population variance of the specified value'''
        return stat.pvariance(getattr(self, att.lower()))
    def category_formula(self):
        '''returns the categorical field formula'''
        self.factor = self.mean_of('x') if self.variance_of('x') > self.variance_of('y') else self.mean_of('y')
        self.req = 'x' if self.variance_of('x') > self.variance_of('y') else 'y'
        # here, we take the mean of the variable having the highest variance
        # as the mass around which we create the field. Thus, the field can fluctuate
        # based on the value we use for mass, thereby creating a fluctuating decision boundary
        self.z_scores_x = [ (i - self.mean_of('x'))/self.stdev_of('x') for i in self.x]
        self.z_scores_y = [ (j - self.mean_of('y'))/self.stdev_of('y') for j in self.y]
        self.total_zscore = fsum([self.z_scores_x[m] * self.z_scores_y[m] for m in range(len(self.x))])
        self.correl_coeff = self.total_zscore / (len(self.x) - 1) # Pearson's correlation coefficient
        self.categorical_formula = lambda point: (self.correl_coeff * self.factor * point[self.req])/((self.mean_of('x') - point['x'])**2 + (self.mean_of('y') - point['y'])**2)
        return self.categorical_formula
    def update(self, point):
        '''updates the data in the model with the new data'''
        self.x.append(point['x'])
        self.y.append(point['y'])
And the main code for my project, 'FieldClf.py':
from Category import *

class Category_Field_Machine:
    ''' Categorical Field Machine'''
    def __init__(self, category_a, category_b):
        ''' initialiser'''
        name_a = list(category_a.keys())[0]
        name_b = list(category_b.keys())[0]
        self.category_a = Category([p[0] for p in category_a[name_a]],
                                   [p[1] for p in category_a[name_a]], name_a)
        self.category_b = Category([p[0] for p in category_b[name_b]],
                                   [p[1] for p in category_b[name_b]], name_b)
        self.category_a_formula = self.category_a.category_formula()
        self.category_b_formula = self.category_b.category_formula()
    def predict(self, point):
        '''predicts the class to which the point belongs'''
        val_a = self.category_a_formula(point)
        val_b = self.category_b_formula(point)
        if val_a > val_b:
            return str(self.category_a)
        elif val_b > val_a:
            return str(self.category_b)
        else:
            return 'Confused....'
    def train(self, point, actual_class):
        ''' trains the model'''
        if str(self.category_a).lower() == actual_class.lower():
            self.category_a.update(point)
            self.category_a_formula = self.category_a.category_formula()
        elif str(self.category_b).lower() == actual_class.lower():
            self.category_b.update(point)
            self.category_b_formula = self.category_b.category_formula()
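
A minimal usage sketch (the sample points below are made up purely for illustration; as the initialiser above shows, each argument is a dict mapping the category name to its list of (x, y) pairs):

# Hypothetical sample data: two well-separated clusters, one per category.
cat_a = {'red': [(1.0, 2.0), (1.2, 2.1), (1.5, 2.4), (0.9, 1.8)]}
cat_b = {'blue': [(5.0, 6.0), (5.2, 6.1), (5.5, 6.4), (4.9, 5.8)]}

machine = Category_Field_Machine(cat_a, cat_b)
print(machine.predict({'x': 1.1, 'y': 2.0}))  # near the 'red' cluster
machine.train({'x': 5.1, 'y': 6.0}, 'blue')   # add a point and refit 'blue'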

--Sam Ruben Abraham (talk) 04:13, 18 July 2020 (UTC)[reply]

I am sorry, but I feel your question goes outside the scope of the Reference desk. (We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.) We are all volunteers and occasionally one of us may do some "original research" if the problem captures their interest, which is more likely if the problem is stated concisely as a pure mathematical problem. But do not expect anyone here to join you in collaborative research.  --Lambiam 11:19, 18 July 2020 (UTC)[reply]
I understand that, and I am sorry for my mistake. Well, I was just trying to find anyone who's ready to help me on this topic. I found it a bit interesting, since my thinking led me to the belief that this model (the one I referred to here) requires a scatter plot in which the points of each class (out of the two classes I want to use my model on) can be separated by a linear decision boundary. I don't know if this is a conjecture, so I thought a bit of help from here would be useful.--Sam Ruben Abraham (talk) 11:03, 21 July 2020 (UTC)[reply]
If the attraction is the resultant of adding two vectors towards the two centres, where the strength of each vector depends solely on the distance to the corresponding centre, then considerations of symmetry show that the watershed between the two basins of attraction is the perpendicular bisector of the line segment connecting the two centres – and therefore a straight line.  --Lambiam 22:06, 22 July 2020 (UTC)[reply]
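In symbols (a sketch of the symmetry argument above; the centres $c_1, c_2$ and the strictly decreasing strength function $f$ are my notation, not anything from the posted code): a point $p$ lies on the watershed exactly when the two pulls balance, and

$$f(\lVert p - c_1 \rVert) = f(\lVert p - c_2 \rVert)
\;\Longleftrightarrow\; \lVert p - c_1 \rVert = \lVert p - c_2 \rVert
\;\Longleftrightarrow\; 2\,(c_2 - c_1) \cdot p = \lVert c_2 \rVert^2 - \lVert c_1 \rVert^2,$$

where the first step uses that $f$ is strictly decreasing. The last condition is linear in $p$: it describes the perpendicular bisector of the segment $c_1 c_2$.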
Thanks, @Lambiam:, for the details. I am a 10th grader, so I have no much idea of what you said (it seems to me that you mistook me for a PG-level guy after seeing me talk of 'decision boundary', right ? I talked from the abstract idea I had of it). By the way, that finding is good, but my main question is about the feasibility of the model. Plus, I have no idea of how I can train the model to fit into the data.--Sam Ruben Abraham (talk) 04:55, 23 July 2020 (UTC)[reply]

Likelihood of another Fermat prime

It is known that the likelihood of another Fermat prime is less than one billionth (see https://arxiv.org/pdf/1605.01371.pdf). Since "nano-" is the SI prefix for "one billionth", is there a "nano-" word describing a probability of one billionth?

The paper even says that there could be an updated version with "one trillionth" in its title. If so, then the "nano-" prefix would change to "pico-". GeoffreyT2000 (talk) 04:37, 18 July 2020 (UTC)[reply]

An event with probability 1 is said to be almost sure. Perhaps we can then say that an event with probability 10⁻⁹ is nanosure. An a.s. event happening can be called an "almost certainty", so here we may have a nanocertainty. However, these terms do not feel right when used for an upper bound, as in the cited likelihood values for another Fermat prime.  --Lambiam 07:39, 18 July 2020 (UTC)[reply]
What about "the likelihood of another Fermat prime is subnanosure"?  --Lambiam 08:57, 18 July 2020 (UTC)[reply]

Overcoming limitations of Gödel's incompleteness theorems

Is it true that everything has the potential to be proved, as long as mathematicians keep developing new (consistent but not complete) axiomatic systems to prove things with? I think of it as something like a cocktail treatment for math and logic. - Justin545 (talk) 10:55, 18 July 2020 (UTC)[reply]

Paradoxically, Gödel's result constructs a proposition P that is at the same time provably unprovable and provably true. This is not a contradiction, because these two results are proven in different systems. The unprovability result is with respect to a given logic system L that is consistent and sufficiently powerful to express the formula P. The second result, that P actually holds, depends on a semantic (metalogical) interpretation of P. If we construct a new logic L_P by adding P as an axiom to L, this new logic is still consistent, but now P is (trivially) a theorem of L_P. The same trick can be applied to independence results, such as that for AC, for which we can branch ZF into two formal systems, one in which AC is an axiom (ZFC) and one in which ¬AC is an axiom. If ZF is consistent, this is preserved in either branch. You can now work in the branch that best suits your mood. Does this answer your question sufficiently?  --Lambiam 11:40, 18 July 2020 (UTC)[reply]
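The branching step can be written in one line (a sketch in my own notation, not anything from the posts above): for a consistent theory $T$ and a sentence $P$,

$$T \nvdash \lnot P \;\Longrightarrow\; \mathrm{Con}(T + P),$$

because a contradiction derived in $T + P$ would, by the deduction theorem, give $T \vdash \lnot P$. Applying this to both $P$ and $\lnot P$ when $P$ is independent of $T$ (as AC is of ZF, assuming ZF is consistent) shows that both branches, $T + P$ and $T + \lnot P$, are consistent.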
Sorry, I do not know much about the math behind it. But the conclusion of the incompleteness theorem sounded somewhat stunning to me in the first place, since math is used extensively in nearly every aspect of science. On the other hand, there are still an unknown number of propositions/statements that can neither be justified nor falsified by (part of) math. So, hopefully, I would think one could use a different axiomatic system to prove a proposition that can't be proved in the original axiomatic system. - Justin545 (talk) 15:49, 18 July 2020 (UTC)[reply]
You can always take the proposition you're trying to prove as an axiom. Then it's easily provable. I know that seems trite, but I don't know that there's a way to formalize what you're asking that doesn't fall prey to that issue.--101.98.109.114 (talk) 09:29, 19 July 2020 (UTC)[reply]
Indeed, although adding a proposition P as a new axiom may introduce an inconsistency into a previously consistent formal system. In that case, however, ¬P was provable in the original system, so P was falsifiable. The main point is that Gödel's result dashed Hilbert's dream of turning mathematics into a closed formal system in which provability is the same as being true. The rules of the game are not fixed. For an entirely different view on what it means to prove a mathematical statement, see Intuitionism.  --Lambiam 09:49, 19 July 2020 (UTC)[reply]
The incompleteness theorem seems like a no-go theorem to me. So what do you personally think of the incompleteness theorem? Is it positive or negative? Does it give hope or destroy dreams with respect to math? - Justin545 (talk) 14:39, 19 July 2020 (UTC)[reply]
We don't answer requests for opinions. We know that Gödel's incompleteness theorems definitely established that Hilbert's program was futile. We do not know how Hilbert took this blow. Intuitionists such as L. E. J. Brouwer thought that Hilbert's program was meaningless anyway (see Brouwer–Hilbert controversy), so it is not likely he was disappointed.  --Lambiam 15:49, 19 July 2020 (UTC)[reply]
In general the incompleteness theorem doesn't come up in applications to science. As with anything, math has its limitations, but that doesn't mean it can't be very useful. For example, the mathematical models used in weather prediction fail when you try to apply them to predict the weather a year from now. (Nothing to do with incompleteness, btw; rather the chaotic nature of weather systems.) That doesn't mean we shouldn't use mathematical models of the weather to predict what will happen tomorrow. --RDBury (talk) 18:24, 19 July 2020 (UTC)[reply]
In some sense it's weird to talk about whether mathematical facts are good or bad. They couldn't possibly be different, so what possible world can you compare them with?
That said, for my former academic field, set theory, the theorems have been an unalloyed good. The fact that (with appropriate caveats) a theory can't prove its own consistency is the basis for the hierarchy of consistency strength, which tracks the large cardinals.
Slightly contra Lambiam, it is not really reasonable to say that, once a statement is shown to be independent of your previous formal theory, it is equally reasonable to take it or its negation as an axiom. In the case of large cardinals, for example, there is a clear choice; if a large-cardinal axiom is consistent, we should assume it to be true. That's because these axioms encode the "maximality" of the von Neumann hierarchy.
On another note, many of these statements are not really "axiom-like"; it is rarely profitable, for example, to assume either the continuum hypothesis or its negation as an axiom. In that case you might say that both choices are "equally reasonable" axioms, but neither one is very reasonable; the research project is to find more reasonable axioms that do settle the question. Ω-logic and ultimate L are two such attempts. --Trovatore (talk) 19:53, 19 July 2020 (UTC)[reply]
I did not mention reasonability, but suggested letting the mood of the working mathematician prevail in making a choice. Platonists may prefer to assume the universe they are exploring is reasonable. How could one define the notion of a proposition being independent of a given axiomatic system but nevertheless "true", other than in relation to a pre-existing universe? Consider the famous parallel postulate from Euclid's Elements. Is it more "reasonable" to assume it to be "true"? Personally I feel that mathematicians embracing AD are collectively more reasonable than those embracing AC. I also feel that it is entirely reasonable to maintain the position that CH as usually formulated is neither true nor false, but fails to have a well-defined meaning and needs to be made more precise before it can be addressed.[1] 21:40, 20 July 2020 (UTC) 08:14, 20 July 2020 (UTC)
I believe that the incompleteness theorem itself is also an inference based on some axioms, and some of those axioms couldn't be justified or falsified by proofs either. If so, how can one say the incompleteness theorem is a "mathematical fact" (if that's what you mean)? The theorem should be built on top of solid truths rather than something that might be fragile, shouldn't it? - Justin545 (talk) 12:49, 20 July 2020 (UTC)[reply]
Working mathematicians typically prove their results without explicitly appealing to the inference rules of a specified axiomatic system. They do this in the conviction that, if necessary, the somewhat informal proof can be transformed into a completely formal proof, such as in ZFC. If ZFC is inconsistent, this isn't of any help. The general belief is that ZFC is consistent, but who knows, one day a genius may come up with an antinomy that can be formulated in ZFC as it is. This will be a genuine surprise, but the reaction will not be one of despair, but a scramble over who can come up with the most elegant way of erecting a formal fence around the gap. If the emerging winner was proposed by a mathematician named Oliver, say, then from then on mathematicians will put their trust in a new system that may be known as ZFOC :). In the end, it is a matter of trust, the foremost aspect of which is that a proof that is valid today (with respect to a given system) will still be valid tomorrow. This means you can simply use the results of recorded theorems and do not have to repeat their proofs. But of course, the latest edition of the textbook in which you find the theorem may contain a misprint, so in practice this guarantee of timelessness is not absolute. And, as I wrote already above, the rules of the game are not fixed. No mathematician is bound to trust ZFC, and if they feel confident enough they may add rules and axioms that they believe to be obviously valid – which may take some power of persuasion to convince their colleagues, especially those peer-reviewing their papers. See also Philosophy of mathematics#Social constructivism. If you are interested in this kind of stuff, also read our article on the Foundations of mathematics.  --Lambiam 12:11, 21 July 2020 (UTC)[reply]

I'm going to give you the hardline Platonist response here. (Someone has to.)
Mathematical facts are facts about mathematical objects, which exist independently of our reasoning about them. The incompleteness theorems are among those facts. This is the only way to understand the theorems that doesn't have you constantly tripping over your own tongue. (For some reason, a lot of people seem to think the theorems refute Platonism. Actually they come very close to refuting two of its main competitors of the time, formalism and logicism. They don't have much to say about intuitionism, but intuitionism is so limiting that few really want to work in it other than for ideological reasons.)
Formal systems like ZFC are not the starting point. The starting point is the objects themselves; the natural numbers in the case of arithmetic, and the sets of the von Neumann hierarchy in the case of set theory. The axiomatic systems are tools, to help you find out truths about the objects. --Trovatore (talk) 17:47, 21 July 2020 (UTC)[reply]
For a file stored on your computer with a size larger than 100 MB, you can ask whether or not a self-extracting program of size less than 100 KB exists that will generate precisely that 100 MB file. It's obviously exceedingly unlikely that such a program exists if the data in the file was not generated by a short program (such as a random generator). So it's then safe to assume that there doesn't exist a program of size less than 100 KB that will generate your file. But does there then exist a mathematical proof of that fact? Suppose that a proof exists. Then a search program that exhaustively generates all proofs of statements of the form "The file .... cannot be compressed to under 100 KB", where "...." is some file larger than some lower limit, will halt and output the first such file that it proves cannot be compressed to less than 100 KB. But that search program can itself be written in much less than 100 KB, and it generates precisely that file, so this is a contradiction. This means that the program, which would halt at the proof for the first such file larger than the lower limit, will never halt. So for no file that cannot be significantly compressed can there exist a proof of that fact. Count Iblis (talk) 00:01, 23 July 2020 (UTC)[reply]
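The "exceedingly unlikely" part is easy to make quantitative with a counting argument (a back-of-the-envelope sketch using the file sizes from the post above):

program_bits = 8 * 100_000      # every program under 100 KB fits in this many bits
file_bits = 8 * 100_000_000     # bits in a 100 MB file
# There are fewer than 2**program_bits programs under 100 KB, and each one
# decompresses to at most one output, versus 2**file_bits possible files.
log2_fraction = program_bits - file_bits
print(f"at most 2**{log2_fraction} of all 100 MB files are compressible")
# prints: at most 2**-799200000 of all 100 MB files are compressible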
@Count Iblis: If I'm not mistaken, it can be done if one knows the busy beaver number for the largest Turing machine size encodable in 100 KB. One could then use that to discard all the non-halting programs and then simulate all the halting ones to completion. The busy beaver function is defined for all program sizes, so for every pair of file sizes there exists a program that can demonstrate the existence or non-existence of such "compression". However, one can also note that there are values of the function that cannot be proven correct in ZFC.--Jasper Deng (talk) 20:03, 25 July 2020 (UTC)[reply]