Talk:Jeffreys prior

From Wikipedia, the free encyclopedia

Fisher Information Matrix

It's not clear from this article what to do for priors with multiple parameters. In that case it's the square root of the determinant of the Fisher information matrix. I haven't edited it, though, because I wonder whether the term "Fisher information" conventionally encompasses this definition... the Fisher information entry doesn't say as much, so either this entry or that one ought to be amended. --Russell E 04:44, 27 March 2006 (UTC)

I re-wrote it to make the multiple-parameter case more apparent. Maybe this addresses your comments? Quantling (talk) 15:26, 30 January 2009 (UTC)
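For concreteness, the multiple-parameter recipe discussed above (square root of the determinant of the Fisher information matrix) can be sketched in Python for a Gaussian with unknown mean and standard deviation. The closed-form Fisher matrix used here is the standard one for N(μ, σ²) in the (μ, σ) parameterization; the function names are illustrative, not from the article:

```python
import math

def gaussian_fisher_info(sigma):
    # Fisher information matrix for N(mu, sigma^2) w.r.t. (mu, sigma):
    # I = [[1/sigma^2, 0], [0, 2/sigma^2]]
    return [[1.0 / sigma**2, 0.0],
            [0.0, 2.0 / sigma**2]]

def jeffreys_density(sigma):
    # Multi-parameter Jeffreys prior: proportional to sqrt(det I(theta)).
    I = gaussian_fisher_info(sigma)
    det = I[0][0] * I[1][1] - I[0][1] * I[1][0]
    return math.sqrt(det)

# For this model sqrt(det I) = sqrt(2)/sigma^2, so the (unnormalized)
# Jeffreys prior on (mu, sigma) is proportional to 1/sigma^2.
print(jeffreys_density(2.0))  # sqrt(2)/4 ≈ 0.3536
```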

What is o?

The main article keeps referring to "o" without ever defining it. Not helpful at all...

David B. Benson 21:28, 24 August 2007 (UTC)

frequentist judgement

Do others disagree with the appropriateness of the recently added text reading, "In general, use of Jeffreys priors violates the likelihood principle; many statisticians therefore regard their use as unjustified." Pdbailey (talk) 03:53, 13 December 2007 (UTC)

note on likelihood principle
I added the line about Jeffreys priors violating the likelihood principle. This is directly deducible from the referenced page on the likelihood principle. Many statisticians regard this as one reason why Jeffreys priors are not justified - this would include just about every "subjectivist" Bayesian, for instance, including me (a professor of Statistics at the University of Toronto).
The page ends with a description of it as a stub, so I don't think extensive references can be expected. I don't have time to write more - I just came across it casually and noted that the current stub is a one-sided justification (without references) of what is actually a controversial method. Radford Neal (talk) 05:08, 20 December 2007 (UTC)

Equivalence to logarithmic prior

While there are cases in which the Jeffreys prior for the positive reals is equivalent to the (unnormalized) logarithmic prior, I don't think that this is always the case. For instance, for the positive real number λ that parameterizes a Poisson distribution, the (unnormalized) Jeffreys prior is proportional to λ^(−1/2), not to 1/λ. I have removed the discussion of the logarithmic prior, pending this discussion.

Quantling (talk) 16:11, 2 March 2009 (UTC)
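The Poisson claim above is easy to check numerically: the Fisher information of the rate λ is 1/λ, so √I(λ) = λ^(−1/2). A Monte Carlo sketch (the helper name and sampler are illustrative, not from the article):

```python
import math, random

def poisson_fisher_info_mc(lam, n=200_000, seed=0):
    # Monte Carlo estimate of E[(d/dlam log p(k; lam))^2],
    # using the Poisson score function: d/dlam log p(k; lam) = k/lam - 1.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Draw k ~ Poisson(lam) by CDF inversion (fine for modest lam).
        u = rng.random()
        k, p = 0, math.exp(-lam)
        c = p
        while u > c:
            k += 1
            p *= lam / k
            c += p
        total += (k / lam - 1.0) ** 2
    return total / n

# Analytically I(lam) = 1/lam, so the (unnormalized) Jeffreys prior is
# sqrt(I(lam)) = lam ** -0.5, which is not the logarithmic prior 1/lam.
print(poisson_fisher_info_mc(3.0))  # close to 1/3
```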

Point taken -- thanks!
I've fixed the article to incorporate your point; as I understand it, you're saying that a distribution is a Jeffreys prior for a given question -- so the log prior is the Jeffreys prior for the unknown standard deviation of a Gaussian, but not for the unknown rate of a Poisson process, though these have the same parameter space.
In future, could you please tell me (on my talk page) when I make a mistake so I can fix it? (I fixed this because I was advised by another editor.)
Hope the current page is both correct and useful!
—Nils von Barth (nbarth) (talk) 22:53, 16 September 2009 (UTC)
My apologies for not alerting you that I reverted some of your edits on the page for the Jeffreys prior. It was a failure on my part; I did not understand the proper etiquette. Thank you for your updated contribution to that page. Quantling (talk) 14:49, 17 September 2009 (UTC)
No worries Quantling, and thanks for the correction!
—Nils von Barth (nbarth) (talk) 02:35, 19 September 2009 (UTC)
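The distinction settled in this thread can be made concrete: with known mean, the Fisher information of a Gaussian's σ is 2/σ², giving a Jeffreys prior ∝ 1/σ (the log prior), while the Poisson rate gives λ^(−1/2). A small sketch, with illustrative function names:

```python
import math

def jeffreys_gaussian_sigma(sigma):
    # Gaussian with known mean: I(sigma) = 2/sigma^2, so sqrt(I) ∝ 1/sigma.
    return math.sqrt(2.0) / sigma

def jeffreys_poisson_rate(lam):
    # Poisson: I(lam) = 1/lam, so sqrt(I) = lam ** -0.5.
    return lam ** -0.5

# Same parameter space (0, inf), but only the Gaussian-sigma case
# reproduces the (unnormalized) logarithmic prior 1/sigma.
for x in (0.5, 1.0, 4.0):
    print(x, jeffreys_gaussian_sigma(x), jeffreys_poisson_rate(x))
```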


Notation and

Is later in the article the same as in the lede? Olli Niemitalo (talk) 08:54, 6 March 2019 (UTC)

Definition of "invariant"

Thank you User:Tercer for your recent edits. However, I disagree with your definition of invariant. I venture that what you have given as the definition of invariant is the change of variables theorem. It tells how to compute the probability density under a change of parameterization for any probability density. What makes the Jeffreys prior "invariant" is that when

p(θ) ∝ √det I(θ)

is true then the manifestly similar definition

p(φ) ∝ √det I(φ)

will also be correct -- that is, it is guaranteed to agree with the universally applicable change of variables theorem.

The article would benefit from some clarification and I welcome your further edits. However, without those further edits, I fear the article would be better if we undid the two edits you have just made. —Quantling (talk | contribs) 18:16, 21 December 2020 (UTC)

My definition is correct. Unfortunately it is not in the reference I added to the article (sorry about that, I was being lazy), but you can see it here, between equations (20) and (21), or here, at the top of page 2.
Let me clarify: what I meant is that if p(θ) is the prior obtained via method M from a statistical model with parameter θ, and p(φ) is the prior obtained also via M from a statistical model with parameter φ, then M is called "invariant" if p(φ) = p(θ) |dθ/dφ|. The point is that the priors produced by M agree with the usual change of variables.
Your definition is notoriously obscure. I felt the need to edit it because I saw people complaining that they didn't understand it here and here. And I didn't understand it either. First of all you use dummy variables to distinguish the functions; this is very confusing, because p(θ) is clearly not the same function as p(φ). Also, what is the definition of p(θ) and p(φ)? If you define them both to be the Jeffreys prior then your definition is just a tautology. But if you define p(θ) to be the Jeffreys prior, and define p(φ) as p(θ) |dθ/dφ|, then this is just a roundabout way of stating my definition. Tercer (talk) 19:50, 21 December 2020 (UTC)
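The invariance property being debated here can be verified numerically for the Jeffreys rule itself. A sketch assuming a Bernoulli model with success probability p, reparameterized by the log-odds φ = log(p/(1−p)); all function names are illustrative:

```python
import math

def jeffreys_p(p):
    # Bernoulli Fisher info in p: I(p) = 1/(p(1-p)); Jeffreys ∝ sqrt(I).
    return 1.0 / math.sqrt(p * (1.0 - p))

def jeffreys_phi_direct(phi):
    # Apply the Jeffreys recipe directly in the phi parameterization.
    # With phi = logit(p): dp/dphi = p(1-p), so I(phi) = I(p)(dp/dphi)^2 = p(1-p).
    p = 1.0 / (1.0 + math.exp(-phi))
    return math.sqrt(p * (1.0 - p))

def jeffreys_phi_via_change_of_variables(phi):
    # Transform the p-space Jeffreys density with the Jacobian |dp/dphi|.
    p = 1.0 / (1.0 + math.exp(-phi))
    return jeffreys_p(p) * p * (1.0 - p)

phi = 0.7
# The two agree: applying the Jeffreys recipe in the new parameterization
# matches transforming the old density -- the "invariance" in question.
print(jeffreys_phi_direct(phi), jeffreys_phi_via_change_of_variables(phi))
```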
Thank you for your thoughtful reply. I will make an edit to show more clearly what I am trying to say. Hopefully our edits will converge to something we both agree is good. —Quantling (talk | contribs) 01:54, 25 December 2020 (UTC)