
Wikipedia:Reference desk/Archives/Mathematics/2010 May 9



May 9


Best statistical method to compare sensitivity in 2 binary tests?


I need to compare 2 different tests on known cancer patients. Should I use both tests on all patients and use McNemar's/Liddell's/kappa/Fisher's exact test, or group them and compare between the 2 groups using chi-square? Are there any differences in the reliability of the results? And how do I choose among the alternatives above? —Preceding unsigned comment added by Makischa (talk • contribs) 05:29, 9 May 2010 (UTC)[reply]

It's definitely preferable from a statistical perspective to perform both tests on all patients, provided it's practical, ethical and affordable. This maximises the precision with which you can measure the accuracy of each test separately, and also maximises the power of the comparison between them. Qwfp (talk) 07:29, 9 May 2010 (UTC)[reply]
Thank you for your answer. But how should I choose between all the different statistical methods? And how can I demonstrate the advantages you mentioned in the results? Makischa (talk) 10:03, 9 May 2010 (UTC)[reply]
Sorry, I was in a bit of a rush this morning. If you've performed both binary tests on all patients then you have paired (matched) data, and McNemar's test is appropriate, or its 'exact' version, sometimes called Liddell's exact test. Fisher's exact test or Pearson's chi-square test wouldn't be appropriate with paired data, as they assume independence of the two samples. Doing both tests on all patients is preferable from a statistical perspective both because you maximise the sample size for estimating both sensitivities, and because you can be sure that the characteristics of the individual patients don't confound the difference in sensitivities. There's nothing to stop you calculating Cohen's kappa as a measure of agreement, but in the context of diagnostic tests it would be more usual to present the sensitivities of both tests (you can calculate a confidence interval for each sensitivity using any of the methods for a binomial proportion confidence interval – Wilson's score interval or the Agresti–Coull interval are often recommended), and it would probably also be useful to present the difference in sensitivities with a confidence interval for this difference – I can't see a Wikipedia page with the formula for that, but it's on p. 378 of Fleiss, Levin & Paik (2003), Statistical Methods for Rates and Proportions (3rd ed., New York: Wiley, ISBN 9780471526292), and also in the books on statistical methods for diagnostic tests by Margaret Pepe (ISBN 9780198565826) and by Zhou, Obuchowski & McClish (ISBN 9780471347729). It's also available in statistical software, e.g. the -mcc- command in Stata. --Qwfp (talk) 17:10, 9 May 2010 (UTC)[reply]
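For later reference, here is a minimal Python sketch of the paired analysis described above, assuming statsmodels is installed. The 2×2 table of counts is hypothetical, and the interval for the paired difference is a standard Wald-type interval (I believe this matches the Fleiss, Levin & Paik formula, but check against the book).
<syntaxhighlight lang="python">
# Paired comparison of two binary tests on the same diseased patients.
# All counts are hypothetical; rows = test A (+/-), columns = test B (+/-).
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.proportion import proportion_confint

table = np.array([[40, 15],   # A+B+, A+B-
                  [5,  20]])  # A-B+, A-B-
n = table.sum()

# McNemar's test on the discordant pairs; exact=True gives the exact
# binomial version (sometimes called Liddell's exact test).
print(mcnemar(table, exact=True))

# Wilson score interval for each sensitivity.
print(proportion_confint(table[0].sum(), n, method='wilson'))     # test A
print(proportion_confint(table[:, 0].sum(), n, method='wilson'))  # test B

# 95% Wald-type CI for the difference in sensitivities (paired data).
b, c = table[0, 1], table[1, 0]           # discordant counts
diff = (b - c) / n
se = np.sqrt(b + c - (b - c) ** 2 / n) / n
print(diff, diff - 1.96 * se, diff + 1.96 * se)
</syntaxhighlight>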
Thank you again for being so helpful. I really appreciate it. I am going to the library tomorrow morning to check the books you suggested. I would like to ask you one final thing: the whole trial is about two different methods of biopsy extraction. If I find one biomarker that is present in a bigger proportion in the target tissue than in the nearby tissue and measure it in both tests, what method should I use to prove that one of the tests is better than the other? Let me explain: I have two imprecise biopsy methods and I need to compare which "captures" more cells of the tissue where the cancer cells are. If I find one biomarker that is known to be higher in the target tissue and measure it in the two samples, how can I prove that one of the test methods captures significantly more tissue from the target where the cancer cells are known to be? (talk) 18:18, 9 May 2010 (UTC)[reply]
Sorry, you've lost me. I can't see how these are binary tests. This seems to be moving beyond a mathematical question into more general issues of study design, which isn't really practical to discuss here. I suggest you consult a statistician or epidemiologist with competence in diagnostic tests, or read a suitable book - this one is less mathematical than those I mentioned above. Regards, Qwfp (talk) 10:34, 11 May 2010 (UTC)[reply]

Markov chain on an n-cycle


Hi all, I was hoping you could help me with this.

We construct a Markov chain on an N-gon by joining adjacent vertices - our state space being {0, 1, ..., N-1} and our transition probabilities given by P(i,j) = 1/2 if j = i±1, 0 otherwise - i.e. we move from vertex to vertex in the obvious way, like 'walking around a fence'. Starting from (say) 0, how would one go about finding the probability distribution of the very last vertex to be reached? Obviously it's going to be symmetric about N/2, but I'm not sure how to go about finding the probability that each vertex is the last to be reached given that this can happen in a vast number of ways.

I've calculated the invariant distribution fairly trivially - the valency of each vertex is 2, so the invariant distribution is h=(1/N,1/N,...,1/N) satisfying hP=h, and I'm aware the long-term probability of being found in any given vertex tends to h_i=1/N, but that doesn't really seem to help me. Any suggestions?

Thanks a lot, 82.6.96.22 (talk) 13:11, 9 May 2010 (UTC)[reply]
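(A quick numerical check of the invariance claim above; this is my own illustrative sketch, with N = 7 chosen arbitrarily.)
<syntaxhighlight lang="python">
# Check that h = (1/N, ..., 1/N) satisfies hP = h for the cycle walk.
import numpy as np

N = 7  # arbitrary cycle size
P = np.zeros((N, N))
for i in range(N):
    P[i, (i + 1) % N] = 0.5  # step clockwise
    P[i, (i - 1) % N] = 0.5  # step anticlockwise

h = np.full(N, 1.0 / N)
print(np.allclose(h @ P, h))  # True: h is invariant
</syntaxhighlight>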

The probability is <s>1/(N+1)</s> 1/(N-1), independent of which vertex you want to be last. This is fairly easy to see: it doesn't matter what the chain does until it moves adjacent to the chosen vertex for the first time, and once that's happened it doesn't matter which vertex you chose. Algebraist 14:43, 9 May 2010 (UTC)[reply]
Can you clarify the question? Algebraist's answer above makes no sense to me, which means that at least one of us has completely misunderstood it. I thought you were talking about k-step transition probabilities for some unspecified k. Finding these for any specific k and N is trivial (for example, for k = 3 and N = 5 you have {0, 3/8, 1/8, 1/8, 3/8}), and presumably you are after some sort of closed form. -- Meni Rosenfeld (talk) 15:12, 9 May 2010 (UTC)[reply]
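(That example is easy to reproduce by taking matrix powers of the transition matrix; row 0 of P^k is the k-step distribution started from vertex 0. A sketch:)
<syntaxhighlight lang="python">
# k-step distribution on the N-cycle via matrix powers.
import numpy as np

N, k = 5, 3
P = np.zeros((N, N))
for i in range(N):
    P[i, (i + 1) % N] = P[i, (i - 1) % N] = 0.5

print(np.linalg.matrix_power(P, k)[0])  # [0. 0.375 0.125 0.125 0.375]
</syntaxhighlight>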
Sure - sorry if it was unclear. What I'm trying to do is say, starting at vertex '0' and continuing to move around your 'fence' with p = 1/2 in each direction until you have reached every vertex at least once, what is the probability that the last vertex you reach is vertex 'k' (where k is arbitrary)? I.e. how likely is it that you end up on any given vertex last, having visited every other vertex at least once (so the probability would be 0 for vertex 0, for example, since you start there). 131.111.185.75 (talk) 15:37, 9 May 2010 (UTC)[reply]
Of course, silly me. So yes, what Algebraist said - except that it should have been 1/(N-1), or 1/N if you don't treat starting at 0 as visiting it. -- Meni Rosenfeld (talk) 17:48, 9 May 2010 (UTC)[reply]
I think I see what you're saying to some extent - sorry if I'm being slow - but why does it 'not matter what the chain does until it moves adjacent to the chosen vertex for the first time'? Surely different chains which touch various vertices different numbers of times (such as a chain which goes directly around until it reaches our last vertex and back, or a chain which shuffles back and forth many times before reaching the end) will have different probabilities, and since each vertex is a different distance away from the origin 0, we can't say that these will necessarily be the same chains for each vertex? And for all we know there might be different numbers of chains of each length for different vertices, surely? Sorry, I'm obviously missing something! Thanks for all the help so far, 131.111.185.68 (talk) 20:51, 9 May 2010 (UTC)[reply]
A more detailed argument would go like this: Let's calculate the probability that the last state reached is i. With probability 1, we will at some point reach a state adjacent to i (i+1 or i-1). Let T be the first time this happens. We have P(last = i) = P(last = i | X_T = i-1) P(X_T = i-1) + P(last = i | X_T = i+1) P(X_T = i+1). Let's calculate P(last = i | X_T = i-1). By definition of T, if X_T = i-1 then at time T we have never visited i+1, which means that the last is i iff, starting at i-1 at time T, we reach i+1 before reaching i (since in this case we have gone full circle and reached all other states). The probability of that happening is independent of what the Markov chain did before. Whatever this probability is, it is by reflectional symmetry equal to P(last = i | X_T = i+1), so (since P(X_T = i-1) + P(X_T = i+1) = 1) we have P(last = i) = P(last = i | X_T = i-1). By rotational symmetry, this is the same no matter what i is. So all states have equal probability to be last. -- Meni Rosenfeld (talk) 09:18, 10 May 2010 (UTC)[reply]
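(The conclusion is also easy to check by simulation. A small sketch of my own, with N and the number of trials chosen arbitrarily, estimating how often each vertex is last:)
<syntaxhighlight lang="python">
# Estimate the distribution of the last vertex reached on an N-cycle,
# starting from 0; each of vertices 1..N-1 should be last with
# probability 1/(N-1).
import random
from collections import Counter

def last_vertex(N):
    pos, visited = 0, {0}
    while len(visited) < N:
        pos = (pos + random.choice((1, -1))) % N
        visited.add(pos)
    return pos  # the final new vertex

N, trials = 8, 100_000
counts = Counter(last_vertex(N) for _ in range(trials))
for v in range(1, N):
    print(v, counts[v] / trials)  # each should be close to 1/(N-1) = 1/7
</syntaxhighlight>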