User:Jwhitfield7/sandbox

Reliability

Reliability refers to whether scores are reproducible. Not all of the different types of reliability apply to the way the CAGE is typically used. Internal consistency (whether all of the items measure the same construct) is not usually reported in studies of the CAGE; nor is inter-rater reliability (which would measure how similar people's responses are if the interview were repeated, or if different raters listened to the same interview).
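
For context, internal consistency is conventionally summarized with Cronbach's alpha; the formula below is the standard definition, not anything estimated specifically for the CAGE. With K items, item variances σ²_{Y_i}, and total-score variance σ²_X:

\alpha = \frac{K}{K-1}\left(1 - \frac{\sum_{i=1}^{K} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

By common rules of thumb, values of about 0.70 or higher are considered acceptable, so the median of α = 0.74 reported in the table below falls in the acceptable range for a four-item screen.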

Rubric for evaluating norms and reliability for the CAGE questionnaire*

Criterion | Rating (adequate, good, excellent, too good*) | Explanation with references
Norms | Not applicable | Normative data are not gathered for screening measures of this sort
Internal consistency | Not reported | A meta-analysis of 22 studies reported the median internal consistency was α = 0.74.[1]
Inter-rater reliability | Not usually reported | Inter-rater reliability studies examine whether people's responses are scored the same by different raters, or whether people disclose the same information to different interviewers. These may not have been done yet with the CAGE; however, other research has shown that interviewer characteristics can change people's tendencies to disclose information about sensitive or stigmatized behaviors, such as alcohol or drug use.[2][3]
Test-retest reliability (stability) | Not usually reported | Retest reliability studies help measure whether things behave more as a state or trait; they are rarely done with screening measures
Repeatability | Not reported | Repeatability studies would examine whether scores tend to shift over time; these are rarely done with screening tests

Validity

Validity describes the evidence that an assessment tool measures what it is intended to measure. Validity can be evaluated in many different ways; for screening measures such as the CAGE, diagnostic accuracy and discriminative validity are probably the most informative.
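
As background for the table below, and using the standard definitions rather than figures specific to the CAGE: diagnostic accuracy at a given screening cutoff is usually summarized by sensitivity and specificity, computed from counts of true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP) relative to a diagnostic reference standard:

\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}

Discrimination across all possible cutoffs is often summarized as the area under the receiver operating characteristic curve (AUC); as the table notes, AUCs are not usually reported for the CAGE, but the combined sensitivity and specificity are often excellent.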

Evaluation of validity and utility for the CAGE questionnaire*

Criterion | Rating (adequate, good, excellent, too good*) | Explanation with references
Content validity | Adequate | Items are face valid; it is not clear that they comprehensively cover all aspects of problem drinking
Construct validity (e.g., predictive, concurrent, convergent, and discriminant validity) | Good | Multiple studies show screening and predictive value across a range of age groups and samples
Discriminative validity | Excellent | Studies do not usually report AUCs, but combined sensitivity and specificity are often excellent
Validity generalization | Excellent | Multiple studies show screening and predictive value across a range of age groups and samples
Treatment sensitivity | Not applicable | The CAGE is not intended for use as an outcome measure
Clinical utility | Good | Free (public domain), extensive research base, brief

*Table from Youngstrom et al., extending Hunsley & Mash, 2008;[4] *indicates new construct or category

  1. ^ Shields, Alan L.; Caruso, John C. (2004-04-01). "A Reliability Induction and Reliability Generalization Study of the Cage Questionnaire". Educational and Psychological Measurement. 64 (2): 254–270. doi:10.1177/0013164403261814. ISSN 0013-1644.
  2. ^ Griensven, Frits van; Naorat, Sataphana; Kilmarx, Peter H.; Jeeyapant, Supaporn; Manopaiboon, Chomnad; Chaikummao, Supaporn; Jenkins, Richard A.; Uthaivoravit, Wat; Wasinrapee, Punneporn (2006-02-01). "Palmtop-assisted Self-Interviewing for the Collection of Sensitive Behavioral Data: Randomized Trial with Drug Use Urine Testing". American Journal of Epidemiology. 163 (3): 271–278. doi:10.1093/aje/kwj038. ISSN 0002-9262. PMID 16357109.
  3. ^ Gribble, James N.; Miller, Heather G.; Cooley, Philip C.; Catania, Joseph A.; Pollack, Lance; Turner, Charles F. (2000-01-01). "The Impact of T-ACASI Interviewing on Reported Drug Use among Men Who Have Sex with Men". Substance Use & Misuse. 35 (6–8): 869–890. doi:10.3109/10826080009148425. ISSN 1082-6084. PMID 10847215.
  4. ^ Hunsley, John; Mash, Eric (2008). A Guide to Assessments that Work. New York, NY: Oxford University Press. pp. 1–696. ISBN 978-0195310641.