Analysis of competing hypotheses

The analysis of competing hypotheses (ACH) is a methodology for evaluating multiple competing hypotheses for observed data. It was developed by Richards (Dick) J. Heuer, Jr., a 45-year veteran of the Central Intelligence Agency, in the 1970s for use by the Agency.[1] ACH is used by analysts in various fields who make judgments that entail a high risk of error in reasoning. ACH aims to help an analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult to achieve.[1]

ACH was a step forward in intelligence analysis methodology, but it was first described in relatively informal terms. Producing the best available information from uncertain data remains the goal of researchers, tool-builders, and analysts in industry, academia, and government, in domains including data mining, cognitive psychology, visualization, and probability and statistics. Abductive reasoning is an earlier concept with similarities to ACH.

Process

Heuer outlines the ACH process in considerable depth in his book, Psychology of Intelligence Analysis.[1] It consists of the following steps:

  1. Hypothesis – The first step of the process is to identify all potential hypotheses, preferably using a group of analysts with different perspectives to brainstorm the possibilities. The process discourages the analyst from choosing one "likely" hypothesis and using evidence to prove its accuracy. Cognitive bias is minimized when all possible hypotheses are considered.[1]
  2. Evidence – The analyst then lists evidence and arguments (including assumptions and logical deductions) for and against each hypothesis.[1]
  3. Diagnostics – Using a matrix, the analyst applies evidence against each hypothesis in an attempt to disprove as many theories as possible. Some evidence will have greater "diagnosticity" than other evidence—that is, some will be more helpful in judging the relative likelihood of alternative hypotheses. This step is the most important, according to Heuer. Instead of looking at one hypothesis and all the evidence ("working down" the matrix), the analyst is encouraged to consider one piece of evidence at a time, and examine it against all possible hypotheses ("working across" the matrix).[1]
  4. Refinement – The analyst reviews the findings, identifies any gaps, and collects any additional evidence needed to refute as many of the remaining hypotheses as possible.[1]
  5. Inconsistency – The analyst then seeks to draw tentative conclusions about the relative likelihood of each hypothesis. Less consistency implies a lower likelihood, and the least consistent hypotheses are eliminated. While the matrix generates a mathematical total for each hypothesis, that total is a guide rather than a verdict: the analyst must still apply judgment in reaching the final conclusion, and the numerical result of the ACH analysis must not overrule the analyst's own assessment (a minimal sketch of such a matrix and its scoring follows this list).
  6. Sensitivity – The analyst tests the conclusions using sensitivity analysis, which weighs how the conclusion would be affected if key evidence or arguments were wrong, misleading, or subject to different interpretations. The validity of key evidence and the consistency of important arguments are double-checked to assure the soundness of the conclusion's linchpins and drivers.[1]
  7. Conclusions and evaluation – Finally, the analyst provides the decisionmaker with his or her conclusions, as well as a summary of alternatives that were considered and why they were rejected. The analyst also identifies milestones in the process that can serve as indicators in future analyses.[1]
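
The matrix and scoring in steps 3 through 5 can be illustrated with a short sketch. The Python fragment below is an illustration only, not Heuer's published procedure: the hypotheses, evidence, and ratings are hypothetical. Each cell records whether a piece of evidence is consistent ("C"), inconsistent ("I"), or neutral ("N") with a hypothesis, and each hypothesis is scored by counting its inconsistent ratings; lower scores identify the hypotheses that survive.

    # Minimal ACH matrix sketch (illustrative only; hypotheses, evidence and
    # ratings are hypothetical). Each cell is "C" (consistent), "I" (inconsistent)
    # or "N" (neutral) for an (evidence, hypothesis) pair.

    hypotheses = ["H1: equipment failure", "H2: sabotage", "H3: operator error"]

    # Ratings are entered "working across" the matrix: one piece of evidence
    # at a time, judged against every hypothesis.
    matrix = {
        "E1: maintenance logs show missed inspections":  ["C", "N", "C"],
        "E2: access records show an unauthorized entry": ["I", "C", "I"],
        "E3: operator recently passed certification":    ["N", "N", "I"],
    }

    def inconsistency_scores(matrix, hypotheses):
        """Count inconsistent ratings per hypothesis; lower means more likely to survive."""
        scores = {h: 0 for h in hypotheses}
        for ratings in matrix.values():
            for h, rating in zip(hypotheses, ratings):
                if rating == "I":
                    scores[h] += 1
        return scores

    for h, score in sorted(inconsistency_scores(matrix, hypotheses).items(), key=lambda kv: kv[1]):
        print(f"{score} inconsistency ratings: {h}")

Only inconsistent ratings contribute to the score, which mirrors the method's emphasis on disproving hypotheses rather than confirming a favored one.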

Strengths

Some benefits of doing an ACH matrix are:

  • It is auditable.
  • It is widely believed to help overcome cognitive biases, though there is a lack of strong empirical evidence to support this belief.[2]
  • Since ACH requires the analyst to construct a matrix, the evidence and hypotheses can be backtracked. This allows the decisionmaker or other analysts to see the sequence of rules and data that led to the conclusion.

Weaknesses

Weaknesses of doing an ACH matrix include:

  • The process to create an ACH is time-consuming.
  • The ACH matrix can be problematic when analyzing a complex project.
  • It can be cumbersome for an analyst to manage a large database with multiple pieces of evidence.
  • Unreliable evidence presents a further problem.
  • The evidence entered in the matrix is static, so the analysis represents a snapshot in time.

Especially in intelligence, both governmental and business, analysts must always be aware that opponents are intelligent and may be generating information intended to deceive.[3][4] Since deception is often the result of a cognitive trap, Elsaesser and Stech use state-based hierarchical plan recognition (see abductive reasoning) to generate causal explanations of observations. The resulting hypotheses are converted to a dynamic Bayesian network, and value-of-information analysis is used to isolate the assumptions implicit in the evaluation of paths in, or conclusions of, particular hypotheses. As observations of states or assumptions accumulate, they can be validated separately. Should an assumption or necessary state be negated, the hypotheses depending on it are rejected. This is a form of root cause analysis.
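
The final step of this approach, rejecting hypotheses whose underlying assumptions fail validation, can be sketched very simply. The fragment below illustrates only that dependency logic, not Elsaesser and Stech's plan-recognition or dynamic Bayesian network machinery; the hypotheses and assumptions are hypothetical.

    # Illustrative only: hypotheses depend on assumptions; if an assumption is
    # negated by separate validation, every hypothesis that requires it is rejected.

    hypothesis_assumptions = {
        "H1: routine troop rotation":       {"A1: no change in logistics tempo"},
        "H2: preparation for an offensive": {"A2: fuel stockpiling", "A3: reserve call-up"},
        "H3: deception / feint":            {"A2: fuel stockpiling"},
    }

    negated = {"A1: no change in logistics tempo"}   # failed separate validation

    surviving = {h for h, assumptions in hypothesis_assumptions.items()
                 if not (assumptions & negated)}
    print(sorted(surviving))   # H1 is rejected; H2 and H3 survive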

According to social constructivist critics, ACH also fails to stress sufficiently (or to address as a method) the problematic nature of the initial formation of the hypotheses used to create its grid. There is considerable evidence, for example, that in addition to any bureaucratic, psychological, or political biases that may affect hypothesis generation, there are also factors of culture and identity at work. These socially constructed factors may restrict or pre-screen which hypotheses end up being considered, and then reinforce confirmation bias in those selected.[5]

Philosopher and argumentation theorist Tim van Gelder has made the following criticisms:[6]

  • ACH demands that the analyst make too many discrete judgments, a great many of which contribute little if anything to discerning the best hypothesis.
  • ACH misconceives the nature of the relationship between items of evidence and hypotheses by supposing that items of evidence are, on their own, consistent or inconsistent with hypotheses.
  • ACH treats the hypothesis set as "flat", i.e. a mere list, and so is unable to relate evidence to hypotheses at the appropriate levels of abstraction.
  • ACH cannot represent subordinate argumentation, i.e. the argumentation bearing on a piece of evidence.
  • ACH activities at realistic scales leave analysts disoriented or confused.

Van Gelder proposed hypothesis mapping (similar to argument mapping) as an alternative to ACH.[7][8]

Structured analysis of competing hypotheses

The structured analysis of competing hypotheses (SACH) offers analysts an improvement over the limitations of the original ACH.[9] SACH expands the hypothesis set by allowing the analyst to split a broad hypothesis into more specific sub-hypotheses.

For example, two tested hypotheses could be that Iraq has WMD or that Iraq does not have WMD. If the evidence suggested that WMD in Iraq were more likely, two new hypotheses could be formulated: the WMD are in Baghdad, or the WMD are in Mosul. Alternatively, the analyst may need to know what type of WMD Iraq has; the new hypotheses could then be that Iraq has biological WMD, chemical WMD, or nuclear WMD. By giving ACH this structure, the analyst is able to give a more nuanced estimate.[10]
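
One simple way to picture this refinement is as a tree of hypotheses, where a broad hypothesis that survives the first ACH round is split into more specific sub-hypotheses, each of which is then tested in its own matrix. The sketch below is purely illustrative; the data structure and helper are not taken from the SACH literature.

    # Illustrative only: a surviving hypothesis is refined into sub-hypotheses,
    # each of which becomes a column in a new ACH matrix.

    hypothesis_tree = {
        "Iraq has WMD": {
            "WMD are in Baghdad": {},
            "WMD are in Mosul": {},
        },
        "Iraq does not have WMD": {},
    }

    def leaf_hypotheses(tree, path=()):
        """Yield the most specific (leaf) hypotheses together with their parent chain."""
        for name, children in tree.items():
            if children:
                yield from leaf_hypotheses(children, path + (name,))
            else:
                yield path + (name,)

    for chain in leaf_hypotheses(hypothesis_tree):
        print(" -> ".join(chain))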

Other approaches to formalism

One method, by Valtorta and colleagues, uses probabilistic techniques to add Bayesian analysis to ACH.[11] A generalization of this concept to a distributed community of analysts led to the development of CACHE (the Collaborative ACH Environment),[12] which introduced the concept of a Bayes (or Bayesian) community. The work by Akram and Wang applies paradigms from graph theory.[13]
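
The flavor of a Bayesian treatment can be conveyed in a few lines. The sketch below is a generic Bayesian update over a set of competing hypotheses, not the specific model of Valtorta and colleagues or of CACHE; the priors and likelihoods are invented for illustration.

    # Generic Bayesian update over competing hypotheses (illustrative numbers).
    # posterior(H) is proportional to prior(H) times the product of P(e | H)
    # over the observed evidence, normalized over all hypotheses.

    priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

    # P(evidence | hypothesis) for each observed item of evidence.
    likelihoods = {
        "E1": {"H1": 0.8, "H2": 0.3, "H3": 0.5},
        "E2": {"H1": 0.2, "H2": 0.9, "H3": 0.4},
    }

    unnormalized = dict(priors)
    for table in likelihoods.values():
        for h in unnormalized:
            unnormalized[h] *= table[h]

    total = sum(unnormalized.values())
    posteriors = {h: v / total for h, v in unnormalized.items()}
    print(posteriors)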

Other work focuses less on probabilistic methods and more on cognitive and visualization extensions to ACH, as discussed by Madsen and Hicks.[14] DECIDE, discussed under Software below, is visualization-oriented.[15]

Work by Pope and Jøsang uses subjective logic, a formal mathematical methodology that explicitly deals with uncertainty.[16] This methodology forms the basis of the Sheba technology that is used in Veriluma's intelligence assessment software.
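
In subjective logic, an opinion about a proposition carries explicit belief, disbelief, and uncertainty mass (summing to one) together with a base rate, and its projected probability is belief plus base rate times uncertainty. The sketch below shows only that basic opinion structure, with invented numbers; it is not the ACH-SL method of Pope and Jøsang itself.

    from dataclasses import dataclass

    @dataclass
    class Opinion:
        """Binomial subjective-logic opinion: belief + disbelief + uncertainty = 1."""
        belief: float
        disbelief: float
        uncertainty: float
        base_rate: float = 0.5

        def projected_probability(self) -> float:
            # Standard projection: E = b + a * u
            return self.belief + self.base_rate * self.uncertainty

    # Illustrative opinion about a single hypothesis given sparse evidence.
    h1 = Opinion(belief=0.4, disbelief=0.1, uncertainty=0.5)
    print(h1.projected_probability())   # 0.65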

Software

A few online and downloadable software tools help automate the ACH process. These programs leave a visual trail of evidence and allow the analyst to weigh evidence.

  • PARC ACH 2.0[17][18] was developed by Palo Alto Research Center (PARC) in collaboration with Richards J. Heuer, Jr. It is a standard ACH program that allows analysts to enter evidence and rate its credibility and relevance.
  • Decision Command software was developed by Willard Zangwill.[19]
  • DECIDE was developed by the analytic research firm SSS Research, Inc.[15][20] DECIDE not only allows analysts to manipulate ACH, but it provides multiple visualization products.[21]
  • Analysis of Competing Hypotheses (ACH) is an open-source ACH implementation.[22]
  • ACH Template[23] is an Excel sheet that implements the scoring and weighting methodology of ACH, more specifically the weighted inconsistency counting algorithm.
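
The weighted variant mentioned for the template above can be sketched as follows; this is illustrative only and does not reproduce the exact algorithm used by the spreadsheet or by PARC ACH. Each piece of evidence carries a credibility weight and a relevance weight, and every inconsistent rating adds their product to the hypothesis's score; as with the unweighted count, lower scores are better.

    # Illustrative weighted inconsistency counting (not the exact template algorithm).
    # Each inconsistent cell contributes credibility * relevance to the hypothesis score.

    hypotheses = ["H1", "H2", "H3"]

    # evidence id -> (ratings across hypotheses, credibility weight, relevance weight)
    evidence = {
        "E1": (["C", "I", "N"], 1.0, 0.5),
        "E2": (["I", "I", "C"], 0.5, 1.0),
        "E3": (["N", "C", "I"], 0.75, 1.0),
    }

    scores = {h: 0.0 for h in hypotheses}
    for ratings, credibility, relevance in evidence.values():
        for h, rating in zip(hypotheses, ratings):
            if rating == "I":
                scores[h] += credibility * relevance
    print(scores)   # {'H1': 0.5, 'H2': 1.0, 'H3': 0.75}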

Notes

  1. ^ a b c d e f g h i Heuer, Richards J. Jr, "Chapter 8: Analysis of Competing Hypotheses", Psychology of Intelligence Analysis, Center for the Study of Intelligence, Central Intelligence Agency, archived from the original on June 13, 2007
  2. ^ Thomason, Neil (2010), "Alternative Competing Hypotheses", Field Evaluation in the Intelligence and Counterintelligence Context: Workshop Summary, National Academies Press, doi:10.17226/12854, ISBN 978-0-309-15016-3
  3. ^ Elsaesser, Christopher; Stech, Frank J. (2007), "Detecting Deception", in Kott, Alexander; McEneaney, William (eds.), Adversarial Reasoning: Computational Approaches to Reading the Opponent's Mind, Chapman & Hall/CRC, pp. 101–124
  4. ^ Stech, Frank J.; Elsaesser, Christopher, Deception Detection by Analysis of Competing Hypotheses (PDF), MITRE Corporation, archived from the original (PDF) on 2008-08-07, retrieved 2008-05-01 MITRE Sponsored Research Project 51MSR111, Counter-Deception Decision Support
  5. ^ Jones, Milo L.; Silberzahn, Philippe (2013). Constructing Cassandra: Reframing Intelligence Failure at the CIA, 1947–2001. Stanford University Press. ISBN 978-0804793360. Chapters one to four.
  6. ^ van Gelder, Tim (December 2008), "Can we do better than ACH?", AIPIO News (55), Australian Institute of Professional Intelligence Officers
  7. ^ van Gelder, Tim (11 December 2012). "Exploring new directions for intelligence analysis". timvangelder.com. Retrieved 30 September 2018.
  8. ^ Chevallier, Arnaud (2016). Strategic Thinking in Complex Problem Solving. Oxford; New York: Oxford University Press. p. 113. doi:10.1093/acprof:oso/9780190463908.001.0001. ISBN 9780190463908. OCLC 940455195.
  9. ^ Wheaton, Kristan J., et al. (November–December 2006), "Structured Analysis of Competing Hypotheses: Improving a Tested Intelligence Methodology" (PDF), Competitive Intelligence Magazine, 9 (6): 12–15, archived from the original (PDF) on 2007-09-28, retrieved 2008-05-01
  10. ^ Chido, Diane E., et al. (2006), Structured Analysis Of Competing Hypotheses: Theory and Application, Mercyhurst College Institute for Intelligence Studies Press, p. 54
  11. ^ Valtorta, Marco; et al. (May 2005), "Extending Heuer's Analysis of Competing Hypotheses Method to Support Complex Decision Analysis", International Conference on Intelligence Analysis Methods and Tools (PDF)
  12. ^ Shrager, J.; et al. (2009), "Soccer science and the Bayes community: Exploring the cognitive implications of modern scientific communication", Topics in Cognitive Science, 2 (1): 53–72.
  13. ^ Akram, Shaikh Muhammad; Wang, Jiaxin (23 August 2006), "Investigative Data Mining: Connecting the dots to disconnect them", Proceedings of the 2006 Intelligence Tools Workshop (PDF), pp. 28–34
  14. ^ Madsen, Fredrik H.; Hicks, David L. (23 August 2006), "Investigating the Cognitive Effects of Externalization Tools", Proceedings of the 2006 Intelligence Tools Workshop (PDF), pp. 4–11
  15. ^ a b Cluxton, Diane; Eick, Stephen G., "DECIDE Hypothesis Visualization Tool", 2005 Intl conf on Intelligence Analysis (PDF), archived from the original (PDF) on 2008-08-07, retrieved 2008-05-01
  16. ^ Pope, Simon; Josang, Audun (June 2005), Analysis of Competing Hypotheses using Subjective Logic (ACH-SL), Queensland University, Brisbane, Australia, ADA463908, archived from the original on April 8, 2013
  17. ^ Xerox Palo Alto Research Center and Richards J. Heuer, ACH2.0.3 Download Page: Analysis of Competing Hypotheses (ACH), archived from the original on 2008-03-18, retrieved 2008-03-13
  18. ^ "Download ACH by PARC". ach1.software.informer.com. Retrieved 2020-12-28.
  19. ^ "Quantinus | Alignment | Execution". 2009-08-17. Archived from the original on 2009-08-17. Retrieved 2020-12-28.
  20. ^ Lankenau, Russell A., et al. (July 2006), SSS Research, Inc. – DECIDE, VAST 2006 Contest Submission
  21. ^ SSS Research, DECIDE: from Complexity to Clarity, archived from the original on March 28, 2007
  22. ^ Burton, Matthew (2020-12-11), "Burton/Analysis-of-Competing-Hypotheses", GitHub, retrieved 2020-12-27
  23. ^ Stirparo, Pasquale (2020-12-10), "pstirparo/threatintel-resources", GitHub, retrieved 2020-12-28