Elementary effects method


Published in 1991 by Max Morris,[1] the elementary effects (EE) method[2] is one of the most widely used[3][4][5][6] screening methods in sensitivity analysis.

EE is applied to identify non-influential inputs for a computationally costly mathematical model or for a model with a large number of inputs, where the cost of estimating other sensitivity analysis measures, such as the variance-based measures, is not affordable. Like all screening methods, the EE method provides qualitative sensitivity analysis measures, i.e. measures which allow the identification of non-influential inputs or which allow the input factors to be ranked in order of importance, but which do not quantify exactly the relative importance of the inputs.

Methodology

To exemplify the EE method, let us consider a mathematical model with $k$ input factors. Let $Y$ be the output of interest (a scalar for simplicity):

$Y = f(X_1, X_2, \ldots, X_k).$

The original EE method of Morris[2] provides two sensitivity measures for each input factor:

  • the measure $\mu$, assessing the overall importance of an input factor on the model output;
  • the measure $\sigma$, describing non-linear effects and interactions.

These two measures are obtained through a design based on the construction of a series of trajectories in the space of the inputs, where inputs are randomly moved One-At-a-Time (OAT). In this design, each model input is assumed to vary across $p$ selected levels in the space of the input factors. The region of experimentation $\Omega$ is thus a $k$-dimensional $p$-level grid.

Each trajectory is composed of $(k+1)$ points, since input factors move one by one by a step $\Delta$ in $\{1/(p-1), \ldots, 1 - 1/(p-1)\}$ while all the others remain fixed.
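As an illustration of this design, the following minimal Python/NumPy sketch builds one such trajectory; the function name is hypothetical and, for simplicity, it only takes increasing steps $+\Delta$ rather than using Morris's efficient matrix-based construction scheme (see below).

```python
import numpy as np

def simple_trajectory(k, p, seed=None):
    """One OAT trajectory of (k + 1) points on a k-dimensional p-level grid in [0, 1]^k (p even)."""
    rng = np.random.default_rng(seed)
    delta = p / (2 * (p - 1))                        # the recommended step for p even
    levels = np.arange(p) / (p - 1)                  # the p grid levels in [0, 1]
    # start from levels that still allow a +delta step inside the unit interval
    start = rng.choice(levels[levels + delta <= 1.0 + 1e-12], size=k)
    trajectory = [start]
    for i in rng.permutation(k):                     # move each factor once, in random order
        point = trajectory[-1].copy()
        point[i] += delta
        trajectory.append(point)
    return np.asarray(trajectory)                    # shape (k + 1, k)
```

With $r$ such trajectories, the model has to be evaluated at $r(k+1)$ points in total.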

Along each trajectory the so-called elementary effect for the $i$-th input factor is defined as:

$d_i(X) = \frac{Y(X_1, \ldots, X_{i-1}, X_i + \Delta, X_{i+1}, \ldots, X_k) - Y(X_1, \ldots, X_k)}{\Delta}$,

where $\Delta$ is any selected value in $\{1/(p-1), \ldots, 1 - 1/(p-1)\}$ such that the transformed point is still in $\Omega$, for each index $i = 1, \ldots, k$.

$r$ elementary effects $d_i\left(X^{(1)}\right), d_i\left(X^{(2)}\right), \ldots, d_i\left(X^{(r)}\right)$ are estimated for each input $X_i$ by randomly sampling $r$ points $X^{(1)}, X^{(2)}, \ldots, X^{(r)}$.
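For illustration, the elementary effects along one trajectory can be computed from the model outputs at its $k+1$ points. The sketch below (a hypothetical helper, not part of the method's original specification) identifies which factor moved at each step and applies the difference formula above.

```python
import numpy as np

def elementary_effects(trajectory, y, delta):
    """Elementary effects d_i of the k factors along one trajectory.

    trajectory -- array of shape (k + 1, k) with the OAT points
    y          -- array of shape (k + 1,) with the model outputs at those points
    delta      -- the step used to build the trajectory
    """
    k = trajectory.shape[1]
    d = np.empty(k)
    for step in range(k):
        diff = trajectory[step + 1] - trajectory[step]
        i = int(np.flatnonzero(diff)[0])         # the single factor that moved at this step
        # divide by +delta or -delta depending on the direction of the move
        d[i] = (y[step + 1] - y[step]) / (delta * np.sign(diff[i]))
    return d
```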

Usually $r \sim$ 4–10, depending on the number of input factors, on the computational cost of the model and on the choice of the number of levels $p$, since a high number of levels to be explored needs to be balanced by a high number of trajectories in order to obtain an exploratory sample. It has been demonstrated that a convenient choice for the parameters $p$ and $\Delta$ is $p$ even and $\Delta$ equal to $p/[2(p-1)]$, as this ensures equal probability of sampling in the input space.
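For example, with $p = 4$ levels the grid values are $\{0, 1/3, 2/3, 1\}$ and the recommended step is $\Delta = 4/[2(4-1)] = 2/3$, so that a factor starting from one of the lower levels $\{0, 1/3\}$ can be moved once by $+\Delta$ without leaving the grid.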

In case input factors are not uniformly distributed, the best practice is to sample in the space of the quantiles and to obtain the input values using inverse cumulative distribution functions. Note that in this case $\Delta$ equals the step taken by the inputs in the space of the quantiles.
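As a sketch of this practice, assuming (purely for illustration) normally distributed inputs with arbitrary parameters, design points built in the space of the quantiles can be mapped to input values with the inverse cumulative distribution function (here SciPy's norm.ppf):

```python
import numpy as np
from scipy.stats import norm

# Illustrative points in the space of the quantiles (a real design would place
# them on the p-level grid); each row is one point of a trajectory.
quantile_points = np.array([[0.25, 0.25, 0.75],
                            [0.75, 0.25, 0.75],    # first factor moved
                            [0.75, 0.75, 0.75],    # second factor moved
                            [0.75, 0.75, 0.25]])   # third factor moved
# Hypothetical marginal distributions of the three inputs (not from the article).
means = np.array([0.0, 10.0, -5.0])
stds = np.array([1.0, 2.0, 0.5])
inputs = norm.ppf(quantile_points, loc=means, scale=stds)   # values at which to run the model
```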

The two measures $\mu$ and $\sigma$ are defined as the mean and the standard deviation of the distribution of the elementary effects of each input:

$\mu_i = \frac{1}{r} \sum_{j=1}^{r} d_i\left(X^{(j)}\right)$,
$\sigma_i = \sqrt{\frac{1}{r-1} \sum_{j=1}^{r} \left(d_i\left(X^{(j)}\right) - \mu_i\right)^2}$.
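As a minimal illustration, if an $r \times k$ matrix $D$ collects the elementary effects $d_i\left(X^{(j)}\right)$ (rows indexed by trajectory $j$, columns by factor $i$), the two measures are simple column statistics; the numbers below are arbitrary illustrative values.

```python
import numpy as np

# r = 4 trajectories, k = 3 factors; D[j, i] = elementary effect of factor i on trajectory j
D = np.array([[2.1, 0.0, -1.9],
              [1.8, 0.1,  2.0],
              [2.2, 0.0, -2.1],
              [1.9, 0.1,  1.8]])
mu = D.mean(axis=0)             # mu_i: mean of the elementary effects of factor i
sigma = D.std(axis=0, ddof=1)   # sigma_i: standard deviation with 1/(r-1) normalisation
```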

These two measures need to be read together (e.g. on a two-dimensional graph) in order to rank input factors in order of importance and to identify those inputs which do not influence the output variability. Low values of both $\mu$ and $\sigma$ correspond to a non-influential input.

An improvement of this method was developed by Campolongo et al.,[7] who proposed a revised measure $\mu^*$, which on its own is sufficient to provide a reliable ranking of the input factors. The revised measure $\mu_i^*$ is the mean of the distribution of the absolute values of the elementary effects of the input factors:

$\mu_i^* = \frac{1}{r} \sum_{j=1}^{r} \left| d_i\left(X^{(j)}\right) \right|$.

The use of $\mu^*$ solves the problem of effects of opposite signs, which occur when the model is non-monotonic and which can cancel each other out, thus resulting in a low value of $\mu$.
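Continuing the illustrative matrix of elementary effects used above, the following sketch shows how $\mu^*$ recovers an influential factor whose signed effects nearly cancel out:

```python
import numpy as np

D = np.array([[2.1, 0.0, -1.9],
              [1.8, 0.1,  2.0],
              [2.2, 0.0, -2.1],
              [1.9, 0.1,  1.8]])
mu = D.mean(axis=0)                # [2.0, 0.05, -0.05]: the third factor's signed effects cancel
mu_star = np.abs(D).mean(axis=0)   # [2.0, 0.05,  1.95]: mu* reveals that the third factor is influential
```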

An efficient technical scheme to construct the trajectories used in the EE method is presented in the original paper by Morris,[2] while an improvement strategy aimed at better exploring the input space is proposed by Campolongo et al.[7]

References

  1. Home page of Max D. Morris at Iowa State University: https://www.stat.iastate.edu/people/max-morris
  2. Morris, M. D. (1991). Factorial sampling plans for preliminary computational experiments. Technometrics, 33, 161–174.
  3. Borgonovo, E., and E. Plischke (2016). Sensitivity analysis: a review of recent advances. European Journal of Operational Research, 248(3), 869–887. https://doi.org/10.1016/j.ejor.2015.06.032
  4. Iooss, B., and P. Lemaître (2015). A review on global sensitivity analysis methods. In Uncertainty Management in Simulation-Optimization of Complex Systems, edited by G. Dellino and C. Meloni, 101–122. Boston, MA: Springer. https://doi.org/10.1007/978-1-4899-7547-8_5
  5. Norton, J. P. (2015). An introduction to sensitivity assessment of simulation models. Environmental Modelling & Software, 69, 166–174. https://doi.org/10.1016/j.envsoft.2015.03.020
  6. Wei, P., Z. Lu, and J. Song (2015). Variable importance analysis: a comprehensive review. Reliability Engineering & System Safety, 142, 399–432. https://doi.org/10.1016/j.ress.2015.05.018
  7. Campolongo, F., J. Cariboni, and A. Saltelli (2007). An effective screening design for sensitivity analysis of large models. Environmental Modelling and Software, 22, 1509–1518.