Draft:MEC


Memory-equivalent Capacity (MEC) is a measure of a model's intellectual capacity, derived from the number of functions a machine learning model can represent (memorize). It provides an upper bound on the intellectual capacity needed to solve a task, such as memorizing a training set or a game tree. MEC is measured in bits and is usually determined from the number of parameters in a model together with its topology.

Definition

A model is said to have an MEC of n bits when it is able to represent all 2^n binary labeling functions of n points. MEC can be extended to k-ary classification problems and regression tasks, providing a unified capacity view on all models.[1]
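
The definition can be illustrated with a brute-force experiment: for a given set of points, enumerate all binary labelings and check whether a model can realize each of them exactly. The sketch below does this for a simple linear model; the use of a perceptron, the training rule, and the epoch limit are illustrative assumptions rather than a published procedure, and failure to converge only approximates non-representability.

```python
# Illustrative brute-force MEC check for a linear model (assumptions: perceptron
# learning rule, fixed epoch budget; non-convergence is treated as "not representable").
import itertools
import numpy as np

def fits_all_labelings(points, epochs=1000):
    """Return True if a perceptron realizes every binary labeling of `points`."""
    n, d = points.shape
    X = np.hstack([points, np.ones((n, 1))])   # append a constant bias input
    for labels in itertools.product([-1, 1], repeat=n):
        y = np.array(labels)
        w = np.zeros(d + 1)
        for _ in range(epochs):
            errors = 0
            for xi, yi in zip(X, y):
                if yi * (w @ xi) <= 0:          # misclassified -> perceptron update
                    w += yi * xi
                    errors += 1
            if errors == 0:                     # this labeling was memorized exactly
                break
        else:
            return False                        # no exact fit found within the budget
    return True

rng = np.random.default_rng(0)
d = 2                                           # input dimensionality (3 parameters with bias)
for n in range(1, 7):
    pts = rng.normal(size=(n, d))               # points in general position (w.h.p.)
    ok = fits_all_labelings(pts)
    print(n, "points:", "all labelings representable" if ok else "not all representable")
```

With two-dimensional inputs (three parameters including the bias), all labelings of up to three points in general position are representable, consistent with an MEC of one bit per parameter for a linear separator.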

Relationship to VC Dimension

MEC and the VC dimension are directly connected but not equivalent. If a model can represent all 2^n binary labeling functions of n arbitrarily-distributed points, its VC dimension is at least n. However, the VC dimension can be larger than the MEC for a concrete set of points if the model is able to generalize; for example, the VC dimension can even be infinite. MEC, in contrast, requires all possible labeling functions of the n points to be representable, which allows MEC to be assigned the unit of bits. Since every model has functions that it cannot represent (see Lossless compression, the No free lunch theorem, and the Pigeonhole principle), MEC is always finite.[citation needed]

Relationship to Information Capacity

MEC is not equivalent to the information capacity known from information theory, which depends on the dimensionality of the data and on the type of separation function used. As a consequence, information capacity can only be stated asymptotically, in the limit as the number of parameters approaches infinity. For example, the information capacity of a linear classifier is 2 bits per parameter for binary classification as the number of parameters approaches infinity.[2] The MEC of a linear separator, in contrast, is 1 bit per parameter, regardless of the number of parameters or the dimensionality of the input.[3]
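
The counting argument behind the 2-bits-per-parameter figure can be reproduced directly from Cover's function C(n, d), the number of binary labelings ("dichotomies") of n points in general position that a homogeneous linear separator with d parameters can realize. The sketch below computes it; the particular values of d and n are arbitrary choices for illustration.

```python
# Cover's (1965) counting function for linearly realizable dichotomies.
from math import comb

def cover_count(n, d):
    """Number of dichotomies of n points in general position realizable with d parameters."""
    return 2 * sum(comb(n - 1, k) for k in range(d))

d = 10  # number of parameters of the linear separator
for n in [5, 10, 15, 20, 25, 30]:
    frac = cover_count(n, d) / 2 ** n
    print(f"n={n:2d}  realizable fraction = {frac:.3f}")

# All 2**n labelings are realizable only while n <= d (1 bit per parameter: the MEC),
# whereas the realizable fraction stays near 1 until about n = 2*d and then collapses,
# which is the asymptotic "2 bits per parameter" information capacity.
```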

Applications

MEC can be used to determine the maximum size of a machine learner before overfitting becomes a risk. For example, if the MEC is larger than the number of instances in a balanced binary classification problem, the machine learner is too big and may memorize every training-set instance directly (that is, overfit).[citation needed]
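
A minimal sketch of this sizing rule, under the assumption stated above that each instance of a balanced binary problem contributes at most one bit of label information; the function name is an illustrative choice.

```python
# Sizing rule sketch: a model whose MEC (in bits) reaches the number of balanced
# binary training instances could memorize every label outright.
def memorization_risk(mec_bits: int, num_instances: int) -> bool:
    """True if the model is large enough to memorize a balanced binary training set."""
    return mec_bits >= num_instances

print(memorization_risk(mec_bits=10_000, num_instances=8_000))   # True: overfitting risk
print(memorization_risk(mec_bits=10_000, num_instances=50_000))  # False: must compress/generalize
```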

MEC also helps to understand the upper bound of the intellectual capacity needed to solve a task. For example, a binary decision tree with 19 levels can perfectly play Tic-Tac-Toe, and the MEC for chess is approximately 400 bits;[4] that is, one instance of a perfect game can be represented in 400 bits. These examples demonstrate how MEC can be used to quantify the upper-limit size required of a machine learner for a specific task based on its difficulty: the MEC of a model trained for chess should ideally be much smaller than 400 bits times the number of games in the training set to guarantee generalization to unseen games. A model with a larger MEC could still generalize, but there is no guarantee. Most importantly, without MEC it is not measurable how well a model generalizes; generalization can only be tested anecdotally using validation samples from a test set, which leaves models vulnerable to adversarial examples.[citation needed]
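
The roughly 400-bit figure can be reproduced with a back-of-the-envelope calculation using Shannon's own rough assumptions of about 40 moves per player and about 30 legal moves per position; the constants below are those estimates, not exact values.

```python
from math import log2

plies = 80                     # ~40 moves by each player (Shannon's assumption)
branching = 30                 # ~30 legal moves per position (Shannon's assumption)
print(round(plies * log2(branching)))   # ~393 bits, i.e. on the order of 400

# Equivalently, Shannon's estimate of roughly 10**120 possible games needs
# about log2(10**120) bits to single out one game:
print(round(120 * log2(10)))            # ~399 bits
```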

Because MEC is expressed in bits, it can be used to compare the complexity of machine learning models against each other, independent of the task. MEC also allows models to be optimized for deployment in environments with limited memory resources. Finally, it serves as an educational tool for understanding the limits and capabilities of machine learning models.[citation needed]

Challenges

Exact capacity calculations for machine learning models can be complex, particularly due to the stochastic nature of many training algorithms (such as stochastic gradient descent or gradient boosting).[citation needed]

To calculate the VC dimension, one has to know the structure of the data, which is usually modeled as a function (see, for example, [5]). In other words, one has to have a model of the data in order to calculate the VC dimension. Since the practical purpose of a capacity measurement is to estimate the complexity of a problem before creating a model, this is a catch-22. Calculating the exact information capacity requirement of a model can also be challenging, because the information capacity is a function of the intrinsic dimensionality of the data. Algorithms for estimating this exist, for example total correlation,[6] but they usually take exponential time. Furthermore, models below information capacity can still overfit, since the information capacity is usually larger than the MEC.[citation needed]
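
For small discrete datasets, total correlation can be estimated directly from empirical frequencies, as in the sketch below; the toy data and helper functions are illustrative assumptions, and the need to enumerate joint configurations is what makes exact computation exponential in the number of variables.

```python
# Watanabe's total correlation: TC(X1..Xd) = sum_i H(Xi) - H(X1,...,Xd),
# estimated here from empirical frequencies of a small discrete dataset.
from collections import Counter
from math import log2
import itertools

def entropy(samples):
    """Empirical Shannon entropy (bits) of a list of hashable outcomes."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def total_correlation(rows):
    """rows: list of equal-length tuples, one tuple per data instance."""
    d = len(rows[0])
    marginal = sum(entropy([r[i] for r in rows]) for i in range(d))
    joint = entropy([tuple(r) for r in rows])
    return marginal - joint

# Toy example: the third column is a deterministic function (XOR) of the first two,
# so it adds no intrinsic dimensionality; the redundancy shows up as ~1 bit of TC.
data = [(a, b, a ^ b) for a, b in itertools.product([0, 1], repeat=2)] * 25
print(round(total_correlation(data), 3))   # ~1.0 bit
```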

The calculation of MEC, at least as an upper bound, is more practical than the calculation of the VC dimension or of the information capacity. However, while upper-bound MECs for neural networks[7][8][9] and Hopfield networks[10] are known, MEC calculations for other algorithms, such as XGBoost, Gaussian mixture models, RBF-kernel support vector machines, or recursive models, have yet to be found. Similarly, since the intrinsic dimensionality of the data is not known a priori, the MEC requirement for a given training set can only be estimated as an upper bound. Note that memory capacity requirements are usually understood as upper bounds.[citation needed]
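
As a rough illustration only, the sketch below applies the one-bit-per-parameter figure from the linear-separator result above to every neuron of a dense feed-forward network, yielding a crude parameter-count upper bound. This heuristic and the layer-size convention are assumptions made for illustration; the cited works derive tighter, architecture-aware bounds.

```python
# Crude "one bit per trainable parameter" upper-bound sketch for a dense network
# (an illustrative heuristic, not the published formulas of the cited works).
def mec_parameter_upper_bound(layer_sizes):
    """layer_sizes like [4, 8, 8, 1]: 4 inputs, two hidden layers of 8, 1 output."""
    bits = 0
    for fan_in, width in zip(layer_sizes[:-1], layer_sizes[1:]):
        bits += width * (fan_in + 1)   # weights + bias of each neuron, 1 bit each
    return bits

print(mec_parameter_upper_bound([4, 8, 8, 1]))  # 121 bits
```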

See also

References

  1. ^ Friedland, G. (2023). Information-Driven Machine Learning: Data Science as an Engineering Discipline. Springer Nature.
  2. ^ Cover, T. M. (1965). "Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition". IEEE Transactions on Electronic Computers. 14 (3): 326–334.
  3. ^ MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press.
  4. ^ Shannon, C. E. (1950). "Programming a Computer for Playing Chess". Philosophical Magazine. 41 (314): 256–275.
  5. ^ Bartlett, P. L.; Maass, W. (2003). "Vapnik-Chervonenkis Dimension of Neural Nets". The Handbook of Brain Theory and Neural Networks: 1188–1192.
  6. ^ Watanabe, S. (1960). "Information Theoretical Analysis of Multivariate Correlation". IBM Journal of Research and Development. 4 (1): 66–82. doi:10.1147/rd.41.0066.
  7. ^ Baum, E. B.; Haussler, D. (1989). "What Size Net Gives Valid Generalization?". Neural Computation. 1 (1): 151–160.
  8. ^ Zhang, J. (2019). Information-Theoretic Methods in Machine Learning (Doctoral dissertation). University of Hong Kong.
  9. ^ Hoeiness, H.; Harstad, A.; Friedland, G. (2021). "From Tinkering to Engineering: Measurements in Tensorflow Playground". 25th International Conference on Pattern Recognition (ICPR).
  10. ^ McEliece, R. J.; Posner, E. C.; Rodemich, E. R.; Venkatesh, S. S. (1987). "The Capacity of the Hopfield Associative Memory". IEEE Transactions on Information Theory. 33 (4): 461–482.