Talk:Generalization error

Criticism of the definition

In the definition of generalization error, the "function" should be a "random measure". In mathematics, a function expresses a fixed correspondence between its arguments and its values. The generalization error, however, is a random variable in this context, and a random variable does not have a fixed correspondence with other random variables. Thus, the term "function" is abused here. --Yuanfangdelang (talk) 03:50, 6 October 2010 (UTC)
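For illustration, a minimal sketch of the point being made, in notation that is not taken from the article itself: write S for the training sample, A for the learning algorithm, μ for the data distribution, and L for the loss. The learned hypothesis f_S = A(S) depends on the random sample S, so its risk is itself a random variable rather than a fixed number:

    % Illustrative notation only: f_S is random through S,
    % so R(f_S) is a random variable, not a fixed quantity.
    R(f_S) = \mathbb{E}_{(x,y)\sim\mu}\left[ L\big(f_S(x),\, y\big) \right]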

Well, here is your chance, Yuanfangdelang: be bold and go ahead and edit the article, citing the necessary sources. Dieter Simon (talk) 23:39, 6 October 2010 (UTC)

Teacher and Student

I don't really get the terms 'teacher' and 'student' in the context of the definition. Can someone elaborate a little on them or provide references? --Sopasakis p (talk) 10:16, 11 April 2012 (UTC)

Confusing

"It is measured as the distance between the error on the training set and the test set and is averaged over the entire set of possible training data that can be generated after each iteration of the learning process."

How can training data be generated after each iteration by the learning process? This is rather confusing. — Preceding unsigned comment added by 131.180.159.71 (talk) 14:52, 7 June 2013 (UTC)
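One plausible reading of the quoted sentence, sketched in illustrative notation (S a training sample, f_S the hypothesis learned from it, R the expected risk, R̂_S the empirical risk on S; none of these symbols appear in the article): the train/test discrepancy is averaged over the random draw of the training sample, i.e. over all training sets the data source could have produced, not over data produced by the learner itself.

    % Averaging over the draw of the training sample S
    % (notation illustrative, not from the article):
    G = \mathbb{E}_{S}\left[ R(f_S) - \hat{R}_S(f_S) \right]

If that reading is right, the article sentence conflates the data-generating process with the iterations of the learning process.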

non-standard definition of generalization error

Basically, the generalization error is here defined as "the difference between the expected and empirical error", which actually seems to be the definition of the so-called generalization gap (see this reddit comment with further supporting sources).

Is there a reference supporting the currently given definition?

Musteresel (talk) 22:19, 30 August 2017 (UTC)
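For readers comparing the two usages, a minimal side-by-side in illustrative notation (hypothesis f, loss L, data distribution μ, training sample (x_1, y_1), …, (x_n, y_n); none of these symbols come from the article):

    % Usage 1: generalization error = expected risk of the hypothesis.
    R(f) = \mathbb{E}_{(x,y)\sim\mu}\left[ L\big(f(x),\, y\big) \right]

    % Usage 2: generalization error (elsewhere "generalization gap") =
    % expected risk minus empirical risk on the training sample.
    G(f) = R(f) - \frac{1}{n} \sum_{i=1}^{n} L\big(f(x_i),\, y_i\big)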

recent change to generalization error definition

I see this recent change in the log:

21:16, 22 February 2020‎ Francisbach (talk | contribs)‎ 12,936 bytes (-75)‎ The definition of generalization error was *wrong*. The correct definition (see for example book Foundations of ML) is the expected error, and not the difference of expected and empirical errors.

The definition Francisbach is using seems inconsistent with what I see in https://arxiv.org/abs/1808.01174, as well as in https://link.springer.com/chapter/10.1007/978-3-319-73074-5_5 -- "The generalization error of a machine learning model is the difference between the empirical loss of the training set and the expected loss of ..."

These seem to use the term to mean the same as "generalization gap." If the term has multiple acceptable uses, should we be including both? Am I misunderstanding these other uses?

-nairbv
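For anyone wanting to see the two quantities side by side numerically, a minimal Python sketch (the data, model choice, and all names here are hypothetical, chosen only for illustration):

    # Minimal sketch: estimate both quantities from a held-out split.
    # Everything here (synthetic data, model choice) is illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = (X @ rng.normal(size=5) + 0.5 * rng.normal(size=1000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    train_error = 1 - model.score(X_train, y_train)  # empirical (training) error
    test_error = 1 - model.score(X_test, y_test)     # estimate of the expected error
    gap = test_error - train_error                   # train/test discrepancy
    print(train_error, test_error, gap)

Under the Foundations-of-ML usage, test_error estimates the generalization error; under the usage in the two links above, gap is the generalization error (a.k.a. the generalization gap).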