
Talk:Cross-validation (statistics)

Claim about OLS' downward bias in the expected MSE

The article makes the following claim:

If the model is correctly specified, it can be shown under mild assumptions that the expected value of the MSE for the training set is (n − p − 1)/(n + p + 1) < 1 times the expected value of the MSE for the validation set (the expected value is taken over the distribution of training sets).

The text cites Trippa et al. (2015) specifically about the bias factor (n − p − 1)/(n + p + 1). However, the paper does not seem to contain any discussion of this bias factor for OLS. Is there an algebraic proof available for OLS?
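
One heuristic sketch (under the usual OLS assumptions; this argument is not taken from Trippa et al.) would be the following. With p predictors plus an intercept, the training MSE is RSS/n, and E[RSS] = σ²(n − p − 1), so E[MSE_training] = σ²(n − p − 1)/n. For a validation point drawn independently from the same population, the expected squared prediction error is approximately σ²(1 + (p + 1)/n) = σ²(n + p + 1)/n, since the variance of the fitted value at a typical new point is roughly σ²(p + 1)/n. Taking the ratio of the two expectations gives (n − p − 1)/(n + p + 1). Whether this approximation is what the article has in mind, or whether an exact result is available, I do not know.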

Based on a simple simulation, the claim seems to be true.

Simulation in R
# Draw n observations from a linear model in X and Z with unit-variance noise
draw_sample <- function(n) {
    X <- rnorm(n)
    Z <- rnorm(n)
    epsilon <- rnorm(n)

    data.frame(
        Y = .1 + .3 * X + .4 * Z + epsilon,
        X = X,
        Z = Z)
}

# Mean squared prediction error of a fitted model on a data set
mse <- function(model, data) {
    Y_hat <- predict(model, data)

    mean((data$Y - Y_hat)^2)
}

# Fit OLS on a training sample and return its MSE on the training and validation sets
draw_mse <- function(n_training, n_validation) {
    data <- draw_sample(n_training + n_validation)
    data_training <- data[1:n_training,]
    data_validation <- data[(n_training + 1):nrow(data),]

    model <- lm(Y ~ X + Z, data = data_training)

    c(mse(model, data_training),
      mse(model, data_validation))
}

# Repeat the training/validation comparison n_samples times
simulate <- function(n_samples) {
    sapply(
        1:n_samples,
        function(x) {
            draw_mse(n_training = 50, n_validation = 50)
        })
}

x <- simulate(10000)
mean(log(x[1,]) - log(x[2,]))  # mean log ratio of training MSE to validation MSE

The resulting mean log ratio of the MSE on the training set to the MSE on the validation set is very close to the value implied by the article's formula.
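
As a quick check (using the same n = 50 and p = 2 as in the simulation above), the value implied by the formula can be computed directly in R:

log((50 - 2 - 1) / (50 + 2 + 1))  # log(47/53), approximately -0.120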

chery (talk) 16:32, 17 June 2022 (UTC); edited 17:37, 17 June 2022 (UTC)[reply]

"Swap sampling"

Is there another paper describing this method? The cited paper doesn't even call it "swap sampling". 24.13.125.183 (talk) 00:57, 15 February 2023 (UTC)[reply]

Unclear definition of cross validation

In "Motivation", the article says/defines cross-validation as: " If an independent sample of validation data is taken from the same population as the training data, it will generally turn out that the model does not fit the validation data as well as it fits the training data." What if instead, an independent sample of validation data is taken from a different population as the training data? It seems like a bad choice of syntax for that sentence. Eigenvoid (talk) 13:24, 18 May 2023 (UTC)[reply]

Outer test set

In the section "k*l-fold cross-validation", in the sentence "The inner training sets are used to fit model parameters, while the outer test set is used as a validation set to provide an unbiased evaluation of the model fit.", shouldn't the text "outer test set" be changed to "inner test set", since this is the validation of the fit of the model parameters? Cadoraghese (talk) 14:58, 30 September 2023 (UTC)[reply]
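
For reference, here is a minimal sketch of k*l-fold (nested) cross-validation, reusing the draw_sample() and mse() helpers from the simulation in the first thread above (a hypothetical illustration, not code from the article or its sources): the inner folds supply the validation sets on which candidate models are compared, while each outer test set is held out entirely and used only once, for the final unbiased evaluation.

set.seed(1)
data <- draw_sample(200)                     # toy data from the simulation above

k <- 5                                       # number of outer folds
l <- 4                                       # number of inner folds

outer_fold <- sample(rep(1:k, length.out = nrow(data)))
outer_mse <- numeric(k)

for (i in 1:k) {
    outer_test  <- data[outer_fold == i, ]   # the "outer test set"
    outer_train <- data[outer_fold != i, ]   # split further by the inner loop

    inner_fold <- sample(rep(1:l, length.out = nrow(outer_train)))
    candidates <- list(Y ~ X, Y ~ X + Z)     # models compared on the inner validation sets

    inner_mse <- sapply(candidates, function(f) {
        mean(sapply(1:l, function(j) {
            fit <- lm(f, data = outer_train[inner_fold != j, ])
            mse(fit, outer_train[inner_fold == j, ])
        }))
    })

    # Refit the selected model on the whole outer training set and evaluate it
    # once on the untouched outer test set.
    best <- candidates[[which.min(inner_mse)]]
    outer_mse[i] <- mse(lm(best, data = outer_train), outer_test)
}

mean(outer_mse)                              # estimate of generalization error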

Variance estimation missing

"No Unbiased Estimator of the Variance of K-Fold Cross-Validation" by Bengio and Grandvalet (https://www.jmlr.org/papers/volume5/grandvalet04a/grandvalet04a.pdf). Could this be added in a new section? It would be a very valuable discussion. Biggerj1 (talk) 15:12, 8 November 2024 (UTC)[reply]