Resampling methods involve:
Repeatedly drawing a sample from the training data;
Refitting the model of interest with each new sample;
Examining all the refitted models and then drawing conclusions.
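The three steps above can be sketched in a few lines of pure Python. This is a minimal illustration, not a production implementation: the `resample_estimates` helper, the toy data, and the use of the sample mean as the "model" are all assumptions made for the example.

```python
import random
import statistics

def resample_estimates(data, fit, n_resamples=1000, seed=0):
    """Repeatedly draw a sample, refit, and collect the results."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_resamples):
        # Step 1: draw a sample (here, with replacement) from the data
        sample = [rng.choice(data) for _ in data]
        # Step 2: refit the "model" (here, just a simple estimator)
        estimates.append(fit(sample))
    # Step 3: the caller examines all refitted results and draws conclusions
    return estimates

data = [2.1, 3.4, 1.9, 5.0, 4.2, 3.3, 2.8, 4.7]
estimates = resample_estimates(data, statistics.mean)
spread = statistics.stdev(estimates)  # variability across the refits
```

Examining the spread of the refitted estimates is exactly the kind of conclusion-drawing the third step refers to.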
There are two major resampling techniques: cross-validation and the bootstrap. Both are easy to implement and broadly applicable. Cross-validation is used for model assessment and model selection, while the bootstrap is most commonly used to measure the accuracy of a parameter estimate from a given learning method.
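As a concrete example of cross-validation for model assessment, here is a minimal k-fold sketch in pure Python. The `k_fold_cv_mse` function, the toy data, and the choice of a training-fold mean as the predictor are illustrative assumptions, not part of the text.

```python
import random
import statistics

def k_fold_cv_mse(data, k=5, seed=0):
    """Estimate test MSE by k-fold cross-validation for a mean predictor."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k disjoint validation folds
    fold_mses = []
    for fold in folds:
        held_out = set(fold)
        # fit on the other k-1 folds (here the "fit" is just a mean)
        train = [data[i] for i in idx if i not in held_out]
        pred = statistics.mean(train)
        # assess on the held-out fold
        mse = statistics.mean((data[i] - pred) ** 2 for i in fold)
        fold_mses.append(mse)
    return statistics.mean(fold_mses)  # average error over the k folds

data = [12.0, 15.5, 9.8, 14.1, 11.3, 13.7, 10.9, 16.2, 13.0, 12.4]
cv_mse = k_fold_cv_mse(data, k=5)
```

Every observation is used for both fitting and validation exactly once, which is what makes the cross-validated error a more stable assessment than a single split.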
Resampling methods are useful because they address the following drawbacks of the traditional validation-set approach:
data are often scarce, and we may not be able to afford to set aside a separate validation or test set when training a model;
model performance on the validation set is highly sensitive to how the data are split, and the validation error tends to overestimate the test error rate, since the model is fit on only part of the available data.
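The split-dependence drawback in the last point is easy to demonstrate: repeating the same random split with different seeds yields noticeably different validation errors. The `validation_mse` helper and the simulated Gaussian data below are assumptions made for this illustration.

```python
import random
import statistics

def validation_mse(data, frac_train=0.7, seed=0):
    """One random train/validation split; predict with the training mean."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * frac_train)
    train, val = shuffled[:cut], shuffled[cut:]
    pred = statistics.mean(train)
    return statistics.mean((y - pred) ** 2 for y in val)

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(40)]
# The same data, ten different random splits: the error estimate varies
mses = [validation_mse(data, seed=s) for s in range(10)]
spread = max(mses) - min(mses)
```

The nonzero spread across splits is precisely the instability that cross-validation averages away.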