What is 'Overfitting' in Machine learning?

Answer: In machine learning, overfitting occurs when a statistical model describes the random error or noise in the training data instead of the underlying relationship. Overfitting is usually observed when a model is excessively complex, i.e. it has too many parameters relative to the number of training examples. A model that has been overfit exhibits poor predictive performance: in layman's terms, it fits the training set too closely and does not generalize to the test set.
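The definition above can be illustrated with a small sketch (names and the toy data are my own, using only NumPy): a degree-9 polynomial fitted to 10 noisy points has roughly one parameter per training example, so it memorizes the noise, while a degree-1 fit captures the underlying relationship.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from a simple underlying relationship: y = x + noise
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(scale=0.2, size=x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = x_test + rng.normal(scale=0.2, size=x_test.size)

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

simple_train, simple_test = fit_and_score(1)    # few parameters
complex_train, complex_test = fit_and_score(9)  # one parameter per training point

# The complex model fits the training set almost perfectly, yet its
# error on unseen data is worse: the signature of overfitting.
print(f"degree 1: train MSE={simple_train:.4f}, test MSE={simple_test:.4f}")
print(f"degree 9: train MSE={complex_train:.4f}, test MSE={complex_test:.4f}")
```

The exact numbers depend on the random seed, but the pattern (near-zero training error, larger test error for the complex model) is what the answer above describes.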


Why does overfitting happen?


Answer: Overfitting is possible because the criterion used for training the model (e.g. minimizing error on the training data) is not the same as the criterion used to judge its efficacy (performance on unseen data).

How can you avoid overfitting?


Answer: Overfitting can be avoided by using a lot of data; it tends to happen when you try to learn from a small dataset. But if you only have a small dataset and are forced to build a model from it, you can use a technique known as cross-validation. In this method the dataset is split into two sections, a testing and a training dataset: the testing dataset is used only to evaluate the model, while the data points in the training dataset are used to build it.

In this technique, the model is trained on a dataset of known data (the training dataset) and then evaluated against a dataset of unknown data (the testing dataset). The idea of cross-validation is to define a dataset to "test" the model already during the training phase.
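A minimal sketch of k-fold cross-validation, assuming NumPy and a toy linear dataset (the helper `k_fold_indices` and all names are illustrative, not from a specific library): the data is shuffled and split into k folds, each fold serves once as the held-out test set while the model is trained on the rest.

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and yield (train_idx, test_idx) pairs, one per fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

# Toy data: a noisy linear relationship
x = np.linspace(0, 1, 20)
y = x + np.random.default_rng(1).normal(scale=0.1, size=x.size)

scores = []
for train_idx, test_idx in k_fold_indices(len(x), k=5):
    coeffs = np.polyfit(x[train_idx], y[train_idx], 1)   # train on k-1 folds
    pred = np.polyval(coeffs, x[test_idx])               # predict on the held-out fold
    scores.append(np.mean((pred - y[test_idx]) ** 2))    # held-out MSE for this fold

print(f"mean held-out MSE over 5 folds: {np.mean(scores):.4f}")
```

Averaging the held-out scores gives an estimate of how the model will perform on unseen data, which is exactly the generalization question that overfitting obscures.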

