Lasso regression

To avoid overfitting a regression caused by using too many features, while still keeping enough features to minimize the sum of squared errors and get an accurate fit on the test data, you need to regularize the regression.

This can be done with Lasso regression, where you minimize the sum of squared errors plus a penalty parameter times the sum of the absolute values of the regression coefficients (which effectively penalizes the number of features used):

minimize SSE + λ · Σ|βᵢ|
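A minimal sketch of how this looks in practice, assuming scikit-learn is available; X, y, and the alpha value are made-up examples:

```python
# Lasso regression sketch, assuming scikit-learn. X, y are synthetic data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                         # 100 samples, 10 features
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=100)   # only 2 features matter

lasso = Lasso(alpha=0.1)    # alpha plays the role of λ in "SSE + λ · Σ|βᵢ|"
lasso.fit(X, y)

# Lasso drives the coefficients of uninformative features to exactly zero,
# which is how it performs feature selection.
print(lasso.coef_)
```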


Outlier Rejection

To detect and remove outliers in a dataset (which may, for instance, have been caused by sensor errors or data entry errors), you first train on all of the data, remove the roughly 10% of data points with the highest residual errors, and then train again on the remaining data.

Otherwise, erroneous data entries may give you an incorrect regression line.
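A minimal sketch of this fit, remove the worst 10%, refit loop, assuming scikit-learn and synthetic data with a few injected bad entries:

```python
# Residual-based outlier rejection sketch, assuming scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(100, 1))
y = 2.5 * X[:, 0] + rng.normal(scale=0.5, size=100)
y[::20] += 15                              # inject a few erroneous entries

reg = LinearRegression().fit(X, y)         # first fit, outliers included
residuals = np.abs(y - reg.predict(X))

keep = residuals.argsort()[: int(0.9 * len(y))]       # drop worst 10%
reg_clean = LinearRegression().fit(X[keep], y[keep])  # refit on cleaned data

print("slope before:", reg.coef_[0], "after:", reg_clean.coef_[0])
```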

Classification vs. Regression

Two closely related concepts in supervised machine learning are classification and regression.
With supervised classification you get a discrete output (a label or a boolean value), while in regression the output is continuous (i.e. a number).
What you are trying to find also differs: a decision boundary in classification, and a best-fit line in regression. You evaluate the former with its accuracy, and the latter with the sum of squared errors or r².
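A minimal sketch contrasting the two evaluations, assuming scikit-learn's toy datasets; the particular models and the train/test split are illustrative choices:

```python
# Classification vs. regression evaluation sketch, assuming scikit-learn.
from sklearn.datasets import load_iris, load_diabetes
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split

# Classification: discrete labels, evaluated with accuracy.
Xc, yc = load_iris(return_X_y=True)
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xc_tr, yc_tr)
print("accuracy:", clf.score(Xc_te, yc_te))   # fraction labeled correctly

# Regression: continuous output, evaluated with r².
Xr, yr = load_diabetes(return_X_y=True)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, random_state=0)
reg = LinearRegression().fit(Xr_tr, yr_tr)
print("r²:", reg.score(Xr_te, yr_te))         # .score is r² for regressors
```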