Forward selection
One such method is forward selection, an example of stepwise regression, which performs feature selection in a series of steps. With forward selection, the idea is to start with an empty model that has no features selected. We then perform k simple linear regressions, one for each of the k features available, and pick the best one. Because we are comparing models with the same number of features at this stage, we can use the R² statistic to guide our choice, although metrics such as the AIC work as well. Once we have chosen the first feature to add, we pick a second feature from the remaining k - 1. We therefore run k - 1 two-feature regressions, each pairing one of the remaining features with the feature picked in the first step. We continue adding features in this way until we have evaluated the model with all the features included, and then stop. Note that, at every step, we make a hard choice about which feature to include for all future steps.
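To make the greedy step concrete, here is a minimal sketch in R of a single forward-selection step. The function name forward_select_step is hypothetical, and it assumes the data sit in a data frame whose columns hold the response and the candidate features:

```r
# A minimal sketch of one forward-selection step: fit one linear
# regression per remaining candidate and return the candidate whose
# model achieves the highest R-squared. The data frame `df`, the
# response name, and the feature names are all assumed inputs.
forward_select_step <- function(df, response, selected, candidates) {
  best_r2 <- -Inf
  best_feature <- NULL
  for (feature in candidates) {
    # Model with the already-selected features plus one candidate
    model <- lm(reformulate(c(selected, feature), response = response),
                data = df)
    r2 <- summary(model)$r.squared
    # Every model in this step has the same number of features, so
    # plain R-squared is a fair basis for comparison
    if (r2 > best_r2) {
      best_r2 <- r2
      best_feature <- feature
    }
  }
  best_feature
}
```

Calling this repeatedly, adding the returned feature to selected and removing it from candidates each time, reproduces the hard, greedy choices described above.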
For example, models with more than one feature that do not include the feature chosen in the first step are never considered, so we do not search the space exhaustively. In fact, if we take into account that we also assess the null model, we can compute the total number of models we fit a linear regression on as follows:

1 + k + (k - 1) + (k - 2) + ... + 1 = 1 + k(k + 1) / 2
The order of magnitude of this computation is on the scale of k², which even for moderately small values of k is already considerably less than the 2^k models an exhaustive search would require. For instance, with k = 10 features, forward selection fits at most 1 + 10 · 11 / 2 = 56 models, whereas an exhaustive search would need 2^10 = 1,024. At the end of the forward selection process, we have to choose among k + 1 models: the null model plus the model obtained at the end of each of the k steps. As this final comparison involves models with different numbers of features, we usually use a criterion such as the AIC or the adjusted R² to make our final choice of model. We can demonstrate this process for our CPU dataset with commands along the following lines:
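A common way to run forward selection in R is with the built-in step() function. The sketch below assumes the CPU data have been split into a training data frame named machine_train with PRP as the response; these names are assumptions rather than fixed by the text:

```r
# Intercept-only (null) model and full model with every feature
machine_model_null <- lm(PRP ~ 1, data = machine_train)
machine_model_full <- lm(PRP ~ ., data = machine_train)

# Forward selection: start from the null model and let step() add
# one feature at a time, guided by AIC, up to the full model
machine_model_forward <- step(machine_model_null,
                              scope = list(lower = machine_model_null,
                                           upper = machine_model_full),
                              direction = "forward")
```

By default, step() prints the sequence of models it considers; the object it returns is the lm fit of the final model selected by AIC.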
There are various methods for adding or removing variables to determine the best possible model.

In the forward method, we start with no variables and keep adding significant variables one at a time for as long as the overall fit of the model improves.

In the backward method, we start by considering all the variables and remove them one by one until the prescribed statistical criteria are met (for example, no insignificant variables and no multicollinearity). Finally, an overall statistic is checked: for example, if the R-squared value is greater than 0.7, the model is considered good, otherwise it is rejected. In industry, practitioners mainly prefer to work with backward methods.
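As a sketch of the significance-based backward pass just described, assuming a data frame df with a numeric response and numeric predictors (the names and the alpha threshold are assumptions), one might write:

```r
# Backward elimination sketch: repeatedly refit the model and drop the
# least significant variable until every coefficient's p-value is at
# most alpha. Assumes numeric predictors, so coefficient names match
# the column names of `df`.
backward_eliminate <- function(df, response, features, alpha = 0.05) {
  repeat {
    model <- lm(reformulate(features, response = response), data = df)
    # p-values of the coefficients, excluding the intercept
    p_values <- summary(model)$coefficients[-1, "Pr(>|t|)"]
    if (max(p_values) <= alpha || length(features) == 1) {
      return(model)
    }
    # Drop the least significant variable and refit
    features <- setdiff(features, names(which.max(p_values)))
  }
}
```

R's step() function with direction = "backward" performs a similar pass, but it drops variables by AIC rather than by significance tests.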