The way we have been doing model selection thus far is definitely not the only way. What other options are out there? Many - let’s consider a few.
This section also includes some miscellaneous R notes (making summary tables, and sources of inspiration for cool figures).
18.0.1 Rationale
Until now, we have focused on using information criteria for model selection, in order to get very familiar with one coherent framework for choosing variables across model types. But:
- In some fields, using hypothesis tests for variable selection is preferred.
- For datasets that are large and/or models that are complex, dredge() can be a challenge (taking a very long time to run, and perhaps timing out on the server).
- Using hypothesis tests for selection is quite common, so we should know how it’s done!
18.0.2 Hypotheses
Basically, for each (fixed effect) variable in a model, we’d like to test:
\[H_0: \text{all } \beta\text{s for this variable are 0; it's not a good predictor}\]

\[H_1: \text{at least one } \beta \text{ is non-zero; it's a good predictor}\]
We want to test these hypotheses given that all the other predictors in the current full model are included. Because of this condition, and because there are multiple \(\beta\)s for categorical predictors with more than 2 categories, we cannot generally just use the p-values from the model summary() output.
Instead, we use Anova() from the car package. An example with lm():
iris_mod <- lm(Petal.Length ~ Petal.Width + Species + Sepal.Length, data = iris)
summary(iris_mod)
Call:
lm(formula = Petal.Length ~ Petal.Width + Species + Sepal.Length,
data = iris)
Residuals:
Min 1Q Median 3Q Max
-0.76508 -0.15779 0.01102 0.13378 0.66548
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.45957 0.22387 -6.520 1.09e-09 ***
Petal.Width 0.50641 0.11528 4.393 2.15e-05 ***
Speciesversicolor 1.73146 0.12762 13.567 < 2e-16 ***
Speciesvirginica 2.30468 0.19839 11.617 < 2e-16 ***
Sepal.Length 0.55873 0.04583 12.191 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.2664 on 145 degrees of freedom
Multiple R-squared: 0.9778, Adjusted R-squared: 0.9772
F-statistic: 1600 on 4 and 145 DF, p-value: < 2.2e-16
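The output above is from summary(), which gives one t-test per coefficient rather than per predictor. Passing the fitted model to Anova() gives the per-predictor (marginal) tests discussed below; a minimal sketch of the call (its ANOVA table is not reproduced here):

library(car)  # provides Anova() for marginal ("Type II") tests

# one test per predictor, each computed given all the other predictors in the model
Anova(iris_mod)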
Notice that Anova() reports one p-value for each predictor (excellent!). If the p-value is small, that gives evidence against \(H_0\), and we’d conclude we should keep the predictor in the model. Many people use \(\alpha = 0.05\) as the “dividing line” between “small” and “large” p-values and thus “statistically significant” and “non-significant” test results, but remember the p-value is a probability - there’s no magical difference between 0.049 and 0.051.
Warning: be careful with your capitalization! The R function anova() does something kind of similar to Anova(), but NOT the same, and should be avoided here: it does sequential rather than marginal tests.
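If you want to see the difference for yourself, you can run both on the same fitted model and compare the tables (a quick check, not part of the selection workflow):

anova(iris_mod)  # sequential ("Type I") tests: results depend on predictor order
Anova(iris_mod)  # marginal ("Type II") tests: each predictor given all the others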
18.1 Backward selection
How do we use p-value-based selection to arrive at a best model? There are many options and much controversy about different approaches; here I’ll suggest one. None of these methods are guaranteed to arrive at a model that is theoretically “best” in some specific way, but they do give a framework to guide decision-making and are computationally quick. The premise is that we’d like a simple algorithm to implement, and we will begin with a full model including all the predictors that we think should or could reasonably be important (not just throwing in everything possible).
18.1.1 Algorithm
1. Obtain p-values for all predictors in the full model.
2. Remove the predictor with the largest p-value that you judge to be “not small” or “not significant”.
3. Re-compute p-values for the new, smaller model.
4. Repeat until all remaining p-values are “significant”.
18.1.2 Example
Let’s consider a logistic regression to predict whether a person in substance-abuse treatment is homeless.
home_mod0 <- glm(homeless ~ sex + substance + i1 + cesd + racegrp + age,
                 data = HELPrct, family = binomial(link = 'logit'))
Anova(home_mod0)
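The resulting Anova() table (not reproduced here) shows one p-value per predictor. Supposing, hypothetically, that racegrp had the largest clearly-not-small p-value, one pass through the algorithm would look like this sketch:

# drop the weakest predictor (a hypothetical choice; check your own Anova() output)
home_mod1 <- update(home_mod0, . ~ . - racegrp)
Anova(home_mod1)
# ...then repeat until every remaining predictor tests as "significant"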
Another option may be to use backward stepwise selection (same algorithm as above), but using AIC or BIC as the criterion at each stage instead of p-values. If the IC value is better (by any amount) without a variable, it gets dropped. Variables are dropped one by one until no further IC improvement is possible.
This evaluates many fewer models than dredge() does, so it should be much faster, but it may not find the best of all possible models.
For example, for our model, using AIC (note: this may or may not work for all model types):
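The output below comes from a call along these lines (a sketch; stepAIC() is from the MASS package, and with no scope specified it works backward from the full model):

library(MASS)  # for stepAIC()

# by default (trace = TRUE), the candidate drops and AICs at each step are printed too
home_step <- stepAIC(home_mod0)
home_step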
Call: glm(formula = homeless ~ sex + i1, family = binomial(link = "logit"),
data = HELPrct)
Coefficients:
(Intercept) sexmale i1
0.92969 -0.39976 -0.02657
Degrees of Freedom: 452 Total (i.e. Null); 450 Residual
Null Deviance: 625.3
Residual Deviance: 595.1 AIC: 601.1
Note that we might still want to remove one more variable than stepAIC() does! Above, you can see that if you were to remove age, the AIC would only go up by about 1 unit. So according to our \(\Delta AIC \sim 3\) threshold, we would take age out too.
Using BIC instead, we need to specify the input k = log(nrow(data)) (the BIC penalty multiplier):
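A sketch of the same backward-stepwise call with the BIC penalty, for our example:

# BIC-based selection: penalty multiplier k = log(n) instead of the AIC default k = 2
home_step_BIC <- stepAIC(home_mod0, k = log(nrow(HELPrct)))
home_step_BIC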
18.2 Tables
You may also want to make a table of the proportion of observations in each category, by groups, potentially for many variables.
For just one variable, we can use tally:
tally(~ substance | sex, data = HELPrct, format = 'prop') |>
  kable(caption = 'Proportion using each substance', digits = 2)
Proportion using each substance

          female   male
alcohol     0.34   0.41
cocaine     0.38   0.32
heroin      0.28   0.27
For many variables we can use a loop. For example, we might want to know the proportion homeless and housed and proportion using each substance, both by sex, from the HELPrct dataset. Above we were using the function knitr::kable() to make tables, but we can use pander::pander() too:
# select only the variables needed for the table;
# make the first variable the grouping one
cat_data <- HELPrct |> dplyr::select(sex, substance, homeless)

for (i in 2:ncol(cat_data)) {
  tally(~ cat_data[, i] | cat_data[, 1], format = 'prop') |>
    pander::pander(caption = paste('Proportion in each', names(cat_data)[i]))
  # can rename variables in cat_data if you want better captions
}
18.3 Figures
We’ve made a lot of figures in this class, and almost all have been kind of mediocre. To aim for awesome, here are a couple of great references for inspiration, ideas, and best practices: