Chapter 17 Other Model Selection Approaches

The way we have been doing model selection thus far is definitely not the only way. What other options are out there? Many; let’s consider a few.

This section also includes some miscellaneous R notes (making summary tables, and sources of inspiration for cool figures).

17.0.1 Rationale

Until now, we have focused on using information criteria for model selection, in order to get very familiar with one coherent framework for choosing variables across model types. But:

  • In some fields, hypothesis tests are the preferred tool for variable selection, and the approach is common enough that we should know how it’s done
  • For datasets that are large and/or models that are complex, dredge() can be a challenge (taking a very long time to run, and perhaps timing out on the server)

17.0.2 Hypotheses

Basically, for each (fixed effect) variable in a model, we’d like to test:

\[H_0: \text{all } \beta\text{s for this variable are 0; it's not a good predictor}\]
\[H_1: \text{at least one } \beta \text{ is non-zero; it's a good predictor}\]

We want to test these hypotheses given that all the other predictors in the current full model are included. Because of this condition, and because a categorical predictor with more than two categories has multiple \(\beta\)s, we cannot generally just use the p-values from the model summary() output.

Instead, we use Anova() from the car package. Here is an lm() example.
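The code might look something like this (a sketch; the model name petal_mod is just illustrative):

library(car)

petal_mod <- lm(Petal.Length ~ Petal.Width + Species + Sepal.Length,
                data = iris)
summary(petal_mod)
Anova(petal_mod)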

## 
## Call:
## lm(formula = Petal.Length ~ Petal.Width + Species + Sepal.Length, 
##     data = iris)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -0.765 -0.158  0.011  0.134  0.665 
## 
## Coefficients:
##                   Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        -1.4596     0.2239   -6.52  1.1e-09 ***
## Petal.Width         0.5064     0.1153    4.39  2.1e-05 ***
## Speciesversicolor   1.7315     0.1276   13.57  < 2e-16 ***
## Speciesvirginica    2.3047     0.1984   11.62  < 2e-16 ***
## Sepal.Length        0.5587     0.0458   12.19  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.266 on 145 degrees of freedom
## Multiple R-squared:  0.978,  Adjusted R-squared:  0.977 
## F-statistic: 1.6e+03 on 4 and 145 DF,  p-value: <2e-16
## Anova Table (Type II tests)
## 
## Response: Petal.Length
##              Sum Sq  Df F value  Pr(>F)    
## Petal.Width    1.37   1    19.3 2.1e-05 ***
## Species       13.61   2    95.9 < 2e-16 ***
## Sepal.Length  10.55   1   148.6 < 2e-16 ***
## Residuals     10.29 145                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Notice that Anova() reports one p-value for each predictor (excellent!). If the p-value is small, that gives evidence against \(H_0\), and we’d conclude we should keep the predictor in the model. Many people use \(\alpha = 0.05\) as the “dividing line” between “small” and “large” p-values, and thus between “statistically significant” and “non-significant” test results, but remember that the p-value is a probability; there’s no magical difference between 0.049 and 0.051.

Warning: be careful with your capitalization! The R function anova() does something similar to Anova(), but NOT the same, and should be avoided here: it does sequential rather than marginal tests.

17.1 Backward selection

How do we use p-value-based selection to arrive at a best model? There are many options and much controversy about different approaches; here I’ll suggest one. None of these methods are guaranteed to arrive at a model that is theoretically “best” in some specific way, but they do give a framework to guide decision-making and are computationally quick. The premise is that we’d like a simple algorithm to implement, and we will begin with a full model including all the predictors that we think should or could reasonably be important (not just throwing in everything possible).

17.1.1 Algorithm

  • Obtain p-values for all predictors in full model
  • Remove the predictor with the largest p-value that you judge to be “not small” or “not significant”
  • Re-compute p-values for the new, smaller model
  • Repeat until all p-values are “significant”

17.1.2 Example

Let’s consider a logistic regression to predict whether a person in substance-abuse treatment is homeless.
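A sketch of the full-model fit (the HELPrct data come with the mosaicData package; the name homeless_mod is just illustrative):

library(car)
library(mosaicData)   # provides the HELPrct data

homeless_mod <- glm(homeless ~ sex + substance + i1 + cesd + racegrp + age,
                    data = HELPrct, family = binomial(link = "logit"))
Anova(homeless_mod)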

## Analysis of Deviance Table (Type II tests)
## 
## Response: homeless
##           LR Chisq Df Pr(>Chisq)   
## sex           3.38  1     0.0658 . 
## substance     2.23  2     0.3275   
## i1           10.63  1     0.0011 **
## cesd          1.18  1     0.2784   
## racegrp       3.18  3     0.3645   
## age           0.44  1     0.5075   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Removing age:
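A convenient way to drop one term is update(), sketched here assuming the homeless_mod fit from above:

homeless_mod2 <- update(homeless_mod, . ~ . - age)   # refit without age
Anova(homeless_mod2)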

## Analysis of Deviance Table (Type II tests)
## 
## Response: homeless
##           LR Chisq Df Pr(>Chisq)    
## sex           3.22  1    0.07282 .  
## substance     2.69  2    0.26088    
## i1           11.08  1    0.00087 ***
## cesd          1.13  1    0.28777    
## racegrp       3.16  3    0.36818    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Removing racegrp:

## Analysis of Deviance Table (Type II tests)
## 
## Response: homeless
##           LR Chisq Df Pr(>Chisq)    
## sex           3.59  1     0.0582 .  
## substance     3.63  2     0.1625    
## i1           11.02  1     0.0009 ***
## cesd          1.61  1     0.2051    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Removing cesd (a score indicating depression level):

## Analysis of Deviance Table (Type II tests)
## 
## Response: homeless
##           LR Chisq Df Pr(>Chisq)    
## sex           2.79  1    0.09512 .  
## substance     3.67  2    0.15927    
## i1           13.24  1    0.00027 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Removing substance:

## Analysis of Deviance Table (Type II tests)
## 
## Response: homeless
##     LR Chisq Df Pr(>Chisq)    
## sex     2.96  1      0.085 .  
## i1     25.79  1    3.8e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Removing sex leaves only i1 (average number of drinks per day), which is clearly significant, so the backward selection stops here:

## Analysis of Deviance Table (Type II tests)
## 
## Response: homeless
##    LR Chisq Df Pr(>Chisq)    
## i1     27.2  1    1.8e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

17.1.3 Can’t this be automated?

Strangely, ready-made functions for p-value-based backward selection are not widely available. It is not hard to sketch your own, though.
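Here is a rough sketch for a glm, using car::Anova()’s Type II likelihood-ratio tests and a fixed \(\alpha\) threshold. The function backward_p is made up for illustration and is not robust:

backward_p <- function(model, alpha = 0.05) {
  repeat {
    tests <- car::Anova(model)                 # one test per term
    pvals <- tests[["Pr(>Chisq)"]]
    if (length(pvals) == 0 || max(pvals) < alpha) break
    worst <- rownames(tests)[which.max(pvals)] # largest "not small" p-value
    message("Dropping: ", worst)
    model <- update(model, as.formula(paste(". ~ . -", worst)))
  }
  model
}

final_mod <- backward_p(homeless_mod)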

17.1.4 Stepwise IC-based selection

Another option is backward stepwise selection (the same algorithm as above), but with AIC or BIC as the criterion at each stage instead of p-values. If the IC value is better (by any amount) without a variable, the variable gets dropped. Variables are dropped one by one until no further IC improvement is possible.

This evaluates many fewer models than dredge() does, so it should be much faster, but it may not find the best of all possible models.

For example, here is our model using AIC (note: this may or may not work for all model types).
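A sketch of the call, using stepAIC() from the MASS package (direction = "backward" matches the algorithm above):

library(MASS)

stepAIC(homeless_mod, direction = "backward")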

## Start:  AIC=606
## homeless ~ sex + substance + i1 + cesd + racegrp + age
## 
##             Df Deviance AIC
## - racegrp    3      589 603
## - substance  2      588 604
## - age        1      587 605
## - cesd       1      587 605
## <none>              586 606
## - sex        1      590 608
## - i1         1      597 615
## 
## Step:  AIC=603
## homeless ~ sex + substance + i1 + cesd + age
## 
##             Df Deviance AIC
## - age        1      590 602
## - substance  2      592 602
## - cesd       1      591 603
## <none>              589 603
## - sex        1      593 605
## - i1         1      600 612
## 
## Step:  AIC=602
## homeless ~ sex + substance + i1 + cesd
## 
##             Df Deviance AIC
## - cesd       1      591 601
## - substance  2      593 601
## <none>              590 602
## - sex        1      593 603
## - i1         1      601 611
## 
## Step:  AIC=601
## homeless ~ sex + substance + i1
## 
##             Df Deviance AIC
## - substance  2      595 601
## <none>              591 601
## - sex        1      594 602
## - i1         1      605 613
## 
## Step:  AIC=601
## homeless ~ sex + i1
## 
##        Df Deviance AIC
## <none>         595 601
## - sex   1      598 602
## - i1    1      621 625
## 
## Call:  glm(formula = homeless ~ sex + i1, family = binomial(link = "logit"), 
##     data = HELPrct)
## 
## Coefficients:
## (Intercept)      sexmale           i1  
##      0.9297      -0.3998      -0.0266  
## 
## Degrees of Freedom: 452 Total (i.e. Null);  450 Residual
## Null Deviance:       625 
## Residual Deviance: 595   AIC: 601

Note that we might still want to remove one more variable than stepAIC() does! Above, you see that if you were to remove sex, the AIC would only go up by about 1 unit. So according to our \(\Delta AIC \sim 3\) threshold, we would take sex out too.

To use BIC instead, we need to specify the input k = log(nrow(data)) (the BIC penalty multiplier). Note that stepAIC() still labels the criterion “AIC” in its trace, even though with this k it is computing BIC:
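A sketch of the call, again assuming homeless_mod and the HELPrct data from above:

stepAIC(homeless_mod, direction = "backward", k = log(nrow(HELPrct)))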

## Start:  AIC=613
## homeless ~ sex + substance + i1 + cesd + racegrp + age
## 
##             Df Deviance AIC
## - racegrp    3      589 608
## - substance  2      588 610
## - age        1      587 611
## - cesd       1      587 611
## <none>              586 613
## - sex        1      590 614
## - i1         1      597 621
## 
## Step:  AIC=608
## homeless ~ sex + substance + i1 + cesd + age
## 
##             Df Deviance AIC
## - substance  2      592 606
## - age        1      590 606
## - cesd       1      591 607
## <none>              589 608
## - sex        1      593 609
## - i1         1      600 616
## 
## Step:  AIC=606
## homeless ~ sex + i1 + cesd + age
## 
##        Df Deviance AIC
## - age   1      593 604
## - cesd  1      594 605
## <none>         592 606
## - sex   1      596 607
## - i1    1      612 623
## 
## Step:  AIC=604
## homeless ~ sex + i1 + cesd
## 
##        Df Deviance AIC
## - cesd  1      595 603
## <none>         593 604
## - sex   1      597 605
## - i1    1      616 624
## 
## Step:  AIC=603
## homeless ~ sex + i1
## 
##        Df Deviance AIC
## <none>         595 603
## - sex   1      598 603
## - i1    1      621 626
## 
## Call:  glm(formula = homeless ~ sex + i1, family = binomial(link = "logit"), 
##     data = HELPrct)
## 
## Coefficients:
## (Intercept)      sexmale           i1  
##      0.9297      -0.3998      -0.0266  
## 
## Degrees of Freedom: 452 Total (i.e. Null);  450 Residual
## Null Deviance:       625 
## Residual Deviance: 595   AIC: 601

To get less verbose output, set trace = 0. But then you won’t see whether it might make sense to remove additional variables…
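For example, the same call, silenced:

stepAIC(homeless_mod, direction = "backward",
        k = log(nrow(HELPrct)), trace = 0)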

## 
## Call:  glm(formula = homeless ~ sex + i1, family = binomial(link = "logit"), 
##     data = HELPrct)
## 
## Coefficients:
## (Intercept)      sexmale           i1  
##      0.9297      -0.3998      -0.0266  
## 
## Degrees of Freedom: 452 Total (i.e. Null);  450 Residual
## Null Deviance:       625 
## Residual Deviance: 595   AIC: 601

17.2 Summary tables

You may want to compute and display summary tables for your projects. Here are a few examples of how to do it.

17.2.1 Mean (or sd, median, IQR, etc.) by groups

Compute the mean and sd (could use any other summary stats you want, though) for several quantitative variables, by groups.

Example: find mean and sd of iris flower Petal.Length and Petal.Width by Species and display results in a pretty table. The dataset is called iris.

Make a little table for each variable being summarized, then stick them together.
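A sketch using dplyr and knitr::kable() (the object names pl, pw, and petal_stats are just illustrative):

library(dplyr)
library(knitr)

# one small summary table per variable, then bind them together
pl <- iris %>%
  group_by(Species) %>%
  summarise(mean = mean(Petal.Length), sd = sd(Petal.Length)) %>%
  mutate(variable = "Petal Length")
pw <- iris %>%
  group_by(Species) %>%
  summarise(mean = mean(Petal.Width), sd = sd(Petal.Width)) %>%
  mutate(variable = "Petal Width")
petal_stats <- bind_rows(pl, pw)
kable(petal_stats)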

Species      mean    sd      variable
setosa       1.462   0.174   Petal Length
versicolor   4.260   0.470   Petal Length
virginica    5.552   0.552   Petal Length
setosa       0.246   0.105   Petal Width
versicolor   1.326   0.198   Petal Width
virginica    2.026   0.275   Petal Width

What if we want to round all table entries to 2 digits after the decimal?
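kable() has a digits argument for exactly this (continuing with petal_stats from above):

kable(petal_stats, digits = 2)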

Species      mean   sd     variable
setosa       1.46   0.17   Petal Length
versicolor   4.26   0.47   Petal Length
virginica    5.55   0.55   Petal Length
setosa       0.25   0.11   Petal Width
versicolor   1.33   0.20   Petal Width
virginica    2.03   0.27   Petal Width

What if we want the column order to be Variable, Species, mean, sd, and sort by Species and then Variable?
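One way, with dplyr’s select() and arrange(), again building on petal_stats:

petal_stats %>%
  select(variable, Species, mean, sd) %>%
  arrange(Species, variable) %>%
  kable(digits = 2)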

variable       Species      mean   sd
Petal Length   setosa       1.46   0.17
Petal Width    setosa       0.25   0.11
Petal Length   versicolor   4.26   0.47
Petal Width    versicolor   1.33   0.20
Petal Length   virginica    5.55   0.55
Petal Width    virginica    2.03   0.27

What if we actually want a column for mean length, sd length, etc. and one row per species?
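tidyr’s pivot_wider() can reshape petal_stats to one row per species (a sketch; the machine-generated column names will differ slightly, e.g. mean_Petal Length):

library(tidyr)

petal_stats %>%
  pivot_wider(names_from = variable, values_from = c(mean, sd)) %>%
  kable(digits = 2)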

Species      mean Petal Length   mean Petal Width   sd Petal Length   sd Petal Width
setosa                    1.46               0.25              0.17             0.11
versicolor                4.26               1.33              0.47             0.20
virginica                 5.55               2.03              0.55             0.27

17.2.2 Proportions in categories by groups

You may also want to make a table of the proportion of observations in each category, by groups, potentially for many variables.

For just one variable, we can use tally:
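A sketch, assuming tally() from the mosaic package with format = "proportion" (the name substance_tab is illustrative):

library(mosaic)

substance_tab <- tally(substance ~ sex, data = HELPrct, format = "proportion")
kable(substance_tab, digits = 2, caption = "Proportion using each substance")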

Table 17.1: Proportion using each substance

            female   male
alcohol       0.34   0.41
cocaine       0.38   0.32
heroin        0.28   0.27

For many variables, we can use a loop. For example, we might want to know the proportion homeless vs. housed and the proportion using each substance, both by sex, in the HELPrct dataset. Above we used knitr::kable() to make tables, but we can use pander::pander() too.
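A sketch of such a loop (the variable list and captions are just for illustration; in R Markdown, the chunk needs results = 'asis' for pander’s tables to render):

library(pander)

# one proportion table per categorical variable, each conditioned on sex
for (var in c("homeless", "substance")) {
  f <- as.formula(paste(var, "~ sex"))
  tab <- tally(f, data = HELPrct, format = "proportion")
  pander(tab, caption = paste("Proportion in each", var, "group, by sex"))
}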

17.3 Figures

We’ve made a lot of figures in this class, and almost all have been kind of mediocre. To aim for awesome, here are a couple of great references for inspiration, ideas, and best practices: