Strategies for model selection

STATS 191

2024-04-01

Outline

  • Case studies:

    A. SAT scores by state

    B. Deeper dive into sex discrimination in base salary

  • Model selection

  • Best subsets

  • Stepwise methods

Case study A: State SAT scores

  • Predicting average 1982 SAT score by state.

  • Features:

    1. Takers (percentage)

    2. Income (parental income of SAT takers)

    3. Years (years of education in key subjects)

    4. Public (percent of takers in public high school)

    5. Expend (state expenditure on high schools per student)

    6. Rank (average percentile of students taking the SAT)

SAT data

Case study B: Sex discrimination in employment

  • Same data as earlier, with additional features:

    1. Age (years)

    2. Educ (years)

    3. Exper (months)

    4. Senior (months)

    5. Sal77 (salary in 1977 – used for a different analysis in the book)

Salary data

Model selection

  • In a given regression situation, there are often many choices to be made. Recall our usual setup (with intercept)

\[\begin{equation} Y_{n \times 1} = X_{n \times (p+1)} \beta_{(p+1) \times 1} + \epsilon_{n \times 1}. \end{equation}\]

  • Any subset \(A \subset \{1, \dots, p\}\) yields a new regression model

\[\begin{equation} {\cal M}(A): Y_{n \times 1} = X[,A] \beta[A] + \epsilon_{n \times 1} \end{equation}\]
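As a concrete illustration (a minimal sketch, assuming the response Y and a matrix X holding the p predictor columns, without the intercept, are already in the workspace):

    A <- c(1, 3)              # hypothetical subset A of predictor indices
    fit.A <- lm(Y ~ X[, A])   # fits M(A); lm() adds the intercept back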

Why model selection?

Possible goals in the SAT data

  • States with low Takers typically have high Rank

  • Beyond this effect, what is important for understanding variability in SAT scores?

  • Maybe want a parsimonious model.

General problem & goals

  • When we have many predictors (with many possible interactions), it can be difficult to formulate a good model from outside considerations.

  • Which main effects do we include?

  • Which interactions do we include?

  • Model selection procedures try to simplify / automate this task.

General comments

  • This is generally an “unsolved” problem in statistics: there are no magic procedures to get you the “best model” or “correct model”.

  • Inference after selection is full of pitfalls!

General approach

  • To “implement” a model selection procedure, we first need a criterion / score / benchmark to compare two models.

  • Given a criterion, we also need a search strategy.

Search strategies

  • Exhaustive: With a limited number of predictors, it is possible to search all possible models.

  • Sequential: Select a model in a sequence of steps.

Exhaustive search (leaps in R)

Candidate criteria

  • \(R^2\): not a good criterion. It always increases with model size, so the “optimum” is to take the biggest model.

  • Adjusted \(R^2\): better. It “penalizes” bigger models, following the principle of parsimony (Occam’s razor).

  • Mallows’ \(C_p\): attempts to estimate a model’s predictive power, i.e. the power to predict a new observation.

Setup for leaps

  • leaps takes a design matrix as argument: throw away the intercept column or leaps will complain:
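For instance (a sketch using the SAT data frame SAT.df, the name used with the wildcard notation later in these notes):

    library(leaps)
    # Design matrix without the intercept column (drop column 1).
    X <- model.matrix(SAT ~ ., data = SAT.df)[, -1]
    out <- leaps(X, SAT.df$SAT, method = "Cp", nbest = 1)
    out$which[which.min(out$Cp), ]   # predictors in the lowest-Cp model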

The problem with \(R^2\)

Figure

  • Plot of \(R^2\) of a model as a function of the model size.

  • The “best” model in terms of \(R^2\) does indeed include all variables.

Adjusted \(R^2\)

  • As we add more and more variables to the model (even pure noise ones), \(R^2\) will increase to 1.

  • Recall that adjusted \(R^2\) tries to take this into account by replacing sums of squares with mean squares:

\[R^2_a = 1 - \frac{SSE/(n-p-1)}{SST/(n-1)} = 1 - \frac{MSE}{MST}.\]
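This can be verified by hand in R (a sketch using the SAT data):

    fit <- lm(SAT ~ ., data = SAT.df)
    n <- nrow(SAT.df)
    p <- length(coef(fit)) - 1                  # predictors, excluding intercept
    SSE <- sum(resid(fit)^2)
    SST <- sum((SAT.df$SAT - mean(SAT.df$SAT))^2)
    1 - (SSE / (n - p - 1)) / (SST / (n - 1))   # matches summary(fit)$adj.r.squared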

Figure

  • Plot of \(R^2_a\) of a model as a function of the model size.

  • The “best” model in terms of \(R^2_a\) is not the full model: it has size 5 (4 predictors plus the intercept).

Mallows’ \(C_p\)

Definition:

\[C_p({\cal M}) = \frac{SSE({\cal M})}{\widehat{\sigma}^2} + 2 \cdot p({\cal M}) - n.\]

Notes:

  • \(\widehat{\sigma}^2=SSE(F)/df_F\) is the “best” estimate of \(\sigma^2\) we have, computed from the fullest model \(F\); in the SAT data this uses all 6 main effects.

  • \(SSE({\cal M})\) is the \(SSE\) of the model \({\cal M}\).

  • \(p({\cal M})\) is the number of predictors in \({\cal M}\).

  • This is an estimate of the expected mean-squared error of \(\widehat{Y}({\cal M})\); it takes both bias and variance into account.
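As a check, here is the computation spelled out in R (a sketch; following the definition above, \(p({\cal M})\) counts predictors without the intercept):

    full <- lm(SAT ~ ., data = SAT.df)            # fullest model: all 6 main effects
    sigma2.hat <- sum(resid(full)^2) / full$df.residual
    sub <- lm(SAT ~ Years + Public + Expend + Rank, data = SAT.df)
    p.sub <- length(coef(sub)) - 1                # number of predictors in the submodel
    sum(resid(sub)^2) / sigma2.hat + 2 * p.sub - nrow(SAT.df)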

Figure

  • Plot of \(C_p\) of a model as a function of the model size.

  • The “best” model in terms of \(C_p\) is not the full model: it has size 5 (4 predictors plus the intercept).

  • This agrees with \(R^2_a\).

Sequential search (step in R)

  • The step function uses a specific score for comparing models.

Akaike Information Criterion (AIC)

\[AIC({\cal M}) = - 2 \log L({\cal M}) + 2 \cdot p({\cal M})\]

  • Above, \(L({\cal M})\) is the maximized likelihood of the model.

Bayesian Information Criterion (BIC)

\[BIC({\cal M}) = - 2 \log L({\cal M}) + \log n \cdot p({\cal M})\]

  • Note: AIC/BIC can be used whenever we have a likelihood, so they generalize to many statistical models.
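In R (a small sketch; AIC() counts \(\widehat{\sigma}^2\) among the parameters, while step() uses extractAIC(), which differs from AIC() by an additive constant):

    fit <- lm(SAT ~ Years + Expend + Rank, data = SAT.df)
    AIC(fit)                                 # -2 log L + 2 * (number of parameters)
    BIC(fit)                                 # -2 log L + log(n) * (number of parameters)
    extractAIC(fit, k = log(nrow(SAT.df)))   # the BIC-style score used by step()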

AIC for regression

  • In linear regression with unknown \(\sigma^2\)

\[-2 \log L({\cal M}) = n \log(2\pi \widehat{\sigma}^2_{MLE}) + n\]

  • In linear regression with known \(\sigma^2\), up to an additive constant that does not depend on the model,

\[-2 \log L({\cal M}) = \frac{1}{\sigma^2} SSE({\cal M})\]

  • AIC is very much like Mallows’ \(C_p\) with known variance.

Properties of AIC / BIC

  • BIC will typically choose a model as small or smaller than AIC (if using the same search direction).

  • As our sample size grows, under some assumptions, it can be shown that

    1. AIC will (asymptotically) always choose a model that contains the true model, i.e. it won’t leave any variables out.
    2. BIC will (asymptotically) choose exactly the right model.

SAT example

Let’s take a look at step in action. Probably the simplest strategy is forward stepwise, which tries to add one variable at a time, as long as it can find a resulting model whose AIC is better than that of its current model.

When it can make no further additions, it terminates.
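A minimal sketch of this forward search on the SAT data (variable names as listed in Case study A):

    null <- lm(SAT ~ 1, data = SAT.df)
    sat.fwd <- step(null,
                    scope = ~ Takers + Income + Years + Public + Expend + Rank,
                    direction = "forward")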

Interactions and hierarchy

  • Wildcard: . denotes any variable in SAT.df except SAT

  • .^2 denotes all 2-way interactions

  • Note: when running with trace=TRUE we see that step does not include an interaction unless both main effects are already in the model (see the sketch below).
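A sketch with the upper scope written out explicitly; with a data frame, SAT ~ .^2 expands to the same set of terms:

    null <- lm(SAT ~ 1, data = SAT.df)
    sat.fwd2 <- step(null,
                     scope = ~ (Takers + Income + Years + Public + Expend + Rank)^2,
                     direction = "forward", trace = TRUE)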

BIC example

The only difference between AIC and BIC is the price paid per variable. This is the argument k to step. By default k=2 and for BIC we set k=log(n). If we set k=0 it will always add variables.
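For example (a sketch):

    n <- nrow(SAT.df)
    null <- lm(SAT ~ 1, data = SAT.df)
    sat.bic <- step(null,
                    scope = ~ Takers + Income + Years + Public + Expend + Rank,
                    direction = "forward", k = log(n))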

Compare to AIC

Backward selection

Just for fun, let’s consider backward stepwise. This starts at the full model and tries to delete variables.

There is also a direction="both" option.
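A sketch of the backward search:

    full <- lm(SAT ~ ., data = SAT.df)
    sat.bwd <- step(full, direction = "backward")
    # step(full, direction = "both") would also consider re-adding dropped terms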

Compare to forward

Summarizing results

The model selected depends on the criterion used.

  Criterion         Model
  ----------------  -------------------------------------------------------------------------------
  \(R^2\)           ~ .
  \(R^2_a\)         ~ Years + Public + Expend + Rank
  \(C_p\)           ~ Years + Public + Expend + Rank
  AIC forward       ~ Years + Public + Expend + Rank
  BIC forward       ~ Years + Expend + Rank
  AIC forward .^2   ~ Years + Public + Expend + Rank + Years:Expend + Years:Public + Rank:Public

The selected model is random (it is a function of the data) and also depends on which criterion and search method we use!

That said, the results are pretty stable for this analysis.

Where we are so far

  • Many other “criteria” have been proposed: cross-validation (CV) is a very popular one (see the sketch after this list).

  • Some work well for some types of data, others for different data.

  • Check diagnostics!

  • These criteria (except cross-validation) are not “direct measures” of predictive power, though Mallows’ \(C_p\) is a step in this direction.

  • \(C_p\) measures the quality of a model based on both bias and variance of the model. Why is this important?

  • Bias-variance tradeoff is ubiquitous in statistics!
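For completeness, a minimal sketch of the cross-validation idea mentioned in this list (the 5-fold split and the candidate model are arbitrary choices):

    set.seed(191)                                 # arbitrary seed
    folds <- sample(rep(1:5, length.out = nrow(SAT.df)))
    cv.err <- sapply(1:5, function(k) {
      fit <- lm(SAT ~ Years + Expend + Rank, data = SAT.df[folds != k, ])
      mean((SAT.df$SAT[folds == k] - predict(fit, SAT.df[folds == k, ]))^2)
    })
    mean(cv.err)                                  # estimated prediction error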

Case study B: sex discrimination

Possible goals in the salary data

  • Interested in the effect of Sex, but there could be lots of other covariates we might adjust for…

  • Considering all second-order effects (besides Sex) in the salary data gives \(2^{14}\) possible models!

  • Remove confounders, reduce variability of Sex effect.

Approach

  1. Fit a model without Sex using a model selection technique.

  2. Estimate the effect of Sex by adding it to this model.

How are we going to choose among \(2^{14}\) possible models?

Best second order model using \(C_p\) with leaps
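A hedged sketch of this exhaustive search, assuming the salary data is in a data frame salary.df with response Bsal (both names hypothetical). The 14 candidate terms are presumably the 4 main effects, their 6 pairwise interactions, and the 4 squares:

    library(leaps)
    # Hypothetical names: salary.df, Bsal.
    X <- model.matrix(Bsal ~ (Age + Educ + Exper + Senior)^2
                           + I(Age^2) + I(Educ^2) + I(Exper^2) + I(Senior^2),
                      data = salary.df)[, -1]        # 14 second-order columns
    out <- leaps(X, salary.df$Bsal, method = "Cp", nbest = 2)
    plot(out$size, out$Cp, xlab = "model size", ylab = "Cp")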

Plot of \(C_p\) scores (Fig 12.11 in book)

Figure:

  • Plot of \(C_p\) score as a function of model size.

  • Two models have lower \(C_p\) than the rest but are quite close to each other. These two models are used in the subsequent analysis.

Estimating the effect of Sex

  • On inspection, the book drops one of these interactions:
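A hedged sketch of the two-step approach, continuing from the leaps output above (it does not reproduce the book’s manual drop of the interaction):

    keep <- out$which[which.min(out$Cp), ]           # columns of the Cp-best model
    fit0 <- lm(salary.df$Bsal ~ X[, keep])           # selected model, without Sex
    fit.sex <- lm(salary.df$Bsal ~ X[, keep] + salary.df$Sex)
    summary(fit.sex)                                 # inspect the Sex coefficient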

What if we used step?

Larger example

  • As \(p\) grows, leaps will be too slow

  • Even step can get slow

HIV resistance

  • Resistance of \(n=633\) different HIV+ viruses to the drug 3TC.

  • The \(p=91\) features are mutations in a part of the HIV virus; the response is log fold change in vitro.