# 17.5. Risk and Loss Minimization

Regularization, the train-test split, and cross-validation all aim to reduce over-fitting. The problem with over-fitting arises from using the same data both to fit a model and to estimate the model’s error in predicting new observations. In this section, we formalize this idea.

Throughout this book, our approach to fitting and evaluating models has taken the perspective of minimizing loss. We first introduced the notion of a loss function in Chapter 4, where we used absolute loss to fit a constant to data. There, the loss function was simply \( {\cal l}(\theta, y) = | y - \theta | \). Then, in Chapter 15, we introduced the multiple linear model and used squared loss to fit it, with the loss function \( {\cal l}(\boldsymbol{\theta}, \mathbf{x}, y) = (y - \mathbf{x}\boldsymbol{\theta})^2\). Here \( \mathbf{x} \) consists of \( p \) features and \( \boldsymbol{\theta} \) contains \( p \) coefficients (also called parameters), one for each feature. In Chapter 19, we use cross-entropy loss for the situation where \( y \) represents a categorical feature that takes on only the values 0 and 1. Our focus in this chapter is on squared loss.
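To make these loss functions concrete, here is a short sketch of all three for a single data point; the function and variable names are illustrative, not from the book’s code:

```
import numpy as np

# Illustrative implementations of the three loss functions, each
# evaluated for a single data point.
def absolute_loss(theta, y):
    return np.abs(y - theta)

def squared_loss(theta, x, y):
    # x is a feature vector; theta is the matching coefficient vector
    return (y - x @ theta) ** 2

def cross_entropy_loss(p_hat, y):
    # y is 0 or 1; p_hat is the predicted probability that y == 1
    return -(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))

print(absolute_loss(2.0, 5.0))  # 3.0
print(squared_loss(np.array([1.0, 0.5]), np.array([2.0, 2.0]), 4.0))  # (4 - 3)^2 = 1.0
```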

Generally speaking, a loss function \({\cal l}(f, \mathbf{x}, y)\) gives the model’s loss for a single data point, where we use \( f(\mathbf{x}) \) to predict \( y \). So for squared loss, we have

$$
{\cal l}(f, \mathbf{x}, y) = (y - f(\mathbf{x}))^2.
$$

In model fitting, we have minimized the average loss, \( L(f, \textbf{X}, \mathbf{y}) \):

$$
L(f, \textbf{X}, \mathbf{y}) = \frac{1}{n} \sum_{i=1}^{n} {\cal l}(f, \mathbf{x}_i, y_i).
$$

Here \( \textbf{X} \) is an \( n \times p \) design matrix, and \( \mathbf{x}_i \) is the \( i \)th row of the design matrix, which corresponds to the \( i \)th observation in the data set. For multiple linear models and squared error, the average loss is

$$
L(\boldsymbol{\theta}, \textbf{X}, \mathbf{y}) = \frac{1}{n} \sum_{i=1}^{n} (y_i - \mathbf{x}_i \boldsymbol{\theta})^2.
$$

The average loss is an approximation for the expected loss for a new observation. We describe expected loss next.

## 17.5.1. Loss for a New Observation

The train-test split and cross-validation are techniques to estimate the loss for a new observation, an observation that we have not come across as we fit our model. In essence, these techniques aim to mimic the expected loss for a new observation. We call this expected loss the **risk** for a model \( f \). That is, if we have a new observation \( (\mathbf{x}_0 , y_0) \), the risk for squared loss is expressed as

$$
\mathbb{E}\left[(y_0 - f(\mathbf{x}_0))^2\right],
$$

where the expectation is over the possible values of \( (\mathbf{x}_0, y_0) \).

Since we don’t know the underlying distribution of \( (\mathbf{x}_0 , y_0) \), we can’t calculate the risk, but we can approximate it by the average loss over the data we have collected:

$$
\frac{1}{n} \sum_{i=1}^{n} (y_i - f(\mathbf{x}_i))^2.
$$

This approximation goes by the name of *empirical risk*. Hopefully you recognize it as the mean squared error (MSE).
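As a minimal illustration, consider the constant model, \( f(\mathbf{x}) = \theta \), under squared loss; the empirical risk is then minimized by the sample mean of \( y \). The data here are made up for the sketch:

```
import numpy as np

# Empirical risk for the constant model f(x) = theta under squared loss.
# The minimizing constant is the sample mean of y.
y = np.array([1.0, 2.0, 4.0, 5.0])

def empirical_risk(theta, y):
    # average squared loss over the observed data (i.e., the MSE)
    return np.mean((y - theta) ** 2)

theta_hat = y.mean()
print(empirical_risk(theta_hat, y))        # 2.5
print(empirical_risk(theta_hat + 1.0, y))  # 3.5 -- any other constant does worse
```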

When model fitting, we find the model \( \hat{f} \) that minimizes the empirical risk over a collection of possible models \( \cal{F} = \{ f \} \):

$$
\hat{f} = \operatorname*{argmin}_{f \in \cal{F}} \; \frac{1}{n} \sum_{i=1}^{n} (y_i - f(\mathbf{x}_i))^2.
$$

This technique is aptly called *empirical risk minimization*. The collection of models could be linear models,
\( f( \mathbf{x}) = \mathbf{x} \boldsymbol{{\theta}} \), or degree 2 polynomial models,
\( f({x}) = \theta_0 + \theta_1x + \theta_2 x^2 \), or other types of models.
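To see empirical risk minimization over different model collections in action, we can sketch a comparison of linear and degree 2 polynomial fits on simulated data; `np.polyfit` performs least squares, which is exactly empirical risk minimization under squared loss:

```
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = np.sin(x) + 0.3 * x + rng.normal(scale=0.2, size=100)

# Minimize the empirical risk over two model collections: degree-1
# (linear) and degree-2 polynomials.
risks = {}
for degree in (1, 2):
    coeffs = np.polyfit(x, y, deg=degree)
    risks[degree] = np.mean((y - np.polyval(coeffs, x)) ** 2)

# The richer collection can never do worse on the data it was fit to.
print(risks)
```

Note that the smaller empirical risk of the degree 2 fit says nothing yet about how well it predicts new observations; that is precisely the issue this section addresses.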

In this chapter, we have seen problems arise when we use the empirical risk to both fit a model and evaluate it. When we estimate the risk for a new observation empirically, we have

$$
\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{f}(\mathbf{x}_i))^2,
$$

where the data have been used both to find \( \hat{f} \) and to estimate the risk. The train-test split helps us get around this problem:

$$
\frac{1}{n_S} \sum_{j = 1}^{n_S} (y_j - \hat{f}_T (\mathbf{x}_j))^2,
$$

where the notation \( \hat{f}_T \) means that we have used the train set to fit the model, and the risk for a new observation is approximated using the \( y_j \) and \( \mathbf{x}_j \) from the test set, which contains \( n_S \) observations. Regularization takes a different approach and penalizes the empirical risk minimization to keep the model from fitting the data too closely. Ideally, we want to estimate the risk

$$
\mathbb{E}\left[(y_0 - \hat{f}(\mathbf{x}_0))^2\right],
$$

where the expected value is over both the new observation \( (\mathbf{x}_0, y_0) \) and \( \hat{f} \) (which involves the original data \( \mathbf{x}_i \) and \( y_i \), \( i = 1, \ldots, n \)).
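A sketch of the train-test estimate of the risk, using scikit-learn on simulated data (the data-generating function here is illustrative):

```
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Simulated data; the data-generating function is only for illustration.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + 0.3 * X[:, 0] + rng.normal(scale=0.2, size=200)

# Fit on the train set only; approximate the risk with the test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)
model = LinearRegression().fit(X_train, y_train)

test_mse = np.mean((y_test - model.predict(X_test)) ** 2)
print(test_mse)
```

Because the test observations played no role in fitting, the test MSE mimics the loss for new observations rather than the loss on the data used to choose \( \hat{f}_T \).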

Before we examine this expectation in greater detail, we provide an example that shows how this risk can be broken down into three pieces: model bias, model variance, and irreducible error.

## 17.5.2. Model Bias and Variance

A general model for the data might be expressed as

$$
y = g(\mathbf{x}) + \epsilon.
$$

The errors \( \epsilon \) are assumed to have no trends or patterns, constant variance, and to be independent of one another. The *signal* in the model is the function \( g(\cdot) \). The data are the \( (\mathbf{x}_i, y_i) \) pairs, and we fit models by minimizing the empirical risk,

$$
\min_{f \in \cal{F}} \frac{1}{n} \sum_{i=1}^{n} (y_i - f(\mathbf{x}_i))^2.
$$

Note that \(g\) might not be in the collection of functions that we are using to fit a model.

As an example, let’s take \( g(x) = \sin(x) + 0.3x \) and add errors drawn from a normal curve to get \( y \). The following functions set up this data generation process.

```
import numpy as np

def g(x):
    return np.sin(x) + 0.3 * x

def noise(n):
    return np.random.normal(scale=0.2, size=n)

def draw(n):
    points = np.random.choice(np.arange(0, 10, 0.05), size=n)
    return points, g(points) + noise(n)
```

We generate 50 \( (x, y) \) pairs from this model and plot them.

```
np.random.seed(42)
xs, ys = draw(50)
error = ys - g(xs)
```

The plot on the left shows \( g \) as a dashed curve, with the \( (x, y) \) pairs forming a scatter of dots about this curve. The right-hand plot shows the errors, \( y - g(x) \), for the 50 points. Notice that they do not form a pattern.

Now let’s fit a second degree polynomial to these data. In other words, we take \(\cal{F}\) to be the collection of all functions that can be expressed as \( \theta_0 + \theta_1 x + \theta_2 x^2\). Since \( g(x) = \sin(x) + 0.3x \), it doesn’t belong to the collection of functions that we are optimizing over.

```
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(degree=2, include_bias=False)
poly_features = poly.fit_transform(xs.reshape(-1, 1))
model_deg2 = LinearRegression().fit(poly_features, ys)
model_deg2.coef_

```
array([-0.19, 0.05])
```

Again, we know the true model is not quadratic (because we built it). Let’s plot the data and the fitted curve.

The quadratic doesn’t fit the data well, and it doesn’t represent the underlying curve well either. There is *model bias* present in this fitting procedure because the set of models that we are fitting to the data (second order polynomials) does not include the truth, \( g \).

If we repeat this process, generating another 50 points from the true model and fitting a second-degree polynomial to them, then the fitted coefficients of the quadratic will change because they depend on the new set of data. We can repeat this process many times and average the fitted curves. This average curve will resemble the expected curve, \( \mathbb{E}[ \hat{f} ] \). To demonstrate, let’s generate 25 sets of data and fit a quadratic to each set of points.

```
def fit(n):
    xs_new = np.random.choice(np.arange(0, 10, 0.05), size=n)
    ys_new = g(xs_new) + noise(n)
    X_new = xs_new.reshape(-1, 1)
    mod_new = LinearRegression().fit(poly.fit_transform(X_new), ys_new)
    # Evaluate the fitted curve on a fine grid of x values so that the
    # 25 curves can be plotted and averaged
    x_full = np.arange(0, 10, 0.05)
    poly_features_x_full = poly.transform(x_full.reshape(-1, 1))
    return mod_new.predict(poly_features_x_full).flatten()
```

```
fits = [fit(50) for j in range(25)]
```

The plot below shows all 25 fitted models along with the true function, \(g\), and the average of the fitted curves \( \bar{f} \).

We can see that the 25 fitted quadratics vary with the data; this variability is called the *model variance*. The average of the 25 quadratics is represented by the solid black line, and the difference between this average quadratic and the true curve is called the *model bias*.
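We can put rough numbers on these two quantities at a single point. The following self-contained sketch refits the quadratic to many fresh data sets from the same data generation process and examines the fitted values at one point, \( x_0 \); the point and the number of repetitions are arbitrary choices:

```
import numpy as np

rng = np.random.default_rng(42)

def g(x):
    return np.sin(x) + 0.3 * x

# Refit a quadratic to many fresh data sets and evaluate each fitted
# curve at x0. The spread of these values estimates the model variance
# at x0; the gap between their average and g(x0) estimates the model
# bias at x0.
x0 = 5.0
fits_at_x0 = []
for _ in range(500):
    x = rng.choice(np.arange(0, 10, 0.05), size=50)
    y = g(x) + rng.normal(scale=0.2, size=50)
    fits_at_x0.append(np.polyval(np.polyfit(x, y, deg=2), x0))

fits_at_x0 = np.array(fits_at_x0)
model_bias = fits_at_x0.mean() - g(x0)
model_variance = fits_at_x0.var()
print(model_bias, model_variance)
```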

Now let’s return to the general problem. We can decompose the risk under squared loss into parts representing the model bias, model variance, and the *irreducible error* from \( \epsilon \):

$$
\begin{aligned}
\mathbb{E}\left[(y_0 - \hat{f}(\mathbf{x}_0))^2\right]
&= \mathbb{E}\left[(g(\mathbf{x}_0) + \epsilon - \hat{f}(\mathbf{x}_0))^2\right] \\
&= \left(g(\mathbf{x}_0) - \mathbb{E}[\hat{f}(\mathbf{x}_0)]\right)^2
+ \mathbb{E}\left[\left(\hat{f}(\mathbf{x}_0) - \mathbb{E}[\hat{f}(\mathbf{x}_0)]\right)^2\right]
+ \sigma^2,
\end{aligned}
$$

where \( \sigma^2 \) is the variance of the error \( \epsilon \). (To derive the last equality from the previous line, we need to show that the cross-product terms in the expansion of the square are all 0. This takes quite a bit of algebra, and we don’t present it here.) The first of the three terms in the final equation is the model bias (squared). When the signal \( g \) does not belong to the model space, we have model bias. If the model space can approximate \( g \) well, then the bias is small; for instance, a degree-10 polynomial can get pretty close to the \( g \) used in our example. The second term represents the variability in the fitted model that comes from the data. We have seen earlier in this chapter that higher-degree polynomials over-fit the data, so they vary a lot from one set of data to the next: the more complex the model space, the greater the variability in the fitted model. Finally, the last term is the variance of the errors, the “irreducible error.” This error sticks around whether we have under-fit with a simple model (high bias) or over-fit with a complex model (high variance).
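We can check this decomposition numerically at a single point by simulating many (training set, new observation) pairs; the average squared loss should nearly agree with bias squared plus model variance plus \( \sigma^2 \). The simulation settings below are illustrative:

```
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.2

def g(x):
    return np.sin(x) + 0.3 * x

# For a fixed x0, repeatedly: draw a training set, fit a quadratic,
# then draw one new observation at x0 and record its squared loss.
x0 = 5.0
preds, sq_errors = [], []
for _ in range(2000):
    x = rng.uniform(0, 10, size=50)
    y = g(x) + rng.normal(scale=sigma, size=50)
    pred = np.polyval(np.polyfit(x, y, deg=2), x0)
    preds.append(pred)
    y0 = g(x0) + rng.normal(scale=sigma)  # a new observation at x0
    sq_errors.append((y0 - pred) ** 2)

preds = np.array(preds)
risk = np.mean(sq_errors)  # Monte Carlo estimate of the risk at x0
decomposition = (g(x0) - preds.mean()) ** 2 + preds.var() + sigma**2
print(risk, decomposition)  # the two should nearly agree
```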

This representation of the expected loss is called the *bias-variance decomposition*. Model selection aims to balance these competing sources of error, a balancing problem called the *bias-variance trade-off*.