Table of contents

  • 134.1 Introduction
  • 134.2 Least Squares Criterion
  • 134.3 Ordinary Least Squares for Simple Linear Regression
  • 134.4 Assumptions of Ordinary Least Squares
  • 134.5 Statistical Inference with Ordinary Least Squares
    • 134.5.1 Mathematical Expectation and Variance of Simple Linear Regression Parameters
    • 134.5.2 Confidence Intervals of Simple Linear Regression Parameters
    • 134.5.3 Forecasting errors of Simple Linear Regression
  • 134.6 Worked Example and R Code
  • 134.7 Interactive Software
  • 134.8 Tasks

134  Simple Linear Regression Model (SLRM)

134.1 Introduction

The Simple Linear Regression Model (SLRM) is used in this section as a bridge between the Scatter Plot, Pearson Correlation, and Simple Linear Regression as a descriptive/explorative tool on the one hand and the Multiple Linear Regression Model (MLRM) on the other.

Terminology note: this chapter uses the econometrics terms “exogenous” and “endogenous”, which correspond to “predictor” and “response” in mainstream statistics texts.

Unlike the MLRM, the SLRM can be explained with standard algebra. This is the main reason why the SLRM (which is not often used in practice) is treated here.

134.2 Least Squares Criterion

There are many techniques in statistics that use the least squares criterion. In regression models, however, this criterion is of particular importance.

Why should a criterion be used at all? The answer to this question is quite obvious: one has to have an objective measure for discrepancies between the estimated values (generated by the statistical model) and the (true) observed values. In fact we wish to create mathematical models of our surrounding world in order to be able to describe it, to draw conclusions from it, to forecast future behaviour of some phenomena, and to explain why certain things happened in the past.

For obvious reasons these mathematical models are not deterministic but rather probabilistic or stochastic. This is why we need a good criterion to decide whether our model describes the real world well enough to be of practical importance.

Since we cannot hope for a model to describe a real phenomenon perfectly, the only thing we can do is to design a method for getting as close to the real behaviour as possible. This can be achieved by minimising the error of the mathematical model.

At first sight, the most intuitive way to express the error made by a probabilistic model is to calculate the sum of the deviations between the predicted and the observed values

\[ \sum_{i=1}^{n} e_i = \sum_{i=1}^{n} (Y_i - F_i) \]

where \(e_i\) is the prediction error, \(F_i\) is the prediction or forecast, \(Y_i\) is the observed value, and \(i\) represents the observation index (\(i= 1, 2, …, n\)).

This criterion, however, is problematic because the sum of errors will be very close to zero when positive and negative errors compensate each other. Therefore, a much better criterion would be based on the absolute values of errors

\[ \sum_{i=1}^{n} |e_i| = \sum_{i=1}^{n} |Y_i - F_i| \]

This criterion can be used in practice but the problem is that the mathematical expressions are rather cumbersome. Therefore most statisticians prefer to use the criterion of the sum of squared errors which has nice mathematical properties

\[ \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (Y_i - F_i)^2 \]

Using the square of the deviations results in only positive values (as with the previous criterion), but in addition it gives more weight to large discrepancies than to small ones (which is not always desirable). Although this (third) criterion is frequently used, it does not always yield better results than the second criterion. For example, when a long structural shift (in time) exists, the second criterion describes that shift better, whereas the third criterion tends to perform better in terms of overall predictive power. Moreover, the second criterion is much more robust against outliers. A small numerical comparison of the three criteria is sketched below.
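
As a minimal sketch in R (with made-up numbers chosen purely for illustration), the three criteria can be computed side by side; note how the single outlying observation dominates the squared-error criterion but not the absolute-error criterion.

# Illustrative numbers only (not taken from any real dataset)
Y_obs  <- c(10, 12, 9, 14, 30)    # observed values; the last one is an outlier
Y_pred <- c(11, 11, 11, 13, 15)   # predictions from some hypothetical model
e <- Y_obs - Y_pred

sum(e)        # criterion 1: positive and negative errors cancel each other
sum(abs(e))   # criterion 2: robust against the outlier
sum(e^2)      # criterion 3: the outlier dominates the sum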

134.3 Ordinary Least Squares for Simple Linear Regression

Consider the following SLRM

\[ \forall i = 1, 2, ...,n : Y_i = \alpha + \beta x_i + e_i \]

where \(x_i = X_i - \bar{X}\). We use \(x_i\) instead of the original observations \(X_i\) because this reduces the mathematical complexity without loss of generality.

Using the first-order conditions, we can find the parameter values that minimize the sum of squared prediction errors. We take the first partial derivative of the Sum of Squared Residuals (SSR) with respect to \(\alpha\) and equate it to zero to find the optimum

\[ \begin{align*}\frac{\partial SSR}{\partial \alpha} &= \frac{\partial \left( \sum_{i=1}^{n} \left( Y_i - \alpha - \beta x_i \right)^2 \right)}{\partial \alpha} = \sum_{i=1}^{n} 2 \left( Y_i - \alpha - \beta x_i \right)(-1) = 0 \\\Rightarrow \quad & \sum_{i=1}^{n} Y_i - n \alpha - \beta \sum_{i=1}^{n} x_i = 0 \\\Rightarrow \quad & \alpha = \frac{\sum_{i=1}^{n}Y_i}{n} = \bar{Y} \end{align*} \]

Note that the formula is relatively simple because we used \(x_i\) instead of \(X_i\) which allows us to drop the sum of exogenous values

\[ \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} \left( X_i - \bar{X} \right) = \sum_{i=1}^{n} \left( X_i - \frac{\sum_{i=1}^{n}X_i}{n} \right) = 0 \]

Similarly, it is also possible to calculate the partial derivative of the SSR with respect to \(\beta\) and equate it to zero

\[ \begin{align*}\frac{\partial SSR}{\partial \beta} &= \frac{\partial \left( \sum_{i=1}^{n} \left( Y_i - \alpha - \beta x_i \right)^2 \right)}{\partial \beta} = \sum_{i=1}^{n} 2 \left( Y_i - \alpha - \beta x_i \right) \left( -x_i \right) = 0 \\\Rightarrow \quad & \sum_{i=1}^{n} \left( Y_i x_i \right) - \alpha \sum_{i=1}^{n} x_i - \beta \sum_{i=1}^{n} x_i^2 = 0 \\\Rightarrow \quad & \beta = \frac{\sum_{i=1}^{n}\left( Y_i x_i \right)}{\sum_{i=1}^{n}x_i^2} \end{align*} \]

In practice the above results are used to estimate the parameters \(\alpha\) and \(\beta\) by the so-called Ordinary Least Squares (OLS) method. In order to indicate that both formulas are used to “estimate” the true population parameters \(\alpha\) and \(\beta\), we write the “hat” symbol above the parameter:

\[ \begin{cases}\hat{\alpha} = \bar{Y}\\\hat{\beta} = \frac{\sum_{i=1}^{n}\left( Y_i x_i \right)}{\sum_{i=1}^{n}x_i^2}\end{cases} \]

Once the parameters have been estimated it is possible to generate predictions, based on the SLRM

\[ \hat{Y_i} = \hat{\alpha}_0 + \hat{\beta} X_i \]

where the estimate of the intercept is \(\hat{\alpha}_0 = \hat{\alpha} - \hat{\beta} \bar{X}\) (because we replaced \(X_i\) by \(x_i\) in the computation of the first derivatives).
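
As a minimal sketch, the closed-form estimates can be computed by hand in R on the cars dataset (also used in Section 134.6) and compared with the coefficients reported by lm(); the variable names are illustrative.

# Minimal sketch: closed-form OLS estimates versus lm() on the cars data
data(cars)
Y <- cars$dist
X <- cars$speed
x <- X - mean(X)                                # centered exogenous variable

alpha_hat  <- mean(Y)                           # estimate in the centered parametrization
beta_hat   <- sum(Y * x) / sum(x^2)             # slope estimate
alpha0_hat <- alpha_hat - beta_hat * mean(X)    # intercept on the original scale

c(alpha0_hat, beta_hat)
coef(lm(dist ~ speed, data = cars))             # should agree with the manual estimates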

134.4 Assumptions of Ordinary Least Squares

The assumptions of the SLRM have already been described in Chapter 74. For convenience we summarize the key assumptions again:

  • The conditional mean is linear in parameters: \(\text{E}(Y_i \mid X_i) = \alpha + \beta X_i\).
  • Exogeneity: \(\text{E}(e_i \mid X_i)=0\).
  • The prediction errors (residuals) have a fixed Variance \(\sigma^2\) (i.e. the residuals are “homoskedastic”, not “heteroskedastic”). This implies that
    • For any randomly chosen, sizable subset of residuals the Variance (of residuals) should be the same.
    • In time series, the Variance is fixed in time (i.e. it must not increase or decrease over time).
    • The “uncertainty” of predictions made by the SLRM is always the same and independent of the observation index \(i\).
  • The residuals are mutually not correlated (their covariances are zero). This means that
    • It is not possible to improve the predictions for observational index \(i\) based on errors made for observational index \(i \pm k\) (for \(k \neq 0\)).
    • In time series it is not possible to improve forecasts by using the information from past residuals.
  • The explanatory variable has variation: \(\sum_{i=1}^{n}(X_i-\bar{X})^2 > 0\).

For causal interpretation (not only prediction), a stronger assumption is additionally needed: no omitted confounders that are correlated with \(X_i\).

Note: normality is not implied by the above assumptions. Exact finite-sample t- and F-inference requires an additional normal-error assumption. For large samples, approximate inference is usually justified by asymptotic theory for the estimators.
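
A minimal simulation sketch (with arbitrarily chosen parameter values) illustrates data that satisfy these assumptions: a linear conditional mean, homoskedastic and mutually uncorrelated errors, and an exogenous variable with variation.

# Minimal sketch: simulate data that satisfy the OLS assumptions (arbitrary values)
set.seed(1)
n <- 100
X <- runif(n, 0, 10)               # exogenous variable with variation
e <- rnorm(n, mean = 0, sd = 2)    # homoskedastic, mutually uncorrelated errors
Y <- 5 + 1.5 * X + e               # true intercept 5, true slope 1.5

fit <- lm(Y ~ X)
coef(fit)                          # estimates should be close to 5 and 1.5
plot(fitted(fit), resid(fit))      # residuals vs fitted values: no visible pattern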

134.5 Statistical Inference with Ordinary Least Squares

134.5.1 Mathematical Expectation and Variance of Simple Linear Regression Parameters

In order to be able to obtain reliable information about the population parameters (of the real mathematical model), based only on the sample observations, it is necessary to compute the expectation and the variance of both estimated parameters.

The expectation of the estimated constant term can be derived as follows

\[ \begin{align*}\text{E}(\hat{\alpha}) = \text{E}(\bar{Y}) &= \text{E} \left( \frac{\sum_{i=1}^{n}Y_i}{n} \right) \\&= \frac{1}{n} \text{E} \left( \sum_{i=1}^{n} \left( \alpha + \beta x_i \right) \right) \\&= \frac{1}{n} \text{E} \left( n \alpha + \beta \sum_{i=1}^{n} x_i \right) = \frac{n \alpha}{n} = \alpha\end{align*} \]

This result implies that the OLS estimation of the parameter \(\alpha\) is unbiased.

The Variance of the constant term is easily derived

\[ \text{V}(\hat{\alpha}) = \text{V} \left( \frac{\sum_{i=1}^{n} Y_i}{n} \right) = \frac{1}{n^2} \sum_{i=1}^{n} \text{V}(Y_i) = \frac{n \sigma^2}{n^2} = \frac{\sigma^2}{n} \]

Now we consider the derivation of the expectation of the slope parameter

\[ \begin{align*}\text{E}(\hat{\beta}) &= \text{E} \left( \frac{\sum_{i=1}^{n}(Y_i x_i) }{\sum_{i=1}^{n} x_i^2} \right) \\&= \text{E} \left( \frac{x_1}{\sum_{i=1}^{n}x_i^2} Y_1 + \frac{x_2}{\sum_{i=1}^{n}x_i^2} Y_2 + … + \frac{x_n}{\sum_{i=1}^{n}x_i^2} Y_n \right) \\&= \sum_{i=1}^{n} \text{E} \left( \frac{x_i}{\sum_{i=1}^{n}x_i^2} Y_i \right) \\&= \sum_{i=1}^{n} \left( \frac{x_i}{\sum_{i=1}^{n}x_i^2} \text{E}(Y_i) \right) \\&= \sum_{i=1}^{n} \left( \frac{x_i}{\sum_{i=1}^{n}x_i^2} \text{E}(\alpha + \beta x_i) \right) \\&= \frac{\alpha \sum_{i=1}^{n}x_i}{\sum_{i=1}^{n}x_i^2} + \frac{\beta \sum_{i=1}^{n}x_i^2}{\sum_{i=1}^{n}x_i^2} \\&= \beta\end{align*} \]

This result implies that the OLS estimation of the parameter \(\beta\) is unbiased.

The variance of the slope parameter is derived as follows

\[ \begin{align*}\text{V}(\hat{\beta}) &= \text{V} \left( \frac{x_1}{\sum_{i=1}^{n}x_i^2} Y_1 + \frac{x_2}{\sum_{i=1}^{n}x_i^2} Y_2 + … + \frac{x_n}{\sum_{i=1}^{n}x_i^2} Y_n \right) \\&= \sum_{i=1}^{n} \left( \frac{x_i^2}{\left( \sum_{i=1}^{n} x_i^2 \right)^2} \text{V} (Y_i) \right) \\&= \sum_{i=1}^{n} \left( \frac{x_i^2}{\left( \sum_{i=1}^{n} x_i^2 \right)^2} \sigma^2 \right) \\&= \frac{\sigma^2}{\left( \sum_{i=1}^{n} x_i^2 \right)^2} \sum_{i=1}^{n} x_i^2 \\&= \frac{\sigma^2}{\sum_{i=1}^{n} x_i^2}\end{align*} \]

From these results it can be concluded that the variance of the estimated parameters can be reduced by

  • increasing the sample size
  • using a process with a small error variance \(\sigma^2\)
  • using an exogenous variable with a large Range or Variance
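
A minimal Monte Carlo sketch (with arbitrarily chosen parameter values) can be used to check the unbiasedness of \(\hat{\beta}\) and the variance formula \(\sigma^2 / \sum_{i=1}^{n} x_i^2\).

# Minimal sketch: simulate many samples and compare with the theoretical results
set.seed(42)
n <- 50; alpha <- 2; beta <- 0.8; sigma <- 3     # arbitrary true values
X <- seq(1, 10, length.out = n)
x <- X - mean(X)

beta_hats <- replicate(10000, {
  Y <- alpha + beta * x + rnorm(n, sd = sigma)
  sum(Y * x) / sum(x^2)
})

mean(beta_hats)          # close to beta = 0.8 (unbiasedness)
var(beta_hats)           # close to the theoretical variance below
sigma^2 / sum(x^2)       # theoretical V(beta_hat)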

134.5.2 Confidence Intervals of Simple Linear Regression Parameters

In order to find the t-statistic we first derive the z-transformation of the estimated value of \(\beta\) (i.e. \(\hat{\beta}\))

\[ z = \frac{\hat{\beta} - \text{E}(\hat{\beta})}{\sigma_{\hat{\beta}}} = \frac{\hat{\beta} - \beta}{\sqrt{\frac{\sigma^2}{\sum_{i=1}^{n}x_i^2}}} \]

where the unobservable \(\sigma\) is replaced by an estimator based on the residual variance. Under the classical normal-error assumption, this replacement yields a t-statistic (with \(n-2\) degrees of freedom). Hence we can write

\[ t = \frac{\hat{\beta} - \beta}{s_{\hat{\beta}}} \]

The 95% confidence interval for \(\beta\) is given by the following expression

\[ \begin{align*}0.95 &= \text{P}\left( -t_{0.025} < \frac{\hat{\beta} - \beta}{s_{\hat{\beta}}} < t_{0.025} \right) \\&= \text{P} \left( -t_{0.025} s_{\hat{\beta}} < \hat{\beta} - \beta < t_{0.025} s_{\hat{\beta}} \right) \\&= \text{P} \left( -\hat{\beta} -t_{0.025} s_{\hat{\beta}} < -\beta < -\hat{\beta} + t_{0.025} s_{\hat{\beta}} \right) \\&= \text{P} \left( \hat{\beta} -t_{0.025} s_{\hat{\beta}} < \beta < \hat{\beta} + t_{0.025} s_{\hat{\beta}} \right) \end{align*} \]

where \(t_{0.025}\) is obtained from Appendix F or through numerical integration.

Note: the confidence interval for the intercept can be obtained in just the same way.
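
As a minimal sketch, the 95% confidence interval for the slope can be computed directly from the t quantile and the estimated standard error (using the cars data of Section 134.6) and compared with the built-in confint() function.

# Minimal sketch: manual confidence interval for the slope versus confint()
data(cars)
fit   <- lm(dist ~ speed, data = cars)
b_hat <- coef(summary(fit))["speed", "Estimate"]
se_b  <- coef(summary(fit))["speed", "Std. Error"]
t_q   <- qt(0.975, df = df.residual(fit))        # t quantile with n - 2 degrees of freedom

c(b_hat - t_q * se_b, b_hat + t_q * se_b)
confint(fit, "speed", level = 0.95)              # should agree with the manual interval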

134.5.3 Forecasting errors of Simple Linear Regression

The estimator of the conditional mean of the SLRM at \(x_o\), \(\hat{\mu}_o = \hat{\alpha} + \hat{\beta} x_o\), is unbiased

\[ \text{E}\left( \hat{\mu}_o \right) = \alpha + \beta x_o = \mu_o \]

and the Variance of the mean estimator is

\[ \text{V} \left( \hat{\mu}_o \right) = \frac{\sigma^2}{n} + x_o^2 \frac{\sigma^2}{\sum_{i=1}^{n} x_i^2} = \sigma^2 \left( \frac{1}{n} + \frac{x_o^2}{\sum_{i=1}^{n} x_i^2} \right) \]

This implies that the forecast performance depends on:

  • the error Variance \(\sigma^2\)
  • the sample size \(n\)
  • the Range/Variance of the exogenous variable
  • \(x_o\), which represents the distance between the exogenous value at which we forecast and the sample mean of the exogenous variable

The forecast uncertainty for an individual observation is larger

\[ \text{V} \left( \hat{Y}_o \right) = \frac{\sigma^2}{n} + x_o^2 \frac{\sigma^2}{\sum_{i=1}^{n} x_i^2} + \sigma^2 = \sigma^2 \left( \frac{1}{n} + \frac{x_o^2}{\sum_{i=1}^{n} x_i^2} + 1 \right) \]
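
As a minimal sketch, the two kinds of uncertainty can be compared with predict() on the cars data: the interval for the conditional mean and the wider interval for an individual observation.

# Minimal sketch: interval for the mean versus interval for an individual observation
data(cars)
fit <- lm(dist ~ speed, data = cars)
new <- data.frame(speed = 21)

predict(fit, new, interval = "confidence")   # uncertainty of the mean estimator
predict(fit, new, interval = "prediction")   # adds the error variance (wider interval)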

134.6 Worked Example and R Code

The following example uses the cars dataset to estimate a simple linear regression of stopping distance (dist) on speed (speed).

data(cars)

# Fit SLRM
slr_model <- lm(dist ~ speed, data = cars)
summary(slr_model)

Call:
lm(formula = dist ~ speed, data = cars)

Residuals:
    Min      1Q  Median      3Q     Max 
-29.069  -9.525  -2.272   9.215  43.201 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -17.5791     6.7584  -2.601   0.0123 *  
speed         3.9324     0.4155   9.464 1.49e-12 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 15.38 on 48 degrees of freedom
Multiple R-squared:  0.6511,    Adjusted R-squared:  0.6438 
F-statistic: 89.57 on 1 and 48 DF,  p-value: 1.49e-12
# Standard lm diagnostic plots in a 2 x 2 panel
par(mfrow = c(2, 2))
plot(slr_model)
par(mfrow = c(1, 1))
Figure 134.1: SLRM diagnostics for dist ~ speed

Interpretation checklist:

  • Slope sign and magnitude: does speed increase/decrease stopping distance?
  • t-test and confidence interval for the slope: is there evidence that \(\beta \neq 0\)?
  • Residual-vs-fitted plot: does homoskedasticity look plausible?
  • Q-Q plot: does normality look reasonable for finite-sample inference?

134.7 Interactive Software

An interactive SLRM analysis (including plots and inference output) is available in the Linear Regression module of the interactive Shiny app.

134.8 Tasks

  1. Fit an SLRM on cars with dist ~ speed. Report the slope estimate, its 95% confidence interval, and the p-value.
  2. Refit the model after removing the two largest residuals. Compare the slope and discuss sensitivity to outliers.
  3. Add a quadratic term (dist ~ speed + I(speed^2)) and compare with the linear model using an F-test or AIC.

For practical diagnostics (residual analysis, heteroskedasticity testing, multicollinearity/VIF, and interaction effects), see Chapter 143.
