Table of contents

  • D.1 Vectors
  • D.2 Matrices and Basic Operations
  • D.3 Special Matrices
  • D.4 Linear Independence, Rank, and Determinants
    • D.4.1 Worked Example: Cofactor Expansion (\(3 \times 3\))
  • D.5 Inverse Matrices
    • D.5.1 Worked Example: Inverse of a \(2 \times 2\) Matrix
  • D.6 Idempotent Centering Matrix and Mean Deviations
  • D.7 Trace, Quadratic Forms, and Definiteness
    • D.7.1 Worked Example: Quadratic Form
  • D.8 Eigenvalues and Eigenvectors
    • D.8.1 Real Eigenvalues of Real Symmetric Matrices
    • D.8.2 Orthogonality for Distinct Eigenvalues
    • D.8.3 Worked Example: Eigendecomposition of a Symmetric Matrix
  • D.9 Summary for Practice

Appendix D — Matrix Algebra

This appendix provides a compact introduction to the matrix algebra used throughout the handbook. The focus is on definitions, core properties, and short worked examples that support later material in statistics and econometrics. In the dimension notation \(r \times c\), the first index counts rows and the second counts columns; symbols such as \(m,n,k,p\) are reused, with their meaning fixed by context. For direct applications, see Chapter 127 and Chapter 134.

D.1 Vectors

A column vector and its transpose (row vector) are written as

\[ x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad x^\top = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}. \]

The inner product of vectors \(x,y \in \mathbb{R}^n\) is

\[ x^\top y = \sum_{i=1}^n x_i y_i. \]

Here \(x^\top y\) is an inner product (a scalar), while \(xy^\top\) and \(yx^\top\) are outer products (matrices).

Example:

\[ x=\begin{bmatrix}1\\2\end{bmatrix},\quad y=\begin{bmatrix}3\\4\end{bmatrix} \Rightarrow x^\top y=11,\quad xy^\top=\begin{bmatrix}3&4\\6&8\end{bmatrix},\quad yx^\top=\begin{bmatrix}3&6\\4&8\end{bmatrix}. \]

The squared Euclidean norm is

\[ \|x\|^2 = x^\top x = \sum_{i=1}^n x_i^2, \]

and the Euclidean norm (length) is

\[ \|x\| = \sqrt{x^\top x}. \]

Useful properties and cautions:

\[ x^\top y = y^\top x, \qquad xy^\top \neq yx^\top \text{ in general}. \]
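These operations map directly onto base R; a minimal sketch using the example vectors above:

```r
x <- c(1, 2)
y <- c(3, 4)

sum(x * y)        # inner product x'y = 11
drop(t(x) %*% y)  # same value via matrix multiplication
x %o% y           # outer product xy': 2 x 2 matrix [3 4; 6 8]
y %o% x           # outer product yx' (the transpose of xy')
sqrt(sum(x^2))    # Euclidean norm ||x|| = sqrt(5)
```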

D.2 Matrices and Basic Operations

An \(n \times m\) matrix is written as

\[ A = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}. \]

If \(A,B \in \mathbb{R}^{n \times m}\), then matrix addition is elementwise:

\[ A+B=[c_{ij}], \qquad c_{ij}=a_{ij}+b_{ij}. \]

If \(A \in \mathbb{R}^{n \times m}\) and \(B \in \mathbb{R}^{m \times p}\), then

\[ C=AB \in \mathbb{R}^{n \times p}, \qquad c_{ik}=\sum_{j=1}^m a_{ij}b_{jk}. \]

Matrix multiplication is generally non-commutative: even when both products exist, typically \(AB \neq BA\). For example, with

\[ A=\begin{bmatrix}1&1\\0&1\end{bmatrix}, \qquad B=\begin{bmatrix}1&0\\1&1\end{bmatrix}, \]

we get

\[ AB=\begin{bmatrix}2&1\\1&1\end{bmatrix} \neq \begin{bmatrix}1&1\\1&2\end{bmatrix}=BA. \]

Transpose rules:

\[ (A^\top)^\top=A, \qquad (AB)^\top=B^\top A^\top, \qquad (A+B)^\top=A^\top+B^\top, \qquad (\alpha A)^\top=\alpha A^\top. \]
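A short R check of non-commutativity and the transpose rule, using the matrices \(A\) and \(B\) above:

```r
A <- matrix(c(1, 1,
              0, 1), nrow = 2, byrow = TRUE)
B <- matrix(c(1, 0,
              1, 1), nrow = 2, byrow = TRUE)

A %*% B                                # [2 1; 1 1]
B %*% A                                # [1 1; 1 2], so AB != BA
all.equal(t(A %*% B), t(B) %*% t(A))   # (AB)' = B'A' : TRUE
```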

D.3 Special Matrices

A diagonal matrix \(D=[d_{ij}] \in \mathbb{R}^{n \times n}\) satisfies

\[ d_{ij}=0 \text{ for } i \neq j. \]

Diagonal entries \(d_{ii}\) may be zero.

The identity matrix \(I_n\) has ones on the diagonal and zeros elsewhere. If \(A\in\mathbb{R}^{n\times m}\), then

\[ I_nA=A, \qquad AI_m=A. \]

The zero matrix \(0\) is the additive identity:

\[ 0+A=A+0=A. \]

If \(D_1,D_2\) are diagonal matrices of the same order, then

\[ D_1D_2=D_2D_1 \]

and the product is diagonal.

A diagonal matrix is nonsingular if and only if all diagonal entries are nonzero; then

\[ D^{-1}=\operatorname{diag}\!\left(\frac{1}{d_{11}},\ldots,\frac{1}{d_{nn}}\right). \]
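In R, diag() builds (and extracts) diagonals, and solve() inverts; a quick sketch of the facts above:

```r
D1 <- diag(c(2, 3, 4))
D2 <- diag(c(5, 6, 7))

all.equal(D1 %*% D2, D2 %*% D1)  # diagonal matrices commute: TRUE
solve(D1)                        # diag(1/2, 1/3, 1/4)
diag(3)                          # identity matrix I_3
```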

D.4 Linear Independence, Rank, and Determinants

Vectors \(v_1,\ldots,v_k\) are linearly independent if

\[ \sum_{i=1}^k c_i v_i=0 \quad\Rightarrow\quad c_1=\cdots=c_k=0. \]

The rank of a matrix is the maximum number of linearly independent rows (equivalently columns). In regression with design matrix \(X\in\mathbb{R}^{n\times k}\), unique OLS coefficients require full column rank: \(\operatorname{rank}(X)=k\).

If \(A\in\mathbb{R}^{m\times n}\), define the kernel of \(A\) and its dimension by

\[ \ker(A)=\{x\in\mathbb{R}^n:Ax=0\}, \qquad \operatorname{nullity}(A)=\dim(\ker(A)). \]

The rank-nullity theorem then states

\[ \operatorname{rank}(A)+\operatorname{nullity}(A)=n. \]

For square matrices, determinant tools:

  • The minor \(M_{ij}\) of \(a_{ij}\) is the determinant of the matrix obtained by deleting row \(i\) and column \(j\).
  • The cofactor is \(C_{ij}=(-1)^{i+j}M_{ij}\).
  • The determinant can be expanded along any row or column using cofactors.

For a \(2 \times 2\) matrix,

\[ \det\!\begin{bmatrix}a&b\\c&d\end{bmatrix}=ad-bc. \]

D.4.1 Worked Example: Cofactor Expansion (\(3 \times 3\))

Let

\[ A=\begin{bmatrix} 1&2&0\\ 0&3&4\\ 2&0&5 \end{bmatrix}. \]

Expanding along the first row:

\[ \det(A) =1\cdot\det\!\begin{bmatrix}3&4\\0&5\end{bmatrix} -2\cdot\det\!\begin{bmatrix}0&4\\2&5\end{bmatrix} +0\cdot\det\!\begin{bmatrix}0&3\\2&0\end{bmatrix} =15-2(-8)=31. \]
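Numerically, R's det() (which uses an LU decomposition rather than cofactor expansion) confirms the result:

```r
A <- matrix(c(1, 2, 0,
              0, 3, 4,
              2, 0, 5), nrow = 3, byrow = TRUE)
det(A)   # 31
```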

Useful rank and determinant properties:

\[ \operatorname{rank}(AB)\le\min\{\operatorname{rank}(A),\operatorname{rank}(B)\}, \qquad \operatorname{rank}(A^\top)=\operatorname{rank}(A). \]

If \(B\) and \(C\) are nonsingular and dimensions are compatible,

\[ \operatorname{rank}(AB)=\operatorname{rank}(A), \qquad \operatorname{rank}(CA)=\operatorname{rank}(A). \]

For square matrices \(A,B\),

\[ \det(AB)=\det(A)\det(B). \]
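In R the rank is conveniently read off a QR decomposition via qr()$rank; a sketch checking the product rules on small random matrices:

```r
set.seed(1)
A <- matrix(rnorm(12), nrow = 4)   # 4 x 3
B <- matrix(rnorm(6),  nrow = 3)   # 3 x 2

qr(A %*% B)$rank <= min(qr(A)$rank, qr(B)$rank)   # TRUE

M1 <- matrix(rnorm(9), nrow = 3)
M2 <- matrix(rnorm(9), nrow = 3)
all.equal(det(M1 %*% M2), det(M1) * det(M2))      # TRUE
```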

D.5 Inverse Matrices

A square matrix \(A\) is nonsingular if and only if there exists \(A^{-1}\) such that

\[ AA^{-1}=A^{-1}A=I. \]

Key identities (when inverses exist):

\[ (A^{-1})^{-1}=A, \qquad (A^\top)^{-1}=(A^{-1})^\top, \]

and, if \(A\) and \(B\) are both nonsingular,

\[ (AB)^{-1}=B^{-1}A^{-1}. \]

For a \(2\times 2\) matrix

\[ A=\begin{bmatrix}a&b\\c&d\end{bmatrix}, \qquad \det(A)=ad-bc\neq 0, \]

the inverse is

\[ A^{-1}=\frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}. \]

D.5.1 Worked Example: Inverse of a \(2 \times 2\) Matrix

\[ A=\begin{bmatrix}2&1\\1&1\end{bmatrix}, \qquad \det(A)=1 \neq 0, \qquad A^{-1}=\begin{bmatrix}1&-1\\-1&2\end{bmatrix}. \]
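In R, solve(A) returns the inverse (and solve(A, b) solves \(Ax=b\) directly, which is numerically preferable to forming the inverse):

```r
A <- matrix(c(2, 1,
              1, 1), nrow = 2, byrow = TRUE)
solve(A)         # [1 -1; -1 2]
A %*% solve(A)   # identity, up to floating-point rounding
```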

D.6 Idempotent Centering Matrix and Mean Deviations

A matrix \(M\) is idempotent if \(M^2=M\).

An important example in regression is the centering matrix

\[ M=I_n-\frac{1}{n}\iota\iota^\top, \]

where \(\iota=(1,\ldots,1)^\top \in \mathbb{R}^n\).

This matrix is symmetric by construction, \(M^\top=M\), because both \(I_n\) and \(\iota\iota^\top\) are symmetric. In OLS, the hat matrix \(H=X(X^\top X)^{-1}X^\top\) is another symmetric idempotent matrix, used to compute fitted values and regression diagnostics (Chapter 127).

The idempotency of \(M\) can be verified step by step:

\[ \begin{aligned} M^2 &=\left(I_n-\frac{1}{n}\iota\iota^\top\right) \left(I_n-\frac{1}{n}\iota\iota^\top\right) \\ &=I_n-\frac{2}{n}\iota\iota^\top+\frac{1}{n^2}\iota(\iota^\top\iota)\iota^\top \\ &=I_n-\frac{2}{n}\iota\iota^\top+\frac{1}{n}\iota\iota^\top \quad (\iota^\top\iota=n) \\ &=I_n-\frac{1}{n}\iota\iota^\top=M. \end{aligned} \]

For \(B \in \mathbb{R}^{n \times k}\), define centered data by

\[ B_c=MB = B-\iota\bar b^\top, \qquad \bar b^\top=\frac{1}{n}\iota^\top B. \]

Here \(\bar b^\top\in\mathbb{R}^{1\times k}\) is the row vector of column means.

Thus each column of \(B_c\) is the corresponding column of \(B\) minus its sample mean.

Sums of squares and cross-products are

\[ B_c^\top B_c=(MB)^\top(MB)=B^\top MB, \]

because \(M^\top=M\) and \(M^2=M\), and for \(B \in \mathbb{R}^{n \times k}\), \(C \in \mathbb{R}^{n \times \ell}\),

\[ B_c^\top C_c=(MB)^\top(MC)=B^\top MC. \]
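A minimal R sketch: build \(M\), verify idempotency, and compare \(MB\) with base R's column centering via scale():

```r
set.seed(42)
n <- 5; k <- 2
B <- matrix(rnorm(n * k), nrow = n)

M  <- diag(n) - matrix(1, n, n) / n   # I_n - (1/n) iota iota'
all.equal(M %*% M, M)                 # idempotent: TRUE
Bc <- M %*% B
colMeans(Bc)                          # ~ 0: columns are mean deviations
all.equal(Bc, scale(B, center = TRUE, scale = FALSE),
          check.attributes = FALSE)   # matches built-in centering: TRUE
```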

The trace-rank application for the centering matrix is given in the next section.

D.7 Trace, Quadratic Forms, and Definiteness

For a square matrix \(A=[a_{ij}]\),

\[ \operatorname{tr}(A)=\sum_{i=1}^n a_{ii}. \]

Useful rules (for compatible dimensions):

\[ \operatorname{tr}(kA)=k\operatorname{tr}(A), \qquad \operatorname{tr}(A+B)=\operatorname{tr}(A)+\operatorname{tr}(B), \qquad \operatorname{tr}(AB)=\operatorname{tr}(BA). \]

For compatible products, the cyclic extension is

\[ \operatorname{tr}(ABC)=\operatorname{tr}(BCA)=\operatorname{tr}(CAB). \]

If \(A\) is idempotent, then

\[ \operatorname{tr}(A)=\operatorname{rank}(A). \]

Applied to the centering matrix \(M=I_n-\frac{1}{n}\iota\iota^\top\) from the previous section:

\[ \operatorname{tr}(M)=\operatorname{tr}\!\left(I_n-\frac{1}{n}\iota\iota^\top\right)=n-\frac{1}{n}\operatorname{tr}(\iota\iota^\top)=n-\frac{1}{n}\iota^\top\iota=n-1, \]

and therefore \(\operatorname{rank}(M)=n-1\).

A quadratic form for symmetric \(A\) is

\[ q(x)=x^\top A x. \]

Definitions for symmetric \(A\):

  • Positive definite (PD): \(x^\top A x>0\) for all \(x\neq 0\).
  • Positive semidefinite (PSD): \(x^\top A x\ge 0\) for all \(x\).
  • Negative definite / semidefinite: analogous with \(<0\) (for all \(x\neq 0\)) / \(\le 0\) (for all \(x\)).
  • Indefinite: there exist \(x,y\) such that \(x^\top A x>0\) and \(y^\top A y<0\).

In statistics, covariance matrices are always PSD; they are PD when no nonzero linear combination has zero variance.

For symmetric matrices, Sylvester’s criterion states:

\[ A \text{ is PD } \Longleftrightarrow \text{all leading principal minors of } A \text{ are positive}. \]

Here, the leading principal minors are the determinants of the top-left \(k\times k\) submatrices, \(k=1,\ldots,n\).

If \(A\) is PD then \(A\) is nonsingular.

If \(A\) is PD (or PSD) and \(B\) is nonsingular, then \(B^\top A B\) is also PD (or PSD).

If \(A \in \mathbb{R}^{m \times n}\) has rank \(m<n\) (full row rank), then

\[ AA^\top \text{ is PD}, \qquad A^\top A \text{ is PSD but not PD}. \]

When rank deficiency makes \(A^\top A\) singular, least-squares solutions can be written with the Moore-Penrose pseudoinverse \(A^+\).

If \(\operatorname{rank}(A)=r<m\) and \(r<n\), then both \(AA^\top\) and \(A^\top A\) are PSD and neither is PD.

If \(A\) is symmetric PD, there exists a nonsingular matrix \(P\) such that

\[ A=P^\top P \]

(a square-root factorization; Cholesky is the special case \(A=LL^\top\) with \(L\) lower triangular).
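In R, chol(A) returns the upper-triangular factor \(R\) with \(A=R^\top R\), so the lower-triangular Cholesky factor is its transpose; a sketch with the PD matrix from the worked example below:

```r
A <- matrix(c(2, 1,
              1, 2), nrow = 2, byrow = TRUE)   # symmetric PD
R <- chol(A)                 # upper triangular
all.equal(t(R) %*% R, A)     # A = R'R : TRUE
L <- t(R)                    # A = L L' with L lower triangular
```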

D.7.1 Worked Example: Quadratic Form

With

\[ A=\begin{bmatrix}2&1\\1&2\end{bmatrix}, \qquad x=\begin{bmatrix}1\\-1\end{bmatrix}, \]

we get

\[ x^\top A x =\begin{bmatrix}1&-1\end{bmatrix} \begin{bmatrix}2&1\\1&2\end{bmatrix} \begin{bmatrix}1\\-1\end{bmatrix} =\begin{bmatrix}1&-1\end{bmatrix}\begin{bmatrix}1\\-1\end{bmatrix} =2>0. \]

Applying Sylvester’s criterion to the same matrix:

\[ a_{11}=2>0, \qquad \det(A)=\det\!\begin{bmatrix}2&1\\1&2\end{bmatrix}=3>0, \]

so \(A\) is positive definite.
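An equivalent numerical check: a symmetric matrix is PD exactly when all its eigenvalues are positive. A short R sketch, also evaluating the quadratic form:

```r
A <- matrix(c(2, 1,
              1, 2), nrow = 2, byrow = TRUE)
x <- c(1, -1)

drop(t(x) %*% A %*% x)   # quadratic form x'Ax = 2
eigen(A)$values          # 3 1: all positive, so A is PD
c(A[1, 1], det(A))       # leading principal minors: 2 3
```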

D.8 Eigenvalues and Eigenvectors

For square \(A\), a scalar \(\lambda\) is an eigenvalue of \(A\) if there exists a nonzero vector \(x\) such that

\[ Ax=\lambda x; \]

any such \(x\) is a corresponding eigenvector.

Eigenvalues are obtained from the characteristic equation

\[ \det(A-\lambda I)=0. \]

Terminology: eigenvalues are also called latent roots (or characteristic roots). Older econometrics texts often use these terms.

We prove two key properties of real symmetric matrices used in practice (especially PCA and covariance analysis): real eigenvalues and orthogonality of eigenvectors for distinct eigenvalues.

For \(A \in \mathbb{R}^{n \times n}\) with eigenvalues \(\lambda_1,\ldots,\lambda_n\) (counted with algebraic multiplicity):

\[ \sum_{i=1}^n \lambda_i=\operatorname{tr}(A), \qquad \prod_{i=1}^n \lambda_i=\det(A). \]
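A quick R check of both identities on a random matrix; eigenvalues of a general real matrix may be complex, so compare real parts (imaginary parts cancel in conjugate pairs):

```r
set.seed(7)
A  <- matrix(rnorm(16), nrow = 4)
ev <- eigen(A)$values              # possibly complex

all.equal(Re(sum(ev)),  sum(diag(A)))   # sum of eigenvalues = trace
all.equal(Re(prod(ev)), det(A))         # product of eigenvalues = det
```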

D.8.1 Real Eigenvalues of Real Symmetric Matrices

Let \(A=A^\top \in \mathbb{R}^{n \times n}\) and suppose \(z\in\mathbb{C}^n\setminus\{0\}\) satisfies \(Az=\lambda z\); here \(z^*\) denotes the conjugate transpose of \(z\). Premultiplying by \(z^*\) and dividing by \(z^*z>0\) gives

\[ \lambda=\frac{z^*Az}{z^*z}. \]

Now (using \((UVW)^*=W^*V^*U^*\) and \(A^*=A^\top\) for real \(A\))

\[ (z^*Az)^*=z^*A^*z=z^*A^\top z, \]

and since \(A=A^\top\),

\[ z^*A^\top z=z^*Az, \]

so \(z^*Az\) equals its own conjugate and is therefore real. Since \(z^*z>0\) is real as well, it follows that \(\lambda\in\mathbb{R}\).

D.8.2 Orthogonality for Distinct Eigenvalues

If \(A=A^\top\), \(Ax_i=\lambda_i x_i\), and \(Ax_j=\lambda_j x_j\), then

\[ x_j^\top A x_i=\lambda_i x_j^\top x_i, \qquad x_i^\top A x_j=\lambda_j x_i^\top x_j. \]

Because \(A=A^\top\), we have \(x_j^\top A x_i=x_i^\top A x_j\). Subtracting gives

\[ (\lambda_i-\lambda_j)x_i^\top x_j=0. \]

Hence, if \(\lambda_i\neq\lambda_j\), we must have

\[ x_i^\top x_j=0. \]

So eigenvectors for distinct eigenvalues are orthogonal. Since they are nonzero, they are also linearly independent.

For real symmetric matrices, this extends to the spectral decomposition:

\[ A = Q\Lambda Q^\top, \]

where \(Q\) is orthogonal (columns are orthonormal eigenvectors) and \(\Lambda\) is diagonal (eigenvalues). When eigenvalues repeat, orthonormal eigenvectors can still be chosen within each eigenspace; this is guaranteed by the Spectral Theorem.

D.8.3 Worked Example: Eigendecomposition of a Symmetric Matrix

For

\[ A=\begin{bmatrix}2&1\\1&2\end{bmatrix}, \]

the characteristic equation is

\[ \det(A-\lambda I) =\det\!\begin{bmatrix}2-\lambda&1\\1&2-\lambda\end{bmatrix} =(2-\lambda)^2-1 =\lambda^2-4\lambda+3 =(\lambda-3)(\lambda-1)=0, \]

the eigenvalues are \(\lambda_1=3\) and \(\lambda_2=1\), with normalized eigenvectors

\[ u_1=\frac{1}{\sqrt2}\begin{bmatrix}1\\1\end{bmatrix}, \qquad u_2=\frac{1}{\sqrt2}\begin{bmatrix}1\\-1\end{bmatrix}. \]

They are orthogonal: \(u_1^\top u_2=0\). Set

\[ Q=\begin{bmatrix}u_1&u_2\end{bmatrix} =\frac{1}{\sqrt2}\begin{bmatrix}1&1\\1&-1\end{bmatrix}, \qquad \Lambda=\operatorname{diag}(3,1). \]

The scalar factor \(\tfrac12\) below comes from \(\tfrac{1}{\sqrt2}\cdot\tfrac{1}{\sqrt2}=\tfrac12\).

Then

\[ Q\Lambda Q^\top =\frac{1}{2} \begin{bmatrix}1&1\\1&-1\end{bmatrix} \begin{bmatrix}3&0\\0&1\end{bmatrix} \begin{bmatrix}1&1\\1&-1\end{bmatrix} =\frac{1}{2}\begin{bmatrix}4&2\\2&4\end{bmatrix} =\begin{bmatrix}2&1\\1&2\end{bmatrix} =A, \]

which verifies the spectral decomposition in this example.
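The same computation in R; eigen(A, symmetric = TRUE) returns eigenvalues in decreasing order and orthonormal eigenvectors (possibly with flipped signs, which leaves \(Q\Lambda Q^\top\) unchanged):

```r
A <- matrix(c(2, 1,
              1, 2), nrow = 2, byrow = TRUE)
e <- eigen(A, symmetric = TRUE)
e$values                                       # 3 1
Q <- e$vectors                                 # orthonormal columns

all.equal(Q %*% diag(e$values) %*% t(Q), A)    # A = Q Lambda Q' : TRUE
all.equal(t(Q) %*% Q, diag(2))                 # Q'Q = I : TRUE
```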

D.9 Summary for Practice

  • Dimension convention: \(m\times n\) means \(m\) rows and \(n\) columns.
  • OLS identifiability requires full column rank: for \(X\in\mathbb{R}^{n\times k}\), need \(\operatorname{rank}(X)=k\).
  • Covariance matrices are PSD (and PD when no nonzero linear combination has zero variance).
  • Centering and hat matrices are symmetric idempotent matrices that drive regression geometry.
  • For real symmetric matrices: eigenvalues are real, eigenvectors can be chosen orthonormal, and \(A=Q\Lambda Q^\top\) underpins PCA-style decompositions.