93  Periodogram & Cumulative Periodogram

93.1 Definition

The Periodogram of a time series \(Y_t\) identifies the cyclical components that are present by computing a sinusoidal decomposition. Formally, we compute the squared correlation between the time series and cyclical waves of frequency \(\omega\), which leads to the Periodogram \(I(\omega)\):

\[ \begin{align*}I(\omega) &= \frac{1}{2 \pi T} \left| \sum_{t=1}^{T} e^{-i \omega t} Y_t \right|^2 \\&= \frac{1}{2 \pi T} \left[ \left( \sum_{t=1}^{T} Y_t \sin(\omega t) \right)^2 + \left( \sum_{t=1}^{T} Y_t \cos (\omega t) \right)^2 \right]\end{align*} \]

which can be shown to be the Fourier transform of the sample autocovariance function. In other words, the ACF and the Periodogram contain the same information. This, however, does not imply that one should use only one technique: sometimes the ACF is easier to interpret and sometimes the Periodogram is easier. Also note that one often applies a kernel function to obtain a smoothed estimator of the Periodogram.

The Cumulative Periodogram \(U(\omega)\) is simply

\[ U(\omega) = \frac{\sum_{0 < \omega_k \leq \omega} I(\omega_k)}{\sum_{k=1}^{\lfloor T/2 \rfloor} I(\omega_k)} \]

where the sums run over the Fourier frequencies \(\omega_k = 2 \pi k / T\).
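
To make these formulas concrete, the following sketch (a minimal illustration, not the handbook's R module) computes the periodogram ordinates at the Fourier frequencies directly from the definition and accumulates them into \(U(\omega)\):

n <- 128                                         # series length (T in the formulas)
t <- seq_len(n)
y <- sin(2 * pi * t / 12) + rnorm(n, sd = 0.5)   # a 12-observation cycle plus noise
omega <- 2 * pi * seq_len(n %/% 2) / n           # Fourier frequencies
I <- sapply(omega, function(w)
  (sum(y * sin(w * t))^2 + sum(y * cos(w * t))^2) / (2 * pi * n))
U <- cumsum(I) / sum(I)                          # running sum, normalized to [0, 1]
plot(omega, U, type = 's', xlab = 'frequency', ylab = 'U(omega)')

The steep rise of \(U(\omega)\) near \(\omega = 2\pi/12\) reflects the seasonal wave that was injected into the simulated series.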

93.1.1 Horizontal axis

The horizontal axis of the Cumulative Periodogram represents the frequency \(\omega\) of the cyclical wave.

93.1.2 Vertical axis

The vertical axis of the Cumulative Periodogram represents \(U(\omega)\) which is a normalized measure (between 0 and 1).

93.2 R Module

93.2.1 Public website

The Cumulative Periodogram is available on the public website:

  • https://compute.wessa.net/rwasp_spectrum.wasp

93.2.2 RFC

The Cumulative Periodogram is also available in RFC under the “Time Series / Spectral Analysis” menu item.

To compute the Cumulative Periodogram on your local machine, the following script can be used in the R console:

library(MASS)  # provides the cpgram function

x <- 100 + cumsum(rnorm(150))  # simulated random walk around 100
summary(x)
par1 <- 1   # Box-Cox transformation parameter
par2 <- 0   # Degree (d) of non-seasonal differencing
par3 <- 0   # Degree (D) of seasonal differencing
par4 <- 12  # Seasonal period
# Box-Cox transformation (natural log when par1 == 0)
if (par1 == 0) {
  x <- log(x)
} else {
  x <- (x^par1 - 1) / par1
}
if (par2 > 0) x <- diff(x, lag = 1, differences = par2)
if (par3 > 0) x <- diff(x, lag = par4, differences = par3)
r <- spectrum(x, main = 'Raw Periodogram')

cpgram(x, main = 'Cumulative Periodogram')

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  99.48  104.72  106.70  107.00  108.94  116.11 

The script uses the standard spectrum and cpgram functions (the latter from the MASS package) to compute the analysis.

93.3 Purpose

In practice, the Cumulative Periodogram can be used to:

  • describe/summarize the dynamical properties of time series
  • identify non-seasonal and seasonal trends
  • identify various types of typical patterns that correspond to well-known forecasting models
  • check the independence assumption of the residuals of regression and forecasting models (see the sketch after this list)
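
As an illustration of the last use case, the following sketch (our own example, using the AirPassengers dataset shipped with R as a stand-in) fits the classic airline ARIMA model and inspects its residuals with cpgram; residuals that behave like white noise should yield a roughly diagonal curve that stays inside the confidence bands drawn by cpgram:

library(MASS)  # cpgram
fit <- arima(log(AirPassengers), order = c(0, 1, 1),
             seasonal = list(order = c(0, 1, 1), period = 12))
cpgram(residuals(fit), main = 'Cumulative Periodogram of ARIMA residuals')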

93.4 Pros & Cons

93.4.1 Pros

The Cumulative Periodogram has the following advantages:

  • It provides a lot of information about the dynamical properties of a time series.
  • With some practice, it is (generally speaking) easy to interpret.

93.4.2 Cons

The Cumulative Periodogram has the following disadvantages:

  • It cannot be computed with many software packages.
  • In many disciplines, readers are not familiar with this type of analysis.

93.5 Pedagogical example

Before turning to a real dataset, it is helpful to study the inverse problem. In the Fourier Builder app below, we specify the cyclical ingredients of a time series directly and then ask the periodogram and cumulative periodogram to recover them from the finished signal.

Start with the default Trend + seasonality preset and move between the Waves, Time series, and Diagnostics tabs. Several lessons become visible immediately:

  • increasing the amplitude of the 12-observation seasonal wave makes the peak near period 12 more dominant
  • adding harmonics at periods 6 and 4 creates extra peaks at those shorter seasonal frequencies
  • lengthening the long cycle beyond the sample length produces low-frequency power near the left edge of the periodogram, but not a perfectly resolved long-period peak because the sample does not contain a full cycle

The app is therefore pedagogically useful because it makes the frequency-domain logic concrete: we build a signal from known sine waves first, and the spectral tools then try to reconstruct those waves from the observed series.
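
Readers without access to the app can mimic the exercise in plain R. The sketch below (our own construction, loosely following the Trend + seasonality preset) builds a series from a known trend and two seasonal waves and lets spectrum recover the ingredients:

t <- 1:144
y <- 0.05 * t +                 # long-run trend
  2.0 * sin(2 * pi * t / 12) +  # 12-observation seasonal wave
  0.8 * sin(2 * pi * t / 6) +   # harmonic at period 6
  rnorm(144, sd = 0.5)
spectrum(y, main = 'Raw Periodogram')  # expect peaks near frequencies 1/12 and 1/6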

For the best experience, open the app in a new tab. On many screens the handbook column is too narrow to display the control panel and the diagnostic panels comfortably side by side.

[Interactive Shiny app: Fourier Builder (open in a new tab for the full layout)]

93.6 Research example

We now turn to a real-data example and consider the Airline Data. In a first step, we compute the Cumulative Periodogram (CP) for the original time series, i.e. without applying any differencing (\(d = D = 0\)).

The output shows that there are 6 frequencies at which a peak of \(I(\omega)\) can be detected. The actual frequencies can be examined in detail in the accompanying table. The frequencies with peak spectrum values are: 0.0069, 0.0833, 0.1667, 0.25, 0.3333, and 0.4167. Since a frequency is simply the inverse of the period of the corresponding cyclical wave, we can convert the frequencies into the following periods: 144, 12, 6, 4, 3, and 2.4 months.
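
The conversion from frequency to period is a one-liner in R (the frequency values are copied from the table; rounding in the table explains why the first period prints as 144.9 rather than exactly 144):

f <- c(0.0069, 0.0833, 0.1667, 0.25, 0.3333, 0.4167)  # peak frequencies (cycles per month)
round(1 / f, 1)  # 144.9 12.0 6.0 4.0 3.0 2.4 -- the sample length plus the seasonal harmonics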

[Interactive Shiny app: Cumulative Periodogram of the Airline data (open in a new tab)]

The information provided by the Periodogram is that the Airline time series can essentially be described as a long-run trend (i.e. a cycle with a period of 144 months) plus the sum of five remaining cyclical waves with periods of 12, 6, 4, 3, and 2.4 months respectively. All five of these remaining sine waves describe a seasonal pattern because each fits an exact integer number of times into one year (e.g. there are 5 cycles of the 2.4-month sine wave in one year). Hence the seasonal pattern of the Airline data can be described as the sum of five sine waves.

The plots of the analysis show exactly the same information. Let us look more closely at the Cumulative Periodogram which looks like a step function:

  • The first step corresponds to \(\omega = 0.0069\) and represents the long-run trend. This implies that we need to set \(d = 1\) if we wish to remove the trend.
  • The second step corresponds to \(\omega = 0.0833\) or, equivalently, \(\frac{1}{\omega} = 12\) which represents the 12 month cycle in the data. This is an indication that we should apply differencing with \(D = 1\) if we wish to remove the seasonal pattern.
  • The third step corresponds to \(\omega = 0.1667\) or, equivalently, \(\frac{1}{\omega} = 6\) which represents a 6 month cycle and which fits exactly twice in one year. Again, this is an indication that we should use differencing with \(D = 1\) if we wish to remove the seasonal pattern.
  • Steps four, five, and six represent sine waves with periods of 4, 3, and 2.4 months, each of which fits an exact number of times into one year. Again, this indicates the presence of a seasonal pattern.

Since \(U(\omega)\) is expressed as a normalized measure, we can interpret the steps of the Cumulative Periodogram in terms of the percentage of the Variability that is explained by each sine wave. The first step (i.e. the long-run trend) explains roughly 80% of the dynamical behavior of the time series (the first step has a height of 0.8). The combined effect of the first two sine waves (i.e. the long-run trend and the 12 month cycle) is more than 90%. This means that the 12 month cycle contributes about 10% to explaining the Variability of the time series. Similarly, the importance of each sine wave can be measured by the height of its step.

Now we compute the CP for \(d = 1\) and \(D = 0\) (set the slider accordingly). The output shows that the differenced time series no longer exhibits a long-run trend (the first step has disappeared). The remaining steps are located in the same positions as before, but they are more clearly visible because the long-run trend no longer dominates.

It can be concluded that there is a strong seasonal pattern in the time series, which can be removed by setting \(D = 1\). The output for \(d = D = 1\) shows that there are no large steps left in the curve. This implies that no important sine waves remain that could explain the differenced time series. The CP curve consists of many small steps which are (more or less) evenly distributed.
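
The whole progression can be reproduced locally with cpgram (a sketch, again using the AirPassengers dataset as a stand-in for the Airline data):

library(MASS)
y <- log(AirPassengers)
op <- par(mfrow = c(1, 3))
cpgram(y, main = 'd = 0, D = 0')                        # trend and seasonal steps
cpgram(diff(y), main = 'd = 1, D = 0')                  # trend removed, seasonal steps remain
cpgram(diff(diff(y), lag = 12), main = 'd = 1, D = 1')  # roughly diagonal: no dominant waves left
par(op)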

93.7 Task

Examine the monthly Divorces time series. Does this time series exhibit a long-run trend and/or a strong seasonal pattern?
