ACF & PACF

The Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) are essential tools for identifying the internal structure and dependencies within a time series.

1. Fundamental Notation

For a stationary process, the covariance and correlation depend only on the lag \(k\), rather than the specific time \(t\):

  • Auto-covariance Function (\(\gamma_{k}\)): $$\gamma_{k} = \mathrm{Cov}(Y_{t}, Y_{t+k}) = E[Y_{t}Y_{t+k}] - \mu^{2}$$
    where \(\mu\) is the constant mean of the stationary process. This measures the linear relationship between observations at times \(t\) and \(t+k\).
  • Autocorrelation Function (\(\rho_{k}\)): $$\rho_{k} = \mathrm{Corr}(Y_{t}, Y_{t+k}) = \frac{\gamma_{k}}{\gamma_{0}}$$
    This normalizes the auto-covariance by the variance (\(\gamma_{0}\)).
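Both quantities can be estimated directly from data. A minimal sketch, assuming a simulated AR(1) series with \(\phi = 0.6\) (an illustrative choice, not from the text):

```python
import numpy as np

# Simulated stationary series: AR(1) with phi = 0.6 (illustrative choice)
rng = np.random.default_rng(0)
n = 5000
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal()

def autocov(y, k):
    """Sample auto-covariance: average of (y_t - ybar)(y_{t+k} - ybar) over the n - k pairs."""
    ybar = y.mean()
    return np.sum((y[: len(y) - k] - ybar) * (y[k:] - ybar)) / len(y)

gamma0 = autocov(y, 0)           # gamma_0 is just the variance
rho1 = autocov(y, 1) / gamma0    # rho_1 = gamma_1 / gamma_0, close to phi for an AR(1)
```

Dividing by \(\gamma_{0}\) guarantees \(\rho_{0} = 1\), and for this AR(1) the lag-1 estimate lands near \(\phi = 0.6\).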

2. Properties of ACF

  • Symmetry: \(\rho_{k} = \rho_{-k}\). The correlation between \(Y_{t}\) and \(Y_{t+k}\) is the same as between \(Y_{t}\) and \(Y_{t-k}\).
  • Boundaries/Limits: \(|\rho_{k}| \leq 1\).
  • Non-uniqueness: While a stationary Normal (Gaussian) process is completely determined by its mean, variance, and ACF, it is possible to find several non-normal processes that share the exact same ACF.

3. ACF of a White Noise Process

For a white noise process, there is no correlation between different time points:
$$\rho_{k} = \begin{cases} 1 & \text{if } k=0 \\ 0 & \text{otherwise} \end{cases}$$
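A quick numerical check, using simulated Gaussian noise: for \(n\) observations of white noise, the sample autocorrelations at non-zero lags should sit close to zero, roughly within \(\pm 2/\sqrt{n}\) about 95% of the time.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
e = rng.normal(size=n)  # Gaussian white noise

ebar = e.mean()
denom = np.sum((e - ebar) ** 2)
# Sample autocorrelations r_1, ..., r_10
r = np.array([np.sum((e[: n - k] - ebar) * (e[k:] - ebar)) / denom for k in range(1, 11)])

band = 2 / np.sqrt(n)        # approximate 95% bounds for white noise
print(np.abs(r).max(), band)  # the largest |r_k| stays small, on the order of the band
```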

4. Sample ACF and the Correlogram

In practice, we estimate the theoretical ACF using observed data:
  • Sample ACF (\(r_{k}\)): The sample version of the correlation at lag \(k\) is calculated as:
    $$r_{k} = \frac{\sum_{t=1}^{n-k}(y_{t}-\bar{y})(y_{t+k}-\bar{y})}{\sum_{t=1}^{n}(y_{t}-\bar{y})^{2}}$$
  • Correlogram: A plot of the sample autocorrelation coefficients \(r_{k}\) against the lag \(k\). This visual aid helps identify which lags have significant correlations.
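The formula for \(r_{k}\) translates directly into code. A sketch (the helper name `sample_acf` is ours), applied to a simulated AR(1) with \(\phi = 0.7\):

```python
import numpy as np

def sample_acf(y, max_lag):
    """r_k per the formula above: one shared mean ybar and one shared
    denominator sum (y_t - ybar)^2 for every lag."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    ybar = y.mean()
    denom = np.sum((y - ybar) ** 2)
    return np.array([np.sum((y[: n - k] - ybar) * (y[k:] - ybar)) / denom
                     for k in range(max_lag + 1)])

# Simulated AR(1) with phi = 0.7 (illustrative choice)
rng = np.random.default_rng(2)
n = 3000
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.normal()

r = sample_acf(y, 5)
print(np.round(r, 2))  # r_0 = 1; later lags decay roughly like 0.7**k for this AR(1)
```

Plotting `r[1:]` against the lag index, with bounds at \(\pm 2/\sqrt{n}\), gives the correlogram described above.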

5. Partial Autocorrelation Function (PACF)

The PACF of order \(k\) (\(\alpha_{k}\)) represents the correlation between \(Y_{t}\) and \(Y_{t-k}\) after removing the linear influence of the intermediate variables (\(Y_{t-1}, Y_{t-2}, \dots, Y_{t-k+1}\)).
$$\alpha_{k} = \mathrm{Corr}(Y_{t}, Y_{t-k} \mid Y_{t-1}, Y_{t-2}, \dots, Y_{t-k+1})$$
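One common estimator of \(\alpha_{k}\) regresses \(Y_{t}\) on \(Y_{t-1}, \dots, Y_{t-k}\) and takes the coefficient on \(Y_{t-k}\), which removes the linear influence of the intermediate lags exactly as in the definition. A sketch (the function name `pacf_ols` is ours), applied to a simulated AR(1) with \(\phi = 0.6\):

```python
import numpy as np

def pacf_ols(y, max_lag):
    """PACF at lag k estimated as the OLS coefficient on Y_{t-k} in a
    regression of Y_t on Y_{t-1}, ..., Y_{t-k}."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    alphas = []
    for k in range(1, max_lag + 1):
        # Design matrix: columns are the lagged values Y_{t-1}, ..., Y_{t-k}
        X = np.column_stack([y[k - 1 - j : n - 1 - j] for j in range(k)])
        beta, *_ = np.linalg.lstsq(X, y[k:], rcond=None)
        alphas.append(beta[-1])  # coefficient on the deepest lag Y_{t-k}
    return np.array(alphas)

# Simulated AR(1) with phi = 0.6 (illustrative choice)
rng = np.random.default_rng(3)
n = 4000
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal()

a = pacf_ols(y, 3)
print(np.round(a, 2))  # alpha_1 near phi = 0.6; alpha_2, alpha_3 near 0
```

The near-zero estimates beyond lag 1 are the signature of an AR(1): once \(Y_{t-1}\) is accounted for, deeper lags add nothing.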

The Markovian Property

In an \(AR(1)\) process, once \(Y_{t-1}\) is known, \(Y_{t}\) is conditionally independent of all earlier values (such as \(Y_{t-2}\)). This "memoryless" behavior, where the future depends only on the present, is known as the Markovian property.

6. Moments of AR Processes

For an \(AR(p)\) model \(Y_{t} = c + \phi_{1}Y_{t-1} + \dots + \phi_{p}Y_{t-p} + e_{t}\), assuming stationarity (so \(E[Y_{t}] = E[Y_{t-k}] = \mu\) for all \(k\)), we can derive the mean and variance:

  • Mean (\(\mu\)):
    $$\mu = \frac{c}{1-\phi_{1}-\phi_{2}-\dots-\phi_{p}}$$
  • Variance (\(\sigma_{y}^{2}\)): For a zero-mean \(AR(1)\) model \(Y_{t} = \phi_{1}Y_{t-1} + e_{t}\), square both sides and take expectations; the cross term \(2\phi_{1}E[Y_{t-1}e_{t}]\) vanishes because the shock \(e_{t}\) is uncorrelated with the past:
    $$\sigma_{y}^{2} = E[(\phi_{1}Y_{t-1} + e_{t})^{2}] = \phi_{1}^{2}\sigma_{y}^{2} + \sigma_{e}^{2}$$
    Solving for \(\sigma_{y}^{2}\), we get:
    $$\sigma_{y}^{2} = \frac{\sigma_{e}^{2}}{1-\phi_{1}^{2}}$$
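The closed form for the variance can be checked by simulation. A sketch, with \(\phi_{1} = 0.8\) and \(\sigma_{e} = 1\) as arbitrary choices:

```python
import numpy as np

phi, sigma_e = 0.8, 1.0
rng = np.random.default_rng(4)
n = 200_000
e = rng.normal(scale=sigma_e, size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t]

theoretical = sigma_e**2 / (1 - phi**2)      # sigma_e^2 / (1 - phi_1^2)
print(round(np.var(y), 2), round(theoretical, 2))  # both close to 2.78
```

Note how the variance blows up as \(\phi_{1} \to 1\): the formula only makes sense for \(|\phi_{1}| < 1\), which is exactly the stationarity condition for an AR(1).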