
L21 Forecasting Basics

Forecasting

There is a subtle difference between estimation, prediction, and forecasting:

  • Estimation: unknown parameters, e.g. estimating \(\phi_{1}\).
  • Prediction: value of the random process, using the estimated parameters: \(\hat{Y}_{t} = \hat{c} + \hat{\phi}_{1}y_{t-1}\) (replacing the model parameters with estimated ones).
  • Forecasting: value of a future random process, not observed in the sample. Using the fitted model (with estimated parameters) to predict something in the future (outside the sample).

\(Y_{t} = \phi Y_{t-1} + a_{t}\) → Estimation of \(\hat{\phi}\) → Prediction of \(\hat{Y}_{t} = \hat{\phi}Y_{t-1}\) → Forecasting of \(\hat{Y}_{t+1}\)
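The estimation → prediction → forecasting pipeline above can be sketched for an AR(1) model. The simulation, least-squares estimator, and variable names below are illustrative, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series: Y_t = phi * Y_{t-1} + a_t with phi = 0.7
phi_true, n = 0.7, 500
a = rng.normal(0.0, 1.0, n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + a[t]

# Estimation: least-squares estimate of phi from (y_{t-1}, y_t) pairs
phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

# Prediction: in-sample fitted values Y_hat_t = phi_hat * y_{t-1}
y_fit = phi_hat * y[:-1]

# Forecasting: l-step-ahead forecasts from origin n, Y_hat_n(l) = phi_hat**l * Y_n
forecasts = [phi_hat ** l * y[-1] for l in (1, 2, 3)]
print(phi_hat, forecasts)
```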

ARMA Model forecasting

Minimum Mean Squared Error Forecasts

  • Using the observed time series \(y_{1},y_{2},\dots,y_{n}\), forecast the unobserved values \(y_{n+1}, y_{n+2},\dots\)
  • \(n\) is the forecast origin
  • \(\hat{Y}_{n}(l)\): forecast value of \(Y_{n+l}\), the \(l\)-step-ahead forecast, obtained using the minimum-MSE criterion:
\[ \hat{Y}_{n}(l) = E(Y_{n+l}| Y_{n},Y_{n-1},\dots,Y_{1}) \]

is a conditional expectation given the known data.

\(a_{t}\) is the same as \(e_{t}\) in the previous lectures (the error terms); here we use \(e_{t}\) to denote the forecast error instead.

\[ \phi_{p}(B) Y_{t} = \theta_{0} + \theta_{q}(B)a_{t} \]
  • The AR and MA parts are written as polynomials \(\phi_{p}(B)\) and \(\theta_{q}(B)\) in the backshift operator \(B\).

Considering the random shock form (obtained by dividing both sides by \(\phi_{p}(B)\)):

\[ Y_{n+l} = \theta_{0} + \dfrac{\theta_{q}(B)}{\phi_{p}(B)}a_{n+l} = \theta_{0} + \psi(B)a_{n+l} \]

where

\[ E(a_{n+j}|Y_{n},\dots,Y_{1}) = \begin{cases} 0, & j \gt 0 \\ a_{n+j}, & j \leq 0 \end{cases} \]
  • We already know the \(a_{n+j}\) errors for \(j\leq 0\) but we don't know the future errors, so their expected value would be \(0\).
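The \(\psi\)-weights of the random shock form can be computed by matching coefficients in \(\phi_{p}(B)\psi(B) = \theta_{q}(B)\). The recursive helper below is an illustrative sketch, not from the lecture (conventions assumed: \(\phi(B) = 1 - \phi_{1}B - \dots\), \(\theta(B) = 1 + \theta_{1}B + \dots\), \(\psi_{0} = 1\)):

```python
def psi_weights(phi, theta, n_weights):
    """Psi-weights via the recursion psi_j = theta_j + sum_i phi_i * psi_{j-i}."""
    psi = [1.0]  # psi_0 = 1 by convention
    for j in range(1, n_weights):
        val = theta[j - 1] if j - 1 < len(theta) else 0.0  # theta_j, zero beyond order q
        for i, ph in enumerate(phi, start=1):              # add phi_i * psi_{j-i}
            if j - i >= 0:
                val += ph * psi[j - i]
        psi.append(val)
    return psi

# For an AR(1) with phi = 0.5 the weights are psi_j = 0.5**j:
print(psi_weights([0.5], [], 5))   # [1.0, 0.5, 0.25, 0.125, 0.0625]
```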

Forecast Error

Verify this on your own:

\[ e_{n}(l) = Y_{n+l} - \hat{Y}_{n}(l) = \sum_{i=0}^{l-1} \psi_{i}a_{n+l-i}, \qquad \psi_{0} = 1 \]
  • \(E(e_{n}(l)) = 0\) for \(l \gt 0\)
  • \(V(e_{n}(l)) = \sigma^{2}_{a}\sum_{i=0}^{l-1}\psi_{i}^{2}\), since the future shocks are uncorrelated.
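As a numerical sanity check of the error decomposition (a sketch with made-up values): for an AR(1) with known \(\phi\), \(\psi_{i} = \phi^{i}\), so \(e_{n}(l)\) should equal \(\sum_{i=0}^{l-1}\phi^{i}a_{n+l-i}\) exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, n, l = 0.6, 200, 3      # n observed values, forecast l steps ahead
a = rng.normal(size=n + l)

# Generate the series; index m = n - 1 holds the forecast origin Y_n
y = np.zeros(n + l)
for t in range(1, n + l):
    y[t] = phi * y[t - 1] + a[t]
m = n - 1

# l-step forecast from the origin, using the true phi so the identity is exact
y_hat = phi ** l * y[m]
e = y[m + l] - y_hat                                     # e_n(l)

# Random-shock form of the same error, with psi_i = phi**i
e_shock = sum(phi ** i * a[m + l - i] for i in range(l))
print(e, e_shock)   # the two agree up to floating point
```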

1-step Ahead Forecast

\[ \begin{align} Y_{n+1} & = \theta_{0} + a_{n+1} + \psi_{1}a_{n} + \psi_{2}a_{n-1} + \dots \\ \hat{Y}_{n}(1) & = \theta_{0} + \psi_{1}a_{n} + \psi_{2}a_{n-1} + \dots \\ e_{n}(1) & = Y_{n+1} - \hat{Y}_{n}(1) = a_{n+1} \\ V(e_{n}(1)) & = \sigma^{2}_{a} \end{align} \]

2-step Ahead Forecast

\[ \begin{align} Y_{n+2} & = \theta_{0} + a_{n+2} + \psi_{1}a_{n+1} + \psi_{2}a_{n} + \dots \\ \hat{Y}_{n}(2) & = \theta_{0} + \psi_{2}a_{n} + \psi_{3}a_{n-1} + \dots \\ e_{n}(2) & = Y_{n+2} - \hat{Y}_{n}(2) = a_{n+2} + \psi_{1}a_{n+1} \\ V(e_{n}(2)) & = \sigma^{2}_{a}(1 + \psi_{1}^{2}) \end{align} \]
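A small Monte Carlo check of these variances (illustrative, AR(1) with known \(\phi\), so \(\psi_{1} = \phi\)): the empirical variances of the 1- and 2-step forecast errors should be close to \(\sigma^{2}_{a}\) and \(\sigma^{2}_{a}(1 + \psi_{1}^{2})\).

```python
import numpy as np

rng = np.random.default_rng(2)
phi, sigma, n, reps = 0.6, 1.0, 50, 5000

err1, err2 = [], []
for _ in range(reps):
    a = rng.normal(0.0, sigma, n + 2)
    y = np.zeros(n + 2)
    for t in range(1, n + 2):
        y[t] = phi * y[t - 1] + a[t]
    # Forecast origin is index n - 1; forecasts use the true phi
    err1.append(y[n] - phi * y[n - 1])          # e_n(1) = a_{n+1}
    err2.append(y[n + 1] - phi ** 2 * y[n - 1]) # e_n(2) = a_{n+2} + phi * a_{n+1}

print(np.var(err1), np.var(err2))  # approx sigma^2 = 1 and sigma^2 * (1 + phi^2) = 1.36
```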