Causality Tests & Further
Causality tests are used to assess whether one time series can predict or "cause" a change in another. In time series analysis, we specifically use the concept of Granger Causality.
- Granger causality test: Used to determine if the past values of \(X\) contain some useful information about the future values of \(Y\) that is not already contained in the past values of \(Y\) itself.
Understanding Granger Causality
- \(X\) "Granger causes" \(Y\): Knowing all past \(X\) values improves the prediction of \(Y\).
Important Distinction
Granger Causality \(\neq\) true philosophical or physical causality. It is strictly a measure of "predictive causality" or precedence.
Mathematical Foundation
The test involves comparing two models to see if adding \(X\) significantly improves the fit.
- Model 1 (Unrestricted): Includes lagged values of both \(Y\) and \(X\).
- Model 2 (Restricted): Includes only lagged values of \(Y\) (not \(X\)).
Unrestricted Model:
$$Y_{t} = \alpha + \sum_{i=1}^{p} \beta_{i} Y_{t-i} + \sum_{j=1}^{p} \gamma_{j} X_{t-j} + \epsilon_{t}$$
Restricted Model:
$$Y_{t} = \alpha + \sum_{i=1}^{p} \beta_{i} Y_{t-i} + \epsilon_{t}$$
Hypothesis Setup:
$$
\begin{align}
H_{0} & : \gamma_{j} = 0 \text{ for all } j \implies X \text{ does not Granger-cause } Y \\
H_{1} & : \gamma_{j} \neq 0 \text{ for at least one } j \implies X \text{ Granger-causes } Y
\end{align}
$$
- Testing: Use an F-test for small samples or a Wald test for large samples.
- Decision: If the result is statistically significant, we reject \(H_{0}\), meaning \(X\) Granger-causes \(Y\).
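The restricted-versus-unrestricted comparison above can be sketched directly with ordinary least squares. This is a minimal NumPy illustration, not a production implementation; the simulated series and helper names (`lag_matrix`, `granger_f_test`) are our own:

```python
import numpy as np

def lag_matrix(s, p):
    # rows correspond to t = p..T-1; columns hold s[t-1], ..., s[t-p]
    return np.column_stack([s[p - j:len(s) - j] for j in range(1, p + 1)])

def granger_f_test(y, x, p):
    """F-test of H0: all gamma_j = 0 (lags of x add nothing to an AR(p) of y)."""
    yt = y[p:]
    Z_r = np.column_stack([np.ones(len(yt)), lag_matrix(y, p)])  # restricted
    Z_u = np.column_stack([Z_r, lag_matrix(x, p)])               # unrestricted
    rss_r = np.sum((yt - Z_r @ np.linalg.lstsq(Z_r, yt, rcond=None)[0]) ** 2)
    rss_u = np.sum((yt - Z_u @ np.linalg.lstsq(Z_u, yt, rcond=None)[0]) ** 2)
    df2 = len(yt) - Z_u.shape[1]
    F = ((rss_r - rss_u) / p) / (rss_u / df2)
    return F, (p, df2)

# Simulated example: x leads y by one period, so x should Granger-cause y
rng = np.random.default_rng(42)
T = 500
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

F_xy, dof = granger_f_test(y, x, p=2)  # does x Granger-cause y?
F_yx, _ = granger_f_test(x, y, p=2)    # reverse direction, for contrast
```

In practice one would compare \(F\) against the critical value of the \(F(p, T - 2p - 1)\) distribution; in Python, `grangercausalitytests` from `statsmodels.tsa.stattools` performs this test (and the equivalent chi-square variants) directly.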
Assumptions of Granger Causality
- Stationarity: Both \(X\) and \(Y\) should be stationary. If they are not, transform them (usually by differencing) before testing.
- Appropriate Lag Length (\(p\)): Choosing \(p\) is crucial. Use information criteria like AIC or BIC to find the optimal lag.
- No Cointegration: For standard VAR models, the series should not be cointegrated.
- If they are cointegrated, use an Error Correction Model (ECM) or Vector Error Correction Model (VECM) instead, as they are more appropriate for handling long-run relationships.
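Lag selection with an information criterion can be sketched as follows, assuming AIC in the common OLS form \(T \log(\mathrm{RSS}/T) + 2k\); the AR(2) simulation and the `aic_for_lag` helper are illustrative, not a standard API:

```python
import numpy as np

def aic_for_lag(y, p, P):
    # AIC of an AR(p) fit by OLS on a common sample t = P..T-1:
    # T_eff * log(RSS / T_eff) + 2 * (number of parameters)
    yt = y[P:]
    Z = np.column_stack([np.ones(len(yt))] +
                        [y[P - j:len(y) - j] for j in range(1, p + 1)])
    rss = np.sum((yt - Z @ np.linalg.lstsq(Z, yt, rcond=None)[0]) ** 2)
    return len(yt) * np.log(rss / len(yt)) + 2 * (p + 1)

# Simulate an AR(2) process and let AIC pick the lag order
rng = np.random.default_rng(7)
T = 1000
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

P = 8  # maximum lag considered; fixes a common estimation sample
best_p = min(range(1, P + 1), key=lambda p: aic_for_lag(y, p, P))
```

Note the fixed maximum lag \(P\): all candidate models must be fit on the same sample, otherwise their AIC values are not comparable.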
Extensions of Granger Causality
- VECM Causality: Used if variables are cointegrated. It includes both the Error Correction Term (ECT) and lagged differences to capture both short-term and long-term causality.
- Nonlinear Granger Causality: Used when the relationship isn't linear, requiring neural networks or kernel-based methods.
- Instantaneous Causality: Occurs when a change in \(X\) at time \(t\) affects \(Y\) within the same period. This is usually handled within a Simultaneous Equation Model (SEM) framework.
The Haugh-Pierce Test
This test applies to two stationary time series. It examines the cross-correlation between the series' residuals to determine whether past values of \(X\) help predict \(Y\), without estimating a full joint model.
Procedure:
- Pre-whiten Each Series: Fit an ARIMA model to each time series and keep its residuals. This removes each series' own autocorrelation, leaving (approximately) white noise.
- Compute Cross-Correlation of Residuals: The Cross-Correlation Function (CCF) of the residuals then captures the relationship between the unexplained movements in one series and the past of the other.
Hypotheses:
- \(H_{0}\): No Causality (cross-correlation = 0).
- \(H_{1}\): Causality exists (cross-correlation is significant).
Test Statistic (\(Q\)):
It is the sum of squared cross-correlations up to a specified maximum lag \(K\):
$$Q = T \sum_{k=1}^{K} \hat{\rho}_{XY}(k)^{2}$$
- \(T\) is the number of observations.
- \(\hat{\rho}_{XY}(k)\) is the cross-correlation at lag \(k\).
- Under \(H_{0}\), \(Q\) asymptotically follows a \(\chi^{2}\) distribution with \(K\) degrees of freedom.
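The procedure can be sketched end to end. For brevity this sketch pre-whitens each series with a simple AR(1) OLS fit rather than a full ARIMA search, and all names (`prewhiten_ar1`, `haugh_pierce_q`) are our own:

```python
import numpy as np

def prewhiten_ar1(s):
    # crude pre-whitening: fit AR(1) by OLS and keep the residuals
    phi = (s[1:] @ s[:-1]) / (s[:-1] @ s[:-1])
    return s[1:] - phi * s[:-1]

def haugh_pierce_q(x, y, K):
    """Q = T * sum_{k=1}^{K} rho_hat_XY(k)^2 on pre-whitened residuals."""
    u, v = prewhiten_ar1(x), prewhiten_ar1(y)
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    T = len(u)
    # rho_hat(k): correlation between x-residuals at t-k and y-residuals at t
    rho = [np.mean(u[:-k] * v[k:]) for k in range(1, K + 1)]
    return T * sum(r * r for r in rho)

# Simulated example: y depends on x lagged one period
rng = np.random.default_rng(1)
T = 500
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.3 * y[t - 1] + 0.9 * x[t - 1] + rng.standard_normal()

K = 5
Q = haugh_pierce_q(x, y, K)
# 95% chi-square critical value with K = 5 degrees of freedom is about 11.07
causal = Q > 11.07
```

With a genuine lagged dependence the residual cross-correlation at lag 1 stays large after pre-whitening, so \(Q\) far exceeds the critical value.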
Hsiao Procedure
Pronunciation
The "H" is silent, so it is pronounced "Siao."
Introduced by Cheng Hsiao in 1981, this procedure was designed to overcome limitations of traditional Granger causality tests, especially regarding lag selection.
- Adaptability: Selects lag lengths from the data rather than fixing them in advance, giving a more systematic way to determine causal relationships.
- FPE Criterion: It combines Akaike's Final Prediction Error (FPE) criterion with the traditional Granger causality test.
- Motivation: It balances model fit and complexity to ensure the resulting lag structure is both accurate and parsimonious.
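A two-step sketch of the procedure, assuming the usual FPE form \(\mathrm{FPE} = \frac{T+k}{T-k} \cdot \frac{\mathrm{RSS}}{T}\) with \(k\) estimated parameters; the simulation and helper names (`fpe`, `lags`) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 600
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.4 * y[t - 1] + 0.7 * x[t - 1] + rng.standard_normal()

def fpe(yt, Z):
    # Akaike's Final Prediction Error for an OLS fit with k parameters
    n, k = Z.shape
    rss = np.sum((yt - Z @ np.linalg.lstsq(Z, yt, rcond=None)[0]) ** 2)
    return (n + k) / (n - k) * rss / n

def lags(s, p, start):
    # columns s[t-1], ..., s[t-p] for t = start..T-1
    return np.column_stack([s[start - j:len(s) - j] for j in range(1, p + 1)])

P = 6  # maximum lag considered, so every fit uses the same sample
yt = y[P:]
# Step 1: choose the own-lag order m* minimizing FPE of y on its own lags
own = {m: fpe(yt, np.column_stack([np.ones(len(yt)), lags(y, m, P)]))
       for m in range(1, P + 1)}
m_star = min(own, key=own.get)
# Step 2: holding m* fixed, add n lags of x and minimize FPE again
joint = {n: fpe(yt, np.column_stack([np.ones(len(yt)), lags(y, m_star, P),
                                     lags(x, n, P)]))
         for n in range(1, P + 1)}
n_star = min(joint, key=joint.get)
# Hsiao's rule: x Granger-causes y if adding x lags lowers the FPE
x_causes_y = joint[n_star] < own[m_star]
```

Because FPE already penalizes extra parameters, the comparison in the last line plays the role of the significance test: \(X\) is admitted only when its lags improve out-of-sample prediction enough to offset the added complexity.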