
L23 Smoothing Techniques (SMA,EMA)

  • Simple Moving Average
  • Exponential Smoothing
  • Double Exponential - Holt's
  • Triple Exponential - Holt Winter's

Why do we need smoothing?

To reduce noise (random variations) in the data; smoothing reveals the underlying trend for better forecasting.

It may also reveal other important patterns in the data.

Simple Moving Average Smoothing (SMA)

Smoothed series is derived from the average of the last \(k\) (order) elements of the series.

\[ S_{t} = \dfrac{Y_{t}+Y_{t-1} + \dots + Y_{t-k+1}}{k} \]


  • The higher the order \(k\), the stronger the smoothing
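The averaging step can be sketched in a few lines of Python (the helper name `sma` and the sample series are illustrative, not from the notes):

```python
def sma(y, k):
    """Simple moving average of order k: average of the last k values,
    S_t = (Y_t + Y_{t-1} + ... + Y_{t-k+1}) / k."""
    return [sum(y[t - k + 1 : t + 1]) / k for t in range(k - 1, len(y))]

series = [3, 5, 4, 6, 8, 7, 9]
print(sma(series, 3))  # first smoothed value is (3 + 5 + 4) / 3 = 4.0
```

Note that the smoothed series is shorter than the original by \(k-1\) points, since the first average needs \(k\) observations.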

Exponential Smoothing (EMA)

  • Uses a weighted average (exponentially decreasing)
  • Puts more weight on more recent observations
  • \(S_{0} = Y_{0}\) (starting point) and we write the following recursive structure.
\[ S_{t} = \alpha Y_{t} + (1-\alpha)S_{t-1} \]

Where, \(0 \lt \alpha \lt 1\) is the smoothing factor.
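The recursion above can be sketched directly (the helper name `ema` and the sample values are illustrative):

```python
def ema(y, alpha):
    """Exponential smoothing: S_0 = Y_0, then
    S_t = alpha * Y_t + (1 - alpha) * S_{t-1}."""
    s = [y[0]]  # starting point S_0 = Y_0
    for t in range(1, len(y)):
        s.append(alpha * y[t] + (1 - alpha) * s[-1])
    return s

print(ema([10, 12, 11, 13], 0.5))  # [10, 11.0, 11.0, 12.0]
```

Unlike the SMA, the smoothed series has the same length as the original, and every past observation contributes (with exponentially decaying weight).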


  • Low \(\alpha\) \(\implies\) each observation has less influence; smoother, but less responsive to recent changes.
  • Larger \(\alpha\) \(\implies\) more weight on recent observations, reducing the effect of smoothing.
  • Exponentially weighted average of past observations
  • More recent values have greater influence on the forecast.
    • Exponential decay of influence of past data.
  • Simple, computationally efficient, and easy to adjust as the process changes.
\[ S_{t} = \hat{Y}_{t+1}, \quad S_{t-1} = \hat{Y}_{t} \]

Thus,

\[ \hat{Y}_{t+1} = \alpha Y_{t} + (1-\alpha)\hat{Y}_{t} \]

By recursive substitution,

\[ \hat{Y}_{t+1} = \alpha \sum_{j=0}^{t-1}(1-\alpha)^j Y_{t-j} + (1-\alpha)^t Y_{0} \]
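As a sanity check, the recursion can be compared numerically against its fully expanded weighted sum, including the \((1-\alpha)^t Y_0\) term contributed by the starting point (the series here is random, for illustration only):

```python
import random

random.seed(0)
alpha = 0.3
y = [random.random() for _ in range(10)]

# Recursive form: S_0 = Y_0, S_t = alpha * Y_t + (1 - alpha) * S_{t-1}
s = y[0]
for t in range(1, len(y)):
    s = alpha * y[t] + (1 - alpha) * s

# Expanded form: alpha * sum_{j=0}^{t-1} (1-alpha)^j * Y_{t-j} + (1-alpha)^t * Y_0
t = len(y) - 1
expanded = alpha * sum((1 - alpha) ** j * y[t - j] for j in range(t)) \
    + (1 - alpha) ** t * y[0]

assert abs(s - expanded) < 1e-12
print("recursion and expanded form agree")
```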
  • How to select an appropriate \(\alpha\)?
    • Use MSE or RMSE of the one-step-ahead forecasts
    • Search over a grid of values \(0 \lt \alpha \lt 1\) and pick the one that minimizes (R)MSE
    • Select \(\alpha\) based on the data (case-by-case basis).
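A grid search over \(\alpha\) minimizing one-step-ahead RMSE might look like this (the helper name `one_step_rmse` and the sample series are hypothetical):

```python
def one_step_rmse(y, alpha):
    """RMSE of one-step-ahead forecasts, where Y_hat_{t+1} = S_t."""
    s = y[0]  # S_0 = Y_0
    sq_errors = []
    for t in range(1, len(y)):
        sq_errors.append((y[t] - s) ** 2)  # forecast for time t was S_{t-1}
        s = alpha * y[t] + (1 - alpha) * s
    return (sum(sq_errors) / len(sq_errors)) ** 0.5

# hypothetical demand series, for illustration
y = [12, 15, 14, 18, 20, 19, 22, 24, 23, 26]

grid = [a / 100 for a in range(1, 100)]  # alpha in {0.01, ..., 0.99}
best = min(grid, key=lambda a: one_step_rmse(y, a))
print(best, round(one_step_rmse(y, best), 3))
```

For a trending series like the one above, the search tends to favor large \(\alpha\); for noisy but stable series, smaller values usually win. This is why the choice is case-by-case.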