Stationarity Dilemmas in Time Series Analysis (ADF and KPSS)

We are going to talk about the KPSS (Kwiatkowski-Phillips-Schmidt-Shin) test today. It is employed to evaluate the stationarity of a time series. But wait, do we really need this test given that we have already reviewed the ADF (Augmented Dickey-Fuller) test for stationarity? Let’s investigate.

KPSS Test:

  • Null Hypothesis: The time series is stationary around a deterministic trend.
  • Alternative Hypothesis: The time series has a unit root (non-stationary).
  • Test Statistic: The KPSS test statistic is based on comparing the variance of the observed series around a deterministic trend to the variance of a random walk (non-stationary) around a deterministic trend.
  • Interpretation:
    • If the test statistic is less than a critical value, you fail to reject the null hypothesis, suggesting stationarity. (Stationary)
    • If the test statistic is greater than a critical value, you reject the null hypothesis of stationarity. (Non Stationary)

What exactly is stationarity around a deterministic trend? If the statistical properties of the temperature data (like the average temperature or its variability) remain roughly the same over time, you have a stationary time series. It’s like the weather patterns being consistent. Now, suppose there’s a clear, predictable pattern in the temperature data, like a gradual increase every year. This is a deterministic trend: it follows a known, consistent pattern.

You can say that your data is stationary around a deterministic trend if the annual average temperature rises reliably, but the statistical characteristics of the fluctuations around that rise (such as their mean and variability) do not change significantly.

In simple terms, it’s like having a stable set of weather patterns (stationarity), but with a predictable trend running through them (such as an annual rise in temperature).

What distinguishes it from the ADF test? Stationarity comes down to two things: no systematic patterns (trends or seasonality), and a constant mean and variance. In the context of the ADF test, stationarity refers to the absence of a unit root in the time series. Refer to https://spillai.sites.umassd.edu/2023/11/17/day-2-with-with-tsfp-by-marco-peixeiro/ for a better understanding of unit roots and the ADF test. Essentially, the presence of a unit root is bad: it implies that the series has long-term memory, i.e. persistence in its random fluctuations.

Let’s consider 4 cases.

  1. Both ADF and KPSS say “Stationary”: This is the ideal scenario. The time series is stationary according to both tests.
  2. ADF says “Non-Stationary,” KPSS says “Stationary”: This is called trend-stationary. A trend-stationary time series is one that can be made stationary by removing a deterministic trend. The mean of the series changes over time due to a systematic, predictable trend, and that trend needs to be removed.
  3. ADF says “Stationary,” KPSS says “Non-Stationary”: This is called difference-stationary. A time series is difference-stationary if it becomes stationary after differencing. Remember how we applied second order differencing on flight data before our ADF test cleared us for stationarity?
  4. Both ADF and KPSS say “Non-Stationary”: Ouch. Both tests suggest the presence of a unit root or a trend that needs addressing.
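The four cases above can be folded into a small decision helper. This is a sketch of one way to combine the two p-values; the labels and the 0.05 threshold are our choices, not a standard API:

```python
def interpret_stationarity(adf_p, kpss_p, alpha=0.05):
    """Combine ADF and KPSS p-values into the four cases.

    ADF:  null = unit root (non-stationary); reject when adf_p < alpha.
    KPSS: null = stationary; reject when kpss_p < alpha.
    """
    adf_stationary = adf_p < alpha      # ADF rejects its unit-root null
    kpss_stationary = kpss_p >= alpha   # KPSS fails to reject stationarity

    if adf_stationary and kpss_stationary:
        return "stationary"             # case 1: both agree
    if not adf_stationary and kpss_stationary:
        return "trend-stationary"       # case 2: detrend the series
    if adf_stationary and not kpss_stationary:
        return "difference-stationary"  # case 3: difference the series
    return "non-stationary"             # case 4: both see a problem

# Example: ADF p = 0.30 (fails to reject the unit root), KPSS p = 0.10
# (fails to reject stationarity)
print(interpret_stationarity(0.30, 0.10))  # -> trend-stationary
```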

Keep in mind that the null hypothesis of the ADF test is that the time series has a unit root (non-stationarity), and you want to reject it, while the null hypothesis of the KPSS test is that the time series is stationary around a deterministic trend, which is what you want. When the ADF and KPSS tests produce contradictory results, there is no hard and fast rule for giving more weight to one test over the other. The decision depends on a number of things, including your judgement and the features of the data.

Allow me to return once again to our ‘Analyse Boston’ dataset called economic-indicators.

Even after first-order differencing, the ADF test indicates that the Logan International flight data is non-stationary, even though KPSS indicates that it is stationary. If KPSS suggests stationarity, the data may be stationary around a trend, and further differencing might not be necessary.

  • Some models, such as the autoregressive integrated moving average (ARIMA) model, are designed for non-stationary data and incorporate differencing directly.
  • Other models, such as the autoregressive (AR) and moving average (MA) models, presuppose stationarity and may require differencing beforehand.

Time series analysis is often an iterative process. To determine the optimal approach for your data, you may need to experiment with different transformations and model specifications.

Now comes the interesting part. If you remember, in our initial analysis of the flight data we kept differencing until the Augmented Dickey-Fuller (ADF) test indicated stationarity, which took second-order differencing, even though the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test already indicated stationarity after first-order differencing. The differenced data was then fitted with autoregressive (AR) and moving average (MA) models, with model orders chosen by minimising the Akaike Information Criterion (AIC). The resulting AR model had a Mean Absolute Error (MAE) of around 175, whereas the MA model had an MAE of around 275.

When the differencing was revisited and we stopped as soon as KPSS revealed stationarity, the AR model’s performance improved, giving a lower MAE of 98. The MA model’s performance, on the other hand, declined, resulting in an increased MAE of 350.

This change in model performance comes down to the adjustment in the differencing approach.

  • Initially, continuing to difference until ADF signalled stationarity most likely caused excessive differencing, damaging our models.
  • Following that, stopping the differencing at the point of KPSS-identified stationarity allowed for a more appropriate balance, resulting in enhanced AR model performance with a lower MAE of 98.
  • This adjustment, however, resulted in an increased MAE of 350 for the MA model, emphasising the significance of prudent differencing in time series modelling.
  • AR models record dependencies based on lag values, and over-differencing might impair the model’s capacity to recognise patterns in data.
  • The MA model might be responding to the reduced differencing by emphasising noise or fluctuations that were suppressed by the excessive differencing.

Sometimes, less differencing is more, but models can be a bit moody about it. You have to find that sweet spot. Stopping the differencing when one of the tests (KPSS) confirmed stationarity, and ending up with a better model, showed us that the original series was already close to stationarity and did not need further differencing.
