Consider a time series, generated using

# simulate an AR(1) series X[t] = rho*X[t-1] + E[t], with rho = 0.8
set.seed(1)
E = rnorm(240)
X = rep(NA, 240)
rho = 0.8
X[1] = 0
for(t in 2:240){ X[t] = rho*X[t-1] + E[t] }
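As a quick sanity check (my own addition, not part of the original code), the autoregressive coefficient can be recovered from the simulated series by regressing X[t] on X[t-1]; the simulation is repeated here so the snippet is self-contained,

```r
set.seed(1)
E = rnorm(240); X = rep(NA, 240); X[1] = 0
for(t in 2:240) X[t] = 0.8*X[t-1] + E[t]
# OLS regression of X[t] on X[t-1], with no intercept
fit = lm(X[2:240] ~ 0 + X[1:239])
coef(fit)  # close to the true value 0.8
```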

The idea is to assume that an autoregressive model is appropriate, but that the value of the parameter is unknown. More precisely, we cannot decide whether the parameter equals one (in which case the series is integrated) or is strictly smaller than one (in which case the series is stationary).
Given the observed variance of the series, the higher the assumed
autocorrelation, the lower the implied variance of the noise, since
Var(X) = sigma^2/(1-rho^2) for a stationary AR(1).
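To illustrate this point on the simulated series (a check of my own, not in the original post): from Var(X) = sigma^2/(1-rho^2), the implied innovation standard deviation sd(X)*sqrt(1-rho^2) shrinks as the assumed rho grows,

```r
set.seed(1)
E = rnorm(240); X = rep(NA, 240); X[1] = 0
for(t in 2:240) X[t] = 0.8*X[t-1] + E[t]
# implied innovation sd for candidate values of rho,
# from Var(X) = sigma^2/(1-rho^2) under stationarity
s_implied = sapply(c(0.7, 0.8, 0.9, 0.99),
                   function(r) sd(X)*sqrt(1 - r^2))
s_implied  # strictly decreasing in rho
```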

rhoest = 0.9; H = 260            # assumed autocorrelation, forecast horizon
u = 241:(240+H)                  # forecast dates
P = X[240]*rhoest^(1:H)          # point forecast from the last observation
# implied innovation sd; the truncated sum approximates rhoest^2/(1-rhoest^2)
s = sqrt(1/(sum((rhoest^(2*(1:300))))))*sd(X)
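A side note on the truncated sum (my own check, not from the original text): for rhoest < 1 it is numerically indistinguishable from the closed form rhoest^2/(1-rhoest^2), but unlike the closed form it stays finite at rhoest = 1, which is what lets the same code handle the integrated case,

```r
rhoest = 0.9
approx_sum = sum(rhoest^(2*(1:300)))   # truncated geometric sum
closed     = rhoest^2/(1 - rhoest^2)   # closed form, essentially the same value
unit_sum   = sum(1^(2*(1:300)))        # at rhoest = 1 the sum is simply 300
```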

Now that we have a model, consider the following forecast, together with a prediction interval,

plot(1:240, X, xlab="", xlim=c(0, 240+H),
     ylim=c(-9.25, 9), ylab="", type="l")
# h-step forecast variance: s^2 * sum of rhoest^(2j) for j = 0,...,h-1
V = s^2*cumsum(rhoest^(2*(0:(H-1))))
# 95% and 90% prediction bands, then the point forecast
polygon(c(u, rev(u)), c(P+1.96*sqrt(V),
        rev(P-1.96*sqrt(V))), col="yellow", border=NA)
polygon(c(u, rev(u)), c(P+1.64*sqrt(V),
        rev(P-1.64*sqrt(V))), col="orange", border=NA)
lines(u, P, col="red")

Such forecasts can be derived for any plausible autoregressive
coefficient, from 0.7 up to 1. In other words, we can choose to model
the time series either as a stationary or as an integrated series.

As we can see above, the shape of the prediction can be quite different
depending on whether we assume that the series is stationary (parameter
strictly smaller than 1) or integrated (parameter equal to 1). So yes,
assuming an integrated model is a big deal, since it has a strong impact
on predictions.
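To make the contrast concrete (a sketch of my own, using the known innovation variance of the simulation rather than an estimate): with a stationary coefficient the point forecast decays towards zero and the forecast variance converges to a finite limit, while with a unit root the forecast stays flat at the last observation and the variance grows linearly in the horizon,

```r
set.seed(1)
E = rnorm(240); X = rep(NA, 240); X[1] = 0
for(t in 2:240) X[t] = 0.8*X[t-1] + E[t]
H = 260
P_stat = X[240]*0.9^(1:H)            # decays geometrically towards 0
P_int  = rep(X[240], H)              # unit root: flat forecast
V_stat = cumsum(0.9^(2*(0:(H-1))))   # bounded above by 1/(1-0.81)
V_int  = 1:H                         # grows without bound
```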