A Complete Introduction To Time Series Analysis (with R): ARMA processes (Part I)
Perhaps one of the most famous and best-studied approaches to working with time series, and one still widely used today, is the family of ARMA(p,q) models and their derivatives. As you can guess, these essentially generalize the AR(1) and MA(1) processes that we have previously seen. Before we start, let's introduce some useful operators that will allow us to simplify our notation.
Autoregressive and Moving-average Operators
The autoregressive operator Phi and the moving-average operator Theta are defined as

Phi(B) = 1 - phi_1 B - phi_2 B^2 - ... - phi_p B^p,

Theta(B) = 1 + theta_1 B + theta_2 B^2 + ... + theta_q B^q,

where B is the backshift operator, B X_t = X_{t-1}. Simply put, these operators are no more than polynomials in B. Now, we are fully equipped to define the ARMA process.
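To make this concrete, here is a minimal R sketch (not from the original article) showing that applying Phi(B) to a series is just a linear filter; the series x and the coefficient phi below are hypothetical, chosen only for illustration.

```r
# Apply Phi(B) = 1 - 0.5*B to a series x using stats::filter().
# With sides = 1 and method = "convolution", the filter coefficients
# act on the current and past values: y_t = x_t - phi * x_{t-1}.
set.seed(1)
x   <- rnorm(100)   # placeholder series, purely illustrative
phi <- 0.5          # hypothetical AR(1) coefficient
phi_B_x <- stats::filter(x, filter = c(1, -phi),
                         sides = 1, method = "convolution")
head(phi_B_x)       # first entry is NA because x_0 is not observed
```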
ARMA(p,q) processes
A stationary process {X_t} is said to be ARMA(p,q), denoted {X_t} ~ ARMA(p,q), if it satisfies, for all t:

Phi(B) X_t = Theta(B) Z_t, with {Z_t} ~ WN(0, sigma^2),

where B is the backshift operator, and Phi and Theta are the operators we defined above.
Clearly, we can also write the ARMA(p,q) process as

X_t = phi_1 X_{t-1} + ... + phi_p X_{t-p} + Z_t + theta_1 Z_{t-1} + ... + theta_q Z_{t-q}.

In this form, we can see that our process is modeled as depending not only on the noise up to q steps in the past, but also on the p previous observations. We will switch between the two notations and use the operator expression when convenient. You can verify that both expressions are indeed equivalent!
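As a quick illustration, base R's arima.sim() can simulate such a process; the ARMA(2,1) coefficients below are hypothetical, picked only to produce a stationary example.

```r
# Simulate 500 observations from an ARMA(2,1) process:
# X_t = 0.5*X_{t-1} - 0.25*X_{t-2} + Z_t + 0.4*Z_{t-1}, Z_t ~ WN(0, 1)
set.seed(42)
sim <- arima.sim(model = list(ar = c(0.5, -0.25), ma = 0.4),
                 n = 500, sd = 1)
plot.ts(sim, main = "Simulated ARMA(2,1)")
```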
AR(p) and MA(q) processes
Now that we understand the shape of an ARMA(p,q) process, we can immediately see two trivial special cases: setting q = 0 gives the AR(p) process, Phi(B) X_t = Z_t, while setting p = 0 gives the MA(q) process, X_t = Theta(B) Z_t.

In other words,
- The AR(p) process means that, in addition to X_{t} being explained by some random noise, all the covariance/correlation in the process can be explained by the p previous lags.
- The MA(q) process says that X_{t} depends not only on the current noise, but also on past noise.
- The ARMA(p,q) model implies that even after we account for the dependence on previous observations, there is still a dependence on past noise (the sketch below contrasts these cases).
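A quick way to see these behaviours in practice is to simulate each process and inspect its sample ACF and PACF; this is a sketch with hypothetical coefficients, relying on the standard facts that an MA(q) ACF cuts off after lag q while an AR(p) PACF cuts off after lag p.

```r
# Contrast AR(1) and MA(1) through their ACF/PACF signatures.
set.seed(7)
ar_sim <- arima.sim(model = list(ar = 0.7), n = 1000)  # AR(1)
ma_sim <- arima.sim(model = list(ma = 0.7), n = 1000)  # MA(1)

op <- par(mfrow = c(2, 2))
acf(ar_sim,  main = "AR(1): ACF decays geometrically")
pacf(ar_sim, main = "AR(1): PACF cuts off after lag 1")
acf(ma_sim,  main = "MA(1): ACF cuts off after lag 1")
pacf(ma_sim, main = "MA(1): PACF decays")
par(op)
```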
Shifted ARMA processes