A Complete Introduction To Time Series Analysis (with R):: ARMA processes (Part I)
Perhaps one of the most famous and best-studied approaches to working with time series, and one still widely used today, is the ARMA(p,q) model and its derivatives. As you can guess, these essentially generalize the AR(1) and MA(1) processes that we have previously seen. Before we start, let's introduce some useful operators that will allow us to simplify our notation.
Autoregressive and Moving-average Operators
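Following the usual convention (sign conventions differ slightly between textbooks), given coefficients phi_1, ..., phi_p and theta_1, ..., theta_q, the autoregressive operator Phi and the moving-average operator Theta are the polynomials

Phi(z) = 1 - phi_1 z - phi_2 z^2 - ... - phi_p z^p,
Theta(z) = 1 + theta_1 z + theta_2 z^2 + ... + theta_q z^q.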
Simply put, these operators are nothing more than the polynomials defined above, evaluated at the backshift operator B (so that, for example, Phi(B) X_t = X_t - phi_1 X_{t-1} - ... - phi_p X_{t-p}). A quick sketch in R of applying such an operator to a series is given below; with that, we are fully equipped to define the ARMA process.
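The example below (the coefficient values are arbitrary, chosen only for illustration) builds the AR operator Phi(z) = 1 - 0.5 z + 0.25 z^2 and applies Phi(B) to a toy series using stats::filter as a one-sided convolution.

# Coefficients of a toy AR operator: phi_1 = 0.5, phi_2 = -0.25,
# i.e. Phi(z) = 1 - 0.5 z + 0.25 z^2
phi <- c(0.5, -0.25)

set.seed(1)
x <- rnorm(10)                                  # a small toy series

# Phi(B) x_t = x_t - 0.5 x_{t-1} + 0.25 x_{t-2},
# computed as a one-sided (past-values-only) convolution
phi_B_x <- stats::filter(x, filter = c(1, -phi), sides = 1)
phi_B_x                                         # first two entries are NA: not enough lagged values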
ARMA(p,q) processes
A stationary process {X_t} is said to be ARMA(p,q), denoted {X_t} ~ ARMA(p,q), if it satisfies, for all t:
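Phi(B) X_t = Theta(B) Z_t,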
where B is the backshift operator, Phi and Theta are the operators we defined above, and {Z_t} ~ WN(0, sigma^2) is a white noise sequence.
Clearly, we can also write the ARMA(p,q) process as
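X_t = phi_1 X_{t-1} + ... + phi_p X_{t-p} + Z_t + theta_1 Z_{t-1} + ... + theta_q Z_{t-q}.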
In this form, we can see that the process is modeled as depending not only on the noise terms from the past q steps, but also on the previous p observations.
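To make this concrete, here is a minimal sketch in R using the built-in arima.sim and arima functions from the stats package; the order (2,1) and the coefficient values below are arbitrary choices for illustration only.

# Simulate 500 observations from an ARMA(2,1) process:
# X_t = 0.5 X_{t-1} - 0.25 X_{t-2} + Z_t + 0.4 Z_{t-1},  with Z_t ~ N(0, 1)
set.seed(42)
x <- arima.sim(n = 500, model = list(ar = c(0.5, -0.25), ma = 0.4))

plot.ts(x, main = "Simulated ARMA(2,1) series")
acf(x)                                   # sample autocorrelation function

# Recover the coefficients by fitting an ARMA(2,1), i.e. ARIMA(2,0,1), model
fit <- arima(x, order = c(2, 0, 1))
fit

Note that arima.sim checks the AR coefficients before simulating and will raise an error if the autoregressive part is not stationary.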