
Saturday, 9 September 2017

Johansen Cointegration Test with EViews



In the previous discussion we showed how to carry out a cointegration test using what is called the Engle and Granger test. That test has the advantage of being intuitive and easy to perform, and once we master it we also come to see its limitations and why other tests exist. There are drawbacks to the Engle and Granger procedure. First, since the test involves an ADF test in the second step, all the problems of the ADF test apply here as well; in particular, choosing the number of lags in the augmentation is a critical factor. Second, the test is based on the assumption of one cointegrating vector, captured by the cointegrating regression. Thus, care must be taken when applying the test to models with more than two variables. If two variables cointegrate, adding a third integrated variable to the model will not change the outcome of the test: if the third variable does not belong in the cointegrating vector, OLS estimation will simply set its parameter to zero, leaving the error process unchanged. The advantage of the procedure is that it is easy, and therefore relatively costless to apply compared with other approaches, especially in two-variable models, where it can work quite well.
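
For readers who want to see the two-step procedure in code, below is a minimal sketch in Python with statsmodels, run on simulated data. The data-generating process, sample size and seed are invented purely for illustration, and note that the MacKinnon critical values reported by a plain ADF routine are not strictly the correct ones for a residual-based cointegration test.

# Engle-Granger two-step sketch on simulated data (illustrative only).
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
T = 200
x = np.cumsum(rng.normal(size=T))            # an I(1) series
y = 0.8 * x + rng.normal(scale=0.5, size=T)  # cointegrated with x by construction

# Step 1: estimate the cointegrating regression y_t = a + b*x_t + u_t by OLS.
step1 = sm.OLS(y, sm.add_constant(x)).fit()
resid = step1.resid

# Step 2: ADF test on the residuals; the lag choice in the augmentation matters.
adf_stat, pvalue, *_ = adfuller(resid, autolag="AIC")
print(adf_stat, pvalue)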

A superior test for cointegration is Johansen's test (Johansen, 1995). Its weakness is that it relies on asymptotic properties and is sensitive to specification errors in limited samples.
The method starts with a VAR representation of the variables (the economic system we would like to investigate).

We have a \(p\)-dimensional process, integrated of order \(d\), \({{x}_{t}}\sim I\left( d \right)\), with the VAR representation

                \({{\text{x}}_{t}}=\text{v}+{{\text{A}}_{1}}{{\text{x}}_{t-1}}+...+{{\text{A}}_{k}}{{\text{x}}_{t-k}}+{{\varepsilon }_{t}}\)                                        (1)

Typically, we will assume that the system is integrated of order one.
By using the difference operator \(\Delta =1-L\), or \(L=1-\Delta \), the VAR in levels can be transformed into a vector error correction model (VECM).

\(\Delta {{x}_{t}}=v+{{\Gamma }_{1}}\Delta {{x}_{t-1}}+...+{{\Gamma }_{k-1}}\Delta {{x}_{t-k+1}}+\Pi {{x}_{t-1}}+{{\varepsilon }_{t}}\)              (2)

where the \({{\Gamma }_{i}}\)'s and \(\Pi \) are matrices of parameters. The lag length in the VAR is \(k\) lags on each variable.
After transforming the model, using \(L=1-\Delta \), we 'lose' one lag at the end, leading to \(k-1\) lags in the VECM.
In more compact form, the VECM becomes
\(\Delta {{\text{x}}_{t}}=\text{v}+\sum\limits_{i=1}^{k-1}{{{\Gamma }_{i}}\Delta {{\text{x}}_{t-i}}}+\Pi {{x}_{t-1}}+{{\varepsilon }_{t}}\)                                               (3)
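
As a concrete check of this transformation, take \(k=2\). Starting from \({{\text{x}}_{t}}=\text{v}+{{\text{A}}_{1}}{{\text{x}}_{t-1}}+{{\text{A}}_{2}}{{\text{x}}_{t-2}}+{{\varepsilon }_{t}}\) and subtracting \({{\text{x}}_{t-1}}\) from both sides gives \(\Delta {{\text{x}}_{t}}=\text{v}+\left( {{\text{A}}_{1}}+{{\text{A}}_{2}}-I \right){{\text{x}}_{t-1}}-{{\text{A}}_{2}}\Delta {{\text{x}}_{t-1}}+{{\varepsilon }_{t}}\), so that \(\Pi ={{\text{A}}_{1}}+{{\text{A}}_{2}}-I\) and \({{\Gamma }_{1}}=-{{\text{A}}_{2}}\). In general \(\Pi =\sum\nolimits_{i=1}^{k}{{{\text{A}}_{i}}}-I\) and \({{\Gamma }_{i}}=-\sum\nolimits_{j=i+1}^{k}{{{\text{A}}_{j}}}\), which is why one lag is 'lost' in the VECM.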

The number of cointegrating vectors is identical to the number of stationary relationships in the \(\Pi \)-matrix. If there is no cointegration, all rows in \(\Pi \) must be filled with zeros. If there are stationary combinations, or stationary variables, some parameters in \(\Pi \) will be nonzero.

The rank of the \(\Pi \) matrix determines the number of independent rows in \(\Pi \), and therefore the number of cointegrating vectors. The rank of \(\Pi \) is given by the number of significant eigenvalues of \(\hat{\Pi }\). Each significant eigenvalue represents a stationary relation.

From Eq (3), the test for cointegration proceeds as follows:

·         If rank \(\Pi =0\), all the \(\text{x}\)'s are non-stationary. There is no linear combination of the variables that leads to stationarity.
·         If rank \(\Pi =p\), so that \(\Pi \) has full rank, then all variables in \({{\text{x}}_{t}}\) must be stationary.
·         If \(\Pi \) has reduced rank, \(0<r<p\), there are \(r\) cointegrating relations among the \(\text{x}\)'s. The cointegrating vectors are given by \(\Pi =\alpha \beta '\), where \({{\beta }_{i}}\) represents the i-th cointegrating vector and \({{\alpha }_{j}}\) represents the effect of each cointegrating vector on the \(\Delta {{x}_{p,t}}\) variables in the model (a small numerical illustration follows after this list).
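
As a small numerical illustration (the numbers here are made up), consider a two-variable system with \(\Pi =\left( \begin{matrix} -0.5 & 0.5  \\ 0.25 & -0.25  \\ \end{matrix} \right)=\left( \begin{matrix} -0.5  \\ 0.25  \\ \end{matrix} \right)\left( \begin{matrix} 1 & -1  \\ \end{matrix} \right)=\alpha \beta '\). The second row of \(\Pi \) is a multiple of the first, so the rank is one: there is a single cointegrating relation \(\beta '{{\text{x}}_{t}}={{x}_{1,t}}-{{x}_{2,t}}\), and \(\alpha \) tells us that the first variable adjusts downwards (at speed 0.5) and the second upwards (at speed 0.25) whenever this combination drifts away from equilibrium.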

Johansen derived two tests, the \(\lambda -\max \) (or maximum eigenvalue) test and the \(\lambda -\text{trace}\) (or trace) test.

The max test is constructed as
\({{\lambda }_{\max }}\left[ {{H}_{1}}\left( r \right)|{{H}_{1}}\left( r+1 \right) \right]=-T\log \left( 1-{{{\hat{\lambda }}}_{r+1}} \right)\)                               (4)

for \(r=0,1,2,...,p-2,p-1\). The null is that there exist \(r\) cointegrating vectors against the alternative of \(r+1\) vectors.

The trace test is
\({{\lambda }_{\text{trace}}}\left[ {{H}_{1}}\left( r \right)|{{H}_{0}} \right]=-T\sum\limits_{i=r+1}^{p}{\log \left( 1-{{{\hat{\lambda }}}_{i}} \right)}\)                              (5)

where the null hypothesis is \({{\lambda }_{i}}=0\) for \(i>r\), so that only the first \(r\) eigenvalues are non-zero.

It has been found that the trace test is the better of the two, since it appears to be more robust to skewness and excess kurtosis. Furthermore, the trace test can be adjusted for degrees of freedom, which can be important in small samples, by replacing \(T\) in the trace statistic with \(T-nk\) (Reimers, 1992).
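
As a hypothetical numerical illustration (the eigenvalues and sample size below are invented), suppose \(p=3\), \(T=80\) and the estimated eigenvalues are \({{\hat{\lambda }}_{1}}=0.25\), \({{\hat{\lambda }}_{2}}=0.12\) and \({{\hat{\lambda }}_{3}}=0.02\). Then \({{\lambda }_{\text{trace}}}\left( 0 \right)=-80\left[ \log \left( 0.75 \right)+\log \left( 0.88 \right)+\log \left( 0.98 \right) \right]\approx 34.9\), \({{\lambda }_{\text{trace}}}\left( 1 \right)=-80\left[ \log \left( 0.88 \right)+\log \left( 0.98 \right) \right]\approx 11.8\) and \({{\lambda }_{\max }}\left( 0 \right)=-80\log \left( 0.75 \right)\approx 23.0\). Compared with the 5% critical values used later in this post (29.68 for \(r=0\) and 15.41 for \(r\le 1\)), the trace test would reject \(r=0\) but not \(r\le 1\), pointing to one cointegrating vector. With the Reimers correction and, say, \(n=3\) variables and \(k=3\) lags, \(T\) is replaced by \(T-nk=80-9=71\), which shrinks each statistic proportionally.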

Deterministic trends in a cointegrated VECM can stem from two distinct sources: the mean of the cointegrating relationship and the mean of the differenced series.

Allowing for a constant and a linear trend and assuming that there are \(r\) cointegrating relations, we can rewrite the VECM in (3) as
\(\Delta {{\text{x}}_{t}}=\text{v}+\sum\limits_{i=1}^{k-1}{{{\Gamma }_{i}}\Delta {{\text{x}}_{t-i}}}+\alpha \beta '{{\text{x}}_{t-1}}+\delta t+{{\varepsilon }_{t}}\)                                                              (6)


where \(\delta \) is a \(K\times 1\) vector of parameters. Because Eq (6) is specified in differences of the data, the constant implies a linear time trend in the levels, and the time trend \(\delta t\) implies a quadratic time trend in the levels of the data. VECMs exploit the properties of the matrix \(\alpha \) to achieve flexibility in how these deterministic terms enter.
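To see why, note that a constant in the differences accumulates into a linear trend in levels (if \(\Delta {{x}_{t}}=c\) then \({{x}_{t}}={{x}_{0}}+ct\)), while a linear trend in the differences accumulates into a quadratic one (if \(\Delta {{x}_{t}}=\delta t\) then \({{x}_{t}}={{x}_{0}}+\delta t\left( t+1 \right)/2\)).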
Because \(\alpha \) is a \(K\times r\) matrix of rank \(r\), we can rewrite the deterministic components in Eq (6) as

                \(\text{v = }\alpha \mu \text{+}\gamma \)                                                                                          (7)
                \(\delta t=\alpha \rho t+\tau t\)                                                                                               (8)

where \(\mu \) and \(\rho \) are \(r\times 1\) vectors of parameters and \(\gamma \) and \(\tau \) are \(K\times 1\) vectors of parameters. \(\gamma \) is orthogonal to \(\alpha \mu \) and \(\tau \) is orthogonal to \(\alpha \rho \), such that \(\gamma '\alpha \mu =0\) and \(\tau '\alpha \rho =0\).

With this decomposition, Eq (6) can be rewritten as the VECM below:

\(\Delta {{\text{x}}_{t}}=\sum\limits_{i=1}^{k-1}{{{\Gamma }_{i}}\Delta {{\text{x}}_{t-i}}}+\alpha \left( \beta '{{\text{x}}_{t-1}}+\mu +\rho t \right)+\gamma +\tau t+{{\varepsilon }_{t}}\)                        (9)


Placing restrictions on the trend terms in Eq (9) yields five cases.

Case 1: No trend, \(\tau =0\), \(\rho =0\), \(\gamma =0\) and \(\mu =0\). The level data \({{\text{x}}_{t}}\) have no deterministic trends and the cointegrating equations do not have intercepts:

\(\Delta {{\text{x}}_{t}}=\sum\limits_{i=1}^{k-1}{{{\Gamma }_{i}}\Delta {{\text{x}}_{t-i}}}+\alpha \beta '{{\text{x}}_{t-1}}+{{\varepsilon }_{t}}\)                                                                                (10)

This model is uninteresting because it assumes that all variables in the cointegrating vectors have the same mean.

Case 2: Restricted constant, \(\tau =0\), \(\rho =0\) and \(\gamma =0\). The level data \({{\text{x}}_{t}}\) have no deterministic trends and the cointegrating equations have intercepts:

\(\Delta {{\text{x}}_{t}}=\sum\limits_{i=1}^{k-1}{{{\Gamma }_{i}}\Delta {{\text{x}}_{t-i}}}+\alpha \left( \beta '{{\text{x}}_{t-1}}+\mu  \right)+{{\varepsilon }_{t}}\)                                                               (11)

Case 3: Unrestricted constant, \(\tau =0\) and \(\rho =0\). The level data \({{\text{x}}_{t}}\) have linear trends but the cointegrating equations have only intercepts:

\(\Delta {{\text{x}}_{t}}=\sum\limits_{i=1}^{k-1}{{{\Gamma }_{i}}\Delta {{\text{x}}_{t-i}}}+\alpha \left( \beta '{{\text{x}}_{t-1}}+\mu  \right)+\gamma +{{\varepsilon }_{t}}\)                                                  (12)

Case 4: Restricted trend, \(\tau =0\). The level data \({{\text{x}}_{t}}\) and the cointegrating equations have linear trends:

\(\Delta {{\text{x}}_{t}}=\sum\limits_{i=1}^{k-1}{{{\Gamma }_{i}}\Delta {{\text{x}}_{t-i}}}+\alpha \left( \beta '{{\text{x}}_{t-1}}+\mu +\rho t \right)+\gamma +{{\varepsilon }_{t}}\)                                       (13)

In practice, this is a model of last resort. If no meaningful cointegrating vectors are found using Case 2 or Case 3, a trend component in the vectors might do the trick. Having a trend in the cointegrating vectors can be understood as a kind of trending target (e.g. productivity growth, technological development, etc.) which the model cannot otherwise account for.

Case 5: Unrestricted trend. The level data \({{\text{x}}_{t}}\) have quadratic trends and the cointegrating equations have linear trends:

\(\Delta {{\text{x}}_{t}}=\sum\limits_{i=1}^{k-1}{{{\Gamma }_{i}}\Delta {{\text{x}}_{t-i}}}+\alpha \left( \beta '{{\text{x}}_{t-1}}+\mu +\rho t \right)+\gamma +\tau t+{{\varepsilon }_{t}}\)                                       (14)

which is simply the unrestricted model in Eq (9).
This model is quite unrealistic and should not be considered in applied work. The reason is the difficulty of motivating quadratic trends in a multivariate model. For example, from an economic point of view, it is totally unrealistic to assume that technological or productivity growth is an ever-expanding process.


Estimating Using EViews

For our discussion in EViews, we will use the macro2 data.

Let us now test for cointegration between linv (log of investment), linc (log of income) and lcons (log of consumption).
Before we go through the test, let us first look at a plot of these three series.

Select the icons for linv, linc and lcons simultaneously, then right-click and select Open > as Group.

Save the group by clicking Object\Name…

 

Let us name it group01, and then click OK.


From the group window group01, on the window bar, click View\Graph…

 


From the Graph Options window, select
Graph Type – Basic Type
General: Basic graph
Specific: Line & Symbol

And then, click OK.
 
 



The graph of the linv, linc and lcons series shows strong trends, and the series appear to move together.
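
If you want to reproduce this group plot outside EViews, a minimal Python sketch is shown below. It assumes the three series have been exported from the macro2 workfile to a CSV file; the file name macro2.csv, the date index and the column names are assumptions, so adjust them to match your export.

# Plot the three log series together (file and column names are assumptions).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("macro2.csv", index_col=0, parse_dates=True)
df[["linv", "linc", "lcons"]].plot(title="linv, linc and lcons")
plt.show()
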
Perform a unit root test for each variable, linv, linc and lcons, to make sure that all the variables are I(1).

To perform the ADF test for the linv series, click the linv icon. On the Series: LINV taskbar, select View\Unit Root Test…

 


In the Unit Root Test window, select
Test type : Augmented Dickey-Fuller
Test for unit root in : Level
Include in test equation : Trend and intercept
Lag length : Automatic selection – Akaike Information Criterion
and then click OK.

 

The results show that a unit root exists in the level form of the linv series.
To test linv in first-difference form, we follow the same steps as before, but in the Unit Root Test window we now select:

Test for unit root in : 1st difference

 


and then click OK.

 


The results show that the linv series is stationary in first differences at the 10% significance level.

Now, we do the same steps for the linc and lcons series.

The results for the linc series in level form:

 
and for the linc series in first differences:

 

The results for the lcons series in level form:

 

and for the lcons series in first differences:

 


The unit root tests for the linv, linc and lcons series show that the variables are non-stationary in level form but stationary in first differences at the 10% significance level.
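
The same battery of ADF tests can be cross-checked in Python with statsmodels, as a rough sketch; the file and column names are the same assumptions as before, and small differences from the EViews output (lag selection, sample adjustment) are to be expected.

# ADF tests in levels and first differences, mirroring the dialog settings
# (trend and intercept, lag length chosen by AIC).
import pandas as pd
from statsmodels.tsa.stattools import adfuller

df = pd.read_csv("macro2.csv", index_col=0)

for name in ["linv", "linc", "lcons"]:
    s = df[name].dropna()
    stat_lvl, p_lvl, *_ = adfuller(s, regression="ct", autolag="AIC")
    stat_dif, p_dif, *_ = adfuller(s.diff().dropna(), regression="ct", autolag="AIC")
    print(f"{name}: level ADF = {stat_lvl:.2f} (p = {p_lvl:.3f}), "
          f"1st diff ADF = {stat_dif:.2f} (p = {p_dif:.3f})")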

Having satisfied ourselves that each series, namely linv, linc and lcons, is I(1) based on the ADF tests, we can now proceed to the Johansen cointegration test.

Before we perform the cointegration test, we first need to select an appropriate lag order for the VAR using information criteria. That means we need to estimate the VAR model first.

To do this, click the group01 icon again. On the taskbar, click Proc\Make Vector Autoregression…


 


In the VAR Specification window, under the Basics tab, select:

VAR type                             : Unrestricted VAR
Endogenous Variables     : linv linc lcons
Lag Intervals for Endogenous : 1 2

and then click OK.

We can see the results for the VAR, but we are not interested in those at this stage.

Save the VAR output by clicking Object\Name…




Let us name it var01, and then click OK.

To obtain the lag order for our VAR, click the var01 icon again, and on the taskbar click View\Lag Structure > Lag Length Criteria…

 


and then click OK.

 

 


The results for the lag order selection criteria will appear; EViews provides five lag order criteria to choose from. Based on the AIC, the lag order we choose is 3 lags.
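
For a cross-check of the lag length criteria, the equivalent step in Python with statsmodels is sketched below; maxlags=8 is an arbitrary upper bound, the file and column names remain assumptions, and statsmodels reports AIC, BIC, FPE and HQIC rather than the exact set of five criteria shown by EViews.

# VAR lag-order selection by information criteria.
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("macro2.csv", index_col=0)[["linv", "linc", "lcons"]].dropna()

sel = VAR(df).select_order(maxlags=8)
print(sel.summary())   # criteria for each candidate lag
print(sel.aic)         # the lag order preferred by the AIC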

Now, to perform the Johansen cointegration test for the variables linv, linc and lcons, click the group01 icon, and on the taskbar click View\Cointegration Test > Johansen System Cointegration Test…


 



In the Johansen Cointegration Test window, EViews gives us options for the specification of the cointegration test we want to use:

1)      Case 1: No trend. The level data \({{\text{x}}_{t}}\) have no deterministic trends and the cointegrating equations do not have intercepts.

2)      Case 2: Restricted constant. The level data \({{\text{x}}_{t}}\) have no deterministic trends and the cointegrating equations have intercepts.

3)      Case 3: Unrestricted constant. The level data \({{\text{x}}_{t}}\) have linear trends but the cointegrating equations have only intercepts.

4)      Case 4: Restricted trend. The level data \({{\text{x}}_{t}}\) and the cointegrating equations have linear trends.

5)      Case 5: Unrestricted trend. The level data \({{\text{x}}_{t}}\) have quadratic trends and the cointegrating equations have linear trends.

Let us now perform the cointegration test for the variables using Case 3.
In the Johansen Cointegration Test window, select:

Deterministic trend assumption of test 
3)  Intercept (no trend) in CE and test VAR

Lag intervals       : 1 3
Critical Values    : Osterwald-Lenum

and then click OK.
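
For reference, a rough Python equivalent of this Case 3 specification uses coint_johansen from statsmodels, where det_order=0 corresponds to an unrestricted constant and k_ar_diff=3 is meant to mirror the 1 3 lag interval (lags of the differenced terms). The file and column names are still assumptions, and the tabulated critical values may differ slightly from the Osterwald-Lenum values reported by EViews.

# Johansen cointegration test (unrestricted constant, 3 lagged differences).
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

df = pd.read_csv("macro2.csv", index_col=0)[["linv", "linc", "lcons"]].dropna()

res = coint_johansen(df, det_order=0, k_ar_diff=3)
print("trace statistics:       ", res.lr1)   # one statistic per null r = 0, 1, 2
print("trace critical values:  ", res.cvt)
print("max-eig statistics:     ", res.lr2)
print("max-eig critical values:", res.cvm)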

 



The upper panel is for \({{\lambda }_{\text{trace}}}\)   and the lower panel is for \({{\lambda }_{\text{max}}}\).

The \({{\lambda }_{\text{trace}}}\) statistic at \(r=0\) of 33.248 exceeds its critical value of 29.68 at the 5% level, so we can reject the null hypothesis of no cointegrating equations. But at \(r=1\), the \({{\lambda }_{\text{trace}}}\) value of 11.021 is less than its critical value of 15.41 at the 5% level, which means we fail to reject the null hypothesis that there is at most one cointegrating equation.

This means that the Johansen test based on \({{\lambda }_{\text{trace}}}\) suggests that there is one cointegrating relationship between the variables linv, linc and lcons.

A similar result is obtained for \({{\lambda }_{\text{max}}}\) in the lower panel. The \({{\lambda }_{\text{max}}}\) value for \(r=0\) is 22.227, which exceeds its critical value of 20.97 at the 5% level, so we can reject the null hypothesis of no cointegrating equations. But at \(r=1\), the \({{\lambda }_{\text{max}}}\) value of 9.124 is less than its critical value of 14.04 at the 5% level, which means we fail to reject the null hypothesis that there is at most one cointegrating equation.

This means that, just as with the \({{\lambda }_{\text{trace}}}\) test, the \({{\lambda }_{\text{max}}}\) test suggests that there is one cointegrating relationship between the variables linv, linc and lcons.
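
Since both tests point to a single cointegrating relation, a natural next step is to estimate the VECM with rank one. A minimal sketch in Python is given below; deterministic="co" puts an unrestricted constant outside the cointegrating relation (roughly Case 3), k_ar_diff=3 matches the lag interval used above, and the file and column names are the same assumptions as before.

# Estimate a rank-1 VECM for linv, linc and lcons.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

df = pd.read_csv("macro2.csv", index_col=0)[["linv", "linc", "lcons"]].dropna()

model = VECM(df, k_ar_diff=3, coint_rank=1, deterministic="co")
print(model.fit().summary())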
