Theoretical Economics Letters
Vol. 08, No. 06 (2018), Article ID: 83936, 22 pages
DOI: 10.4236/tel.2018.86083

On the Estimation of Causality in a Bivariate Dynamic Probit Model on Panel Data with Stata Software: A Technical Review

Richard Moussa1,2*, Eric Delattre1

1ThEMA, Université de Cergy-Pontoise, Cergy-Pontoise, France

2ENSEA, Abidjan, Côte d'Ivoire

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 25, 2018; Accepted: April 20, 2018; Published: April 24, 2018

ABSTRACT

In order to assess causality between binary economic outcomes, we consider the estimation of a bivariate dynamic probit model on panel data that accounts for the initial conditions of the dynamic process. Because the likelihood function is a two-dimensional integral with no tractable closed form, we use an approximation method: the adaptive Gauss-Hermite quadrature method. To improve the accuracy of the method and to reduce computing time, we derive the gradient of the log-likelihood and the Hessian of the integrand. The estimation method has been implemented using the d1 method of the Stata software. We validate our estimation method empirically by applying it to a simulated data set. We also analyze the impact of the number of quadrature points on the estimates and on the duration of the estimation process. We conclude that, beyond 16 quadrature points on our simulated data set, the relative differences in the estimated coefficients are around 0.01% while the computing time grows exponentially.

Keywords:

Causality, Bivariate Dynamic Probit, Gauss-Hermite Quadrature, Simulated Likelihood, Gradient, Hessian

1. Introduction

Testing Granger causality has generated a large body of literature. Most of this literature concerns the case of continuous dependent variables. For binary outcomes, the causality problem can also be considered. As described by [1] for a vector of dependent variables, first-order Granger causality can be analysed as conditional independence of probabilities given a set of exogenous variables and the first-order lagged dependent variables. For a binary outcome in the dependent vector, one can use a probit probability, which implies the use of latent variables.

In the panel data case, since the one-way fixed-effects model estimated on a finite sample necessarily has inconsistent estimators [2], the random-effects model is used. Because we aim to test for first-order Granger causality, lagged dependent variables are included as explanatory variables. For the first wave of the panel, we do not observe previous values of the dependent variables, and treating them carelessly or as exogenous leads to inconsistent estimators [2]. We therefore specify another equation for the initial conditions, as described by [3]. This equation is allowed to have explanatory variables and idiosyncratic error terms that differ from those of the dynamic equation.

This specification leads to a likelihood function with an intractable form, a two-dimensional integral, together with a large set of parameters to be estimated. Its estimation requires a numerical approximation method such as maximum simulated likelihood (see [4] for more details) or Gauss-Hermite quadrature (see [5] [6] [7]).

The main goal of this paper is to propose and test a method for estimating a two-equation system with binary dependent variables in a panel data framework. To the best of our knowledge, no program exists for this purpose, especially since we provide the calculation of the Hessian matrix and the gradient vector of our maximization program.

In this paper, we discuss the problem of testing Granger causality with a bivariate dynamic probit model that takes the initial conditions into account. The paper is organized as follows. In Section 2, we explain the causality test method for a bivariate probit model with panel data. In Section 3, we describe the estimation methods available when the likelihood function has an intractable form (a two-dimensional integral in our case). Section 4 presents the calculation of the gradient with respect to the model parameters and of the Hessian matrix with respect to the random effects vector. In Section 5, we present a robustness analysis of the selected estimation method based on simulations1.

2. Testing Causality with a Bivariate Dynamic Probit Model

This section describes the causality testing method for binary variables. We start by presenting the general approach in time series before introducing the panel data case. We end the section with a discussion of the initial conditions problem.

2.1. Testing Causality: General Approach

The concept of causality was introduced by [8] as the better predictability of a variable Y when its lagged values, the lagged values of another variable Z, and some controls X are used. In his paper, [8] distinguishes instantaneous causality, which means that Zt causes Yt (including Zt in the model improves the predictability of Yt), from lag causality, which means that lagged values of Z improve the predictability of Yt. In this section, we rule out instantaneous causality and deal with lag causality of one period.

One-period Granger causality can be rephrased in terms of conditional independence. Without loss of generality, we present the univariate case for time series. Let $Y_t$ and $Z_t$ denote the dependent variables and $X_t$ a set of control variables. One-period Granger non-causality from Z to Y is the conditional independence of $Y_t$ from $Z_{t-1}$ given $X_t$ and $Y_{t-1}$. More precisely, Granger non-causality from Z to Y is:

$$f(Y_t \mid Y_{t-1}, X_t, Z_{t-1}) = f(Y_t \mid Y_{t-1}, X_t) \quad (1)$$

Note that the same kind of relationship can be written for Granger non-causality from Y to Z. As $Y_t$ and $Z_t$ are binary outcome variables, we can use latent variables ($Y^*$ and $Z^*$ respectively) and assume that Y and Z have positive outcomes (equal to 1) if their latent variables are positive. The latent variables are defined as follows:

For the left side of Equation (1) ($f(Y_t \mid Y_{t-1}, X_t, Z_{t-1})$):

$$Y_t^* = X_t \beta_1 + \delta_{11} Y_{t-1} + \delta_{12} Z_{t-1} + \epsilon_t^1 \quad (2)$$

$$Z_t^* = X_t \beta_2 + \delta_{21} Y_{t-1} + \delta_{22} Z_{t-1} + \epsilon_t^2 \quad (3)$$

For the right side of Equation (1) ($f(Y_t \mid Y_{t-1}, X_t)$):

$$Y_t^* = X_t \beta_1 + \delta_{11} Y_{t-1} + \epsilon_t^1 \quad (4)$$

$$Z_t^* = X_t \beta_2 + \delta_{22} Z_{t-1} + \epsilon_t^2 \quad (5)$$

where

$$\begin{pmatrix} \epsilon^1 \\ \epsilon^2 \end{pmatrix} \sim N(0, \Sigma_\epsilon) \quad \text{with} \quad \Sigma_\epsilon = \begin{pmatrix} 1 & \rho_\epsilon \\ \rho_\epsilon & 1 \end{pmatrix}$$

To fit the joint distribution of Y and Z conditional on X (meaning that we estimate a bivariate model), we need to consider the four possible outcomes, namely $(Y = Z = 1)$, $(Y = Z = 0)$, $(Y = 1; Z = 0)$ and $(Y = 0; Z = 1)$. For each of these outcomes, we have:

$$P(Y_t = 1, Z_t = 1 \mid X_t) = P(\epsilon_t^1 > -X_t\beta_1 - \delta_{11}Y_{t-1} - \delta_{12}Z_{t-1},\ \epsilon_t^2 > -X_t\beta_2 - \delta_{21}Y_{t-1} - \delta_{22}Z_{t-1})$$

$$P(Y_t = 0, Z_t = 0 \mid X_t) = P(\epsilon_t^1 < -X_t\beta_1 - \delta_{11}Y_{t-1} - \delta_{12}Z_{t-1},\ \epsilon_t^2 < -X_t\beta_2 - \delta_{21}Y_{t-1} - \delta_{22}Z_{t-1})$$

$$P(Y_t = 1, Z_t = 0 \mid X_t) = P(\epsilon_t^1 > -X_t\beta_1 - \delta_{11}Y_{t-1} - \delta_{12}Z_{t-1},\ \epsilon_t^2 < -X_t\beta_2 - \delta_{21}Y_{t-1} - \delta_{22}Z_{t-1})$$

$$P(Y_t = 0, Z_t = 1 \mid X_t) = P(\epsilon_t^1 < -X_t\beta_1 - \delta_{11}Y_{t-1} - \delta_{12}Z_{t-1},\ \epsilon_t^2 > -X_t\beta_2 - \delta_{21}Y_{t-1} - \delta_{22}Z_{t-1})$$

Setting $q_t^1 = 2Y_t - 1$ and $q_t^2 = 2Z_t - 1$, we can rewrite the probabilities above as:

$$P(Y_t, Z_t \mid X_t) = \Phi_2\big(q_t^1(X_t\beta_1 + \delta_{11}Y_{t-1} + \delta_{12}Z_{t-1}),\ q_t^2(X_t\beta_2 + \delta_{21}Y_{t-1} + \delta_{22}Z_{t-1}),\ q_t^1 q_t^2 \rho_\epsilon\big) \quad (6)$$

where $\Phi_2(\cdot)$ stands for the bivariate normal c.d.f.

Testing Granger non-causality in this specification then amounts to testing $H_0: \delta_{12} = 0$ for "Z does not cause Y" and $H_0: \delta_{21} = 0$ for "Y does not cause Z".
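As an illustration, the joint probability of Equation (6) can be evaluated directly with any bivariate normal c.d.f. routine. The following is a minimal sketch in Python (the paper's implementation is in Stata; the function and the parameter values below are illustrative assumptions, not the authors' code):

```python
import numpy as np
from scipy.stats import multivariate_normal

def joint_prob(y_t, z_t, y_lag, z_lag, x, beta1, beta2,
               d11, d12, d21, d22, rho_eps):
    """P(Y_t = y_t, Z_t = z_t | X_t) as in Equation (6)."""
    q1 = 2 * y_t - 1                 # maps the {0,1} outcome to a {-1,+1} sign
    q2 = 2 * z_t - 1
    a = q1 * (x @ beta1 + d11 * y_lag + d12 * z_lag)
    b = q2 * (x @ beta2 + d21 * y_lag + d22 * z_lag)
    rho = q1 * q2 * rho_eps
    cov = [[1.0, rho], [rho, 1.0]]
    # Phi_2(a, b, rho): bivariate standard normal c.d.f.
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([a, b])

# Example call with illustrative parameter values
x = np.array([1.0, 0.5])             # constant plus one regressor
print(joint_prob(1, 0, 1, 0, x, np.array([0.2, 0.1]), np.array([-0.3, 0.4]),
                 0.3, 0.1, -0.1, 0.4, 0.4))
```

On an estimated model, the non-causality tests then reduce to standard Wald tests of $\delta_{12} = 0$ and $\delta_{21} = 0$.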

2.2. Testing Causality: Panel Data Case

In the panel data case, two major approaches can be used. The first is to consider that the causal effect is not the same for all individuals in the panel ([9]). This approach is useful when individuals are heterogeneous or when the causal effect is not homogeneous. The latent variables are specified as:

$$Y_{it}^* = X_{it}^1 \beta_1 + \delta_{11,i} Y_{i,t-1} + \delta_{12,i} Z_{i,t-1} + \eta_i^1 + \zeta_{it}^1 \quad (7)$$

$$Z_{it}^* = X_{it}^2 \beta_2 + \delta_{21,i} Y_{i,t-1} + \delta_{22,i} Z_{i,t-1} + \eta_i^2 + \zeta_{it}^2 \quad (8)$$

where $(\eta_i^1, \eta_i^2)$ denotes the individual random effects, with zero mean and covariance matrix $\Sigma_\eta$, and $(\zeta_{it}^1, \zeta_{it}^2)$ denotes the idiosyncratic shocks, with zero mean and covariance matrix $\Sigma_\zeta$, with

$$\Sigma_\eta = \begin{pmatrix} \sigma_1^2 & \sigma_1\sigma_2\rho_\eta \\ \sigma_1\sigma_2\rho_\eta & \sigma_2^2 \end{pmatrix} \quad \text{and} \quad \Sigma_\zeta = \begin{pmatrix} 1 & \rho_\zeta \\ \rho_\zeta & 1 \end{pmatrix}$$

In this approach, testing Granger non-causality is equivalent to testing $\delta_{12,i} = 0,\ i = 1, \ldots, N$ for "Z does not cause Y" and $\delta_{21,i} = 0,\ i = 1, \ldots, N$ for "Y does not cause Z".

The second approach (the one used in this paper) assumes that the causal effects, if they exist, are the same for all individuals in the panel. With the same notation as in the previous case, the latent variables are:

$$Y_{it}^* = X_{it}^1 \beta_1 + \delta_{11} Y_{i,t-1} + \delta_{12} Z_{i,t-1} + \eta_i^1 + \zeta_{it}^1 \quad (9)$$

$$Z_{it}^* = X_{it}^2 \beta_2 + \delta_{21} Y_{i,t-1} + \delta_{22} Z_{i,t-1} + \eta_i^2 + \zeta_{it}^2 \quad (10)$$

Testing Granger non-causality is then equivalent to testing $H_0: \delta_{12} = 0$ for "Z does not cause Y" and $H_0: \delta_{21} = 0$ for "Y does not cause Z".

Finally, Equations (9) and (10) are the core of our problem. Since Y and Z are binary panel outcomes and each equation includes lagged dependent variables, estimating these two equations jointly can be viewed as estimating a bivariate dynamic probit model.

2.3. Dealing with Initial Conditions

For the first wave of the panel (the initial conditions), since we have no data on the previous states of Y and Z (no values for $Y_{i,0}$ and $Z_{i,0}$), we are not able to evaluate $P(Y_{i1}, Z_{i1} \mid Y_{i,0}, Z_{i,0}, X_i)$. By ignoring this term in the individual likelihood, researchers also ignore the data generating process of the first wave of the panel. This amounts to assuming that the data generating process of the first wave is exogenous or in equilibrium. These assumptions hold only if the individual random effects are degenerate. Otherwise, the initial conditions (the first wave of the panel) are explained by the individual random effects, and ignoring them leads to inconsistent parameter estimates [2].

The solution proposed by [2] for the univariate case and generalized by [3] is to estimate a static equation for the first wave of the panel (that is, without lagged dependent variables). In this static equation, the random effects are a linear combination of the random effects of the subsequent waves, and the idiosyncratic error terms may have a different structure from those of the dynamic equation. Formally, the latent variables for the first wave of the panel are defined as follows:

$$Y_{i,1}^* = X_i^1 \gamma_1 + \lambda_{11} \eta_i^1 + \lambda_{12} \eta_i^2 + \epsilon_i^1 \quad (11)$$

$$Z_{i,1}^* = X_i^2 \gamma_2 + \lambda_{21} \eta_i^1 + \lambda_{22} \eta_i^2 + \epsilon_i^2 \quad (12)$$

where $(\epsilon_i^1, \epsilon_i^2)$ denotes the vector of idiosyncratic shocks, with zero mean and covariance matrix $\Sigma_\epsilon = \begin{pmatrix} 1 & \rho_\epsilon \\ \rho_\epsilon & 1 \end{pmatrix}$.

As $\eta^1$ and $\eta^2$ are the individual random effects on Y and Z respectively, $\lambda_{12}$ and $\lambda_{21}$ can be interpreted as the influence of the Z individual random effect on Y and of the Y individual random effect on Z, respectively, at the first wave of the panel.

3. Estimation Methods

Because the likelihood function has an intractable form (an integral with no closed form), it cannot be estimated by the usual methods. We therefore turn to numerical integration methods, which approximate the integral numerically. In this section, we describe the two major methods and argue for the one we use to estimate our likelihood function.

3.1. Gauss-Hermite Quadrature Method

The Gauss-Hermite quadrature is a numerical method used to approximate the value of an integral. The basic approach applies to a univariate integral of the form:

$$\int_{-\infty}^{+\infty} f(x) \exp(-x^2) \, dx \quad (13)$$

where $\exp(-x^2)$ denotes the Gaussian factor2. The integral above can then be approximated using:

$$\int_{-\infty}^{+\infty} f(x) \exp(-x^2) \, dx \approx \sum_{q=1}^{Q} w_q f(x_q) \quad (14)$$

where $x_q,\ q = 1, \ldots, Q$ are the nodes of the Hermite polynomial and $w_q,\ q = 1, \ldots, Q$ are the corresponding weights.

This approximation supposes that the integrand can be well approximated by a polynomial of order $2Q - 1$ and that the integrand is sampled on a symmetric range centered on zero. For suitable results, these two assumptions must be kept in mind.
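To make the rule of Equation (14) concrete, here is a minimal Python sketch on a test integrand with a known closed form (an illustration, not the authors' code); it also shows how the error shrinks as Q grows, which is the basis for the selection procedure described next:

```python
import numpy as np

def gauss_hermite(f, Q):
    """Approximate the integral of f(x) * exp(-x^2) over R, as in Equation (14)."""
    nodes, weights = np.polynomial.hermite.hermgauss(Q)  # Hermite nodes x_q and weights w_q
    return np.sum(weights * f(nodes))

# Test integrand with a known value: integral of cos(x) exp(-x^2) = sqrt(pi) * exp(-1/4)
exact = np.sqrt(np.pi) * np.exp(-0.25)
for Q in (2, 4, 8, 16):
    approx = gauss_hermite(np.cos, Q)
    print(Q, approx, abs(approx - exact))
```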

For the accuracy of the approximation, the number of quadrature points must be chosen carefully; in practice, this choice is made numerically. One can start with a number $\bar{q}$ of quadrature points, increase it, assess whether the results change significantly, and repeat this process until the overall likelihood value and the estimated coefficients stabilize. It is also important to keep in mind that increasing the number of quadrature points increases the computing time. An example of the impact of the number of quadrature points on the estimates is given in Section 5.

For the problem of a suitable sampling range, the adaptive Gauss-Hermite quadrature was proposed by [5] and [6]. In this approach, instead of using $\exp(-x^2)$ as the Gaussian factor, a Gaussian density $\phi(\mu, \sigma^2)$ with mean μ and variance σ2 is used. That means (see [5]):

$$\int_{-\infty}^{+\infty} f(x) \, dx \approx \sum_{q=1}^{Q} w_q^* f(x_q^*) \quad (15)$$

The sampling range is thus transformed: the new nodes are $x_q^* = \mu + \sqrt{2}\,\sigma x_q$ and the new weights are $w_q^* = \sqrt{2}\,\sigma w_q \exp(x_q^2)$. For [5], one can choose the normal density with μ and σ2 equal to the posterior mean and variance respectively. For the implementation, we can start with $\mu = 0$ and $\sigma = 1$ and, at each iteration of the likelihood maximization process, compute the posterior weighted mean and variance of the quadrature points and use them to calculate the nodes and weights of the next iteration. For [6], one can choose μ to be the mode of the integrand $f(x)$ and σ to be the inverse square root of minus the Hessian of the log of the integrand evaluated at the mode:

$$\sigma = \left( -\frac{\partial^2}{\partial x^2} \log(f(x)) \Big|_{x=\hat{x}} \right)^{-1/2} \quad (16)$$
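A minimal univariate sketch of this adaptive rule follows, with the mode and curvature obtained numerically (the paper derives them analytically); it is an illustration under these assumptions, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def adaptive_gh(f, Q, h=1e-5):
    """Approximate the integral of a unimodal, positive integrand f over R."""
    # Mode of the integrand: minimise -log f(x)
    mu = minimize_scalar(lambda x: -np.log(f(x)), bracket=(0.0, 6.0)).x
    # Equation (16): sigma = (-(log f)''(mode))^(-1/2), here by finite differences
    logf = lambda x: np.log(f(x))
    d2 = (logf(mu + h) - 2.0 * logf(mu) + logf(mu - h)) / h**2
    sigma = (-d2) ** -0.5
    nodes, weights = np.polynomial.hermite.hermgauss(Q)
    x_star = mu + np.sqrt(2.0) * sigma * nodes                   # adapted nodes
    w_star = np.sqrt(2.0) * sigma * weights * np.exp(nodes**2)   # adapted weights
    return np.sum(w_star * f(x_star))

# A density centred far from zero: plain Gauss-Hermite samples it poorly,
# while the adaptive rule integrates it to about 1 with only a few nodes
f = lambda x: norm.pdf(x, loc=3.0, scale=0.5)
print(adaptive_gh(f, Q=5))
```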

For the multivariate integral case, the same approach is used. Without loss of generality, we discuss the bivariate case, which extends to other multivariate cases. The function to approximate is written as follows:

$$\iint_{\mathbb{R}^2} f(x, y) \, dx \, dy \quad (17)$$

Under the assumption of independence between x and y (which can be overcome by the Cholesky transformation $x \mapsto x$, $y \mapsto \rho x + \sqrt{1-\rho^2}\, y$; see [5] or [7] for more details on this Cholesky transformation and on other transformations that lead to similar results), the integral above can be approximated by:

$$\iint_{\mathbb{R}^2} f(x, y) \, dx \, dy \approx \sum_{q_1=1}^{Q} \sum_{q_2=1}^{Q} w_{q_1}^* w_{q_2}^* f(x_{q_1}^*, y_{q_2}^*) \quad (18)$$

In this case, the nodes and weights are derived as follows:

$$\begin{pmatrix} x_{q_1}^* \\ y_{q_2}^* \end{pmatrix} = \hat{x} + \sqrt{2} \left( -\frac{\partial^2}{\partial x^2} \log(f(x, y)) \Big|_{(x,y)=\hat{x}} \right)^{-1/2} \begin{pmatrix} x_{q_1} \\ x_{q_2} \end{pmatrix} \quad (19)$$

and

$$w_{q_1}^* w_{q_2}^* = 2 \left| -\frac{\partial^2}{\partial x^2} \log(f(x, y)) \Big|_{(x,y)=\hat{x}} \right|^{-1/2} w_{q_1} w_{q_2} \exp(x_{q_1}^2 + x_{q_2}^2) \quad (20)$$

where $|A|$ denotes the determinant of the matrix A and $\hat{x}$ denotes the mode of the integrand $f(x, y)$.

Jäckel [7] also suggests that nodes with low weights (whose contributions to the integral value are negligible) can be pruned from the grid in order to save computing time. That is, one sets the scalar $\tau = \frac{w_1\, w_{|(Q+1)/2|}}{Q}$ and drops all nodes whose weights are lower than this scalar.
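Putting Equations (18)-(20) and the pruning rule together, a bivariate sketch could look as follows in Python; the mode $\hat{x}$ and the Hessian H of $\log f$ are taken as given here (the paper derives them analytically, see Section 4.2), and the pruning threshold follows our reconstruction of the formula above:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import multivariate_normal

def adaptive_gh_2d(f, x_hat, H, Q, prune=True):
    """Approximate the integral of f over R^2; H is the Hessian of log f at its mode x_hat."""
    nodes, weights = hermgauss(Q)
    L = np.linalg.cholesky(np.linalg.inv(-H))        # a square root of (-H)^(-1), Equation (19)
    det_factor = 2.0 * np.linalg.det(-H) ** -0.5     # 2 |-H|^(-1/2), Equation (20)
    tau = weights[0] * weights[(Q + 1) // 2] / Q     # pruning threshold of Section 3.1
    total = 0.0
    for q1 in range(Q):
        for q2 in range(Q):
            w = weights[q1] * weights[q2]
            if prune and w < tau:
                continue                              # drop node pairs with negligible weight
            z = np.array([nodes[q1], nodes[q2]])
            x_star = x_hat + np.sqrt(2.0) * L @ z     # re-centred, rescaled nodes
            w_star = det_factor * w * np.exp(z @ z)   # adapted weights
            total += w_star * f(x_star)
    return total

# Check on a bivariate normal density, which must integrate to about 1
mu = np.array([1.0, -2.0])
cov = np.array([[1.0, 0.5], [0.5, 2.0]])
rv = multivariate_normal(mean=mu, cov=cov)
print(adaptive_gh_2d(rv.pdf, mu, -np.linalg.inv(cov), Q=8))
```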

3.2. Maximum Simulated Likelihood Method

The maximum simulated likelihood method was introduced by [4] as a solution to maximization problems whose objective function is an integral. In this approach, the likelihood function is defined as:

$$f(x, y) = \iint_{\mathbb{R}^2} f^*(x, y, u_1, u_2)\, g(u_1, u_2) \, du_1 \, du_2 \quad (21)$$

where $g(u_1, u_2)$ is a probability distribution function and $f^*(x, y, u_1, u_2)$, called the simulator, denotes the function whose mean value over draws $u_1$ and $u_2$ gives an approximation of the overall likelihood. Without loss of generality, we only define the two-dimensional case, which generalizes to lower or higher dimensional integrals. For this kind of likelihood function, [4] proposed as simulator the function $f^*(x, y, u_1, u_2)$ with $u_1$ and $u_2$ drawn from the probability distribution function g (the distribution of the individual random effects). The overall likelihood function can then be approximated by ($u_{1d}$ denotes the dth draw of $u_1$; the same notation holds for $u_{2d}$):

$$f(x, y) \approx \frac{1}{D} \sum_{d=1}^{D} f^*(x, y, u_{1d}, u_{2d}) \quad (22)$$

where D denotes the number of draws.

To implement this method, we start by simulating bivariate normal draws from $N(0, I_2)$ and give them the covariance structure of $(u_1, u_2)$. We then calculate the value of the simulator at these transformed draws and repeat this D times. The overall likelihood is the mean of the simulator values over the transformed draws. At each iteration, once the random effects covariance matrix has been updated, we apply it to the initial standard normal draws to transform them into draws of the random effects and use them to calculate the likelihood. This process is repeated until convergence.

The simulated likelihood estimator is consistent and asymptotically equivalent to the maximum likelihood estimator ([4]) if the number of draws tends to infinity faster than $\sqrt{N}$.
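A minimal sketch of this simulation scheme in Python, with a toy simulator (illustrative only, not the authors' code):

```python
import numpy as np
from scipy.stats import norm

def msl_approx(simulator, Sigma, D, seed=None):
    """Equation (22): average the simulator over D correlated draws."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)      # gives the draws the covariance structure of (u1, u2)
    z = rng.standard_normal((D, 2))    # D draws from N(0, I_2)
    u = z @ L.T                        # D draws from N(0, Sigma)
    return np.mean([simulator(u1, u2) for u1, u2 in u])

# Toy simulator: E[Phi(u1) * Phi(u2)] under correlated normal random effects
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])
print(msl_approx(lambda u1, u2: norm.cdf(u1) * norm.cdf(u2), Sigma, D=200_000, seed=42))
```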

3.3. GHQ or MSL: What Method to Choose?

As described above, there are two main methods to estimate our likelihood function. To choose which one to implement, we weigh accuracy against computing time.

For our estimations, we choose the adaptive Gauss-Hermite quadrature proposed by [7], for three main reasons.

• Our data set is an unbalanced panel of 10,569 individuals observed over 26 years on average, which leads to 255,206 observations. Since the simulated likelihood method requires the number of draws D to grow faster than the square root of the number of observations, we do not use it, in order to avoid a prohibitive computing time.

• The Gauss-Hermite quadrature requires finding the best number of quadrature points Q, namely the one for which the integrand is well approximated by a polynomial of order $2Q - 1$. If Q is small, the computing time is reduced. Our estimations are generally achieved with Q between 8 and 14. This means that at each iteration, the likelihood value is computed as a weighted sum of between $8^2 = 64$ and $14^2 = 196$ terms.

• Using the Gauss-Hermite quadrature method reduces computing time, but this time remains very long if the integrand is not sampled on a suitable range (that is, if the adaptive method is not used). In that case, the maximization process takes between two and three weeks to converge on an Intel Core i7 computer at 3.4 GHz with 8 GB of RAM. With the adaptive Gauss-Hermite quadrature, the computing time is significantly reduced: convergence takes between two and three days on the same computer.

Note that the reduced convergence time mentioned above is partly due to the implementation of the first-order derivatives of the likelihood function. Using the overall log-likelihood approximated by the Liu and Pierce adaptive Gauss-Hermite quadrature method, we can obtain derivatives with respect to all model parameters. Implementing these derivatives in the maximization process allows us to use Stata's d1 method. The convergence time saved in this way is considerable. On our full data set, with 8 quadrature points and a non-adaptive quadrature method, convergence is not achieved: after 3 weeks of computation, the model underflows. With the adaptive Gauss-Hermite quadrature of [6] but without the first-order derivatives, the estimation process takes 11 days and 10 hours to converge. With the adaptive Gauss-Hermite quadrature of [6] and the first-order derivatives implemented, the estimation process converges after only 1 day and 17 hours.

4. Chosen Method Requirements

In this section, we describe the requirements of the selected method, the adaptive Gauss-Hermite quadrature. The first is that the adaptive Gauss-Hermite quadrature requires the Hessian of the log of the integrand ([6]). The second is that we derive the gradient of the overall likelihood function in order to use Stata's d1 method (see [10]) for more accuracy and speed in the calculations.

4.1. Gradient Vector Calculation

The gradient of the overall log-likelihood function has been calculated to speed up the maximization process. It allows us to use Stata's d1 method, which requires the implementation of the gradient vector in addition to the log-likelihood. The likelihood function for an individual i is:

$$L_i = \iint_{\mathbb{R}^2} \Phi_2(q_{i0}^1 h_{i0},\ q_{i0}^2 w_{i0},\ q_{i0}^1 q_{i0}^2 \rho_\epsilon) \prod_{t=2}^{T_i} \Phi_2(q_{it}^1 \bar{h}_{it},\ q_{it}^2 \bar{w}_{it},\ q_{it}^1 q_{it}^2 \rho_\zeta)\ \phi(\eta_i, \Sigma_\eta) \, d\eta_i^1 \, d\eta_i^2 \quad (23)$$

where

$$q_{it}^1 = 2y_{it}^1 - 1 \quad \forall i, t$$

$$q_{it}^2 = 2y_{it}^2 - 1 \quad \forall i, t$$

$$h_{i0} = Z_i^1 \gamma_1 + \lambda_{11} \eta_i^1 + \lambda_{12} \eta_i^2$$

$$w_{i0} = Z_i^2 \gamma_2 + \lambda_{21} \eta_i^1 + \lambda_{22} \eta_i^2$$

$$\bar{h}_{it} = X_{it}^1 \beta_1 + \delta_{11} y_{i,t-1}^1 + \delta_{12} y_{i,t-1}^2 + \eta_i^1$$

$$\bar{w}_{it} = X_{it}^2 \beta_2 + \delta_{21} y_{i,t-1}^1 + \delta_{22} y_{i,t-1}^2 + \eta_i^2$$

Using the adaptive Gauss-Hermite quadrature method of [6], the overall likelihood function is given by (we use the same notation as in Section 3):

$$L_i \approx \sum_{k=1}^{Q} \sum_{j=1}^{Q} w_k^* w_j^* \left[ \Phi_2(q_{i0}^1 h_{i0},\ q_{i0}^2 w_{i0},\ q_{i0}^1 q_{i0}^2 \rho_\epsilon) \prod_{t=2}^{T_i} \Phi_2(q_{it}^1 \bar{h}_{it},\ q_{it}^2 \bar{w}_{it},\ q_{it}^1 q_{it}^2 \rho_\zeta)\ \phi(\eta_i, \Sigma_\eta) \right]_{\eta_i^1 = x_k^*,\ \eta_i^2 = x_j^*} \quad (24)$$
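For concreteness, the double quadrature sum of Equation (24) can be sketched as follows. This is an illustrative Python transcription, not the authors' Stata code: the containers data and pars and their keys are hypothetical names, zg1 and zg2 stand for the linear indices $Z_i^1\gamma_1$ and $Z_i^2\gamma_2$, xb1[t] and xb2[t] for the full dynamic indices including the lagged outcomes, and the adapted nodes and weights are assumed to come from the method of Section 3.1.

```python
import numpy as np
from scipy.stats import multivariate_normal

def Phi2(a, b, rho):
    """Bivariate standard normal c.d.f. with correlation rho."""
    return multivariate_normal(mean=[0.0, 0.0],
                               cov=[[1.0, rho], [rho, 1.0]]).cdf([a, b])

def integrand_i(eta1, eta2, data, pars):
    """Initial-condition term times the dynamic terms times the random-effects density."""
    d, p = data, pars
    h0 = d["zg1"] + p["l11"] * eta1 + p["l12"] * eta2    # h_i0
    w0 = d["zg2"] + p["l21"] * eta1 + p["l22"] * eta2    # w_i0
    val = Phi2(d["q1"][0] * h0, d["q2"][0] * w0,
               d["q1"][0] * d["q2"][0] * p["rho_eps"])
    for t in range(1, len(d["q1"])):                     # dynamic waves t = 2..T_i
        h = d["xb1"][t] + eta1                           # h_bar_it
        w = d["xb2"][t] + eta2                           # w_bar_it
        val *= Phi2(d["q1"][t] * h, d["q2"][t] * w,
                    d["q1"][t] * d["q2"][t] * p["rho_zeta"])
    Sig = np.array([[p["s1"] ** 2, p["s1"] * p["s2"] * p["rho_eta"]],
                    [p["s1"] * p["s2"] * p["rho_eta"], p["s2"] ** 2]])
    return val * multivariate_normal(mean=[0.0, 0.0], cov=Sig).pdf([eta1, eta2])

def likelihood_i(data, pars, x_star, w_star):
    """Equation (24): weighted double sum over the adapted quadrature grid."""
    return sum(wk * wj * integrand_i(xk, xj, data, pars)
               for xk, wk in zip(x_star, w_star)
               for xj, wj in zip(x_star, w_star))
```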

To obtain the gradient vector, the log-likelihood above must be differentiated with respect to the 13 parameters (or parameter vectors): $\bar{\beta}_1 = (\beta_1, \delta_{11}, \delta_{12})$, $\bar{\beta}_2 = (\beta_2, \delta_{21}, \delta_{22})$, $\gamma_1$, $\gamma_2$, $\lambda_{11}$, $\lambda_{12}$, $\lambda_{21}$, $\lambda_{22}$, $\sigma_1$, $\sigma_2$, $\rho_\eta$, $\rho_\zeta$, and $\rho_\epsilon$.

Let’s l k j denotes:

l k j = Φ 2 ( q i 0 1 h i 0 , q i 0 2 w i 0 , q i 0 1 q i 0 2 ρ ϵ ) t = 2 T i Φ 2 ( q i t 1 h ¯ i t , q i t 2 w ¯ i t , q i t 1 q i t 2 ρ ζ ) ϕ ( η i , Σ η ) | η i 1 = x k * , η i 2 = x j *

Then, the first order derivatives with respect to each α of the 13 parameters is given by:

log ( L i ) α = k = 1 , j = 1 Q l k j / α L i

Note that, as is usual with Stata's d1 evaluators, the derivatives with respect to the coefficient vectors $\bar{\beta}_1$, $\bar{\beta}_2$, $\gamma_1$ and $\gamma_2$ are written below as equation-level scores (derivatives with respect to the corresponding linear indices); the multiplication by the regressors is handled when the gradient vector is assembled.

With respect to $\bar{\beta}_1$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \bar{\beta}_1} = l_{kj} \sum_{t=2}^{T_i} \frac{q_{it}^1\, \phi(q_{it}^1 \bar{h}_{it})\, \Phi_1\!\left( \frac{q_{it}^2 \bar{w}_{it} - q_{it}^2 \rho_\zeta \bar{h}_{it}}{\sqrt{1-\rho_\zeta^2}} \right)}{\Phi_2(q_{it}^1 \bar{h}_{it},\, q_{it}^2 \bar{w}_{it},\, q_{it}^1 q_{it}^2 \rho_\zeta)}$$

With respect to $\bar{\beta}_2$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \bar{\beta}_2} = l_{kj} \sum_{t=2}^{T_i} \frac{q_{it}^2\, \phi(q_{it}^2 \bar{w}_{it})\, \Phi_1\!\left( \frac{q_{it}^1 \bar{h}_{it} - q_{it}^1 \rho_\zeta \bar{w}_{it}}{\sqrt{1-\rho_\zeta^2}} \right)}{\Phi_2(q_{it}^1 \bar{h}_{it},\, q_{it}^2 \bar{w}_{it},\, q_{it}^1 q_{it}^2 \rho_\zeta)}$$

With respect to $\gamma_1$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \gamma_1} = l_{kj}\, \frac{q_{i0}^1\, \phi(q_{i0}^1 h_{i0})\, \Phi_1\!\left( \frac{q_{i0}^2 w_{i0} - q_{i0}^2 \rho_\epsilon h_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right)}{\Phi_2(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon)}$$

With respect to $\gamma_2$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \gamma_2} = l_{kj}\, \frac{q_{i0}^2\, \phi(q_{i0}^2 w_{i0})\, \Phi_1\!\left( \frac{q_{i0}^1 h_{i0} - q_{i0}^1 \rho_\epsilon w_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right)}{\Phi_2(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon)}$$

With respect to $\lambda_{11}$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \lambda_{11}} = l_{kj}\, \frac{q_{i0}^1 x_k^*\, \phi(q_{i0}^1 h_{i0})\, \Phi_1\!\left( \frac{q_{i0}^2 w_{i0} - q_{i0}^2 \rho_\epsilon h_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right)}{\Phi_2(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon)}$$

With respect to $\lambda_{12}$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \lambda_{12}} = l_{kj}\, \frac{q_{i0}^1 x_j^*\, \phi(q_{i0}^1 h_{i0})\, \Phi_1\!\left( \frac{q_{i0}^2 w_{i0} - q_{i0}^2 \rho_\epsilon h_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right)}{\Phi_2(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon)}$$

With respect to $\lambda_{21}$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \lambda_{21}} = l_{kj}\, \frac{q_{i0}^2 x_k^*\, \phi(q_{i0}^2 w_{i0})\, \Phi_1\!\left( \frac{q_{i0}^1 h_{i0} - q_{i0}^1 \rho_\epsilon w_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right)}{\Phi_2(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon)}$$

With respect to $\lambda_{22}$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \lambda_{22}} = l_{kj}\, \frac{q_{i0}^2 x_j^*\, \phi(q_{i0}^2 w_{i0})\, \Phi_1\!\left( \frac{q_{i0}^1 h_{i0} - q_{i0}^1 \rho_\epsilon w_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right)}{\Phi_2(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon)}$$

With respect to $\sigma_1$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \log(\sigma_1)} = l_{kj} \left( -1 + \frac{(x_k^*/\sigma_1)^2 - \rho_\eta\, x_k^* x_j^*/(\sigma_1 \sigma_2)}{1-\rho_\eta^2} \right)$$

With respect to $\sigma_2$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \log(\sigma_2)} = l_{kj} \left( -1 + \frac{(x_j^*/\sigma_2)^2 - \rho_\eta\, x_k^* x_j^*/(\sigma_1 \sigma_2)}{1-\rho_\eta^2} \right)$$

With respect to $\rho_\eta$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \log\!\left( \frac{1+\rho_\eta}{1-\rho_\eta} \right)^{1/2}} = l_{kj} \left( \rho_\eta - \frac{\rho_\eta \left( (x_k^*/\sigma_1)^2 + (x_j^*/\sigma_2)^2 \right) - (1+\rho_\eta^2)\, x_k^* x_j^*/(\sigma_1 \sigma_2)}{1-\rho_\eta^2} \right)$$

With respect to $\rho_\zeta$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \log\!\left( \frac{1+\rho_\zeta}{1-\rho_\zeta} \right)^{1/2}} = l_{kj} \sum_{t=2}^{T_i} \frac{q_{it}^1 q_{it}^2 (1-\rho_\zeta^2)\, \phi(q_{it}^1 \bar{h}_{it},\, q_{it}^2 \bar{w}_{it},\, q_{it}^1 q_{it}^2 \rho_\zeta)}{\Phi_2(q_{it}^1 \bar{h}_{it},\, q_{it}^2 \bar{w}_{it},\, q_{it}^1 q_{it}^2 \rho_\zeta)}$$

With respect to $\rho_\epsilon$ the first-order derivative is:

$$\frac{\partial l_{kj}}{\partial \log\!\left( \frac{1+\rho_\epsilon}{1-\rho_\epsilon} \right)^{1/2}} = l_{kj}\, \frac{q_{i0}^1 q_{i0}^2 (1-\rho_\epsilon^2)\, \phi(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon)}{\Phi_2(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon)}$$

(here $\phi(\cdot, \cdot, \cdot)$ denotes the bivariate normal density; the factors $(1-\rho_\zeta^2)$ and $(1-\rho_\epsilon^2)$ come from the chain rule through the correlation transformation described in the remarks below)

Remarks:

• For $\sigma_1$, $\sigma_2$, $\rho_\eta$, $\rho_\zeta$, and $\rho_\epsilon$, we transform the parameters to ensure that, during the maximization process, each σ remains positive and each ρ remains between −1 and 1 at every iteration. For σ we use the exponential transformation, so we differentiate with respect to $\log(\sigma)$. For ρ we use the hyperbolic tangent (inverse Fisher) transformation, i.e. $\rho = \frac{\exp(2\tilde{\rho}) - 1}{\exp(2\tilde{\rho}) + 1}$, so we differentiate with respect to $\log\!\left( \frac{1+\rho}{1-\rho} \right)^{1/2}$.

• To easily differentiate a bivariate normal probability with zero means, unit variances and correlation ρ, we can write it as an integral whose integrand is the product of a univariate normal density and a univariate normal probability:

$$\Phi_2(x, y, \rho) = \int_{-\infty}^{y} \phi(v)\, \Phi\!\left( \frac{x - \rho v}{\sqrt{1-\rho^2}} \right) dv = \int_{-\infty}^{x} \phi(u)\, \Phi\!\left( \frac{y - \rho u}{\sqrt{1-\rho^2}} \right) du$$

• Given the representation above, the first-order derivatives of $\Phi_2(x, y, \rho)$ with respect to x and y are respectively given by:

$$\frac{\partial \Phi_2(x, y, \rho)}{\partial x} = \phi(x)\, \Phi\!\left( \frac{y - \rho x}{\sqrt{1-\rho^2}} \right)$$

$$\frac{\partial \Phi_2(x, y, \rho)}{\partial y} = \phi(y)\, \Phi\!\left( \frac{x - \rho y}{\sqrt{1-\rho^2}} \right)$$
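These closed forms are easy to verify numerically. The following sketch (assuming scipy's bivariate normal c.d.f.; an illustrative sanity check, not the authors' code) compares the analytic derivative with respect to x against a central finite difference:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def Phi2(x, y, rho):
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([x, y])

def dPhi2_dx(x, y, rho):
    # Analytic form: phi(x) * Phi((y - rho * x) / sqrt(1 - rho^2))
    return norm.pdf(x) * norm.cdf((y - rho * x) / np.sqrt(1.0 - rho**2))

x, y, rho, h = 0.3, -0.7, 0.5, 1e-6
numeric = (Phi2(x + h, y, rho) - Phi2(x - h, y, rho)) / (2.0 * h)
print(numeric, dPhi2_dx(x, y, rho))   # the two values should agree closely
```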

4.2. Hessian Matrix Calculation

To meet the requirements of the adaptive Gauss-Hermite quadrature method, we need to derive the Hessian matrix of the log of the integrand with respect to the random effects vector3. From the individual likelihood function defined in Equation (23), the log of the integrand is:

$$\log(g(\eta_i^1, \eta_i^2)) = \log\left( \Phi_2(q_{i0}^1 h_{i0},\ q_{i0}^2 w_{i0},\ q_{i0}^1 q_{i0}^2 \rho_\epsilon) \prod_{t=2}^{T_i} \Phi_2(q_{it}^1 \bar{h}_{it},\ q_{it}^2 \bar{w}_{it},\ q_{it}^1 q_{it}^2 \rho_\zeta)\ \phi(\eta_i, \Sigma_\eta) \right) \quad (25)$$

From the log of the integrand in Equation (25), we obtain the Hessian matrix by calculating:

$$\frac{\partial^2}{\partial (\eta_i^1)^2} \log(g(\eta_i^1, \eta_i^2)), \quad \frac{\partial^2}{\partial (\eta_i^2)^2} \log(g(\eta_i^1, \eta_i^2)), \quad \frac{\partial^2}{\partial \eta_i^1 \partial \eta_i^2} \log(g(\eta_i^1, \eta_i^2))$$

The first-order derivative with respect to $\eta_i^1$ is given by (the derivative with respect to $\eta_i^2$ has the same structure):

$$\frac{\partial}{\partial \eta_i^1} \log(g) = \frac{\frac{\partial \Phi_2}{\partial \eta_i^1}(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon)}{\Phi_2(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon)} + \sum_{t=2}^{T_i} \frac{\frac{\partial \Phi_2}{\partial \eta_i^1}(q_{it}^1 \bar{h}_{it},\, q_{it}^2 \bar{w}_{it},\, q_{it}^1 q_{it}^2 \rho_\zeta)}{\Phi_2(q_{it}^1 \bar{h}_{it},\, q_{it}^2 \bar{w}_{it},\, q_{it}^1 q_{it}^2 \rho_\zeta)} - \frac{\eta_i^1/\sigma_1^2 - \rho_\eta\, \eta_i^2/(\sigma_1 \sigma_2)}{1-\rho_\eta^2}$$

With respect to $\eta_i^1$ we have:

$$\frac{\partial \Phi_2}{\partial \eta_i^1}(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon) = q_{i0}^1 \lambda_{11}\, \phi(q_{i0}^1 h_{i0})\, \Phi_1\!\left( \frac{q_{i0}^2 w_{i0} - q_{i0}^2 \rho_\epsilon h_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right) + q_{i0}^2 \lambda_{21}\, \phi(q_{i0}^2 w_{i0})\, \Phi_1\!\left( \frac{q_{i0}^1 h_{i0} - q_{i0}^1 \rho_\epsilon w_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right)$$

$$\frac{\partial \Phi_2}{\partial \eta_i^1}(q_{it}^1 \bar{h}_{it},\, q_{it}^2 \bar{w}_{it},\, q_{it}^1 q_{it}^2 \rho_\zeta) = q_{it}^1\, \phi(q_{it}^1 \bar{h}_{it})\, \Phi_1\!\left( \frac{q_{it}^2 \bar{w}_{it} - q_{it}^2 \rho_\zeta \bar{h}_{it}}{\sqrt{1-\rho_\zeta^2}} \right)$$

And with respect to $\eta_i^2$ we have:

$$\frac{\partial \Phi_2}{\partial \eta_i^2}(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon) = q_{i0}^1 \lambda_{12}\, \phi(q_{i0}^1 h_{i0})\, \Phi_1\!\left( \frac{q_{i0}^2 w_{i0} - q_{i0}^2 \rho_\epsilon h_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right) + q_{i0}^2 \lambda_{22}\, \phi(q_{i0}^2 w_{i0})\, \Phi_1\!\left( \frac{q_{i0}^1 h_{i0} - q_{i0}^1 \rho_\epsilon w_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right)$$

$$\frac{\partial \Phi_2}{\partial \eta_i^2}(q_{it}^1 \bar{h}_{it},\, q_{it}^2 \bar{w}_{it},\, q_{it}^1 q_{it}^2 \rho_\zeta) = q_{it}^2\, \phi(q_{it}^2 \bar{w}_{it})\, \Phi_1\!\left( \frac{q_{it}^1 \bar{h}_{it} - q_{it}^1 \rho_\zeta \bar{w}_{it}}{\sqrt{1-\rho_\zeta^2}} \right)$$

The second-order derivatives are given by (writing $\Phi_2^{(0)} \equiv \Phi_2(q_{i0}^1 h_{i0}, q_{i0}^2 w_{i0}, q_{i0}^1 q_{i0}^2 \rho_\epsilon)$ and $\Phi_2^{(t)} \equiv \Phi_2(q_{it}^1 \bar{h}_{it}, q_{it}^2 \bar{w}_{it}, q_{it}^1 q_{it}^2 \rho_\zeta)$ for brevity):

$$\frac{\partial^2}{\partial (\eta_i^1)^2} \log(g) = \frac{\frac{\partial^2 \Phi_2^{(0)}}{\partial (\eta_i^1)^2}\, \Phi_2^{(0)} - \left( \frac{\partial \Phi_2^{(0)}}{\partial \eta_i^1} \right)^{2}}{\big(\Phi_2^{(0)}\big)^2} + \sum_{t=2}^{T_i} \frac{\frac{\partial^2 \Phi_2^{(t)}}{\partial (\eta_i^1)^2}\, \Phi_2^{(t)} - \left( \frac{\partial \Phi_2^{(t)}}{\partial \eta_i^1} \right)^{2}}{\big(\Phi_2^{(t)}\big)^2} - \frac{1}{\sigma_1^2 (1-\rho_\eta^2)} \quad (26)$$

$$\frac{\partial^2}{\partial (\eta_i^2)^2} \log(g) = \frac{\frac{\partial^2 \Phi_2^{(0)}}{\partial (\eta_i^2)^2}\, \Phi_2^{(0)} - \left( \frac{\partial \Phi_2^{(0)}}{\partial \eta_i^2} \right)^{2}}{\big(\Phi_2^{(0)}\big)^2} + \sum_{t=2}^{T_i} \frac{\frac{\partial^2 \Phi_2^{(t)}}{\partial (\eta_i^2)^2}\, \Phi_2^{(t)} - \left( \frac{\partial \Phi_2^{(t)}}{\partial \eta_i^2} \right)^{2}}{\big(\Phi_2^{(t)}\big)^2} - \frac{1}{\sigma_2^2 (1-\rho_\eta^2)} \quad (27)$$

$$\frac{\partial^2}{\partial \eta_i^1 \partial \eta_i^2} \log(g) = \frac{\frac{\partial^2 \Phi_2^{(0)}}{\partial \eta_i^1 \partial \eta_i^2}\, \Phi_2^{(0)} - \frac{\partial \Phi_2^{(0)}}{\partial \eta_i^1} \frac{\partial \Phi_2^{(0)}}{\partial \eta_i^2}}{\big(\Phi_2^{(0)}\big)^2} + \sum_{t=2}^{T_i} \frac{\frac{\partial^2 \Phi_2^{(t)}}{\partial \eta_i^1 \partial \eta_i^2}\, \Phi_2^{(t)} - \frac{\partial \Phi_2^{(t)}}{\partial \eta_i^1} \frac{\partial \Phi_2^{(t)}}{\partial \eta_i^2}}{\big(\Phi_2^{(t)}\big)^2} + \frac{\rho_\eta}{\sigma_1 \sigma_2 (1-\rho_\eta^2)} \quad (28)$$

where

$$\frac{\partial^2 \Phi_2^{(t)}}{\partial (\eta_i^1)^2} = -\bar{h}_{it}\, \frac{\partial \Phi_2^{(t)}}{\partial \eta_i^1} - q_{it}^1 q_{it}^2 \rho_\zeta\, \phi(q_{it}^1 \bar{h}_{it},\, q_{it}^2 \bar{w}_{it},\, q_{it}^1 q_{it}^2 \rho_\zeta)$$

$$\frac{\partial^2 \Phi_2^{(t)}}{\partial (\eta_i^2)^2} = -\bar{w}_{it}\, \frac{\partial \Phi_2^{(t)}}{\partial \eta_i^2} - q_{it}^1 q_{it}^2 \rho_\zeta\, \phi(q_{it}^1 \bar{h}_{it},\, q_{it}^2 \bar{w}_{it},\, q_{it}^1 q_{it}^2 \rho_\zeta)$$

$$\frac{\partial^2 \Phi_2^{(t)}}{\partial \eta_i^1 \partial \eta_i^2} = q_{it}^1 q_{it}^2\, \phi(q_{it}^1 \bar{h}_{it},\, q_{it}^2 \bar{w}_{it},\, q_{it}^1 q_{it}^2 \rho_\zeta)$$

$$\frac{\partial^2 \Phi_2^{(0)}}{\partial (\eta_i^1)^2} = q_{i0}^1 q_{i0}^2 \big( 2\lambda_{11}\lambda_{21} - \rho_\epsilon (\lambda_{11}^2 + \lambda_{21}^2) \big)\, \phi(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon) - \lambda_{11}^2\, q_{i0}^1 h_{i0}\, \phi(q_{i0}^1 h_{i0})\, \Phi_1\!\left( \frac{q_{i0}^2 w_{i0} - \rho_\epsilon q_{i0}^2 h_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right) - \lambda_{21}^2\, q_{i0}^2 w_{i0}\, \phi(q_{i0}^2 w_{i0})\, \Phi_1\!\left( \frac{q_{i0}^1 h_{i0} - \rho_\epsilon q_{i0}^1 w_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right)$$

$$\frac{\partial^2 \Phi_2^{(0)}}{\partial (\eta_i^2)^2} = q_{i0}^1 q_{i0}^2 \big( 2\lambda_{12}\lambda_{22} - \rho_\epsilon (\lambda_{12}^2 + \lambda_{22}^2) \big)\, \phi(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon) - \lambda_{12}^2\, q_{i0}^1 h_{i0}\, \phi(q_{i0}^1 h_{i0})\, \Phi_1\!\left( \frac{q_{i0}^2 w_{i0} - \rho_\epsilon q_{i0}^2 h_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right) - \lambda_{22}^2\, q_{i0}^2 w_{i0}\, \phi(q_{i0}^2 w_{i0})\, \Phi_1\!\left( \frac{q_{i0}^1 h_{i0} - \rho_\epsilon q_{i0}^1 w_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right)$$

$$\frac{\partial^2 \Phi_2^{(0)}}{\partial \eta_i^1 \partial \eta_i^2} = q_{i0}^1 q_{i0}^2 \big( \lambda_{11}\lambda_{22} + \lambda_{12}\lambda_{21} - \rho_\epsilon (\lambda_{11}\lambda_{12} + \lambda_{21}\lambda_{22}) \big)\, \phi(q_{i0}^1 h_{i0},\, q_{i0}^2 w_{i0},\, q_{i0}^1 q_{i0}^2 \rho_\epsilon) - \lambda_{11}\lambda_{12}\, q_{i0}^1 h_{i0}\, \phi(q_{i0}^1 h_{i0})\, \Phi_1\!\left( \frac{q_{i0}^2 w_{i0} - \rho_\epsilon q_{i0}^2 h_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right) - \lambda_{21}\lambda_{22}\, q_{i0}^2 w_{i0}\, \phi(q_{i0}^2 w_{i0})\, \Phi_1\!\left( \frac{q_{i0}^1 h_{i0} - \rho_\epsilon q_{i0}^1 w_{i0}}{\sqrt{1-\rho_\epsilon^2}} \right)$$

Then, the Hessian matrix is given by:

$$H = \begin{pmatrix} \dfrac{\partial^2}{\partial (\eta_i^1)^2} \log(g) & \dfrac{\partial^2}{\partial \eta_i^1 \partial \eta_i^2} \log(g) \\ \dfrac{\partial^2}{\partial \eta_i^1 \partial \eta_i^2} \log(g) & \dfrac{\partial^2}{\partial (\eta_i^2)^2} \log(g) \end{pmatrix} \quad (29)$$

As described in Section 3.1, once this Hessian matrix has been derived, we evaluate it at the mode of the integrand and use it to re-sample the integrand.

5. Robustness Analysis Based on Simulations

This section aims to verify that the implemented method gives suitable results. We consider that it does if, for a given relationship between variables, applying the estimation method to these variables recovers approximately the same coefficients. To this end, we perform a robustness analysis of the estimation method. This robustness analysis is empirical and based on simulations. We use two different approaches.

The first approach is to simulate bivariate binary variables by specifying a relationship between some explanatory variables (that is, we set the coefficients of the explanatory variables) and to estimate this relationship with the implemented method in order to compare the results with the specified relationship. In the second approach, we introduce new variables (that were not used in the data generating process) when estimating the relationship and compare the new results with the first ones. The implemented method is robust if it correctly estimates the specified relationship even when other variables are introduced, and if it estimates non-significant coefficients for those added variables. The method we use to check robustness is the same as in [11].

As the implemented estimation method is a numerical approximation method, the results depend on the selected number of quadrature points. We deal with the impact of the number of quadrature points on the results in the last part of this section. For a better analysis of the results, we also report the standard error of each estimated coefficient.

5.1. Simulated Relationship between Real Variables

In this section, we use variables from the French SIP (Santé et Itinéraire Professionnel) survey data set, and we simulate the error terms and a relationship between some selected variables. The subset of the database used in this section is an unbalanced panel of 1202 individuals with between 5 and 10 waves per individual.

We set the error term parameters to $\sigma_1 = 2.1$, $\sigma_2 = 3.1$, $\rho_\eta = 0.7$, $\rho_\zeta = 0.5$ and $\rho_\epsilon = 0.4$.

We simulate the idiosyncratic error vectors $\zeta = (\zeta^1, \zeta^2)$ and $\epsilon = (\epsilon^1, \epsilon^2)$ as bivariate normal variables with zero mean, unit variances and correlations equal to $\rho_\zeta$ and $\rho_\epsilon$ respectively. We also simulate the individual random effects as bivariate normal variables with zero mean, correlation $\rho_\eta$, and variances equal to $\sigma_1^2$ for the first component of the random effects vector and $\sigma_2^2$ for the second. This is done as follows:

$$\epsilon^1 = \mathtt{rnormal}(0, 1)$$

$$\epsilon^2 = \mathtt{rnormal}(0, 1)\sqrt{1 - \rho_\epsilon^2} + \rho_\epsilon\, \epsilon^1$$

$$\zeta^1 = \mathtt{rnormal}(0, 1)$$

$$\zeta^2 = \mathtt{rnormal}(0, 1)\sqrt{1 - \rho_\zeta^2} + \rho_\zeta\, \zeta^1$$

where $\mathtt{rnormal}(\mu, \sigma)$ denotes a random draw from the normal distribution with mean μ and standard deviation σ. As the individual effects are time invariant, we simulate η as follows:

$$\eta^1 = \mathtt{rnormal}(0, \sigma_1) \quad \text{if } t = 1$$

$$\eta^2 = \mathtt{rnormal}(0, \sigma_2)\sqrt{1 - \rho_\eta^2} + \rho_\eta \frac{\sigma_2}{\sigma_1} \eta^1 \quad \text{if } t = 1$$

$$\eta^1 = \eta^1[t = 1] \quad \text{if } t \neq 1$$

$$\eta^2 = \eta^2[t = 1] \quad \text{if } t \neq 1$$

For the initial conditions ($t = 1$), the simulated relationship is the following:

$$y_1^* = 0.2 + 0.3\, ill - 0.2\, unemp + 0.4\, \eta^1 - 0.5\, \eta^2 + \epsilon^1$$

$$y_2^* = 2 - 0.2\, ill - 0.08\, age + 0.3\, \eta^1 + 0.5\, \eta^2 + \epsilon^2$$

$$y_1 = I(y_1^* > 0)$$

$$y_2 = I(y_2^* > 0)$$

For $t > 1$, we specify the following relationship:

$$y_{1t}^* = 1.9 + 0.3\, y_{1,t-1} + 0.1\, y_{2,t-1} - 0.05\, Male - 0.2\, unemp_t + \eta^1 + \zeta_{1t}$$

$$y_{2t}^* = 0.4 - 0.1\, y_{1,t-1} + 0.4\, y_{2,t-1} + 0.05\, Male - 0.5\, dens_t + \eta^2 + \zeta_{2t}$$

$$y_{1t} = I(y_{1t}^* > 0)$$

$$y_{2t} = I(y_{2t}^* > 0)$$

The variable ill denotes having an illness episode in the year, unemp denotes being out of the labour market during the year, age denotes the individual's age, and Male equals 1 if the individual is male and 0 otherwise. Estimation results with 16 quadrature points are displayed in Table 1. For each equation, we give the coefficients used in the DGP and those estimated by our program. As we can see, all the coefficients from the DGP are very close to the estimated ones.
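For reference, this DGP can be sketched as follows in Python (the paper uses Stata's rnormal(); the balanced panel, the synthetic covariates and the seed below are simplifying assumptions for illustration, and the signs follow the reconstruction above):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 1202, 8                        # balanced panel here for simplicity
s1, s2 = 2.1, 3.1
rho_eta, rho_zeta, rho_eps = 0.7, 0.5, 0.4

# Time-invariant individual random effects with the specified covariance structure
eta1 = rng.normal(0.0, s1, N)
eta2 = rng.normal(0.0, s2, N) * np.sqrt(1 - rho_eta**2) + rho_eta * (s2 / s1) * eta1

# Synthetic covariates standing in for the SIP variables (ill, unemp, age, Male, dens)
ill = rng.binomial(1, 0.3, (N, T))
unemp = rng.binomial(1, 0.2, (N, T))
dens = rng.uniform(0.0, 1.0, (N, T))
age = rng.integers(20, 60, N)
male = rng.binomial(1, 0.5, N)

y1 = np.zeros((N, T), dtype=int)
y2 = np.zeros((N, T), dtype=int)

# Initial conditions (the paper's t = 1; index 0 here)
eps1 = rng.normal(size=N)
eps2 = rng.normal(size=N) * np.sqrt(1 - rho_eps**2) + rho_eps * eps1
y1[:, 0] = (0.2 + 0.3 * ill[:, 0] - 0.2 * unemp[:, 0] + 0.4 * eta1 - 0.5 * eta2 + eps1) > 0
y2[:, 0] = (2.0 - 0.2 * ill[:, 0] - 0.08 * age + 0.3 * eta1 + 0.5 * eta2 + eps2) > 0

# Dynamic periods t > 1
for t in range(1, T):
    z1 = rng.normal(size=N)
    z2 = rng.normal(size=N) * np.sqrt(1 - rho_zeta**2) + rho_zeta * z1
    y1[:, t] = (1.9 + 0.3 * y1[:, t - 1] + 0.1 * y2[:, t - 1]
                - 0.05 * male - 0.2 * unemp[:, t] + eta1 + z1) > 0
    y2[:, t] = (0.4 - 0.1 * y1[:, t - 1] + 0.4 * y2[:, t - 1]
                + 0.05 * male - 0.5 * dens[:, t] + eta2 + z2) > 0
```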

5.2. Simulated Relationship with Additional Variables

In this section, we keep the same DGP as in Section 5.1 and add other variables to the estimated model in order to evaluate the robustness of the estimation method: all estimated coefficients for the variables in the DGP should remain the same, and the coefficients of the added variables should not be significant.

Table 1. Estimation results on the simulated data set.

Estimated standard errors of the coefficients are given in parentheses. ***: significant at the 1% level, **: significant at the 5% level.

We introduce two variables, rural and nationality (not French), in the dynamic equations of the regression.

Results are in Table 2. Columns 1 and 2 of Table 2 are the same as the corresponding columns of Table 1. In column 3 of Table 2, we provide the new results with the additional variables for comparison with the previous estimates4. As can be seen in Table 2, the coefficients estimated for the added variables (again with 16 quadrature points) are not significant, and all the initial coefficients of the model remain approximately the same.

Table 2. Estimation results on the simulated data set with added variables.

Estimated standard errors of the coefficients are given in parentheses. ***: significant at the 1% level. **: significant at the 5% level.

5.3. Impact of Number of Quadrature Points on Estimated Results

As the accuracy of the method depends on the number of quadrature points used in the likelihood calculation, we propose an assessment of how the results are affected when this number increases. To do so, we fit the same model with different numbers of quadrature points and calculate the relative differences in the log-likelihood and in the estimated parameters.

We fit some models by using the same simulated relationship between variables as in Section 5.1.

The results are displayed in Table 3 for the dynamic equations and in Table 4 for the initial conditions equations and the error terms covariance structure.

As we can see from Table 3 and Table 4, as the number of quadrature points increases, the changes in the results decline; the relative differences are around 0.01% for significant coefficients and 0.1%, or at most 1%, for non-significant coefficients. Beyond 16 quadrature points, the relative differences in the log-likelihood and in the estimated coefficients become smaller as the number of quadrature points increases. The estimates with 22 quadrature points are closer to those with 24 quadrature points than the others are. So, when we increase the number of quadrature points, the changes in the estimated coefficients are not significant, but the computing time grows exponentially. For these models, the estimation times on an Intel Core i5 computer at 2.5 GHz with 6 GB of RAM for the different numbers of quadrature points are given in Table 5.

6. Conclusions

This paper describes the bivariate dynamic probit model with endogenous initial conditions: we justify the econometric specification of the model, present the estimation method and its requirements, and end with a robustness analysis. We calculate the derivatives of the log-likelihood function (the gradient) with respect to the 13 parameters of the model. This is the main contribution of our research, as many programs use numerical computation of the gradient vector instead of encoding its mathematically derived expression.

Table 3. Impact of the number of quadrature points on estimation results. Part A.

Estimated standard errors of the coefficients are given in parentheses. ***: significant at the 1% level. **: significant at the 5% level.

Table 4. Impact of the number of quadrature points on estimation results. Part B.

Estimated standard errors of the coefficients are given in parentheses. ***: significant at the 1% level. **: significant at the 5% level. *: significant at the 10% level.

Table 5. Computing time for different numbers of quadrature points.

Furthermore, for the use of the adaptive Gauss-Hermite quadrature, we also calculate the Hessian matrix with respect to the individual random effects vector.

The implementation has been done in the Stata software. We wrote two ado-files for this purpose. We use Stata's d1 method for the maximization process: we implement the gradient vector for the 13 parameters, and we also implement the Hessian matrix with respect to the random effects vector in order to use the adaptive Gauss-Hermite quadrature. We also wrote two other ado-files, for the estimation of the bivariate probit for panel data and of the bivariate dynamic probit without initial conditions for panel data. These ado-files are written with the same method (Stata's d1 method) and the adaptive Gauss-Hermite quadrature. They are available upon request.

Because the integration is two-dimensional, the estimation time is very high, and it increases with the number of quadrature points, the number of observations, and the number of explanatory variables. For an estimated model, before using the results one should check that they do not change significantly when the number of quadrature points is increased. If the relative difference in the results is around 0.1% or less, one can conclude that the results remain stable as the number of quadrature points increases, and that there is no need to increase this number further, which would increase the computing time without significantly improving the results. One direction for a major improvement of the program is the use of a multi-core (parallel) computing scheme, in which the contributions to the likelihood (Equation (23)) at the different quadrature points are computed separately and simultaneously on several cores, which saves time because the contributions are computed at the same time.

Finally, our method yields reasonable computing durations on a real data set. In [12], we use the full SIP data set, with 10,569 individuals and 255,206 observations.

Acknowledgements

We acknowledge the Centre Maurice Halbwachs (Réseau Quetelet) for access to the SIP 2007 data set (Santé et itinéraire professionnel-2007. DARES producteur. Centre Maurice Halbwachs diffuseur).

Cite this paper

Moussa, R. and Delattre, E. (2018) On the Estimation of Causality in a Bivariate Dynamic Probit Model on Panel Data with Stata Software: A Technical Review. Theoretical Economics Letters, 8, 1257-1278. https://doi.org/10.4236/tel.2018.86083

References

1. Adams, P., Hurd, M.D., McFadden, D., Merril, A. and Ribeiro, T. (2003) Healthy, Wealthy, and Wise? Tests for Direct Causal Paths between Health and Socioeconomic Status. Journal of Econometrics, 112, 3-56.

2. Heckman, J.J. (1979) The Incidental Parameters Problem and the Problem of Initial Conditions in Estimating a Discrete Time-Discrete Data Stochastic Process and Some Monte Carlo Evidence. Graduate School of Business and Department of Economics, University of Chicago, Chicago.

3. Alessie, R., Hochguertel, S. and Van Soest, A. (2004) Ownership of Stocks and Mutual Funds: A Panel Data Analysis. The Review of Economics and Statistics, 86, 783-796.

4. Gourieroux, C. and Monfort, A. (1996) Simulation-Based Econometric Methods. CORE Lectures, Oxford University Press, Oxford.

5. Naylor, J.C. and Smith, A.F.M. (1982) Applications of a Method for the Efficient Computation of Posterior Distributions. Applied Statistics, 31, 214-225.

6. Liu, Q. and Pierce, D. (1994) A Note on Gauss-Hermite Quadrature. Biometrika, 81, 624-629. https://doi.org/10.2307/2337136

7. Jäckel, P. (2005) A Note on Multivariate Gauss-Hermite Quadrature. ABN AMRO, London.

8. Granger, C.W.J. (1969) Investigating Causal Relations by Econometric Models and Cross-Spectral Methods. Econometrica, 37, 428-438. https://doi.org/10.2307/1912791

9. Nair-Reichert, U. and Weinhold, D. (2001) Causality Tests for Cross-Country Panels: A New Look at FDI and Economic Growth in Developing Countries. Oxford Bulletin of Economics and Statistics, 63, 153-171.

10. Gould, W., Pitblado, J. and Poi, B. (2010) Maximum Likelihood Estimation with Stata. 4th Edition, Stata Press, College Station.

11. Miranda, A. (2007) Migrant Networks, Migrant Selection and High School Graduation in Mexico. IZA Discussion Paper No. 3204, Institute for the Study of Labor, Bonn.

12. Delattre, E., Moussa, R. and Sabatier, M. (2015) Health Condition and Job Status Interactions: Econometric Evidence of Causality from a French Longitudinal Survey. THEMA Working Paper No. 19, 1-28.

NOTES

1For each section, specific notations are defined at the beginning of the section. Otherwise, in general, $f(x)|_{x=a}$ denotes the value of the function or matrix f at the point a. When not specified otherwise, $|a|$ denotes the integer part of the scalar a.

2Notice that even without this factor, one can use the Gauss-Hermite quadrature through a straightforward transformation: multiply and divide the integrand $f(x)$ by the Gaussian factor $\exp(-x^2)$.

3In this section, $\phi(x)$ denotes the univariate normal density function, $\phi(x, y, \rho)$ denotes the bivariate normal density with correlation ρ, $\Phi_1(x)$ denotes the univariate normal probability function, and $\Phi_2(x, y, \rho)$ denotes the bivariate normal probability function with correlation ρ.

4We do the same with columns 1’, 2’ of Table 1 and Table 2 (new results are in column 3’) and with columns 4 and 5 of both tables (new results in column 6).