Panel bootstrap tests of slope homogeneity

Abstract

This paper proposes two bootstrap-based tests for inferring whether the individual slopes in a panel regression model are homogeneous. The first test is designed for testing the null of slope homogeneity against a general alternative, while the second is designed for identifying the units of the panel that can be pooled. Both approaches are shown to be asymptotically valid, a property that is verified in small samples using Monte Carlo simulation.

Notes

  1. As pointed out by MacKinnon (2007), the advantage of bootstrapping \((y_{i,t}, x_{i,t}^\prime )\) rather than residuals often comes at the cost of poor small-sample performance, a finding that is supported by our preliminary Monte Carlo results.

  2. An alternative bootstrap approach based on cross-sectional resampling has been suggested by Kapetanios (2008). Unlike the bootstrap approach used here, the cross-sectional resampling scheme yields asymptotically valid bootstrap procedures when \(N \rightarrow \infty \) with T fixed, but it relies on cross-sectional independence.

  3. See Gonçalves (2011) for a set of assumptions that can be used in the case of arbitrary cross-sectional dependence and N asymptotics.

  4. Kapetanios (2003) assumes the existence of a consistent pooled estimator of some common parameter \(\beta \), regardless of whether or not the units are poolable. Typically, this requires that a random coefficient assumption is satisfied, or alternatively, that the fraction of non-poolable units tends to zero as \(N \rightarrow \infty \).

  5. We also considered a moving average model for \(f_{\varepsilon , t}\). The results were, however, very similar to the ones based on the autoregressive model considered here and are therefore omitted.

  6. The power results are not size corrected because such a correction is generally not available in practice. Hence, a test is useful for applied work only if it roughly respects the nominal significance level.

  7. For a detailed discussion of these procedures, we refer to Lehmann and Romano (2005, Chapter 9).

References

  • Andrews DWK (1991) Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica 59:817–858

  • Anselin L, Le Gallo J, Jayet H (2008) Spatial panel econometrics. In: Mátyás L, Sevestre P (eds) The econometrics of panel data. Springer, Berlin

  • Baltagi BH (2008) Econometric analysis of panel data. Wiley, Chichester

  • Baltagi BH, Bresson G, Pirotte A (2008) To pool or not to pool? In: Mátyás L, Sevestre P (eds) The econometrics of panel data. Springer, Berlin

  • Bun MJG (2004) Testing poolability in a system of dynamic regressions with nonspherical disturbances. Empir Econ 29:89–106

  • Chudik A, Pesaran MH, Tosetti E (2011) Weak and strong cross-section dependence and estimation of large panels. Econom J 14:C45–C90

  • Davidson J (1994) Stochastic limit theory. Oxford University Press, Oxford

  • Davidson R, MacKinnon JG (1999) The size distortion of bootstrap tests. Econom Theory 15:361–376

  • Fitzenberger B (1997) The moving blocks bootstrap and robust inference for linear least squares and quantile regressions. J Econom 82:235–287

  • Freedman DA (1981) Bootstrapping regression models. Ann Stat 9:1218–1228

  • Gonçalves S (2011) The moving blocks bootstrap for panel linear regression models with individual fixed effects. Econom Theory 27:1048–1082

  • Gonçalves S, White H (2005) Bootstrap standard error estimates for linear regression. J Am Stat Assoc 100:970–979

  • Hall P, Horowitz JL, Jing B-Y (1995) On blocking rules for the bootstrap with dependent data. Biometrika 82:561–574

  • Hidalgo J (2003) An alternative bootstrap to moving blocks for time series regression models. J Econom 117:369–399

  • Hsiao C (2003) Analysis of panel data. Cambridge University Press, Cambridge

  • Hsiao C, Pesaran MH (2008) Random coefficient panel data models. In: Mátyás L, Sevestre P (eds) The econometrics of panel data. Springer, Berlin

  • Kapetanios G (2003) Determining the poolability properties of individual series in panel datasets. Queen Mary, University of London Working Paper No. 499

  • Kapetanios G (2006) Cluster analysis of panel data sets using non-standard optimisation of information criteria. J Econ Dyn Control 30:1389–1408

  • Kapetanios G (2008) A bootstrap procedure for panel data sets with many cross-sectional units. Econom J 11:377–395

  • Lahiri SN (2002) On the jackknife-after-bootstrap method for dependent data and its consistency properties. Econom Theory 18:79–98

  • Lahiri SN, Furukawa K, Lee Y-D (2007) A nonparametric plug-in rule for selecting optimal block lengths for block bootstrap methods. Stat Methodol 4:292–321

  • Lehmann EL, Romano JP (2005) Testing statistical hypotheses. Springer, New York

  • Lin C-C, Ng S (2012) Estimation of panel data models with parameter heterogeneity when group membership is unknown. J Econom Methods 1:42–55

  • MacKinnon JG (2007) Bootstrap hypothesis testing. Queen’s Economics Department Working Paper No. 1127

  • Mathai AM, Provost SB (1992) Quadratic forms in random variables. Marcel Dekker, New York

  • Newey WK, West KD (1994) Automatic lag selection in covariance matrix estimation. Rev Econ Stud 61:631–653

  • Pesaran MH, Tosetti E (2010) Large panels with common factors and spatial correlations. J Econom 161:182–202

  • Pesaran MH, Yamagata T (2008) Testing slope homogeneity in large panels. J Econom 142:50–93

  • Pesaran MH, Chudik A (2013) Econometric analysis of high dimensional VARs featuring a dominant unit. Econom Rev 32:592–649

  • Pesaran MH, Smith R, Im KS (1996) Dynamic linear models for heterogeneous panels. In: Mátyás L, Sevestre P (eds) The econometrics of panel data. Springer, Berlin

  • Phillips PCB, Sul D (2003) Dynamic panel estimation and homogeneity testing under cross section dependence. Econom J 6:217–259

  • Smeekes S (2015) Bootstrap sequential tests to determine the stationary units in a panel. J Time Ser Anal (forthcoming)

  • White H (2000) A reality check for data snooping. Econometrica 68:1097–1126

  • White H (2001) Asymptotic theory for econometricians. Academic Press, New York

  • Zhou Z, Shao X (2013) Inference for linear models with dependent errors. J R Stat Soc Ser B 75:323–343

Author information

Correspondence to Joakim Westerlund.

Additional information

The authors would like to thank Badi Baltagi (Editor), Edith Madsen, Hans Christian Kongsted, David Edgerton, and seminar participants at Lund University for many valuable comments and suggestions. Financial support from the Knut and Alice Wallenberg Foundation and the Jan Wallander and Tom Hedelius Foundation is gratefully acknowledged.

Appendix: Proofs

This appendix is concerned with the proofs of Lemma 1 and Theorems 1–3. The convergence is in probability, but we generally do not state this explicitly in order to simplify the notation. The sequence \(\{a_T\}\) is at most of order \(T^{\kappa }\) in probability, denoted \(a_T = O_p(T^{\kappa })\), if \(T^{-\kappa }a_T\) converges in distribution. The sequence is of order smaller than \(T^{\kappa }\) in probability, denoted \(a_T = o_p(T^{\kappa })\), if \(T^{-\kappa } a_T \rightarrow _p 0\). The bootstrap stochastic order symbols, denoted \(O_{p^*}(\cdot )\) and \(o_{p^*}(\cdot )\), are defined analogously.

We begin by defining the following quantities which will be used throughout this Appendix:

$$\begin{aligned} \xi _{i,T} &= \frac{1}{\sqrt{T}} \sum _{t=1}^T z_{i,t} = \frac{1}{\sqrt{T}} \sum _{t=1}^T (x_{i,t} - \overline{x}_{i}) \varepsilon _{i,t}, \\ \tilde{\xi }_{i,T} &= \frac{1}{\sqrt{T}} \sum _{t=1}^T \tilde{z}_{i,t} = \frac{1}{\sqrt{T}} \sum _{t=1}^T (x_{i,t} - \mu _{i}) \varepsilon _{i,t}, \\ \xi _{i,T}^* &= \frac{1}{\sqrt{T}} \sum _{t=1}^T z_{i,t}^* = \frac{1}{\sqrt{T}} \sum _{t=1}^T (x_{i,t} - \overline{x}_{i}) \varepsilon _{i,t}^*, \\ \tilde{\xi }_{i,T}^{*} &= \frac{1}{\sqrt{T}} \sum _{t=1}^T \tilde{z}_{i,t}^{*} = \frac{1}{\sqrt{T}} \sum _{t=1}^T (x_{i,t} - \mu _{i}) \tilde{\varepsilon }_{i,t}^{*}, \end{aligned}$$

where \(\tilde{\varepsilon }_{i,t}^{*} = y_{i,t}^* - \theta _i - x_{i,t}^\prime \beta _i\). Also, let \(z_t\) and \(\tilde{z}_t\) be the \(mN \times 1\) stacked vectors \(z_t = (z_{1,t}^\prime ,\ldots , z_{N,t}^\prime )^\prime \) and \(\tilde{z}_t = (\tilde{z}^{\prime }_{1,t},\ldots , \tilde{z}^{\prime }_{N,t})^\prime \), with similar definitions of \(z_{t}^{*}\) and \(\tilde{z}_{t}^{*}\).
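For concreteness, the following minimal sketch illustrates how bootstrap errors of the kind entering \(\xi _{i,T}^*\) can be generated. It assumes a moving-blocks resampling of the LS residuals in the spirit of Fitzenberger (1997) and Gonçalves (2011), with the same time blocks applied to every unit so that cross-sectional dependence is preserved; the function name and the block-length choice are illustrative only, not part of the paper.

```python
import numpy as np

def mbb_panel_errors(resid, block_length, rng):
    """Moving-blocks bootstrap of panel residuals (illustrative sketch).

    resid : (T, N) array of LS residuals, one column per unit. The same
    randomly drawn, overlapping time blocks are applied to all units,
    which preserves any cross-sectional dependence in the residuals.
    """
    T, _ = resid.shape
    n_blocks = int(np.ceil(T / block_length))
    starts = rng.integers(0, T - block_length + 1, size=n_blocks)
    idx = np.concatenate([np.arange(s, s + block_length) for s in starts])[:T]
    return resid[idx, :]  # (T, N) array playing the role of eps*_{i,t}

rng = np.random.default_rng(0)
resid = rng.standard_normal((200, 10))  # stand-in for the LS residuals
eps_star = mbb_panel_errors(resid, block_length=10, rng=rng)
```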

Lemma 2

Under Assumption ERR, as \(T \rightarrow \infty \),

$$\begin{aligned} \Sigma ^{-1/2} \frac{1}{\sqrt{T}} \sum _{t=1}^T z_t \rightarrow _d N({0},{I}_{mN}). \end{aligned}$$

Proof of Lemma 2

Clearly

$$\begin{aligned} \frac{1}{\sqrt{T}} \sum _{t=1}^T z_{i,t} &= \frac{1}{\sqrt{T}} \sum _{t=1}^T (x_{i,t} - \overline{x}_i) \varepsilon _{i,t}\\ &= \frac{1}{\sqrt{T}} \sum _{t=1}^T (x_{i,t} - \mu _i) \varepsilon _{i,t} - ( \overline{x}_i - \mu _i) \frac{1}{\sqrt{T}} \sum _{t=1}^T \varepsilon _{i,t}\\ &= \frac{1}{\sqrt{T}} \sum _{t=1}^T \tilde{z}_{i,t} - ( \overline{x}_i - \mu _i) \frac{1}{\sqrt{T}} \sum _{t=1}^T \varepsilon _{i,t}. \end{aligned}$$

By Corollary 3.48 in White (2001), \(( \overline{x}_{i} - \mu _{i}) = T^{-1} \sum _{t=1}^T (x_{i,t} - \mu _{i}) = o_p(1)\), and by further use of his Theorem 5.20, \(T^{-1/2} \sum _{t=1}^T \varepsilon _{i,t} = O_p(1)\). It follows that

$$\begin{aligned} \frac{1}{\sqrt{T}} \sum _{t=1}^T z_{i,t} = \frac{1}{\sqrt{T}}\sum _{t=1}^T \tilde{z}_{i,t} + o_p(1). \end{aligned}$$

The required result now follows by applying to \(T^{-1/2} \sum _{t=1}^T \tilde{z}_t\) a central limit theorem for mixing processes (see, e.g., White 2001, Theorem 5.20). \(\square \)

Lemma 2*

Under Assumptions ERR and BL, as \(T \rightarrow \infty \),

$$\begin{aligned} \Sigma ^{-1/2} \frac{1}{\sqrt{T}} \sum _{t=1}^T z_t^* \rightarrow _{d^*} N(0,I_{mN}) \quad \text{ in } \text{ probability }. \end{aligned}$$

Proof of Lemma 2*

We have \(\varepsilon _{i,t}^* = \tilde{\varepsilon }_{i,t}^{*} - (\hat{\theta }_i - \theta _i) - x_{i,t}^\prime ({\hat{\beta }}_i - \beta _i)\). Similarly, \(\hat{\varepsilon }_{i,t} = \varepsilon _{i,t} - (\hat{\theta }_i - \theta _i) - x_{i,t}^\prime ({\hat{\beta }}_i - \beta _i)\). Using these relationships, we can write

$$\begin{aligned} \frac{1}{\sqrt{T}} \sum _{t=1}^T z_{i,t}^* = \frac{1}{\sqrt{T}} \sum _{t=1}^T (x_{i,t} - {\overline{x}}_i) \varepsilon _{i,t}^* = \frac{1}{\sqrt{T}} \sum _{t=1}^T [(x_{i,t} - \overline{x}_i) \tilde{\varepsilon }_{i,t}^{*} - (x_{i,t} - \overline{x}_i) \varepsilon _{i,t}], \end{aligned}$$

where we have used the fact that \(\sum _{t=1}^T (x_{i,t} - \overline{x}_i) \hat{\varepsilon }_{i,t} = 0\) by the first-order conditions for \({\hat{\beta }}_i\). By adding and subtracting appropriately, we have

$$\begin{aligned} \frac{1}{\sqrt{T}} \sum _{t=1}^T z_{i,t}^*&= \frac{1}{\sqrt{T}} \sum _{t=1}^T [(x_{i,t} - \mu _i) \tilde{\varepsilon }_{i,t}^{*} - (x_{i,t} - \mu _i) \varepsilon _{i,t}] \\&\quad - \frac{1}{\sqrt{T}} \sum _{t=1}^T (\overline{x}_i - \mu _i) \tilde{\varepsilon }_{i,t}^{*} + \frac{1}{\sqrt{T}} \sum _{t=1}^T (\overline{x}_i - \mu _i) \varepsilon _{i,t} \\&= \frac{1}{\sqrt{T}} \sum _{t=1}^T [(x_{i,t} - \mu _i) \tilde{\varepsilon }_{i,t}^{*} - (x_{i,t} - \mu _i) \varepsilon _{i,t} ] + o_{p^*}(1), \end{aligned}$$

where the last equality follows from using \((\overline{x}_i - \mu _i) = o_p(1)\) (see White 2001, Corollary 3.48), \(T^{-1/2}\sum _{t=1}^T \tilde{\varepsilon }_{i,t}^{*} = O_{p^*}(1)\) (see Fitzenberger 1997, Theorem 3.1), and \(T^{-1/2} \sum _{t=1}^T \varepsilon _{i,t} = O_p(1)\) (see White 2001, Theorem 5.20). The required result now follows from the same argument as in Fitzenberger (1997), giving

$$\begin{aligned} \Sigma ^{-1/2} \frac{1}{\sqrt{T}} \sum _{t=1}^T (\tilde{z}_t^{*} - \tilde{z}_t ) \rightarrow _{d^*} N({0}, {I}_{mN}) \quad \text{ in } \text{ probability } \end{aligned}$$

as \(T \rightarrow \infty \). \(\square \)

Proof of Theorem 1

We begin by proving the asymptotic distribution under \(H_0\), in which case

$$\begin{aligned} {\hat{\beta }}_i - {\hat{\beta }}_\mathrm{{WFE}} = \frac{1}{\sqrt{T}} { {Q}}_{i,T}^{-1} \xi _{i,T} - \frac{1}{\sqrt{T}} \left( \sum _{i=1}^N \hat{\sigma }_i^{-2} { {Q}}_{i,T} \right) ^{-1} \sum _{i=1}^N \hat{\sigma }_i^{-2} \xi _{i,T}. \end{aligned}$$

This implies

$$\begin{aligned} S = \sum _{i=1}^N \frac{ \xi _{i,T}^\prime {{Q}}_{i,T}^{-1} \xi _{i,T}}{\hat{\sigma }_i^{2}} - \left( \sum _{i=1}^N \hat{\sigma }_i^{-2} \xi _{i,T} \right) ^\prime \left( \sum _{i=1}^N \hat{\sigma }_i^{-2} {{Q}}_{i,T} \right) ^{-1} \left( \sum _{i=1}^N \hat{\sigma }_i^{-2} \xi _{i,T} \right) , \end{aligned}$$
(2)

or, in stacked form,

$$\begin{aligned} S = \xi _{T}^\prime \big ( \hat{\Sigma }_T {Q}_{T} \big )^{-1} \xi _{T} - \big ( {D} \hat{\Sigma }_T^{-1} \xi _{T} \big )^\prime \big ( {D} \hat{\Sigma }_T^{-1} {Q}_{T} {D}^\prime \big )^{-1} \big ({D} \hat{\Sigma }_T^{-1} \xi _{T}\big ) , \end{aligned}$$
(3)

where \(\xi _T = \big ( \xi _{1,T}^\prime ,\ldots ,\xi ^\prime _{N,T}\big )^\prime \), \({D} = {\tau }_N^\prime \otimes {I}_k\), \({Q}_{T} = \mathrm {diag}({Q}_{1,T} ,\ldots , {Q}_{N,T})\), and \(\hat{\Sigma }_T\) is a diagonal \(Nm\times Nm\) matrix whose diagonal is given by \((\hat{\sigma }_1^2,\ldots , \hat{\sigma }_N^2)^\prime \otimes {\tau }_k\).
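As a purely numerical sanity check on the stacking, the short script below (our own illustration; all inputs are randomly generated stand-ins, and we write m for the number of regressors) verifies that the unit-by-unit form (2) and the stacked form (3) of S coincide:

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 4, 2                                      # illustrative sizes
xi = [rng.standard_normal(m) for _ in range(N)]  # stand-ins for xi_{i,T}
s2 = rng.uniform(0.5, 2.0, size=N)               # sigma_i^2 estimates
Q = []
for _ in range(N):
    a = rng.standard_normal((m, m))
    Q.append(a @ a.T + np.eye(m))                # positive definite Q_{i,T}

# Unit-by-unit form of S, Eq. (2)
S2 = sum(xi[i] @ np.linalg.solve(Q[i], xi[i]) / s2[i] for i in range(N))
v = sum(xi[i] / s2[i] for i in range(N))
S2 -= v @ np.linalg.solve(sum(Q[i] / s2[i] for i in range(N)), v)

# Stacked form of S, Eq. (3)
xiT = np.concatenate(xi)
QT = np.zeros((N * m, N * m))                    # block-diagonal Q_T
for i in range(N):
    QT[i * m:(i + 1) * m, i * m:(i + 1) * m] = Q[i]
SigT = np.kron(np.diag(s2), np.eye(m))           # hat{Sigma}_T
D = np.kron(np.ones((1, N)), np.eye(m))          # tau_N' kron I_m
u = D @ np.linalg.solve(SigT, xiT)
S3 = xiT @ np.linalg.solve(SigT @ QT, xiT) \
     - u @ np.linalg.solve(D @ np.linalg.inv(SigT) @ QT @ D.T, u)
print(np.isclose(S2, S3))                        # True
```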

By the properties of the LS residuals and Corollary 3.48 in White (2001), we have that \(\hat{\sigma }_i^2 - \sigma _i^2 = o_p(1)\), where \(\sigma _{i}^2 = E\big (\varepsilon _{i,t}^2\big )\). Note also that, by Lemma 2 and Assumption REGR, \(\xi _T \rightarrow _d \xi \) as \(T \rightarrow \infty \), where \(\xi \sim N({0}, \Sigma )\), and \(Q_{T} - Q =o_p(1)\), where \({Q} = \mathrm {diag}({Q}_{1} ,\ldots , {Q}_{N})\). It follows that

$$\begin{aligned} S&\rightarrow _d {Z}_1^\prime Z_1 - \big ( {D} \Sigma _0^{-1/2} Q^{1/2} Z_1 \big )^\prime \big [ {D}\Sigma _0^{-1/2} Q^{1/2} \big ({D} \Sigma _0^{-1/2} Q^{1/2} \big )^\prime \big ]^{-1}\big ({D} \Sigma _0^{-1/2} Q^{1/2} Z_1 \big )\nonumber \\&=_d Z_1^\prime \big ( {I}_{kN} -{H}^\prime \big ({H}{H}^\prime \big )^{-1}{H}\big ) Z_1 = Z_1^\prime A Z_1, \end{aligned}$$
(4)

where \(=_d\) signifies equality in distribution, \(Z_1 =\Sigma _0^{-1/2} Q^{-1/2} \xi \), \(Z_1 \sim N(0, B)\) with \({B} = \Sigma _0^{-1/2} Q^{-1/2}\Sigma \Sigma _0^{-1/2} Q^{-1/2}\), \(\Sigma _0\) is \(\hat{\Sigma }_T\) with \(\hat{\sigma }_i^2\) replaced by \(\sigma _i^2\), \(H = {D} \Sigma _0^{-1/2} Q^{1/2}\) and \(A = {I}_{kN} -{H}^\prime ({H}{H}^\prime )^{-1}{H}\). Since A is symmetric,

$$\begin{aligned} Z_1^\prime {A} Z_1 =_d \sum _{j=1}^{kN} \lambda _j U_j^2, \end{aligned}$$

where \(\lambda _1, \ldots , \lambda _{kN}\) are the eigenvalues of \({B}^{1/2} {A} {B}^{1/2}\) (see Mathai and Provost 1992). Noting that A and B are (non-stochastic) positive semidefinite matrices, we have that \(\lambda _j \ge 0\) for all j. This establishes the required result under \(H_0\).
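The limiting law in (4) is thus a weighted sum of independent \(\chi ^2(1)\) variables whose weights depend on unknown population quantities, so it is not pivotal. Purely as an illustration of the representation in Mathai and Provost (1992), the sketch below simulates quantiles of \(Z_1^\prime A Z_1\) from given (here, hypothetical) A and B:

```python
import numpy as np

def weighted_chi2_quantile(A, B, prob, n_draws=100_000, seed=0):
    """Simulate quantiles of Z'AZ with Z ~ N(0, B), using the
    representation Z'AZ = sum_j lambda_j U_j^2, where the lambda_j are
    the eigenvalues of B^{1/2} A B^{1/2} and the U_j are independent
    standard normals (Mathai and Provost 1992)."""
    w, V = np.linalg.eigh(B)                       # B symmetric psd
    B_half = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
    lam = np.linalg.eigvalsh(B_half @ A @ B_half)  # nonnegative weights
    U2 = np.random.default_rng(seed).standard_normal((n_draws, lam.size)) ** 2
    return np.quantile(U2 @ lam, prob)
```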

To show consistency, we consider an alternative hypothesis of the form \(H_{1} : \beta _i = \beta + \delta _i\), where \(\delta _i\) are \(m \times 1\) vectors of fixed constants such that \(\Vert \delta _i \Vert \le C < \infty \) for all i and \(\delta _i \ne \delta _h\) for some pair \(i \ne h\). Under this alternative,

$$\begin{aligned} \sqrt{T} ( {\hat{\beta }}_i - {\hat{\beta }}_\mathrm{{WFE}} ) &= \sqrt{T} \delta _i + Q_{i,T}^{-1} \xi _{i,T} - \sqrt{T} \left( \sum _{i=1}^N \frac{ Q_{i,T} }{\hat{\sigma }_i^2} \right) ^{-1} \sum _{i=1}^N \frac{ Q_{i,T} \delta _i }{\hat{\sigma }_i^2} - \left( \sum _{i=1}^N \frac{ Q_{i,T} }{\hat{\sigma }_i^2} \right) ^{-1} \sum _{i=1}^N \frac{ \xi _{i,T} }{\hat{\sigma }_i^2} \\ &= \sqrt{T} \delta _i + Q_{i,T}^{-1} \xi _{i,T} - \sqrt{T} c_T - \left( \sum _{i=1}^N \frac{ Q_{i,T} }{\hat{\sigma }_i^2} \right) ^{-1} \sum _{i=1}^N \frac{ \xi _{i,T} }{\hat{\sigma }_i^2}, \end{aligned}$$

with an obvious definition of \(c_T\). The second and fourth terms on the right are \(O_p(1)\), suggesting that

$$\begin{aligned} \sqrt{T}( {\hat{\beta }}_i - {\hat{\beta }}_\mathrm{{WFE}} ) = \sqrt{T} ( \delta _i - c_T) + O_p(1). \end{aligned}$$

Since \(\delta _i \ne \delta _h\) for some \(i \ne h\), it must hold that \(\delta _i - c_T \ne 0\) for at least one i. Since \({Q}_i/\sigma _i^2\) is positive definite, this means that \(T( {\hat{\beta }}_i - {\hat{\beta }}_\mathrm{{WFE}})^\prime Q_{i,T} ( {\hat{\beta }}_i - {\hat{\beta }}_\mathrm{{WFE}} )/\hat{\sigma }_i^2 \rightarrow \infty \) as \(T \rightarrow \infty \), and so the proof is complete. \(\square \)

Proof of Theorem 2

Define \(\hat{\Sigma }_T^*\) as the diagonal matrix whose diagonal is given by \((\hat{\sigma }_1^{*2},\dots , \hat{\sigma }_N^{*2})^\prime \otimes {\tau }_k\), and assume for the moment that there exists a diagonal matrix \(\Sigma _0^*\) satisfying \(\hat{\Sigma }_T^* - \Sigma _0^* = o_{p^*}(1)\). Under \(H_0\),

$$\begin{aligned} {\hat{\beta }}^*_i - {\hat{\beta }}^*_\mathrm{{WFE}} = \frac{1}{\sqrt{T}} {{Q}}_{i,T}^{-1} \xi ^*_{i,T} - \frac{1}{\sqrt{T}}\left( \sum _{i=1}^N (\hat{\sigma }_i^{*2})^{-1} { {Q}}_{i,T} \right) ^{-1} \sum _{i=1}^N (\hat{\sigma }_i^{*2})^{-1} \xi ^*_{i,T}, \end{aligned}$$

where \(\hat{\sigma }_i^{*2}\) is the usual LS estimate of the variance from the bootstrap procedure. This result, together with the same arguments used in the proof of Theorem 1, implies that

$$\begin{aligned} S^* \rightarrow _{d^*} Z _1^{*\prime } ( {I}_{kN} - {H}^{*\prime } ( {H}^* {H}^{*\prime })^{-1}{H}^*) Z_1^* =_{d^*} Z_1^{*\prime } {A}^* Z_1^* \quad \hbox {in probability}, \end{aligned}$$
(5)

where \({H}^* = {D}(\hat{\Sigma }_T^*)^{-1/2} Q^{1/2}\) and \(Z_1^* \sim N({0}, {B}^*)\) with \({B}^* = (\hat{\Sigma }_T^*)^{-1/2} Q^{-1/2} \Sigma (\hat{\Sigma }_T^*)^{-1/2} Q^{-1/2}\). Hence, as in the proof of Theorem 1, if we can show that \(\hat{\Sigma }_T^* - \Sigma _0 =o_{p^*}(1)\), then

$$\begin{aligned} Z_1^{*\prime } {A}^* Z_1^* =_{d^*} \sum _{j=1}^{kN} \lambda _j U_j^2 \quad \text{ in } \text{ probability }, \end{aligned}$$
(6)

and the first result of the theorem follows. We now verify that \(\hat{\Sigma }_T^* - \Sigma _0 =o_{p^*}(1)\). Under \(H_0\),

$$\begin{aligned} \frac{T-k-1}{T} \hat{\sigma }_i^{*2} &= \frac{1}{T} ({y}_i^* - x_{i}{\hat{\beta }}_i^*)^\prime {M}_{\tau } ({y}_i^* - x_{i} {\hat{\beta }}_i^*) = \frac{1}{T} ({M}_{\tau } \varepsilon ^*_i)^\prime {M}_{x_i} ({M}_{\tau } \varepsilon _i^*) \\ &= \frac{1}{T} \sum _{t=1}^T ( \varepsilon _{i,t}^* - \overline{\varepsilon }_i^*)^2 + o_{p^*}(1) = \frac{1}{T} \sum _{t=1}^T (\varepsilon _{i,t}^{*})^2 + o_{p^*}(1), \end{aligned}$$

where \({M}_{x_i} = {I}_T - x_i (x_i^\prime {M}_{\tau } x_i)^{-1}x_i^\prime \). The second equality follows from straightforward algebra under the null hypothesis. We can use an argument similar to Lemma 2 to show that \(T^{-1/2} x_i^\prime {M}_{\tau } \varepsilon ^*_i = O_{p^*}(1)\), and the third equality follows. Finally, by Theorem 3.1 in Fitzenberger (1997), since \(\varepsilon _{i,t}^* = \tilde{\varepsilon }_{i,t}^{*} + \hat{\varepsilon }_{i,t} - \varepsilon _{i,t}\) and \(T^{-1} \sum _{t=1}^T \hat{\varepsilon }_{i,t} = 0\), we have \(\overline{\varepsilon }_i^* = T^{-1} \sum _{t=1}^T (\tilde{\varepsilon }_{i,t}^{*} - \varepsilon _{i,t}) = O_{p^*}(T^{-1/2})\), and so the fourth equality follows. It remains to consider \(T^{-1} \sum _{t=1}^T (\varepsilon _{i,t}^{*})^2\). By MBB–Lemma A.3 in Fitzenberger (1997), \(T^{-1} \sum _{t=1}^T [(\tilde{\varepsilon }_{i,t}^{*})^2 - \varepsilon _{i,t}^2] = o_{p^*}(1)\). We now show that \(T^{-1} \sum _{t=1}^T [(\varepsilon _{i,t}^{*})^2 - (\tilde{\varepsilon }_{i,t}^{*})^2] = o_{p^*}(1)\). For this purpose, it is convenient to define \(v_{i,t} = w_{i,t} (w_i^\prime w_i)^{-1} \sum _{s=1}^T w_{i,s}^\prime \varepsilon _{i,s}\), where \(w_i = (\tau _T, x_i)\). Also, by the properties of LS residuals, \(\hat{\varepsilon }_{i,t} = \varepsilon _{i,t} - v_{i,t}\). Hence,

$$\begin{aligned} \frac{1}{T} \sum _{t=1}^T \big (\varepsilon _{i,t}^{*}\big )^2 &= \frac{1}{T} \sum _{t=1}^T \big (\tilde{\varepsilon }_{i,t}^{*}\big )^2 + \frac{2}{T} \sum _{t=1}^T \tilde{\varepsilon }_{i,t}^{*}(\hat{\varepsilon }_{i,t} - \varepsilon _{i,t}) + \frac{1}{T} \sum _{t=1}^T (\hat{\varepsilon }_{i,t} - \varepsilon _{i,t})^2 \\ &= \frac{1}{T} \sum _{t=1}^T \big (\tilde{\varepsilon }_{i,t}^{*}\big )^2 - \frac{2}{T} \sum _{t=1}^T \tilde{\varepsilon }_{i,t}^{*} v_{i,t} + o_p(1) \\ &= \frac{1}{T} \sum _{t=1}^T \big (\tilde{\varepsilon }_{i,t}^{*}\big )^2 + o_{p^*}(1) + o_p(1). \end{aligned}$$

The second equality follows from the fact that \(T^{-1} \sum _{t=1}^T \hat{\varepsilon }_{i,t}^2\rightarrow _p\sigma _i^2\). As for the third equality, by Corollary 3.48 in White (2001), \(T^{-1} \sum _{s=1}^T w_{i,s}^\prime \varepsilon _{i,s} = o_p(1)\), by Assumption REGR, \((T^{-1}w_i^\prime w_i)^{-1}\) converges to a nonzero deterministic matrix, and by further use of MBB–Lemma A.3 in Fitzenberger (1997), \(T^{-1} \sum _{t=1}^T (\tilde{\varepsilon }_{i,t}^{*}w_{i,t} - \varepsilon _{i,t}w_{i,t}) =o_{p^*}(1)\). Finally, by Corollary 3.48 in White (2001), \(T^{-1} \sum _{t=1}^T \varepsilon _{i,t}w_{i,t} =o_p(1)\). This establishes that \(\hat{\Sigma }_T^* - \Sigma _0 =o_{p^*}(1)\), and so the proof of the first statement of the theorem is complete.

The second part of the theorem follows from the fact that in the bootstrap the data are generated under the null of equal slope coefficients. Consequently, the result in (6) also holds under the alternative hypothesis. \(\square \)
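In practice, Theorem 2 licenses a generic bootstrap testing loop. The following minimal sketch is our own illustration, not the paper's algorithm: compute_S stands for whatever routine evaluates the statistic, the block length and array shapes are assumptions, and y* is generated under the null from pooled estimates, as the proof requires.

```python
import numpy as np

def bootstrap_pvalue(S_obs, X, theta, beta_pooled, resid, compute_S,
                     block_length=10, n_boot=499, seed=0):
    """Bootstrap p-value for the homogeneity statistic S (sketch only).

    X : (T, N, m) regressors; theta : (N,) intercepts; beta_pooled : (m,)
    pooled slope; resid : (T, N) LS residuals; compute_S : callable that
    maps a (T, N) bootstrap panel y* (with the same X) to S*.  The data
    are generated under the null, y*_{it} = theta_i + x_{it}'beta + eps*,
    with eps* drawn by moving-blocks resampling of resid (same blocks
    for all units).  All names here are our own, not the paper's.
    """
    T, N = resid.shape
    rng = np.random.default_rng(seed)
    n_blocks = int(np.ceil(T / block_length))
    exceed = 0
    for _ in range(n_boot):
        starts = rng.integers(0, T - block_length + 1, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block_length)
                              for s in starts])[:T]
        y_star = theta + np.einsum('tnm,m->tn', X, beta_pooled) + resid[idx, :]
        exceed += compute_S(y_star) >= S_obs
    return (1 + exceed) / (1 + n_boot)
```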

Proof of Lemma 1

Consider the following alternative hypothesis \(H_{1} : \beta _i = \beta _b + \delta _i\), where \(\delta _i\) is an \(m \times 1\) vector of fixed constants such that \(\Vert \delta _i \Vert \le C < \infty \). By the properties of the LS estimator, \({\hat{\beta }}_i - \beta _i = O_p(T^{-1/2})\). It follows that \(\hat{\beta }_i - \hat{\beta }_b - \delta _i = o_p(1)\). Also, \(( \hat{\sigma }_i^2 Q_{i,T}^{-1} + \hat{\sigma }_b^2 Q_{b,T}^{-1} )^{-1}\) converges to a positive definite matrix by Assumption REGR. Consequently, provided that \(i \notin \mathcal {Z}\), \(W_i \rightarrow \infty \) as \(T \rightarrow \infty \), which in turn implies

$$\begin{aligned} W_{(1)}, \ldots , W_{(q)} \rightarrow \infty . \end{aligned}$$

Consider the second result of the lemma. We only need to consider the order statistics for which \(i \in \mathcal {Z}\), as \(W_{(k+1)} \rightarrow \infty \) if \(k + 1 \le q\) by the first part. Hence,

$$\begin{aligned} W_{(k+1)} \rightarrow _p W_{1:\mathcal {Z}}. \end{aligned}$$

The second result then follows from application of the continuous mapping theorem (see, e.g., White 2000, Proposition 2.2).

In order to verify the third result, we proceed in two steps. First, we establish the exact expression of \(G_i\), the limiting distribution of \(W_i\) for \(i \in \mathcal {Z}\). Second, we show that the bootstrap analog, \(W_i^*\), converges to the same distribution. Define the \(m \times Nm\) matrix \({P}_i = (0 , \ldots , 0, {I}_{m} , 0, \ldots , 0, -{I}_{m}, 0, \ldots ,0)\), where \({I}_m\) and \(-{I}_m\) are in positions i and b, respectively. In addition, define \({\hat{\beta }} = ( {\hat{\beta }}_1, \ldots , {\hat{\beta }}_N )^\prime \). In this notation, since \(\beta _i = \beta _b\) for all \(i \in \mathcal {Z}\), \(W_i\) can be written as

$$\begin{aligned} W_i = T (P_i {\hat{\beta }})^\prime \big ( {P}_i \hat{\Sigma }_T Q_T^{-1} {P}_i^\prime \big )^{-1}({P}_i {\hat{\beta }}) = \tilde{{Z}}_2^\prime \big ( {P}_i \hat{\Sigma }_T Q_T^{-1} {P}_i^\prime \big )^{-1} \tilde{{Z}}_2, \end{aligned}$$

where \(\tilde{{Z}}_2 = \sqrt{T} \, {P}_i {\hat{\beta }} = {P}_i Q_T^{-1} \xi _T\). By Lemma 2 and Assumption REGR,

$$\begin{aligned} \tilde{{Z}}_2 \rightarrow _d N \big ({0}, {P}_i Q^{-1} \Sigma Q^{-1} {P}_i^\prime \big ) \end{aligned}$$

as \(T \rightarrow \infty \), and \(\hat{\Sigma }_T Q_T^{-1} - \Sigma _0 Q^{-1} =o_p(1)\).

To analyze the bootstrap test, \(W_i^*\), define \({\hat{\beta }}^* = ( {\hat{\beta }}_1^*, \ldots , {\hat{\beta }}_N^* )^\prime \). It follows that we can write \(W_i^*\) as

$$\begin{aligned} W_i^* = T ({P}_i {\hat{\beta }}^*)^\prime \big ( {P}_i \hat{\Sigma }_T^*Q_T^{-1} {P}_i^\prime \big )^{-1} ({P}_i {\hat{\beta }}^* ). \end{aligned}$$

Note that \({\hat{\beta }}^*\) is estimated using \(y_{i,t}^*\), which is generated under the hypothesis that \({\beta }_i = {\beta }_b\) for all i. This implies that

$$\begin{aligned} W_i^* = \tilde{ {Z}}_2^{*\prime } \big ( {P}_i \hat{\Sigma }_T^*Q_T^{-1} {P}_i^\prime \big )^{-1} \tilde{ {Z}}_2^{*}, \end{aligned}$$

where \(\tilde{{Z}}_2^{*} = \sqrt{T} {P}_i {\hat{\beta }}^* = {P}_i Q_T^{-1} \xi _T^*\). By Lemma 2* and Assumption REGR, we have that

$$\begin{aligned} \tilde{{Z}}_2^{*} \rightarrow _{d^*} N \big ({0}, {P}_i Q^{-1} \Sigma Q^{-1} {P}_i^\prime \big ) \quad \text{ in } \text{ probability }. \end{aligned}$$

In addition, from the proof of Theorem 2, \(\hat{\Sigma }_T^* - \Sigma _0 = o_{p^*}(1)\), which implies that \(\hat{\Sigma }_T^* Q_T^{-1} - \Sigma _0 Q^{-1} = o_{p^*}(1)\). This completes the proof of the third result.

The fourth result follows from the third and the continuous mapping theorem (White 2000, Proposition 2.2). \(\square \)
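To fix ideas, note that \({P}_i \hat{\Sigma }_T Q_T^{-1} {P}_i^\prime = \hat{\sigma }_i^2 Q_{i,T}^{-1} + \hat{\sigma }_b^2 Q_{b,T}^{-1}\), so \(W_i\) is a pairwise Wald comparison of unit i with the base unit b. A minimal sketch of its computation follows (input shapes and names are our own; the paper's exact definition of \(W_i\) appears in the main text):

```python
import numpy as np

def pairwise_W(beta_hat, Q, s2, b, T):
    """W_i = T (b_i - b_b)'(s_i^2 Q_i^{-1} + s_b^2 Q_b^{-1})^{-1}(b_i - b_b).

    beta_hat : (N, m) unit-wise LS slopes; Q : (N, m, m) matrices Q_{i,T};
    s2 : (N,) error-variance estimates; b : index of the base unit.
    Returns {i: W_i} for all i != b (illustrative sketch only)."""
    W = {}
    for i in range(beta_hat.shape[0]):
        if i == b:
            continue
        d = beta_hat[i] - beta_hat[b]
        V = s2[i] * np.linalg.inv(Q[i]) + s2[b] * np.linalg.inv(Q[b])
        W[i] = T * d @ np.linalg.solve(V, d)
    return W
```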

Proof of Theorem 3

The first result follows from the first and last results of Lemma 1. To show the second result, note that, by the first part of Lemma 1, \(W_i \rightarrow _p \infty \) if \(i \notin \mathcal {Z}\). Therefore, if \(k = q\), \(\mathcal {D}_k^c\) equals \(\mathcal {Z}\) in the limit with probability one. It follows from the second and fourth parts of Lemma 1 that the probability of not rejecting the null, \(H_0(k)\), is equal to \(1 - \alpha \) (the confidence level in Algorithm SEQBOOT). Finally, as for the third result of Theorem 3, note that

$$\begin{aligned} \sum _{k=0}^{N-1} \lim _{T \rightarrow \infty } P(\hat{q} = k) = \sum _{k=0}^{q} \lim _{T \rightarrow \infty } P(\hat{q} = k) + \sum _{k=q + 1}^{N-1} \lim _{T \rightarrow \infty } P(\hat{q} = k) = 1. \end{aligned}$$

From the first two parts of the theorem, \(\sum _{k=0}^{q} \lim _{T \rightarrow \infty } P(\hat{q} = k) = 1 - \alpha \). \(\square \)
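Algorithm SEQBOOT itself is stated in the main text. Purely as a stylized illustration of the sequential logic used in this proof (the stopping rule, critical values, and all names below are our own reading, not the paper's algorithm), such a loop might look as follows:

```python
def seqboot(W, boot_crit, alpha=0.05):
    """Stylized sequential poolability search (illustration only).

    W : dict mapping unit index -> statistic W_i; boot_crit : callable
    returning the bootstrap critical value for the current candidate
    set at level alpha.  Units are removed one at a time, largest
    statistic first, until the null H_0(k) is no longer rejected."""
    candidates = set(W)
    while candidates:
        worst = max(candidates, key=lambda i: W[i])
        if W[worst] <= boot_crit(candidates, alpha):
            return candidates            # first non-rejection: stop
        candidates.remove(worst)         # rejection: drop the worst unit
    return candidates
```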

Cite this article

Blomquist, J., Westerlund, J. Panel bootstrap tests of slope homogeneity. Empir Econ 50, 1359–1381 (2016). https://doi.org/10.1007/s00181-015-0978-z
