Bootstrapping the Expected Shortfall

Theoretical Economics Letters
Vol.08 No.04(2018), Article ID:82856,14 pages
10.4236/tel.2018.84046


Shuxia Sun1, Fuxia Cheng2

1Wright State University, Dayton, OH, USA

2Illinois State University, Normal, IL, USA

Received: December 22, 2017; Accepted: March 4, 2018; Published: March 7, 2018

ABSTRACT

The expected shortfall is a popular risk measure in financial risk management. It is defined as the conditional expected loss given that the loss is greater than a given high quantile. We derive the asymptotic properties of the blocking bootstrap estimators for the expected shortfall of a stationary process under strong mixing conditions.

Keywords:

High Quantile, Risk Measure, Moving Block Bootstrap, Nonparametric Estimation, Strong Mixing Sample Quantile

1. Introduction

The expected shortfall is a risk measure that is widely used among actuaries and insurance companies. The expected shortfall on a portfolio of financial assets is the conditional expected loss given that the loss exceeds a high quantile known as the value at risk (VaR). While the expected shortfall and VaR are both popular risk measures, the expected shortfall is becoming increasingly important because of its better properties. For example, the expected shortfall satisfies the sub-additivity property, whereas the VaR does not. Sub-additivity, one of the four requirements of coherent risk measures, means in the context of risk management that the total risk of a portfolio should not exceed the sum of the individual risks. See [1] and [2] for details.

Let ${\left\{{X}_{t}\right\}}_{t\in ℤ}$ be a sequence of stationary random variables with common distribution function F that describes the negative profit and loss distribution, where $ℤ\equiv \left\{0,±1,±2,\cdots \right\}$ denotes the set of all integers. For a given positive value p close to zero, the $\left(1-p\right)$ confidence level VaR, denoted by ${\nu }_{p}$ , is defined as the $\left(1-p\right)$ -th high quantile of the loss distribution F. That is

${\nu }_{p}={F}^{-1}\left(1-p\right)=\mathrm{inf}\left\{t:F\left(t\right)\ge 1-p\right\}\text{ }.$ (1.1)

VaR is defined as the loss of a financial position over a time horizon that would be exceeded with small probability p. See [3] [4] [5] for more discussions on VaR. The expected shortfall associated with a confidence level $\left(1-p\right)$ on a portfolio, denoted by ${\mu }_{p}$ , is a conditional expectation defined as follows

${\mu }_{p}=E\left({X}_{t}|{X}_{t}>{\nu }_{p}\right).$

As can be seen from the definitions above, the expected shortfall captures the potential size of the loss exceeding VaR, whereas VaR does not. Moreover, as a risk measure, in addition to being coherent, the expected shortfall gives weight to all quantiles greater than the high quantile ${\nu }_{p}$ , whereas the VaR puts all its weight on the single quantile ${\nu }_{p}$ and says nothing about the size of the loss beyond it. Thus, as a risk measure, the expected shortfall is more informative and produces better incentives for traders than VaR.

In the past, estimation of the expected shortfall was developed mainly for independent and identically distributed (IID) observations, based on extreme value theory in a parametric or semi-parametric framework. We refer the reader to [6] and [7] for details. However, many empirical studies have shown that financial data are often weakly dependent with heavy tails. It is also challenging to build a parametric model that captures the tail behavior needed for calculating risk measures, since data are generally sparse in the tail of the loss distribution. As a result, nonparametric methods can play an important role in diverse problems in risk management.

For the weakly dependent case, under suitable mixing conditions, [8] first proposed a nonparametric kernel estimator for the expected shortfall in the context of portfolio allocation and derived its asymptotic properties. Reference [9] compared the performance of the sample estimator and the kernel smoothed estimator of the expected shortfall and showed that extra smoothing does not result in a more accurate approximation for the expected shortfall.

Although properties of expected shortfall are well studied in the literature, no work seems to be available on the properties of bootstrap approximations for the expected shortfall. Our main contribution in this research is to provide a theoretical foundation to the practical applications of the moving block bootstrap for the expected shortfall.

In this paper, we investigate the asymptotic properties of nonparametric block bootstrap methods for estimating the sampling distribution and the asymptotic variance of the expected shortfall in a weakly dependent setting.

The rest of the paper is organized as follows. In Section 2, we introduce some background material, including a brief description of the moving block bootstrap (MBB) method. In Section 3, we state the main results of this paper. Technical details and proofs will be presented in Section 4.

2. Background

We first define the sample estimator of expected shortfall. For a sample ${X}_{1},\cdots ,{X}_{n},n\ge 1$ , of stationary random variables with common distribution function F, let ${F}_{n}$ denote the corresponding empirical distribution function, putting mass 1/n on each ${X}_{i}$ , i.e.,

${F}_{n}\left(x\right)={n}^{-1}\sum _{i=1}^{n}\text{ }I\left({X}_{i}\le x\right),\text{ }x\in ℝ,$ (2.1)

where $I\left(\cdot \right)$ denotes the indicator function which is defined as follows

$I\left(S\right)\equiv \left\{\begin{array}{l}1,\text{ }\text{ }\text{if}\text{\hspace{0.17em}}\text{the}\text{\hspace{0.17em}}\text{statement}\text{\hspace{0.17em}}S\text{\hspace{0.17em}}\text{is}\text{\hspace{0.17em}}\text{true}\text{ }\\ 0,\text{ }\text{ }\text{if}\text{\hspace{0.17em}}\text{the}\text{\hspace{0.17em}}\text{statement}\text{\hspace{0.17em}}S\text{\hspace{0.17em}}\text{is}\text{\hspace{0.17em}}\text{false}\text{ }\end{array}$

Then, ${\stackrel{^}{\nu }}_{n}={F}_{n}^{-1}\left(1-p\right)$ is the sample estimator of VaR at confidence level $\left(1-p\right)$ . The sample estimator of the expected shortfall, denoted by ${\stackrel{^}{\mu }}_{n}$ , can be defined as

${\stackrel{^}{\mu }}_{n}=\frac{{\sum }_{i=1}^{n}\text{ }{X}_{i}I\left({X}_{i}\ge {\stackrel{^}{\nu }}_{n}\right)}{{\sum }_{i=1}^{n}\text{ }I\left({X}_{i}\ge {\stackrel{^}{\nu }}_{n}\right)}=\frac{1}{⌊np⌋+1}\sum _{i=1}^{n}\text{ }{X}_{i}I\left({X}_{i}\ge {\stackrel{^}{\nu }}_{n}\right),$

where $⌊x⌋$ denotes the largest integer not exceeding x for $x\in \text{R}$ , i.e.,

$⌊x⌋=max\left\{n\in ℤ:n\le x\right\}$
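For illustration, ${\stackrel{^}{\nu }}_{n}$ and ${\stackrel{^}{\mu }}_{n}$ can be computed directly from the order statistics of the sample. The following Python sketch is our own illustration (the function names are not part of the paper):

```python
import numpy as np

def sample_var(x, p):
    """Sample VaR at confidence level 1-p:
    F_n^{-1}(1-p) = inf{t : F_n(t) >= 1-p}."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    k = int(np.ceil(n * (1 - p)))  # smallest k with F_n(xs[k-1]) >= 1-p
    return xs[k - 1]

def sample_es(x, p):
    """Sample expected shortfall: sum of the losses at or above the
    sample VaR, normalized by floor(n*p) + 1 as in the paper."""
    x = np.asarray(x, dtype=float)
    nu_hat = sample_var(x, p)
    return np.sum(x * (x >= nu_hat)) / (np.floor(len(x) * p) + 1)
```

For the sample $1,2,\cdots ,100$ with $p=0.05$ , for example, the sample VaR is 95 and the sample expected shortfall is $\left(95+\cdots +100\right)/6=97.5$ ; absent ties, the normalizing constant $⌊np⌋+1$ equals the number of observations at or above the sample VaR.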

Then, the normalized expected shortfall, denoted by ${Z}_{n}$ , is defined as below:

${Z}_{n}\equiv \sqrt{n}p\left({\stackrel{^}{\mu }}_{n}-{\mu }_{p}\right)\text{ }.$

Our goal in this paper is to investigate the asymptotic properties of bootstrap approximations to the distribution and the variance of the normalized expected shortfall, ${Z}_{n}$ .

Reference [10] introduced the bootstrap method to the statistical world. The bootstrap is a flexible method that can be applied to a variety of problems. It can be used to approximate quantities of interest such as the distribution, bias, variance, and significance level in a nonparametric framework.

However, Efron’s bootstrap method fails when the data are not independent [11] [12]. Block bootstrap methods for dependent data have been put forward by several authors, notably by [13] - [20]. See [21] and references therein for a detailed account of results on bootstrap methods (for smooth functions of the data) in the dependent case.

It is worth mentioning that although there has been a considerable amount of work on properties of block bootstrap methods for smooth functionals of weakly dependent data, few theoretical results seem to be available on properties of the moving block bootstrap (MBB) and other block bootstrap methods in the case of nonsmooth functionals. Reference [22] first showed that blocking bootstrap methods provide a valid approximation to the distribution and the asymptotic variance of a non-smooth function of the data, the normalized sample quantile. Recently, owing to their applications in financial time series analysis, quantile based methods (for nonsmooth functions of data) have become increasingly attractive, for example in expected shortfall estimation [8] , quantile hedging [23] , and risk management [24] .

In this paper, we investigate the blocking bootstrap approximation to the expected shortfall based on time series data. For definiteness and conciseness, we shall exclusively concentrate on the MBB method that was independently proposed by [15] and [16] .

For completeness, we briefly describe the MBB method for estimating the sampling distributions of statistics based on weakly dependent observations. Let ${X}_{1},{X}_{2},\cdots ,{X}_{n}$ be a sample from the stationary process ${\left\{{X}_{i}\right\}}_{i\in ℤ}$ . For a positive integer $\mathcal{l}$ between 1 and n, we define the overlapping blocks of size $\mathcal{l}$ as:

${B}_{i}=\left({X}_{i},\cdots ,{X}_{i+\mathcal{l}-1}\right),\text{ }i=1,\cdots ,N=n-\mathcal{l}+1.$

Let ${B}_{1}^{*},\cdots ,{B}_{b}^{*}$ be a random sample of blocks from $\left\{{B}_{1},\cdots ,{B}_{N}\right\}$ , where $b=⌊n/\mathcal{l}⌋$ , that is, ${B}_{1}^{*},\cdots ,{B}_{b}^{*}$ are independently and identically distributed as $\text{Uniform}\left\{{B}_{1},\cdots ,{B}_{N}\right\}$ .

The observations in the resampled block ${B}_{i}^{*}$ are denoted by ${X}_{\left(i-1\right)\mathcal{l}+1}^{*},\cdots ,{X}_{i\mathcal{l}}^{*},1\le i\le b$ . The MBB sample consists of ${X}_{1}^{*},\cdots ,{X}_{\mathcal{l}}^{*},\cdots ,{X}_{{n}_{1}}^{*}$ , where ${n}_{1}=b\mathcal{l}$ . Let
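As a concrete sketch of the resampling scheme just described, the following Python function (our own illustration) draws one MBB sample of size ${n}_{1}=b\mathcal{l}$ :

```python
import numpy as np

def mbb_sample(x, block_len, rng):
    """Draw one moving block bootstrap sample: resample b = floor(n/l)
    blocks uniformly from the N = n - l + 1 overlapping blocks and
    concatenate them, giving a series of length n1 = b * l."""
    x = np.asarray(x)
    n, l = len(x), block_len
    N = n - l + 1                         # overlapping blocks B_1, ..., B_N
    b = n // l                            # number of resampled blocks
    starts = rng.integers(0, N, size=b)   # IID uniform block start indices
    return np.concatenate([x[s:s + l] for s in starts])
```

Taking block_len equal to 1 reproduces Efron's IID bootstrap, in line with the remark that Efron's method is the special case $\mathcal{l}=1$ of the MBB.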

${T}_{n}\equiv {t}_{n}\left({X}_{1},\cdots ,{X}_{n};\theta \right)$ (2.2)

be a random quantity of interest that is a function of the random variables $\left\{{X}_{1},\cdots ,{X}_{n}\right\}$ and of some unknown population parameter $\theta$ . Then, the MBB version of ${T}_{n}$ is defined as

${T}_{n}^{*}={t}_{{n}_{1}}\left({X}_{1}^{*},\cdots ,{X}_{{n}_{1}}^{*};{\stackrel{^}{\theta }}_{n}\right),$ (2.3)

where ${\stackrel{^}{\theta }}_{n}$ is a sensible estimator of $\theta$ based on $\left\{{X}_{1},\cdots ,{X}_{n}\right\}$ . The MBB estimator of the distribution of ${T}_{n}$ can be defined as the conditional distribution of ${T}_{n}^{*}$ , given $X\equiv \left\{\cdots ,{X}_{1},{X}_{2},\cdots \right\}$ . Note that Efron’s bootstrap method is a special case of the MBB method with block length $\mathcal{l}=1$ .

An alternative definition of the MBB version of ${T}_{n}$ of (2.2) is obtained by resampling $⌈n/\mathcal{l}⌉$ blocks from $\left\{{B}_{1},\cdots ,{B}_{N}\right\}$ and using the first n of the $⌈n/\mathcal{l}⌉\cdot \mathcal{l}$ resampled values. However, the difference between the two versions is asymptotically negligible. To simplify the proofs of the main results, here we shall use the version given by (2.3), based on b complete resampled blocks.

Throughout this paper, we use ${P}_{*}$ , ${E}_{*}$ , and $Va{r}_{*}$ to denote, respectively, the conditional probability, the conditional expectation, and the conditional variance, given $X$ .

Now, we are in a position to define the MBB version of the normalized expected shortfall, ${Z}_{n}$ , for a given $p\in \left(0,1\right)$ . Let ${F}_{n}^{*}$ denote the MBB empirical distribution function, i.e., ${F}_{n}^{*}\left(x\right)={n}_{1}^{-1}{\sum }_{i=1}^{{n}_{1}}\text{ }I\left({X}_{i}^{*}\le x\right),x\in ℝ$ . Then, the MBB version of the sample VaR, ${\stackrel{^}{\nu }}_{n}={F}_{n}^{-1}\left(1-p\right)$ , is defined as ${\nu }_{n}^{*}\equiv {F}_{n}^{*}{}^{-1}\left(1-p\right)$ . Similarly, the MBB version of the expected shortfall can be defined as below

${\mu }_{n}^{*}=\frac{{\sum }_{i=1}^{{n}_{1}}\text{ }{X}_{i}^{*}I\left({X}_{i}^{*}\ge {\stackrel{˜}{\nu }}_{n}\right)}{{\sum }_{i=1}^{{n}_{1}}\text{ }I\left({X}_{i}^{*}\ge {\stackrel{˜}{\nu }}_{n}\right)}=\frac{1}{⌊{n}_{1}p⌋+1}\sum _{i=1}^{{n}_{1}}\text{ }{X}_{i}^{*}I\left({X}_{i}^{*}\ge {\stackrel{˜}{\nu }}_{n}\right),$

where ${\stackrel{˜}{\nu }}_{n}={\stackrel{˜}{F}}_{n}^{-1}\left(1-p\right)$ , and ${\stackrel{˜}{F}}_{n}\left(\cdot \right)={E}_{*}{F}_{n}^{*}$ . The MBB version of the normalized expected shortfall, ${Z}_{n}=\sqrt{n}p\left({\stackrel{^}{\mu }}_{n}-{\mu }_{p}\right)$ , is given by

${Z}_{n}^{*}\equiv \sqrt{{n}_{1}}p\left({\mu }_{n}^{*}-{\stackrel{˜}{\mu }}_{n}\right),\text{ }{\stackrel{˜}{\mu }}_{n}={E}_{*}{\mu }_{n}^{*}\text{ }.$ (2.4)

Note that in the definition of the MBB version of ${Z}_{n}$ , we center ${\mu }_{n}^{*}$ by ${\stackrel{˜}{\mu }}_{n}$ . As in the cases of the sample mean [25] and the sample quantile [22] , this gives the correct centering constant for the MBB sample expected shortfall.
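In practice, the conditional distribution of ${Z}_{n}^{*}$ is approximated by Monte Carlo. The following sketch (our own illustration) draws bootstrap replicates of the normalized expected shortfall; it uses the fact that ${\stackrel{˜}{F}}_{n}={E}_{*}{F}_{n}^{*}$ is the empirical distribution of the pooled values of all N overlapping blocks, and it approximates the centering constant ${\stackrel{˜}{\mu }}_{n}={E}_{*}{\mu }_{n}^{*}$ by the average of the replicates rather than computing it exactly:

```python
import numpy as np

def mbb_es_replicates(x, p, block_len, n_boot, rng):
    """Monte Carlo draws of Z_n* = sqrt(n1) * p * (mu_n* - mu_tilde_n),
    with mu_tilde_n approximated by the mean of the replicates."""
    x = np.asarray(x, dtype=float)
    n, l = len(x), block_len
    N, b = n - l + 1, n // l
    n1 = b * l
    # F_tilde_n = E_* F_n* is the empirical df of the pooled block values,
    # so nu_tilde_n is its (1-p)-th quantile
    pool = np.concatenate([x[s:s + l] for s in range(N)])
    pool.sort()
    nu_tilde = pool[int(np.ceil(len(pool) * (1 - p))) - 1]
    mu_star = np.empty(n_boot)
    for r in range(n_boot):
        starts = rng.integers(0, N, size=b)
        xs = np.concatenate([x[s:s + l] for s in starts])
        mu_star[r] = np.sum(xs * (xs >= nu_tilde)) / (np.floor(n1 * p) + 1)
    return np.sqrt(n1) * p * (mu_star - mu_star.mean())
```

The empirical distribution of the returned replicates is then the Monte Carlo approximation to the conditional law of ${Z}_{n}^{*}$ .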

Let

${H}_{n}\left(x\right)=P\left({Z}_{n}\le x\right),\text{ }x\in ℝ,$ (2.5)

denote the distribution function of ${Z}_{n}$ . Then, the MBB estimator of ${H}_{n}\left(x\right)$ is given by the conditional distribution of ${Z}_{n}^{*}$ , i.e., by

${\stackrel{^}{H}}_{n}\left(x\right)={P}_{*}\left({Z}_{n}^{*}\le x\right),\text{ }x\in ℝ.$ (2.6)

We conclude this section with an introduction of some standard dependence condition on the ${X}_{i}$ ’s. Suppose that the random variables ${\left\{{X}_{i}\right\}}_{i\in ℤ}$ are defined on a common probability space $\left(\Omega ,\mathcal{F},P\right)$ . Let ${\mathcal{F}}_{m}^{n}=\sigma 〈{X}_{i}:m\le i\le n,i\in ℤ〉$ be the σ-field generated by the random variables ${X}_{m},\cdots ,{X}_{n}$ , $-\infty \le m\le n\le \infty$ . For $n\ge 1$ , we define

$\alpha \left(n\right)=\underset{m\in ℤ}{sup}\underset{A\in {\mathcal{F}}_{-\infty }^{m},B\in {\mathcal{F}}_{m+n}^{\infty }}{sup}|P\left(A\cap B\right)-P\left(A\right)P\left(B\right)|.$

The sequence ${\left\{{X}_{i}\right\}}_{i\in ℤ}$ is called strongly mixing, or α-mixing, if $\alpha \left(n\right)\to 0$ as $n\to \infty$ . Strong mixing is a fairly non-restrictive dependence assumption. Empirical studies have shown that many log financial return series are strongly mixing with exponentially decaying mixing coefficients.
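Strong mixing coefficients are rarely computable in closed form, but standard examples are available. For instance, a stationary Gaussian AR(1) process with autoregressive coefficient $|\varphi |<1$ is strongly mixing with geometrically decaying coefficients, and so satisfies the exponential mixing condition used in our main results. A Python sketch for generating such test data (our own illustration):

```python
import numpy as np

def ar1_series(n, phi, rng, burn=500):
    """Simulate a stationary Gaussian AR(1): X_t = phi*X_{t-1} + e_t.
    For |phi| < 1 this process is strongly mixing with alpha(n)
    decaying geometrically."""
    e = rng.standard_normal(n + burn)
    x = np.empty(n + burn)
    x[0] = e[0] / np.sqrt(1 - phi**2)   # start from the stationary law
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + e[t]
    return x[burn:]
```

A series generated this way can serve as a test bed for the bootstrap procedures of this paper.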

As a convention, we assume throughout this paper that unless otherwise specified, limits are taken as $n\to \infty$ . Next we state the main results of the paper.

3. Main Results

The first result asserts consistency of the MBB approximation for the distribution function of ${Z}_{n}$ .

Theorem 1. Assume that $0<p<1$ with p close to zero and that F has a positive and continuous derivative f in a neighborhood ${\mathcal{N}}_{p}$ of ${\nu }_{p}$ with $f\left({\nu }_{p}\right)>0$ . Also, suppose that $\alpha \left(n\right)\le C{\rho }^{n}$ for some $C\in \left(0,\infty \right)$ and $\rho \in \left(0,1\right)$ , and that ${\mathcal{l}}^{-1}=o\left(1\right)$ and $\mathcal{l}=o\left({n}^{1/4}{\left(\mathrm{log}\mathrm{log}n\right)}^{-1/4}\right)$ . Then, under the moment condition

$E{|{X}_{1}|}^{4+\delta }<\infty \text{ },\text{ }\text{ }\text{for}\text{\hspace{0.17em}}\text{some}\text{ }\text{\hspace{0.17em}}\delta >0\text{ },$ (3.1)

$\underset{x\in ℝ}{sup}|{P}_{*}\left(\sqrt{{n}_{1}}p\left({\mu }_{n}^{*}-{\stackrel{˜}{\mu }}_{n}\right)\le x\right)-P\left(\sqrt{n}p\left({\stackrel{^}{\mu }}_{n}-{\mu }_{p}\right)\le x\right)|={o}_{p}\left(1\right)\text{ }.$ (3.2)

Theorem 1 shows that the MBB method provides a valid approximation to the distribution of the centered and scaled sample expected shortfall ${Z}_{n}$ under geometric mixing and under a mild condition on the block length $\mathcal{l}$ .

The next result proves the consistency of the MBB variance estimators under the same conditions as Theorem 1.

Theorem 2. Under the conditions of Theorem 1,

${\stackrel{^}{\sigma }}_{n}^{2}=Va{r}_{*}\left(\sqrt{{n}_{1}}p\left({\mu }_{n}^{*}-{\stackrel{˜}{\mu }}_{n}\right)\right){\to }^{p}{\sigma }_{\infty }^{2}\text{ },$

where

${\sigma }_{\infty }^{2}=\sum _{i\in ℤ}\text{ }Cov\left({X}_{1}I\left({X}_{1}>{\nu }_{p}\right),{X}_{1+i}I\left({X}_{i+1}>{\nu }_{p}\right)\right).$ (3.3)

Theorem 2 shows that, under some mild conditions, the MBB estimator ${\stackrel{^}{\sigma }}_{n}^{2}$ of the asymptotic variance of the centered and scaled expected shortfall at confidence level $\left(1-p\right)$ converges in probability to ${\sigma }_{\infty }^{2}$ .
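Since the resampled block averages are IID uniform over the N observed block averages, the conditional variance ${\stackrel{^}{\sigma }}_{n}^{2}$ admits a closed form and requires no Monte Carlo resampling. The following Python sketch (our own illustration; the scaling factor comes from rewriting ${Z}_{n}^{*}$ in terms of block averages as in Section 4) computes it:

```python
import numpy as np

def mbb_variance_estimator(x, p, block_len):
    """Closed-form MBB variance estimator for the normalized expected
    shortfall: Var_*(Z_n*) = (n1*p/(floor(n1*p)+1))^2 * l * Var_* U_1*,
    where Var_* U_1* is the variance over the N observed block averages."""
    x = np.asarray(x, dtype=float)
    n, l = len(x), block_len
    N, b = n - l + 1, n // l
    n1 = b * l
    # nu_tilde_n: (1-p)-th quantile of E_* F_n*, the pooled block values
    pool = np.sort(np.concatenate([x[s:s + l] for s in range(N)]))
    nu_tilde = pool[int(np.ceil(len(pool) * (1 - p))) - 1]
    y = x * (x >= nu_tilde)
    # block averages U_i(nu_tilde), i = 1, ..., N (moving averages of y)
    u = np.convolve(y, np.ones(l) / l, mode="valid")
    var_u = np.mean((u - u.mean()) ** 2)       # Var_* U_1*(nu_tilde)
    scale = (n1 * p / (np.floor(n1 * p) + 1)) ** 2
    return scale * l * var_u
```

By Theorem 2, this quantity converges in probability to ${\sigma }_{\infty }^{2}$ under the stated conditions.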

Remark 1. Note that, in addition to the regularity conditions, we require the moment condition (3.1) for both main results. Consistency of the MBB approximation to the distribution of some quantities, as in the cases of the sample mean and the sample quantile, does not require a moment condition, although consistency of the MBB variance estimator in general needs one. The moment condition of Theorem 1 may possibly be relaxed.

4. Proofs

We now introduce some basic notation. Let C, $C\left(\cdot \right)$ denote generic constants in $\left(0,\infty \right)$ that depend on their arguments (if any) but not on the variables n and x. For real numbers x and y, write $x\vee y=\mathrm{max}\left\{x,y\right\}$ , $x\wedge y=\mathrm{min}\left\{x,y\right\}$ .

For any real number x and $i=1,\cdots ,n$ , we introduce the following:

${Y}_{i}\left(x\right)={X}_{i}I\left({X}_{i}\ge x\right),\text{ }{Y}_{i}^{*}\left(x\right)={X}_{i}^{*}I\left({X}_{i}^{*}\ge x\right)$

${U}_{i}\left(x\right)=\frac{1}{\mathcal{l}}\sum _{j=1}^{\mathcal{l}}\text{ }{Y}_{i+j-1}\left(x\right)\text{ },\text{ }{U}_{i}^{*}\left(x\right)=\frac{1}{\mathcal{l}}\sum _{j=1}^{\mathcal{l}}\text{ }{Y}_{\left(i-1\right)\mathcal{l}+j}^{*}\left(x\right)$

Note that ${U}_{i}\left(\cdot \right)$ , $i=1,\cdots ,N$ , are the averages of the observed blocks ${B}_{i}$ , and ${U}_{i}^{*}\left(\cdot \right)$ , $i=1,\cdots ,b$ , are the averages of the resampled blocks. Then,

${\stackrel{^}{\mu }}_{n}\left(x\right)=\frac{1}{⌊np⌋+1}\sum _{i=1}^{n}\text{ }{Y}_{i}\left(x\right),\text{ }{\mu }_{n}^{*}\left(x\right)=\frac{1}{⌊{n}_{1}p⌋+1}\sum _{i=1}^{{n}_{1}}\text{ }{Y}_{i}^{*}\left(x\right)$

which implies that

${\stackrel{^}{\mu }}_{n}={\stackrel{^}{\mu }}_{n}\left({\stackrel{^}{\nu }}_{n}\right),\text{ }{\mu }_{n}^{*}={\mu }_{n}^{*}\left({\stackrel{˜}{\nu }}_{n}\right).$

For a random variable X and a real number q, we define

${‖X‖}_{q}\equiv {\left(E{|X|}^{q}\right)}^{\frac{1}{q}},\text{ }q\in \left[1,\infty \right).$

Recall that unless otherwise indicated, limits are taken by letting n tend to infinity. The first lemma is Theorem 1 of [9] . It states the asymptotic normality of the centered and scaled sample estimator of expected shortfall, ${Z}_{n}=\sqrt{n}p\left({\stackrel{^}{\mu }}_{n}-{\mu }_{p}\right)$ , for a given $p\in \left(0,1\right)$ close to zero. We include this result here for the sake of completeness.

Lemma 1. Suppose that F is differentiable at ${\nu }_{p}$ with a positive derivative $f\left({\nu }_{p}\right)>0$ and that $\alpha \left(n\right)\le C{\rho }^{n}$ for some $C>0$ and $0<\rho <1$ . Then,

${Z}_{n}=\sqrt{n}p\left({\stackrel{^}{\mu }}_{n}-{\mu }_{p}\right){\to }^{d}N\left(0,{\sigma }_{\infty }^{2}\right),$ (4.1)

where ${\sigma }_{\infty }^{2}$ is as defined in (3.3).

The next lemma is a consistency result of the MBB variance estimator of ${\sigma }_{\infty }^{2}$ , the asymptotic variance of the normalized expected shortfall.

Lemma 2. Under conditions of Theorem 2, we have

$\mathcal{l}Va{r}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right){\to }^{p}{\sigma }_{\infty }^{2},\text{ }\text{ }\text{as}\text{ }\text{\hspace{0.17em}}n\to \infty .$

Proof: By Lemma 5.4 (iii) of [22] , for $\mathcal{l}=o\left({n}^{1/2}\right)$ , we have,

$|{\stackrel{˜}{\nu }}_{n}-{\nu }_{p}|\le {C}_{1}{n}^{-1/2}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/2}\text{ },\text{\hspace{0.17em}}\text{ }\text{a}\text{.s}\text{.}\text{ }$ (4.2)

where ${C}_{1}>0$ is a constant. Let

${x}_{n1}={\nu }_{p}-{C}_{1}{n}^{-1/2}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/2},\text{ }{x}_{n2}={\nu }_{p}+{C}_{1}{n}^{-1/2}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/2}\text{ },$

and

${R}_{i,n}=\frac{1}{\mathcal{l}}\sum _{j=1}^{\mathcal{l}}\left\{{X}_{i+j-1}\left[I\left({X}_{i+j-1}\ge {\stackrel{˜}{\nu }}_{n}\right)-I\left({X}_{i+j-1}\ge {\nu }_{p}\right)\right]\right\},\text{ }1\le i\le N\text{ }.$ (4.3)

Then, for $r=1,2$ , $j=1,\cdots ,\mathcal{l}$ ,

$\begin{array}{l}E{|{X}_{i+j-1}\left[I\left({X}_{i+j-1}\ge {\stackrel{˜}{\nu }}_{n}\right)-I\left({X}_{i+j-1}\ge {\nu }_{p}\right)\right]|}^{2r+\delta }\\ =E{|{X}_{1}\left[I\left({X}_{1}\ge {\stackrel{˜}{\nu }}_{n}\right)-I\left({X}_{1}\ge {\nu }_{p}\right)\right]|}^{2r+\delta }\\ \le E\left[{|{X}_{1}|}^{2r+\delta }I\left({x}_{n1}\le {X}_{1}\le {x}_{n2}\right)\right]\\ ={\int }_{{x}_{n1}}^{{x}_{n2}}{|x|}^{2r+\delta }f\left(x\right)\text{d}x\\ =O\left({n}^{-1/2}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/2}\right),\end{array}$ (4.4)

since $f\left(x\right)$ is continuous (and hence bounded) in a neighborhood ${\mathcal{N}}_{p}$ of ${\nu }_{p}$ . Because the mixing coefficient $\alpha \left(\cdot \right)$ decays exponentially, it is easily seen that, for $1\le k\le 2r$ ,

$\Delta \left(k,r\right)=1+\sum _{j=1}^{\mathcal{l}}\text{ }{j}^{k}{\left[\alpha \left(j\right)\right]}^{\delta /\left(2r+\delta \right)}<\infty$

Hence, applying Lemma 3.2 of Lahiri (2003) with $r=1,2$ , and $1\le k\le 2r$ , we obtain

$\begin{array}{c}E{|{R}_{i,n}|}^{k}\le {\mathcal{l}}^{-k}E{\left(\sum _{j=1}^{\mathcal{l}}|{X}_{i+j-1}\left[I\left({X}_{i+j-1}\ge {\stackrel{˜}{\nu }}_{n}\right)-I\left({X}_{i+j-1}\ge {\nu }_{p}\right)\right]|\right)}^{k}\\ \le {\mathcal{l}}^{-k}\cdot C\left(r\right)\Delta \left(k,r\right){‖{X}_{1}\left(I\left({x}_{n1}\le {X}_{1}\le {x}_{n2}\right)\right)‖}_{2r+\delta }^{k}{\mathcal{l}}^{k/2}\\ =O\left({\mathcal{l}}^{-k/2}{n}^{-k/\left(4r+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{k/\left(4r+2\delta \right)}\right).\end{array}$ (4.5)

Thus,

$E\left(\frac{1}{N}\sum _{i=1}^{N}|{R}_{i,n}|\right)=E|{R}_{1,n}|=O\left({\mathcal{l}}^{-1/2}{n}^{-1/\left(4+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(4+2\delta \right)}\right),$

and

$E\left(\frac{1}{N}\sum _{i=1}^{N}{|{R}_{i,n}|}^{2}\right)=E{|{R}_{1,n}|}^{2}=O\left({\mathcal{l}}^{-1}{n}^{-1/\left(2+\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(2+\delta \right)}\right).$

which, together with Markov’s inequality, leads to

$\frac{1}{N}\sum _{i=1}^{N}|{R}_{i,n}|={O}_{p}\left({\mathcal{l}}^{-1/2}{n}^{-1/\left(4+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(4+2\delta \right)}\right),$ (4.6)

and

$\frac{1}{N}\sum _{i=1}^{N}{|{R}_{i,n}|}^{2}={O}_{p}\left({\mathcal{l}}^{-1}{n}^{-1/\left(2+\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(2+\delta \right)}\right).$ (4.7)

Define,

${\overline{U}}_{N}=\frac{1}{N}\sum _{i=1}^{N}\text{ }{U}_{i}\left({\nu }_{p}\right),\text{ }{\overline{R}}_{N}=\frac{1}{N}\sum _{i=1}^{N}\text{ }{R}_{i,n}\text{ }.$ (4.8)

Now, we investigate $Va{r}_{*}\left[{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right]$ .

$\begin{array}{c}Va{r}_{*}\left[{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right]=\frac{1}{N}\sum _{i=1}^{N}{\left[{U}_{i}\left({\stackrel{˜}{\nu }}_{n}\right)-\frac{1}{N}\sum _{i=1}^{N}\text{ }{U}_{i}\left({\stackrel{˜}{\nu }}_{n}\right)\right]}^{2}\\ =\frac{1}{N}\sum _{i=1}^{N}{\left[\left({U}_{i}\left({\nu }_{p}\right)+{R}_{i,n}\right)-\frac{1}{N}\sum _{i=1}^{N}\left({U}_{i}\left({\nu }_{p}\right)+{R}_{i,n}\right)\right]}^{2}\\ =\frac{1}{N}\sum _{i=1}^{N}{\left[\left({U}_{i}\left({\nu }_{p}\right)-{\overline{U}}_{N}\right)+\left({R}_{i,n}-{\overline{R}}_{N}\right)\right]}^{2}\end{array}$

$\begin{array}{l}=\frac{1}{N}\sum _{i=1}^{N}{\left[{U}_{i}\left({\nu }_{p}\right)-{\overline{U}}_{N}\right]}^{2}+\frac{1}{N}\sum _{i=1}^{N}{\left[{R}_{i,n}-{\overline{R}}_{N}\right]}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{2}{N}\sum _{i=1}^{N}\left[{U}_{i}\left({\nu }_{p}\right)-{\overline{U}}_{N}\right]\left[{R}_{i,n}-{\overline{R}}_{N}\right]\\ =Va{r}_{*}\left[{U}_{1}^{*}\left({\nu }_{p}\right)\right]+\frac{1}{N}\sum _{i=1}^{N}{\left[{R}_{i,n}-{\overline{R}}_{N}\right]}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{2}{N}\sum _{i=1}^{N}\left[{U}_{i}\left({\nu }_{p}\right)-{\overline{U}}_{N}\right]\left[{R}_{i,n}-{\overline{R}}_{N}\right].\end{array}$ (4.9)

Theorem 3.1 of Lahiri (2003) implies that

$\mathcal{l}Va{r}_{*}\left[{U}_{1}^{*}\left({\nu }_{p}\right)\right]{\to }^{p}{\sigma }_{\infty }^{2}\text{ }.$ (4.10)

Next, we show that

$\begin{array}{l}\frac{1}{N}\sum _{i=1}^{N}{\left[{R}_{i,n}-{\overline{R}}_{N}\right]}^{2}\\ ={O}_{p}\left({\mathcal{l}}^{-1}{n}^{-1/\left(2+\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(2+\delta \right)}\right),\end{array}$ (4.11)

and

$\begin{array}{l}\frac{1}{N}\sum _{i=1}^{N}\left[{U}_{i}\left({\nu }_{p}\right)-{\overline{U}}_{N}\right]\left[{R}_{i,n}-{\overline{R}}_{N}\right]\\ ={O}_{p}\left({\mathcal{l}}^{-1/2}{n}^{-\left(1+\delta \right)/\left(4+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(4+2\delta \right)}\right).\end{array}$ (4.12)

Using (4.6), (4.7), we get

$\begin{array}{c}\frac{1}{N}\sum _{i=1}^{N}{\left[{R}_{i,n}-{\overline{R}}_{N}\right]}^{2}\le \frac{2}{N}\sum _{i=1}^{N}\left({|{R}_{i,n}|}^{2}+{|{\overline{R}}_{N}|}^{2}\right)\\ \le \frac{2}{N}\sum _{i=1}^{N}{|{R}_{i,n}|}^{2}+2{\left(\frac{1}{N}\sum _{i=1}^{N}|{R}_{i,n}|\right)}^{2}\\ ={O}_{p}\left({\mathcal{l}}^{-1}{n}^{-1/\left(2+\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(2+\delta \right)}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{O}_{p}\left({\mathcal{l}}^{-1}{n}^{-2/\left(4+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{2/\left(4+2\delta \right)}\right)\\ ={O}_{p}\left({\mathcal{l}}^{-1}{n}^{-1/\left(2+\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(2+\delta \right)}\right).\end{array}$

Hence, Equation (4.11) is proved. By (4.10), (4.11), and the Cauchy-Schwarz inequality, we have,

$\begin{array}{l}\frac{1}{N}\sum _{i=1}^{N}\left[{U}_{i}\left({\nu }_{p}\right)-{\overline{U}}_{N}\right]\left[{R}_{i,n}-{\overline{R}}_{N}\right]\\ \le \frac{1}{N}{\left(\sum _{i=1}^{N}{\left[{U}_{i}\left({\nu }_{p}\right)-{\overline{U}}_{N}\right]}^{2}\right)}^{1/2}{\left(\sum _{i=1}^{N}{\left[{R}_{i,n}-{\overline{R}}_{N}\right]}^{2}\right)}^{1/2}\\ =\frac{1}{\sqrt{N}}{\left(Va{r}_{*}\left[{U}_{1}^{*}\left({\nu }_{p}\right)\right]\right)}^{1/2}{\left(\frac{1}{N}\sum _{i=1}^{N}{|{R}_{i,n}|}^{2}\right)}^{1/2}\\ ={N}^{-1/2}\cdot {O}_{p}\left(1\right)\cdot {O}_{p}\left({\mathcal{l}}^{-1/2}{n}^{-1/\left(4+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(4+2\delta \right)}\right)\\ ={O}_{p}\left({\mathcal{l}}^{-1/2}{n}^{-\left(1+\delta \right)/\left(4+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(4+2\delta \right)}\right).\end{array}$

This proves (4.12). Combining (4.9)-(4.12), we obtain

$\begin{array}{c}\mathcal{l}Va{r}_{*}\left[{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right]={\sigma }_{\infty }^{2}+\mathcal{l}\cdot {O}_{p}\left({\mathcal{l}}^{-1}{n}^{-1/\left(2+\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(2+\delta \right)}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\mathcal{l}\cdot {O}_{p}\left({\mathcal{l}}^{-1/2}{n}^{-\left(1+\delta \right)/\left(4+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(4+2\delta \right)}\right)\\ ={\sigma }_{\infty }^{2}+{O}_{p}\left({\mathcal{l}}^{1/2}{n}^{-\left(1+\delta \right)/\left(4+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{1/\left(4+2\delta \right)}\right)\\ ={\sigma }_{\infty }^{2}+{o}_{p}\left(1\right)\end{array}$

Here we used the condition on the block length $\mathcal{l}=o\left({n}^{1/4}{\left(\mathrm{log}\mathrm{log}n\right)}^{-1/4}\right)$ . This completes the proof of Lemma 2. ∎

Lemma 3 below gives a convergence result of the third moment of the MBB block average.

Lemma 3. Under the conditions of Theorem 2,

$\frac{n\sqrt{n}}{{b}^{2}}{E}_{*}{|{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)-{E}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)|}^{3}{\to }^{p}0\text{ }.$

Proof: It can be verified by using (4.5) with $k=3,r=2$ ,

$E\left(\frac{1}{N}\sum _{i=1}^{N}{|{R}_{i,n}|}^{3}\right)=E{|{R}_{1,n}|}^{3}=O\left({\mathcal{l}}^{-3/2}{n}^{-3/\left(8+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{3/\left(8+2\delta \right)}\right),$

$\frac{1}{N}\sum _{i=1}^{N}{|{R}_{i,n}|}^{3}={O}_{p}\left({\mathcal{l}}^{-3/2}{n}^{-3/\left(8+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{3/\left(8+2\delta \right)}\right).$ (4.13)

Using (4.6), (4.13), and the fact that

${\left(x+y\right)}^{m}\le {2}^{m}\left({x}^{m}+{y}^{m}\right)\text{ },\text{ }\forall m\text{ },x\text{ },y>0$ (4.14)

we obtain,

$\begin{array}{l}\frac{1}{N}\sum _{i=1}^{N}{|{R}_{i,n}-{\overline{R}}_{N}|}^{3}\\ \le \frac{1}{N}\sum _{i=1}^{N}{2}^{3}\left({|{R}_{i,n}|}^{3}+{|{\overline{R}}_{N}|}^{3}\right)\\ \le \frac{8}{N}\sum _{i=1}^{N}{|{R}_{i,n}|}^{3}+8{\left(\frac{1}{N}\sum _{i=1}^{N}|{R}_{i,n}|\right)}^{3}\\ ={O}_{p}\left({\mathcal{l}}^{-3/2}{n}^{-3/\left(8+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{3/\left(8+2\delta \right)}\right).\end{array}$

Therefore,

$\frac{1}{N}\sum _{i=1}^{N}{|{R}_{i,n}-{\overline{R}}_{N}|}^{3}={O}_{p}\left({\mathcal{l}}^{-3/2}{n}^{-3/\left(8+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{3/\left(8+2\delta \right)}\right).$ (4.15)

Note that for $1\le j\le \mathcal{l}$ , $r=1,2$ , and $1\le k\le 2r$ ,

${‖{Y}_{j}\left({\nu }_{p}\right)‖}_{2r+\delta }\le {‖{X}_{1}‖}_{2r+\delta }<\infty$

and

$\Delta \left(k,r\right)=1+\sum _{j=1}^{\mathcal{l}}\text{ }{j}^{k}{\left[\alpha \left(j\right)\right]}^{\delta /\left(2r+\delta \right)}<\infty$

Then, Lemma 3.2 of Lahiri (2003) implies,

$E{|\sum _{j=1}^{\mathcal{l}}\text{ }{Y}_{j}\left({\nu }_{p}\right)|}^{k}\le \Delta \left(k,r\right){‖{Y}_{1}\left({\nu }_{p}\right)‖}_{2r+\delta }^{k}{\mathcal{l}}^{k/2}\le C\left(k,r\right){‖{X}_{1}‖}_{2r+\delta }^{k}{\mathcal{l}}^{k/2}=O\left({\mathcal{l}}^{k/2}\right)$ (4.16)

which implies

$E\left(\frac{1}{N}\sum _{i=1}^{N}{|{U}_{i}\left({\nu }_{p}\right)|}^{3}\right)=E{|{U}_{1}\left({\nu }_{p}\right)|}^{3}=E{\left(\frac{1}{\mathcal{l}}\sum _{j=1}^{\mathcal{l}}\text{ }{Y}_{j}\left({\nu }_{p}\right)\right)}^{3}=O\left({\mathcal{l}}^{-3/2}\right)$

and

$E\left(\frac{1}{N}\sum _{i=1}^{N}|{U}_{i}\left({\nu }_{p}\right)|\right)=E|{U}_{1}\left({\nu }_{p}\right)|=E\left(\frac{1}{\mathcal{l}}\sum _{j=1}^{\mathcal{l}}\text{ }{Y}_{j}\left({\nu }_{p}\right)\right)=O\left({\mathcal{l}}^{-1/2}\right)$

Therefore,

$\frac{1}{N}\sum _{i=1}^{N}{|{U}_{i}\left({\nu }_{p}\right)|}^{3}={O}_{p}\left({\mathcal{l}}^{-3/2}\right),\text{ }\frac{1}{N}\sum _{i=1}^{N}|{U}_{i}\left({\nu }_{p}\right)|={O}_{p}\left({\mathcal{l}}^{-1/2}\right).$ (4.17)

Finally, combining (4.14), (4.15), and (4.17) gives,

$\begin{array}{l}{E}_{*}{|{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)-{E}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)|}^{3}\\ =\frac{1}{N}\sum _{i=1}^{N}{|{U}_{i}\left({\stackrel{˜}{\nu }}_{n}\right)-\frac{1}{N}\sum _{i=1}^{N}\text{ }{U}_{i}\left({\stackrel{˜}{\nu }}_{n}\right)|}^{3}\\ =\frac{1}{N}\sum _{i=1}^{N}{|\left({U}_{i}\left({\nu }_{p}\right)-{\overline{U}}_{N}\right)+\left({R}_{i,n}-{\overline{R}}_{N}\right)|}^{3}\\ \le \frac{1}{N}\sum _{i=1}^{N}\text{ }{2}^{3}\left({|{U}_{i}\left({\nu }_{p}\right)-{\overline{U}}_{N}|}^{3}+{|{R}_{i,n}-{\overline{R}}_{N}|}^{3}\right)\end{array}$

$\begin{array}{l}=\frac{8}{N}\sum _{i=1}^{N}{|{U}_{i}\left({\nu }_{p}\right)-{\overline{U}}_{N}|}^{3}+\frac{8}{N}\sum _{i=1}^{N}{|{R}_{i,n}-{\overline{R}}_{N}|}^{3}\\ \le \frac{64}{N}\sum _{i=1}^{N}{|{U}_{i}\left({\nu }_{p}\right)|}^{3}+64{|{\overline{U}}_{N}|}^{3}+\frac{8}{N}\sum _{i=1}^{N}{|{R}_{i,n}-{\overline{R}}_{N}|}^{3}\\ ={O}_{p}\left({\mathcal{l}}^{-3/2}+{\mathcal{l}}^{-3/2}{n}^{-3/\left(8+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{3/\left(8+2\delta \right)}\right).\end{array}$ (4.18)

Thus, we obtain,

$\begin{array}{l}\frac{n\sqrt{n}}{{b}^{2}}{E}_{*}{|{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)-{E}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)|}^{3}\\ ={\mathcal{l}}^{2}{n}^{-1/2}\cdot {O}_{p}\left({\mathcal{l}}^{-3/2}+{\mathcal{l}}^{-3/2}{n}^{-3/\left(4+2\delta \right)}{\left(\mathrm{log}\mathrm{log}n\right)}^{3/\left(4+2\delta \right)}\right)\\ ={o}_{p}\left(1\right)\end{array}$

Lemma 3 is proved. □
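The ${\mathcal{l}}^{-3/2}$ and ${\mathcal{l}}^{-1/2}$ rates in (4.17) can be checked numerically. The sketch below is our own illustration, not part of the paper: the function name is ours, and non-overlapping blocks of centered standard normal draws stand in for the ${Y}_{j}\left({\nu }_{p}\right)$. Rescaling the average of ${|{U}_{i}|}^{3}$ by ${\mathcal{l}}^{3/2}$ leaves a quantity that stays bounded as the block length grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cubed_block_mean(x, block_len):
    """Average of |U_i|^3 over non-overlapping blocks of length block_len,
    where U_i is the mean of the observations in block i."""
    n = len(x) // block_len * block_len
    u = x[:n].reshape(-1, block_len).mean(axis=1)
    return np.mean(np.abs(u) ** 3)

x = rng.standard_normal(2_000_000)       # centered draws stand in for Y_j(nu_p)
for ell in (100, 400, 1600):
    # (4.17): the average is O(ell**(-3/2)), so the rescaled value stays bounded
    print(ell, mean_abs_cubed_block_mean(x, ell) * ell ** 1.5)
```

For Gaussian data the rescaled values hover near $E{|Z|}^{3}=2\sqrt{2/\text{π}}\approx 1.6$ for every block length, consistent with the claimed rate.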

Proof of Theorem 1: By the definitions of ${\mu }_{n}^{*}$ , ${\stackrel{˜}{\mu }}_{n}$ , ${Y}_{i}^{*}$ , ${U}_{i}^{*}$ , ${n}_{1}=b\mathcal{l}$ , and the fact that ${U}_{1}^{*},{U}_{2}^{*},\cdots ,{U}_{b}^{*}$ are IID, we get

$\begin{array}{c}{\mu }_{n}^{*}-{\stackrel{˜}{\mu }}_{n}=\frac{{\sum }_{i=1}^{{n}_{1}}\text{ }{X}_{i}^{*}I\left({X}_{i}^{*}\ge {\stackrel{˜}{\nu }}_{n}\right)}{{\sum }_{i=1}^{{n}_{1}}\text{ }I\left({X}_{i}^{*}\ge {\stackrel{˜}{\nu }}_{n}\right)}-{E}_{*}\left(\frac{{\sum }_{i=1}^{{n}_{1}}\text{ }{X}_{i}^{*}I\left({X}_{i}^{*}\ge {\stackrel{˜}{\nu }}_{n}\right)}{{\sum }_{i=1}^{{n}_{1}}\text{ }I\left({X}_{i}^{*}\ge {\stackrel{˜}{\nu }}_{n}\right)}\right)\\ =\frac{{n}_{1}}{⌊{n}_{1}p⌋+1}\left\{\frac{1}{{n}_{1}}\sum _{i=1}^{{n}_{1}}\text{ }{Y}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)-{E}_{*}\left(\frac{1}{{n}_{1}}\sum _{i=1}^{{n}_{1}}\text{ }{Y}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right)\right\}\\ =\frac{{n}_{1}}{⌊{n}_{1}p⌋+1}\left\{\frac{1}{b}\sum _{i=1}^{b}\text{ }{U}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)-{E}_{*}\left(\frac{1}{b}\sum _{i=1}^{b}\text{ }{U}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right)\right\}\\ =\frac{{n}_{1}}{⌊{n}_{1}p⌋+1}\left(\frac{1}{b}\sum _{i=1}^{b}\left[{U}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)-{E}_{*}{U}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right]\right)\end{array}$ (4.19)

Then, for any $y\in ℝ$ ,

$\begin{array}{l}{P}_{*}\left(\sqrt{{n}_{1}}p\left({\mu }_{n}^{*}-{\stackrel{˜}{\mu }}_{n}\right)\le y\right)\\ ={P}_{*}\left(\frac{1}{{n}_{1}}\sum _{i=1}^{{n}_{1}}\left[{Y}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)-{\stackrel{˜}{\mu }}_{n}\right]\le \frac{⌊{n}_{1}p⌋+1}{p{n}_{1}\sqrt{{n}_{1}}}y\right)\\ ={P}_{*}\left(\frac{1}{b}\sum _{i=1}^{b}\left[{U}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)-{E}_{*}{U}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right]\le \frac{⌊{n}_{1}p⌋+1}{p{n}_{1}\sqrt{{n}_{1}}}y\right),\end{array}$

which together with the Berry-Esseen Theorem gives

$\begin{array}{l}|{P}_{*}\left(\sqrt{{n}_{1}}p\left({\mu }_{n}^{*}-{\stackrel{˜}{\mu }}_{n}\right)\le y\right)-\Psi \left(\frac{\sqrt{b}}{\sqrt{Va{r}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)}}\frac{⌊{n}_{1}p⌋+1}{p{n}_{1}\sqrt{{n}_{1}}}y\right)|\\ \le \frac{3{E}_{*}{|{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)-{E}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)|}^{3}}{\sqrt{b}{\left(Va{r}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right)}^{3/2}}\end{array}$ (4.20)
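The right-hand side of (4.20) is a standard Berry-Esseen bound for the mean of $b$ conditionally IID summands, and it can be computed directly from a sample of block statistics. The helper below is a hedged illustration of that computation: the name `berry_esseen_bound` and the exponential toy data are our own choices, with the summands standing in for the ${U}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)$.

```python
import numpy as np

def berry_esseen_bound(u):
    """Right-hand side of (4.20): 3 * E|U - EU|^3 / (sqrt(b) * Var(U)^{3/2}),
    estimated from an iid sample u of size b (population moments, ddof=0)."""
    b = len(u)
    c = u - u.mean()
    return 3.0 * np.mean(np.abs(c) ** 3) / (np.sqrt(b) * np.var(u) ** 1.5)

rng = np.random.default_rng(1)
for b in (50, 200, 800):
    u = rng.exponential(size=b)      # skewed summands, so the bound is not trivial
    print(b, berry_esseen_bound(u))  # shrinks roughly like b**(-1/2)
```

As the proof requires, the bound vanishes as the number of blocks $b$ grows, provided the third absolute moment of the summands stays controlled.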

Lemma 2 and Lemma 3 imply, respectively,

$\frac{\sqrt{b}}{\sqrt{Va{r}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)}}\frac{⌊{n}_{1}p⌋+1}{p{n}_{1}\sqrt{{n}_{1}}}y=\frac{⌊{n}_{1}p⌋+1}{p{n}_{1}}\frac{y}{\sqrt{\mathcal{l}Va{r}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)}}{\to }^{p}\frac{y}{{\sigma }_{\infty }},$ (4.21)

and

$\frac{{E}_{*}{|{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)-{E}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)|}^{3}}{\sqrt{b}{\left(Va{r}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right)}^{3/2}}=\frac{\frac{n\sqrt{n}}{{b}^{2}}{E}_{*}{|{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)-{E}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)|}^{3}}{{\left(\mathcal{l}Va{r}_{*}{U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right)}^{3/2}}{\to }^{p}0$ (4.22)

Then, by (4.20)-(4.22), we have

$|{P}_{*}\left(\sqrt{{n}_{1}}p\left({\mu }_{n}^{*}-{\stackrel{˜}{\mu }}_{n}\right)\le y\right)-\Psi \left(\frac{y}{{\sigma }_{\infty }}\right)|{\to }^{p}0,$

uniformly in $y\in ℝ$ . This together with Lemma 1 yields

$\underset{y\in ℝ}{sup}|{P}_{*}\left(\sqrt{{n}_{1}}p\left({\mu }_{n}^{*}-{\stackrel{˜}{\mu }}_{n}\right)\le y\right)-P\left(\sqrt{n}p\left({\stackrel{^}{\mu }}_{n}-{\mu }_{p}\right)\le y\right)|{\to }^{p}0.$

That is, the conditional distribution of ${Z}_{n}^{*}=\sqrt{{n}_{1}}p\left({\mu }_{n}^{*}-{\stackrel{˜}{\mu }}_{n}\right)$ consistently approximates, in probability, the distribution of ${Z}_{n}=\sqrt{n}p\left({\stackrel{^}{\mu }}_{n}-{\mu }_{p}\right)$. This completes the proof of Theorem 1. □
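For readers who wish to experiment with the resampling scheme behind Theorem 1, the following sketch implements a minimal moving block bootstrap for the expected shortfall. It is our own illustration with arbitrary parameter choices, not the authors' code; the centering ${\stackrel{˜}{\mu }}_{n}={E}_{*}{\mu }_{n}^{*}$ is approximated by the Monte Carlo average of the replicates, and the sample ES averages the $⌊np⌋+1$ largest losses, as in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def expected_shortfall(x, p):
    """Sample ES: average of the floor(n*p) + 1 largest losses."""
    k = int(np.floor(len(x) * p)) + 1
    return np.mean(np.sort(x)[-k:])

def mbb_resample(x, block_len, rng):
    """Moving block bootstrap: concatenate b = n // block_len randomly
    chosen (possibly overlapping) blocks of length block_len."""
    n = len(x)
    b = n // block_len
    starts = rng.integers(0, n - block_len + 1, size=b)
    return np.concatenate([x[s:s + block_len] for s in starts])

def mbb_es_replicates(x, p, block_len, n_boot, rng):
    """Centered and scaled bootstrap replicates of the ES estimator;
    E_* mu_n^* is approximated by the Monte Carlo average of the replicates."""
    n1 = (len(x) // block_len) * block_len
    reps = np.array([expected_shortfall(mbb_resample(x, block_len, rng), p)
                     for _ in range(n_boot)])
    return np.sqrt(n1) * p * (reps - reps.mean())

# Stationary, strongly mixing toy losses: a Gaussian AR(1) series
n = 4000
eps = rng.standard_normal(n + 200)
x = np.empty(n + 200)
x[0] = eps[0]
for t in range(1, n + 200):
    x[t] = 0.5 * x[t - 1] + eps[t]
x = x[200:]                              # drop burn-in

z_star = mbb_es_replicates(x, p=0.05, block_len=40, n_boot=500, rng=rng)
print(np.var(z_star))                    # MBB estimate of Var(Z_n)
```

The empirical distribution of `z_star` then serves as an approximation to the distribution of ${Z}_{n}$, which is the content of Theorem 1.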

Proof of Theorem 2: Theorem 2 is a consequence of Lemma 2 and Equation (4.19). Since ${U}_{1}^{*},{U}_{2}^{*},\cdots ,{U}_{b}^{*}$ are IID, we have

$Va{r}_{*}\left(\frac{1}{b}\sum _{i=1}^{b}\text{ }{U}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right)=\frac{1}{b}Va{r}_{*}\left({U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right)=\frac{1}{{n}_{1}}\left(\mathcal{l}Va{r}_{*}\left({U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right)\right).$

Therefore, by (4.19) and Lemma 2,

$\begin{array}{c}Va{r}_{*}\left({Z}_{n}^{*}\right)=Va{r}_{*}\left(\sqrt{{n}_{1}}p{\mu }_{n}^{*}\right)\\ =Va{r}_{*}\left(\frac{{n}_{1}\sqrt{{n}_{1}}p}{⌊{n}_{1}p⌋+1}\cdot \frac{1}{{n}_{1}}\sum _{i=1}^{{n}_{1}}\text{ }{Y}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right)\\ ={\left(\frac{{n}_{1}\sqrt{{n}_{1}}p}{⌊{n}_{1}p⌋+1}\right)}^{2}Va{r}_{*}\left(\frac{1}{b}\sum _{i=1}^{b}\text{ }{U}_{i}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right)\\ ={\left(\frac{{n}_{1}p}{⌊{n}_{1}p⌋+1}\right)}^{2}\left(\mathcal{l}Va{r}_{*}\left({U}_{1}^{*}\left({\stackrel{˜}{\nu }}_{n}\right)\right)\right)\\ \to {\sigma }_{\infty }^{2}.\end{array}$

The proof of Theorem 2 is completed. □

5. Conclusions

In this paper, we establish the asymptotic properties of the blocking bootstrap estimators of the expected shortfall. We prove that the MBB method provides a valid approximation to the distribution of the centered and scaled sample estimator of the expected shortfall, and we show that, under mild regularity conditions, the MBB variance estimator is consistent.

As in many situations where the block bootstrap methodology is involved, the performance of the block bootstrap distribution function and variance estimators of ${Z}_{n}$ critically depends on the block size $\mathcal{l}$ . Although there has been some theoretical work on the choice of the optimal block length in the literature, the optimal block lengths and/or the optimal rates of convergence in Theorems 1 and 2 are unknown at this stage.
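One simple way to see this sensitivity in practice is to recompute the MBB variance estimate of ${Z}_{n}$ over a grid of block lengths. The sketch below is illustrative only, with arbitrary parameter choices and a function name of our own; it makes no claim about which block length is optimal.

```python
import numpy as np

rng = np.random.default_rng(7)

def mbb_var_of_es(x, p, block_len, n_boot, rng):
    """MBB variance estimate of Z_n for one block length."""
    n = len(x)
    b = n // block_len                    # number of resampled blocks
    n1 = b * block_len                    # bootstrap sample size
    k = int(np.floor(n1 * p)) + 1         # top floor(n1*p)+1 losses define the ES
    reps = np.empty(n_boot)
    for r in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=b)
        xs = np.concatenate([x[s:s + block_len] for s in starts])
        reps[r] = np.mean(np.sort(xs)[-k:])
    return n1 * p ** 2 * np.var(reps)     # Var_*(sqrt(n1) * p * mu_n^*)

# Gaussian AR(1) toy losses
n = 3000
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()

for ell in (5, 20, 80, 320):
    print(ell, mbb_var_of_es(x, p=0.05, block_len=ell, n_boot=300, rng=rng))
```

Very short blocks ignore the serial dependence, while very long blocks leave few blocks to resample; the variance estimates across the grid make this trade-off visible.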

It is also of interest to conduct both empirical studies and simulations to investigate the performance of the MBB estimators and to compare the obtained results with those in [9].

Acknowledgements

We thank the reviewer for his/her helpful comments and suggestions which improved the manuscript.

Cite this paper

Sun, S.X. and Cheng, F.X. (2018) Bootstrapping the Expected Shortfall. Theoretical Economics Letters, 8, 685-698. https://doi.org/10.4236/tel.2018.84046

References

1. Artzner, P., Delbaen, F., Eber, J.-M. and Heath, D. (1999) Coherent Measures of Risk. Mathematical Finance, 9, 203-228. https://doi.org/10.1111/1467-9965.00068

2. Föllmer, H. and Schied, A. (2002) Convex Measures of Risk and Trading Constraints. Finance and Stochastics, 6, 429-447. https://doi.org/10.1007/s007800200072

3. Duffie, D. and Pan, J. (1997) An Overview of Value at Risk. Journal of Derivatives, 4, 7-49. https://doi.org/10.3905/jod.1997.407971

4. Jorion, P. (2001) Value at Risk. 2nd Edition, McGraw-Hill, New York.

5. Chen, S.X. and Tang, C.Y. (2005) Nonparametric Inference of Value at Risk for Dependent Financial Returns. Journal of Financial Econometrics, 3, 227-255. https://doi.org/10.1093/jjfinec/nbi012

6. Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997) Modelling Extremal Events for Insurance and Finance. Springer, Berlin. https://doi.org/10.1007/978-3-642-33483-2

7. McNeil, A. (1997) Estimating the Tails of Loss Severity Distributions Using Extreme Value Theory. ASTIN Bulletin, 27, 117-137. https://doi.org/10.2143/AST.27.1.563210

8. Scaillet, O. (2004) Nonparametric Estimation and Sensitivity Analysis of Expected Shortfall. Mathematical Finance, 14, 115-129. https://doi.org/10.1111/j.0960-1627.2004.00184.x

9. Chen, S.X. (2008) Nonparametric Estimation of Expected Shortfall. Journal of Financial Econometrics, 6, 87-107.

10. Efron, B. (1979) Bootstrap Methods: Another Look at the Jackknife. Annals of Statistics, 7, 1-26. https://doi.org/10.1214/aos/1176344552

11. Bickel, P.J. and Freedman, D.A. (1981) Some Asymptotic Theory for the Bootstrap. Annals of Statistics, 9, 1196-1217. https://doi.org/10.1214/aos/1176345637

12. Singh, K. (1981) On Asymptotic Accuracy of Efron's Bootstrap. Annals of Statistics, 9, 1187-1195. https://doi.org/10.1214/aos/1176345636

13. Hall, P. (1985) Resampling a Coverage Pattern. Stochastic Processes and Their Applications, 20, 231-246. https://doi.org/10.1016/0304-4149(85)90212-1

14. Carlstein, E. (1986) The Use of Subseries Methods for Estimating the Variance of a General Statistic from a Stationary Time Series. Annals of Statistics, 14, 1171-1179. https://doi.org/10.1214/aos/1176350057

15. Künsch, H.R. (1989) The Jackknife and the Bootstrap for General Stationary Observations. Annals of Statistics, 17, 1217-1261. https://doi.org/10.1214/aos/1176347265

16. Liu, R.Y. and Singh, K. (1992) Moving Blocks Jackknife and Bootstrap Capture Weak Convergence. In: Lepage, R. and Billard, L., Eds., Exploring the Limits of Bootstrap, Wiley, New York, 225-248.

17. Politis, D. and Romano, J.P. (1992) A Circular Block Resampling Procedure for Stationary Data. In: Lepage, R. and Billard, L., Eds., Exploring the Limits of Bootstrap, Wiley, New York, 263-270.

18. Politis, D. and Romano, J.P. (1994) The Stationary Bootstrap. Journal of the American Statistical Association, 89, 1303-1313. https://doi.org/10.1080/01621459.1994.10476870

19. Politis, D., Romano, J.P. and Wolf, M. (1997) Subsampling for Heteroskedastic Time Series. Journal of Econometrics, 81, 281-317. https://doi.org/10.1016/S0304-4076(97)86569-4

20. Paparoditis, E. and Politis, D.N. (2001) Tapered Block Bootstrap. Biometrika, 88, 1105-1119. https://doi.org/10.1093/biomet/88.4.1105

21. Lahiri, S.N. (2003) Resampling Methods for Dependent Data. Springer-Verlag, New York. https://doi.org/10.1007/978-1-4757-3803-2

22. Sun, S. and Lahiri, S.N. (2006) Bootstrapping the Sample Quantile Based on Weakly Dependent Observations. Sankhyā, 68, 130-166.

23. Föllmer, H. and Leukert, P. (1999) Quantile Hedging. Finance and Stochastics, 3, 251-273. https://doi.org/10.1007/s007800050062

24. Melnikov, S. and Romaniuk, Y. (2006) Evaluating the Performance of Gompertz, Makeham and Lee-Carter Mortality Models for Risk Management with Unit-Linked Contracts. Insurance: Mathematics and Economics, 39, 310-329. https://doi.org/10.1016/j.insmatheco.2006.02.012

25. Lahiri, S.N. (1992) Edgeworth Correction by Moving Block Bootstrap for Stationary and Nonstationary Data. In: Lepage, R. and Billard, L., Eds., Exploring the Limits of Bootstrap, Wiley, New York, 225-248.