Finite Sample Econometrics

Aman Ullah

Print publication date: 2004

Print ISBN-13: 9780198774471

Published to British Academy Scholarship Online: August 2004

DOI: 10.1093/0198774478.001.0001


Appendix A Statistical Methods

The finite sample theory of econometrics depends heavily on several statistical concepts. These include moments, distributions, and asymptotic expansions. Accordingly, the objective here is to present results that are useful for the finite sample results covered in this book. In doing so it is assumed that the reader has a basic knowledge of probability and statistics.

A.1 Moments and Cumulants

The characteristic function of a random variable y is

\[
\psi(t) = E\,e^{ity} = 1 + itEy + \frac{(it)^2}{2!}Ey^2 + \frac{(it)^3}{3!}Ey^3 + \cdots = \sum_{r=0}^{\infty}\frac{(it)^r}{r!}\mu_r' = \sum_{r=0}^{\infty}\frac{(it)^r}{r!}\frac{\psi^{(r)}(0)}{i^r},
\]
where $\mu_r' = Ey^r$ and the last equality is the expansion of $\psi(t)$ around zero. Then the $r$th moment around zero of $y$ is
\[
Ey^r = \mu_r' = \frac{1}{i^r}\psi^{(r)}(0),
\]
where $\psi^{(r)}(0)$ is the $r$th derivative of $\psi(t)$ with respect to $t$, evaluated at $t = 0$.

The cumulant function is defined by

\[
K(t) = \log\psi(t).
\]
Using the expansion of $K(t)$ around $0$,
\[
K(t) = K(0) + tK^{(1)}(0) + \frac{t^2}{2!}K^{(2)}(0) + \cdots = \sum_{r=1}^{\infty}\frac{(it)^r}{r!}\kappa_r,
\]
where
\[
\kappa_r = \frac{1}{i^r}K^{(r)}(0)
\]
is the $r$th cumulant of $y$.

It is easy to verify that $\kappa_1 = \mu_1'$, $\kappa_2 = \mu_2$, $\kappa_3 = \mu_3$, $\kappa_4 = \mu_4 - 3\mu_2^2$, $\kappa_5 = \mu_5 - 10\mu_3\mu_2$, and $\kappa_6 = \mu_6 - 15\mu_4\mu_2 - 10\mu_3^2 + 30\mu_2^3$, where $\mu_r = E(y - Ey)^r$ is the $r$th central moment (the moment around the mean).
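As a quick numerical illustration (a minimal sketch using NumPy, not part of the original text), the third and fourth relations can be checked by simulation: for an Exp(1) variable the $r$th cumulant equals $(r-1)!$, so $\kappa_3 = 2$ and $\kappa_4 = 6$.

```python
import numpy as np

# Monte Carlo check of kappa_3 = mu_3 and kappa_4 = mu_4 - 3 mu_2^2
# using Exp(1) draws, whose r-th cumulant is (r - 1)!.
rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=2_000_000)

d = y - y.mean()
mu2, mu3, mu4 = (d ** 2).mean(), (d ** 3).mean(), (d ** 4).mean()  # central moments

print(mu3, "~ kappa_3 = 2")
print(mu4 - 3 * mu2 ** 2, "~ kappa_4 = 6")
```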

A.2 Gram–Charlier and Edgeworth Series

The Gram (1879) and Charlier (1905) series represent the density of a standardized variable $y$ as a linear combination of the standardized normal density $\phi(y)$ and its derivatives. That is,

f ( y ) = j = 0 c j φ ( j ) ( y ) ,
where the c j are constants and
φ ( j ) ( y ) = d j φ ( y ) d y j = ( - 1 ) j H j ( y ) φ ( y ) ;
H j(y) is a polynomial in y of degree j, which is the coefficient of t j/j! in exp (ty − ½t 2), for example H 0(y) = 1, H 1(y) = y, H 2(y) = y 2 − 1, H 3(y) = y 3 − 3y, and H 4(y) = y 4 − 6y 2 + 3. These H j(y) form an orthogonal set of polynomials (Hermite Polynomials) with respect to normal density φ (y), that is
- H j ( y ) H e ( y ) φ ( y ) d y = j ! , when  j = e , = 0 otherwise .
Because of this, multiplying f(y) by H j(y) and integrating term by term we get
c j = ( - 1 ) j 1 j ! - H j ( y ) f ( y ) d y .
These c j can be obtained in terms of the moments of y, and these are
c 0 = 1 , c 1 = c 2 = 0 , c 3 = - κ 3 3 ! , c 4 = κ 4 4 ! , c 5 = - κ 5 5 ! ; c 6 = 1 6 ! ( κ 6 + 10 κ 3 2 ) .
Then the Gram–Charlier series of Type A can be written as
\[
f(y) = \phi(y)\Big[1 + \frac{\kappa_3}{3!}H_3 + \frac{\kappa_4}{4!}H_4 + \cdots\Big] = \phi(y) - \frac{\kappa_3}{3!}\phi^{(3)}(y) + \frac{\kappa_4}{4!}\phi^{(4)}(y) + \cdots.
\]
Using this,
\[
F(y) = \Phi(y) - \phi(y)\Big[\frac{\kappa_3}{3!}H_2(y) + \frac{\kappa_4}{4!}H_3(y) + \cdots\Big].
\]
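The truncated Type A series is easy to evaluate numerically. The sketch below (assuming NumPy and SciPy; the function name gram_charlier_pdf is illustrative, not from the text) builds the density correction from the probabilists' Hermite polynomials $H_3$ and $H_4$ for given $\kappa_3$ and $\kappa_4$.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval
from scipy.stats import norm

def gram_charlier_pdf(y, kappa3, kappa4):
    """Truncated Gram-Charlier Type A density for a standardized variable:
    phi(y) [1 + kappa3/3! H_3(y) + kappa4/4! H_4(y)]."""
    h3 = hermeval(y, [0, 0, 0, 1])        # H_3(y) = y^3 - 3y
    h4 = hermeval(y, [0, 0, 0, 0, 1])     # H_4(y) = y^4 - 6y^2 + 3
    return norm.pdf(y) * (1 + kappa3 / factorial(3) * h3
                            + kappa4 / factorial(4) * h4)

y = np.linspace(-4, 4, 5)
print(gram_charlier_pdf(y, kappa3=0.5, kappa4=0.8))
```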
If $y$ is not a standardized variable then
\[
c_j = (-1)^j\frac{1}{j!}\Big[\mu_j' - \frac{(j)_2}{2!}\,\mu_{j-2}' + \frac{(j)_4}{2^2\,2!}\,\mu_{j-4}' - \cdots\Big],
\]
where $(j)_r = j(j-1)\cdots(j-r+1)$. Then
\[
c_0 = 1,\quad c_1 = 0,\quad c_2 = \frac{1}{2!}(\mu_2 - 1),\quad c_3 = -\frac{1}{3!}\mu_3,\quad c_4 = \frac{1}{4!}(\mu_4 - 6\mu_2 + 3),\quad c_5 = -\frac{1}{5!}(\mu_5 - 10\mu_3),\quad c_6 = \frac{1}{6!}(\mu_6 - 15\mu_4 + 45\mu_2 - 15).
\]
In this case
\[
f(y) = \phi(y)\Big[1 + \frac{1}{2!}(\mu_2 - 1)H_2 + \frac{1}{3!}\mu_3 H_3 + \frac{1}{4!}(\mu_4 - 6\mu_2 + 3)H_4 + \cdots\Big].
\]

The Edgeworth Type A (1905) series is closely related to the Gram–Charlier series. For this we obtain the characteristic function around the normal distribution and then use the inversion theorem to obtain the series expansion of the density. Let us write

\[
\psi(t) = \exp[K(t)] = e^{-(1/2)t^2}\exp\Big[\sum_{j=3}^{\infty}\frac{(it)^j}{j!}\kappa_j\Big] = e^{-(1/2)t^2}\sum_{j=0}^{\infty}(-it)^j c_j,
\]
where $c_j$ is given by (A.10). Alternatively, (A.14) can be obtained by using (A.4) as
\[
\psi(t) = e^{-(1/2)t^2}\int e^{ity - (1/2)(it)^2} f(y)\,dy = e^{-(1/2)t^2}\int\sum_{j=0}^{\infty}\frac{(it)^j}{j!}H_j(y)f(y)\,dy = e^{-(1/2)t^2}\sum_{j=0}^{\infty}(-it)^j c_j.
\]

Now using the inversion theorem

\[
f(y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-ity}\psi(t)\,dt
\]
and the fact that if $f$ has the characteristic function $\psi(t)$ then $f^{(j)}$ has the characteristic function $(-it)^j\psi(t)$, which gives
\[
\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-ity}e^{-(1/2)t^2}(-it)^j\,dt = \phi^{(j)}(y),
\]
we obtain the Gram–Charlier type series expansion in (A.11). If we collect the terms containing elements not higher than $H_6$ we can write
\[
f(y) = \phi(y)\Big(1 + \frac{\kappa_3}{3!}H_3 + \frac{\kappa_4}{4!}H_4 + \frac{\kappa_5}{5!}H_5 + \frac{\kappa_6 + 10\kappa_3^2}{6!}H_6\Big).
\]
This is often called the Edgeworth form of the Type A series; see Kendall and Stuart (1977). Further, if cumulants above the fourth are neglected, the Edgeworth series reduces to
\[
f(y) = \phi(y)\Big[1 + \frac{\kappa_3}{3!}H_3 + \frac{\kappa_4}{4!}H_4 + \frac{10\kappa_3^2}{6!}H_6\Big].
\]

We note that the above series can also be written as

\[
f(y) = \exp\Big(-\frac{\kappa_3 D^3}{3!} + \frac{\kappa_4 D^4}{4!} - \cdots\Big)\phi(y),
\]
where $D = d/dy$. This is the form originally suggested by Edgeworth (1905); see also Kendall and Stuart (1977). The idea behind these series goes back to Chebyshev (1890); see Cramér (1925, 1928) for the historical details.

A.3 Asymptotic Expansion and Asymptotic Approximation

For large values of nonstochastic $x$, the series
\[
f(x) = A_0 + \frac{A_1}{x} + \frac{A_2}{x^2} + \cdots + \frac{A_n}{x^n} + \cdots
\]
is the asymptotic expansion of a function $f(x)$ if the coefficients are determined as follows:
\[
\begin{aligned}
A_0 &= \lim_{|x|\to\infty} f(x),\\
A_1 &= \lim_{|x|\to\infty} x\big(f(x) - A_0\big),\\
A_2 &= \lim_{|x|\to\infty} x^2\Big(f(x) - A_0 - \frac{A_1}{x}\Big),\\
&\;\;\vdots\\
A_n &= \lim_{|x|\to\infty} x^n\Big(f(x) - A_0 - \frac{A_1}{x} - \cdots - \frac{A_{n-1}}{x^{n-1}}\Big).
\end{aligned}
\]

The series on the right, viz.,
\[
A_0 + \frac{A_1}{x} + \frac{A_2}{x^2} + \cdots,
\]
may be convergent for large values of $x$ or divergent for all values of $x$.

However, it should be noted that the difference between $f(x)$ and the sum of the first $n$ terms of its asymptotic expansion,
\[
f(x) - \Big(A_0 + \frac{A_1}{x} + \cdots + \frac{A_{n-1}}{x^{n-1}}\Big),
\]
is of the same order as the $(n+1)$th term when $|x|$ is large. Then the asymptotic expansion may be considered more suitable for approximate numerical computation than a convergent series.

Let us illustrate this point with the help of the following example, from Whittaker and Watson (1965: 150–151).

Consider the function

f ( x ) = - 1 t e x - t d t ,
where $x$ is real and positive. By repeated integration by parts, we get
\[
f(x) = \frac{1}{x} - \frac{1}{x^2} + \frac{2!}{x^3} - \cdots + \frac{(-1)^{n-1}(n-1)!}{x^n} + \cdots.
\]
We observe that the absolute value of the ratio of the $(m+1)$th term to the $m$th term is equal to
\[
\frac{m}{x},
\]
which tends to $\infty$ as $m \to \infty$ for all values of $x$. It follows that the series expansion of $f(x)$ is, in fact, divergent for all values of $x$. In spite of this, however, the series can be used for the calculation of $f(x)$. This may be seen as follows. Write
\[
S_n(x) = \frac{1}{x} - \frac{1}{x^2} + \frac{2!}{x^3} - \cdots + \frac{(-1)^n n!}{x^{n+1}}
\]
and
\[
R_n(x) = (-1)^{n+1}(n+1)!\int_x^{\infty}\frac{e^{x-t}}{t^{n+2}}\,dt
\]
such that
\[
f(x) = S_n(x) + R_n(x).
\]
Then, because $e^{x-t} < 1$ for $t > x$,
\[
|f(x) - S_n(x)| = (n+1)!\int_x^{\infty}\frac{e^{x-t}}{t^{n+2}}\,dt < (n+1)!\int_x^{\infty}\frac{dt}{t^{n+2}} = \frac{n!}{x^{n+1}}.
\]
This is very small (for any value of $n$) for sufficiently large values of $x$. It follows, therefore, that the value of the function $f(x)$ can be calculated with great accuracy for large values of $x$. Even for small values of $x$ and $n$,
\[
S_5(10) = 0.09152
\]
and
\[
0 < f(10) - S_5(10) < 0.00012.
\]
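The point is easy to verify numerically. In the sketch below (assuming SciPy; not part of the original text) the function is evaluated as $f(x) = e^x E_1(x)$, where $E_1$ is the exponential integral, and compared with the partial sum $S_5(10)$.

```python
import numpy as np
from scipy.special import exp1, factorial

def S(n, x):
    """Partial sum S_n(x) of the divergent asymptotic series."""
    k = np.arange(n + 1)
    return np.sum((-1.0) ** k * factorial(k) / x ** (k + 1))

x = 10.0
f_x = np.exp(x) * exp1(x)     # f(x) = e^x E_1(x) = int_x^inf e^(x-t)/t dt
print(S(5, x))                # 0.09152
print(f_x - S(5, x))          # positive and below 5!/10^6 = 0.00012
```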

It has also been shown by Whittaker and Watson (1965: 153, section 8.31) that it is permissible to integrate an asymptotic expansion term by term, the resulting series being the asymptotic expansion of the integral of the function represented by the original series. It has also been stated that a given series can be an asymptotic expansion of several distinct functions; however, a given function cannot be represented by more than one asymptotic expansion; see Copson (1967), Kendall and Stuart (1977), and Srinivasan (1970).

A.3.1 Asymptotic Expansion (Stochastic)

Now we consider the case of a stochastic asymptotic expansion. Most econometric estimators and test statistics can be expanded in a power series in $n^{-1/2}$ with coefficients that are well-behaved random variables. Suppose, for example, that $Z_n$ is an estimator or test statistic whose stochastic expansion is

\[
Z_n = T_n + \frac{A_n}{\sqrt{n}} + \frac{B_n}{n} + \frac{R_n}{n\sqrt{n}} = \xi_0 + \xi_{-1/2} + \xi_{-1} + O_p(n^{-3/2}),
\]
where $\xi_j = O_p(n^{j})$ and $T_n$, $A_n$, and $B_n$ are sequences of random variables with limiting distributions as $n$ tends to infinity. If $R_n$ is stochastically bounded, that is, $P[|R_n| > c] < \varepsilon$ as $n \to \infty$ for every $\varepsilon > 0$ and a constant $c$, then the limiting distribution of $\sqrt{n}(Z_n - \xi_0)$ is the same as the limiting distribution of $A_n$. Then the expansion in (A.19) is the asymptotic expansion of $Z_n$; see Chapter 3 for examples.

A.4 Moments of the Quadratic Forms Under Normality

Let $N_i$, for $i = 1, 2, 3, 4$, be symmetric matrices. Further consider an $n \times 1$ vector $y$ that is normally distributed with mean vector $Ey = \mu$ and variance matrix $V(y) = \Sigma$. Then the following results can be verified from the result of Lemma 1 and Exercises 2 and 4 in Chapter 2.

To write these results we first introduce the following notation:

\[
\begin{aligned}
a_p &= \operatorname{tr}(N_p\Sigma), & a_{pq} &= \operatorname{tr}(N_p\Sigma N_q\Sigma), & a_{pqr} &= \operatorname{tr}(N_p\Sigma N_q\Sigma N_r\Sigma), & a_{pqrs} &= \operatorname{tr}(N_p\Sigma N_q\Sigma N_r\Sigma N_s\Sigma),\\
\theta_p &= \mu' N_p\mu, & \theta_{pq} &= \mu' N_p\Sigma N_q\mu, & \theta_{pqr} &= \mu' N_p\Sigma N_q\Sigma N_r\mu, & \theta_{pqrs} &= \mu' N_p\Sigma N_q\Sigma N_r\Sigma N_s\mu,
\end{aligned}
\]
where tr represents the trace of a matrix. Then
\[
\begin{aligned}
E(y'N_1y) &= a_1 + \theta_1,\\
E(y'N_1y)(y'N_2y) &= \prod_{p=1}^{2}(a_p + \theta_p) + \sum_{p=1}^{2}\sum_{\substack{q=1\\ q\neq p}}^{2}\big(a_{pq} + 2\theta_{pq}\big),\\
E(y'N_1y)(y'N_2y)(y'N_3y) &= \prod_{p=1}^{3}(a_p + \theta_p) + \sum_{p=1}^{3}\sum_{q=1}^{3}\sum_{\substack{r=1\\ q\neq p\neq r}}^{3}\Big(a_pa_{qr} + 2(a_p + \theta_p)\theta_{qr} + a_{pq}\theta_r + \tfrac{4}{3}a_{pqr} + 4\theta_{pqr}\Big),\\
E(y'N_1y)(y'N_2y)(y'N_3y)(y'N_4y) &= \prod_{p=1}^{4}(a_p + \theta_p) + \sum_{p=1}^{4}\sum_{q=1}^{4}\sum_{r=1}^{4}\sum_{\substack{s=1\\ q\neq p\neq r\neq s}}^{4}\Big[\tfrac{4}{3}a_pa_{qrs} + a_{rs}\big(a_pa_q + a_{pq}\big)\\
&\qquad + \theta_{rs}\big(2a_p(\theta_q + a_q) + 2a_{pq} + \theta_p\theta_q + 4\theta_{pq}\big) + \theta_s\big(a_pa_{qr} + a_{pqr} + \tfrac{1}{2}a_{pq}\theta_r\big)\\
&\qquad + \tfrac{4}{3}\theta_{qrs}(\theta_p + a_p) + \tfrac{2}{3}a_{pqrs} + 8\theta_{pqrs}\Big].
\end{aligned}
\]
The above results can also be developed from Mathai and Provost (1992), where the moment generating function approach is used.

In the special case where μ = 0, yN(0, Σ), the above results reduce to the following results.

\[
\begin{aligned}
E(y'N_1y) &= a_1,\\
E(y'N_1y)(y'N_2y) &= a_1a_2 + 2a_{12},\\
E(y'N_1y)(y'N_2y)(y'N_3y) &= a_1a_2a_3 + 2\big[a_1a_{23} + a_2a_{13} + a_3a_{12}\big] + 8a_{123},\\
E(y'N_1y)(y'N_2y)(y'N_3y)(y'N_4y) &= a_1a_2a_3a_4 + 8\big[a_1a_{234} + a_2a_{134} + a_3a_{124} + a_4a_{123}\big]\\
&\quad + 4\big[a_{12}a_{34} + a_{13}a_{24} + a_{14}a_{23}\big]\\
&\quad + 2\big[a_1a_2a_{34} + a_1a_3a_{24} + a_1a_4a_{23} + a_2a_3a_{14} + a_2a_4a_{13} + a_3a_4a_{12}\big]\\
&\quad + 16\big[a_{1234} + a_{1243} + a_{1324}\big].
\end{aligned}
\]
When yN(0, I) then these results in (A.22) remain the same except that Σ = I in all the terms. For the alternative derivations of these results see Magnus (1978, 1979), Srivastava and Tiwari (1976), and Mathai and Provost (1992).

In another special case, where $y \sim N(0, I)$ and $N_1$ is an idempotent matrix of rank $m$, $y'N_1y$ is distributed as a central $\chi^2$ with $m$ d.f. In this case $\operatorname{tr}(N_1^i) = m$ for all $i \ge 1$, and

\[
\begin{aligned}
E(y'N_1y) &= m, & E(y'N_1y)^2 &= m(m+2),\\
E(y'N_1y)^3 &= m(m+2)(m+4), & E(y'N_1y)^4 &= m(m+2)(m+4)(m+6),
\end{aligned}
\]
which generalizes to
\[
E(y'N_1y)^r = \frac{2^r\,\Gamma\big((m/2) + r\big)}{\Gamma(m/2)}, \qquad r \ge 1.
\]
This is the rth moment of a central χ2 distribution.
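For instance (a sketch assuming SciPy, not from the text), with $m = 5$ and $r = 3$ both the closed form and SciPy's moment routine return $m(m+2)(m+4) = 315$:

```python
from math import gamma
from scipy.stats import chi2

m, r = 5, 3
formula = 2 ** r * gamma(m / 2 + r) / gamma(m / 2)
print(formula, chi2(m).moment(r))   # both 315 = 5 * 7 * 9
```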

When the matrix $N_1$ is not symmetric we can write $y'N_1y = y'\big((N_1 + N_1')/2\big)y$, where $(N_1 + N_1')/2$ is always a symmetric matrix. Thus the above results also go through when the matrices $N_1$ to $N_4$ are not symmetric.

A.5 Moments of Quadratic Forms Under Nonnormality

Let $y = (y_1, \ldots, y_n)'$ be an $n \times 1$ vector of i.i.d. elements with
\[
\begin{aligned}
Ey_i &= 0, & Ey_i^2 &= \sigma^2, & Ey_i^3 &= \sigma^3\gamma_1, & Ey_i^4 &= \sigma^4(\gamma_2 + 3),\\
Ey_i^5 &= \sigma^5(\gamma_3 + 10\gamma_1), & Ey_i^6 &= \sigma^6(\gamma_4 + 10\gamma_1^2 + 15\gamma_2 + 15)
\end{aligned}
\]
for $i = 1, \ldots, n$, where $\gamma_1$ and $\gamma_2$ are Pearson's measures of skewness and kurtosis of the distribution; these, together with $\gamma_3$ and $\gamma_4$, can be regarded as measures of deviation from normality. For normal distributions the parameters $\gamma_1$, $\gamma_2$, $\gamma_3$, and $\gamma_4$ are zero, while for symmetrical distributions only $\gamma_1$ and $\gamma_3$ are zero. These $\gamma$'s can also be expressed in terms of cumulants; for example, $\gamma_1$ and $\gamma_2$ correspond to the (standardized) third and fourth cumulants; see Section A.1.

Under the above assumptions the following results hold, where the matrices $N_1$ and $N_2$ are not assumed to be symmetric, $\iota$ is an $n \times 1$ vector of unit elements, and $*$ represents the Hadamard product:

\[
\begin{aligned}
\frac{1}{\sigma^2}E(y'N_1y) &= \operatorname{tr}(N_1),\\
\frac{1}{\sigma^3}E(y'N_1y\cdot y) &= \gamma_1(I_n * N_1)\iota,\\
\frac{1}{\sigma^4}E(y'N_1y\cdot yy') &= \gamma_2(I_n * N_1) + (\operatorname{tr}N_1)I_n + N_1 + N_1',\\
\frac{1}{\sigma^5}E(y'N_1y\cdot y'N_2y\cdot y) &= \gamma_3(I_n * N_1 * N_2)\iota + \gamma_1\big[\big((\operatorname{tr}N_1)I_n + N_1 + N_1'\big)(I_n * N_2)\\
&\qquad + \big((\operatorname{tr}N_2)I_n + N_2 + N_2'\big)(I_n * N_1) + (I_n * N_1N_2) + (I_n * N_1N_2')\\
&\qquad + (I_n * N_1'N_2) + (I_n * N_1'N_2')\big]\iota,\\
\frac{1}{\sigma^6}E(y'N_1y\cdot y'N_2y\cdot yy') &= \gamma_4(N_1 * N_2) + \gamma_2\big[\operatorname{tr}(N_1 * N_2)I_n + (\operatorname{tr}N_1)(I_n * N_2) + (\operatorname{tr}N_2)(I_n * N_1)\\
&\qquad + \big\{I_n * \big(N_1(N_2 + N_2') + N_2(N_1 + N_1')\big)\big\} + (N_1 + N_1')(I_n * N_2)\\
&\qquad + (N_2 + N_2')(I_n * N_1) + (I_n * N_2)(N_1 + N_1') + (I_n * N_1)(N_2 + N_2')\big]\\
&\qquad + \gamma_1^2\big[(N_1 + N_1') * (N_2 + N_2') + (I_n * N_1)\iota\iota'(I_n * N_2) + (I_n * N_2)\iota\iota'(I_n * N_1)\\
&\qquad + I_n * \big\{(N_1 + N_1')(I_n * N_2)\big\}\iota\iota' + I_n * \big\{(N_2 + N_2')(I_n * N_1)\big\}\iota\iota'\big]\\
&\qquad + \big[(\operatorname{tr}N_1)I_n + N_1 + N_1'\big](N_2 + N_2') + \big[(\operatorname{tr}N_2)I_n + N_2 + N_2'\big](N_1 + N_1')\\
&\qquad + \big[\operatorname{tr}(N_1N_2) + \operatorname{tr}(N_1N_2') + (\operatorname{tr}N_1)(\operatorname{tr}N_2)\big]I_n.
\end{aligned}
\]
Setting γ1, γ2, γ3, and γ4 equal to zero, we obtain the results for normally distributed disturbances given above. For derivations and applications, see Chandra (1983) and Ullah, Srivastava, and Chandra (1983).

We also note that $E(y'N_1y\cdot yy')$ above gives the result for $E\big((y'N_1y)(y'N_2y)\big) = \operatorname{tr}\big[N_2\,E(y'N_1y\cdot yy')\big]$. Similarly $E\big[(y'N_1y)(y'N_2y)\,yy'\big]$ gives the result for $E\big((y'N_1y)(y'N_2y)(y'N_3y)\big) = \operatorname{tr}\big[N_3\,E(y'N_1y\cdot y'N_2y\cdot yy')\big]$. Further, for the case where the mean of $y$ is a vector $Ey = \mu$, the above results can be extended by writing, say, $y'N_1y = (y - \mu + \mu)'N_1(y - \mu + \mu) = (y - \mu)'N_1(y - \mu) + \mu'N_1\mu + 2(y - \mu)'N_1\mu$. Then $E(y'N_1y) = E\big((y - \mu)'N_1(y - \mu)\big) + \mu'N_1\mu = \sigma^2\operatorname{tr}(N_1) + \mu'N_1\mu$.
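The role of $\gamma_1$ in these formulas can be illustrated by simulation. The sketch below (assuming NumPy, not from the text) checks the second result, $E(y'N_1y\cdot y) = \sigma^3\gamma_1(I_n * N_1)\iota$, using centered Exp(1) draws, for which $\sigma = 1$ and $\gamma_1 = 2$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
N1 = rng.standard_normal((n, n))                      # need not be symmetric
y = rng.exponential(1.0, size=(500_000, n)) - 1.0     # i.i.d., mean 0, sigma = 1, gamma1 = 2

q = np.einsum("ij,jk,ik->i", y, N1, y)                # y'N1y, draw by draw
lhs = (q[:, None] * y).mean(axis=0)                   # E(y'N1y * y)
rhs = 2.0 * np.diag(N1)                               # gamma1 (I_n * N1) iota = gamma1 diag(N1)
print(lhs)
print(rhs)
```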

Now consider the case where the elements $y_i$ are non-i.i.d. such that
\[
Ey_i = 0,\qquad E(y_iy_j) = \sigma_{ij},\qquad E(y_iy_jy_k) = \sigma_{ijk},\qquad E(y_iy_jy_ky_l) = \sigma_{ijkl}
\]
for $i, j, k, l = 1, \ldots, n$. Define an $n \times n$ matrix $\Theta_k = ((\sigma_{ijk}))$ and another $n \times n$ matrix $\Delta_{kl} = ((\sigma_{ijkl}))$ for $i, j = 1, \ldots, n$. Further denote
\[
\theta = \begin{bmatrix}\operatorname{tr}(N_1\Theta_1)\\ \vdots\\ \operatorname{tr}(N_1\Theta_n)\end{bmatrix},\qquad
\Delta = \begin{bmatrix}\operatorname{tr}(N_1\Delta_{11}) & \cdots & \operatorname{tr}(N_1\Delta_{1n})\\ \vdots & & \vdots\\ \operatorname{tr}(N_1\Delta_{n1}) & \cdots & \operatorname{tr}(N_1\Delta_{nn})\end{bmatrix}.
\]
Then
\[
E(y'N_1y\cdot y) = \theta,\qquad E(y'N_1y\cdot yy') = \Delta.
\]

A.6 Moment of Quadratic Form of a Vector of Squared Nonnormal Random Variables

Consider an n × 1 random vector

e = M y ,
where y is an n × 1 random vector with Ey = 0, V(y) = σ2 I n, and M is an n × n idempotent matrix of rank r.

Let us write $e = (e_1, \ldots, e_n)'$, where $e_i = m_iy$ and $m_i$ is the $i$th $1 \times n$ row vector of the matrix
\[
M = \begin{bmatrix} m_1\\ \vdots\\ m_n \end{bmatrix} = ((m_{ij})),
\]
$i, j = 1, \ldots, n$. Further denote $\dot e = (e_1^2, \ldots, e_n^2)'$ as an $n \times 1$ vector of the squares of $e_i$ and $\dot M = ((m_{ij}^2))$ as an $n \times n$ matrix of the squares of the elements of $M$. Then, for a matrix $N = ((n_{ij}))$,
\[
E(\dot e'N\dot e) = \sigma^4\big[\gamma_2\operatorname{tr}(N\dot M^2) + 2\operatorname{tr}(N\dot M) + \iota'\dot M N\dot M\iota\big]
\]
and when $\dot M \simeq I_n$,
\[
E(\dot e'N\dot e) \simeq \sigma^4\big[(\gamma_2 + 2)\operatorname{tr}N + \iota'N\iota\big].
\]
For the proof of this write
E ( e ˙ N e ˙ ) = i j n i j E ( e i 2 e j 2 ) , = i j n i j E ( y m i m i y y m j m j y ) .
Then using the results in (A.25) we get the result in (A.29).

When yN(0,σ2 I) then

E ( e ˙ N e ˙ ) = σ 4 [ ι M ˙ N M ˙ ι + 2 tr ( N M ˙ ) ] .

An application of this result occurs in the linear regression model $y = X\beta + u$, where $X$ is an $n \times k$ matrix. In this case $e = My = Mu$, where $M = I - X(X'X)^{-1}X'$ is an idempotent matrix of rank $n - k$. Several tests of heteroskedasticity are expressible as a quadratic form in $\dot e$; see Chapter 5.
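As an illustration (a sketch assuming NumPy, not from the text, with the arbitrary choice $N = I_n$), the normal-error formula above can be checked by simulating least squares residuals $e = Mu$ and averaging $\dot e'N\dot e$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, sigma = 20, 3, 1.5
X = rng.standard_normal((n, k))
M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)     # residual maker, rank n - k
M_dot = M ** 2                                        # elementwise squares of M
N = np.eye(n)                                         # any fixed n x n matrix
iota = np.ones(n)

u = sigma * rng.standard_normal((200_000, n))
e_dot = (u @ M) ** 2                                  # squared residuals (M is symmetric)
lhs = np.einsum("ij,jk,ik->i", e_dot, N, e_dot).mean()
rhs = sigma ** 4 * (iota @ M_dot @ N @ M_dot @ iota + 2 * np.trace(N @ M_dot))
print(lhs, rhs)
```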

A.7 Moments of Quadratic Forms in Random Matrices

Let us consider a stochastic $n \times M$ matrix $Y = ((y_{it})) = (y_1, \ldots, y_M)$, where $t = 1, \ldots, n$, $i = 1, \ldots, M$, and $y_i$ is an $n \times 1$ vector. We assume that, for all $i$ and $t$,

\[
\begin{aligned}
Ey_{it} &= 0,\\
E(y_{it}y_{jt'}) &= \sigma_{ij} \ \text{ if } t = t', \quad = 0 \ \text{ otherwise},\\
E(y_{it}y_{jt'}y_{kt''}) &= \sigma_{ijk} \ \text{ if } t = t' = t'', \quad = 0 \ \text{ otherwise},\\
E(y_{it}y_{jt'}y_{kt''}y_{et'''}) &= \sigma_{ijke} \ \text{ if } t = t' = t'' = t''',\\
&= \sigma_{ij}\sigma_{ke} \ \text{ if } t = t',\ t'' = t''', \text{ but } t \neq t'',\\
&= \sigma_{ik}\sigma_{je} \ \text{ if } t = t'',\ t' = t''', \text{ but } t \neq t',\\
&= \sigma_{ie}\sigma_{jk} \ \text{ if } t = t''',\ t' = t'', \text{ but } t \neq t',\\
&= 0 \ \text{ otherwise},
\end{aligned}
\]
where j, k, e = 1, …, M.

Define

\[
\gamma_{ijke} = \sigma_{ijke} - \sigma_{ij}\sigma_{ke} - \sigma_{ik}\sigma_{je} - \sigma_{ie}\sigma_{jk}
\]
so that $\sigma_{ijk}$ and $\gamma_{ijke}$ are $0$ for normally distributed disturbances.

If N 1 is any nonstochastic matrix, we have

\[
\begin{aligned}
E(y_i'N_1y_j) &= \sigma_{ij}\operatorname{tr}(N_1),\\
E(y_i'N_1y_j\cdot y_k) &= \sigma_{ijk}(I * N_1)\iota,\\
E(y_i'N_1y_j\cdot y_ky_e') &= \gamma_{ijke}(I * N_1) + \sigma_{ij}\sigma_{ke}(\operatorname{tr}N_1)I + \sigma_{ik}\sigma_{je}N_1 + \sigma_{ie}\sigma_{jk}N_1'.
\end{aligned}
\]
Further denoting $(1/n)E(Y'Y) = \Sigma = ((\sigma_{ij}))$ and considering $N_1$ of appropriate dimensions we have
\[
E(Y'N_1Y) = (\operatorname{tr}N_1)\,\Sigma,\qquad
E(YN_1Y') = (\operatorname{tr}N_1\Sigma)\,I_n,\qquad
E(YN_1Y) = N_1'\Sigma.
\]
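A simulation sketch (assuming NumPy, not from the text; normal rows are used only for convenience, since the result depends just on the first two moments) of the first of these results:

```python
import numpy as np

rng = np.random.default_rng(4)
n, M, reps = 6, 3, 100_000
A = rng.standard_normal((M, M)); Sigma = A @ A.T
N1 = rng.standard_normal((n, n))

Y = rng.multivariate_normal(np.zeros(M), Sigma, size=(reps, n))  # rows i.i.d. (0, Sigma)
lhs = np.einsum("rti,tu,ruj->ij", Y, N1, Y) / reps               # average of Y'N1Y
print(lhs)
print(np.trace(N1) * Sigma)                                      # (tr N1) Sigma
```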
Now introducing the notation
\[
\sigma_{(ij)} = \begin{pmatrix}\sigma_{ij1}\\ \vdots\\ \sigma_{ijM}\end{pmatrix};\qquad
\Delta_{(h)} = \begin{pmatrix}\sigma_{11h} & \cdots & \sigma_{1Mh}\\ \vdots & & \vdots\\ \sigma_{M1h} & \cdots & \sigma_{MMh}\end{pmatrix}
\]
and considering $N_1$ and $N_2$ of appropriate dimensions we have
E ( Y N 1 Y N 2 Y ) = ( ( σ ( h g ) N 2 ( I * N 1 ) ι ) ) , E ( Y N 1 Y N 2 Y ) = [ ( tr  N 1 Δ ( 1 ) , , tr  N 1 Δ ( M ) ) ( I * N 2 ) ι ] , E ( Y N 1 Y N 2 Y ) = ( ( σ ( h g ) N 1 ( I * N 2 ) ι ) ) , E ( Y N 1 Y N 2 Y ) = ( ( I * N 1 Δ ( 1 ) N 2 ) ι , , ( I * N 1 Δ ( M ) N 2 ) ι ) , E ( Y N 1 Y N 2 Y ) = ( tr  N 2 Δ ( 1 ) tr  N 2 Δ ( M ) ) ι ( I * N 1 ) , E ( Y N 1 Y N 2 Y ) = N 2 ( tr  N 1 Δ ( 1 ) tr  N 1 Δ ( M ) ) ι , E ( Y N 1 Y N 2 Y ) = ( ι ( I * N 2 Δ ( 1 ) N 1 ) ι ( I * N 2 Δ ( M ) N 1 ) ) , E ( Y N 1 Y N 2 Y ) = ι ( tr  N 2 Δ ( 1 ) , , tr  N 2 Δ ( M ) ) N 1 .
For the following results we introduce additional matrix representations:
\[
\Gamma_{(ij)} = ((\gamma_{ijke})),\qquad
N_1 = (n_{11}, \ldots, n_{1M}),\quad N_2 = (n_{21}, \ldots, n_{2M}),\quad N_3 = (n_{31}, \ldots, n_{3M}),
\]
where $n_{1i}$, $n_{2i}$, and $n_{3i}$ are now vectors for $i = 1, \ldots, M$. We also use $N_1 = ((n_{1ij}))$, $N_2 = ((n_{2ij}))$, and $N_3 = ((n_{3ij}))$. Then
E ( Y N 1 Y N 2 Y N 3 Y ) = i j N 3 ( i j ) ( I * n 1 i n 2 j ) + N 3 Σ N 2 N 1 Σ + N 1 Σ N 2 N 3 Σ + ( tr  N 3 Σ N 1 ) N 2 Σ , E ( Y N 1 Y N 2 Y N 3 Y ) = ( I * N 3 ) i j n 2 i j N 1 ( i j ) + ( tr  N 2 ) N 1 Σ N 2 Σ + N 3 N 1 Σ N 2 Σ + ( tr  N 2 Σ ) N 3 N 1 Σ , E ( Y N 1 Y N 2 Y N 3 Y ) = i j ( tr  N 3 ( i j ) ) ( I * n 1 i n 2 j ) + ( tr  N 1 Σ N 3 Σ N 2 ) I n + ( tr  N 3 Σ ) N 1 Σ N 2 + N 2 Σ N 3 Σ N 1 , E ( Y N 1 Y N 2 Y N 3 Y ) = i j ( tr  N 2 ( i j ) ) ( I * n 3 j n 1 i ) + ( tr  N 3 Σ N 1 ) × ( tr  N 2 Σ ) I n + N 3 Σ N 2 Σ N 1 + N 1 Σ N 2 Σ N 3 , E ( Y N 1 Y N 2 Y N 3 Y ) = M 1 + ( tr  N 1 ) Σ N 2 N 3 Σ + Σ N 3 N 1 N 2 Σ + ( tr  Σ N 2 N 1 N 3 ) Σ , E ( Y N 1 Y N 2 Y N 3 Y ) = M 2 + ( tr  N 1 ) ( tr  N 3 ) Σ N 2 Σ + ( tr  N 1 N 3 ) ( tr  N 2 Σ ) Σ + ( tr  N 1 N 3 ) Σ N 2 Σ , E ( Y N 1 Y N 2 Y N 3 Y ) = M 3 N 3 ( I * N 1 ) + ( tr  N 1 ) ( tr  Σ N 3 ) Σ N 2 + Σ N 3 Σ N 2 N 1 + Σ N 3 Σ N 2 N 1 , E ( Y N 1 Y N 2 Y N 3 Y ) = M 4 N 3 ( I * N 1 ) + ( tr  N 1 ) Σ N 2 Σ N 3 + Σ N 2 Σ N 3 N 1 + ( tr  Σ N 2 ) Σ N 3 N 1 , E ( Y N 1 Y N 2 Y N 3 Y ) = ( I * N 2 ) N 3 M 5 + ( tr  N 2 ) N 3 Σ N 1 Σ + ( tr  N 1 Σ ) N 2 N 3 Σ + N 2 N 3 Σ N 1 Σ , E ( Y N 1 Y N 2 Y N 3 Y ) = ( I * N 3 ) N 2 M 5 + ( tr  N 3 ) ( tr  Σ N 1 ) N 2 Σ + N 3 N 2 Σ ( N 1 + N 1 ) Σ , E ( Y N 1 Y N 2 Y N 3 Y ) = ( tr  N 3 M 5 ) ( I * N 2 ) + ( tr  N 1 Σ N 3 Σ ) N 2 + ( tr  N 1 Σ ) ( tr  N 3 Σ ) N 2 + ( tr  N 1 Σ N 3 Σ ) N 2 E ( Y N 1 Y N 2 Y N 3 Y ) = i j ( tr  N 1 Γ ( i j ) ) ( I * n 3 i n 2 j ) + ( tr  N 3 Σ N 1 Σ N 2 ) I n + ( tr  N 1 Σ ) N 2 Σ N 3 + N 3 Σ N 1 Σ N 2 , E ( Y N 1 Y N 2 Y N 3 Y ) = M 6 + ( tr  N 3 Σ N 1 N 2 ) Σ + Σ N 1 N 2 N 3 Σ + Σ N 3 N 2 N 1 Σ , E ( Y N 1 Y N 2 Y N 3 Y ) = M 7 + ( tr  N 3 ) Σ N 1 N 2 Σ + Σ N 2 N 3 N 1 Σ + ( tr  Σ N 2 N 3 N 1 ) Σ , E ( Y N 1 Y N 2 Y N 3 Y ) = M 8 N 1 ( I * N 2 ) + ( tr  N 2 ) Σ N 3 Σ N 1 + ( tr  N 3 Σ ) Σ N 1 N 2 + Σ N 3 Σ N 1 N 2 , E ( Y N 1 Y N 2 Y N 3 Y ) = i j ( I * n 3 i n 2 j ) Γ ( i j ) ι N 1 + Σ N 3 N 2 Σ N 1 + Σ N 1 N 2 Σ N 3 + ( tr  N 1 Σ N 3 ) Σ N 2
where M = ((m ij)) and m ij for M 1 to M 8 are, respectively,
tr ( Γ ( i j ) N 2 ( I * N 1 ) N 3 ) , ( tr ( I * N 1 ) N 3 ) ( tr  N 2 Γ ( i j ) ) , tr ( Γ ( i j ) N 3 ) , tr ( Γ ( i j ) N 2 ) , tr ( Γ ( i j ) N 1 ) , tr ( N 1 Γ ( i j ) N 3 ( I * N 2 ) ) , tr ( Γ ( i j ) N 2 ( I * N 3 ) N 1 ) , and tr ( Γ ( i j ) N 3 ) .

The above results simplify for the normal distribution by using $\sigma_{ijk} = 0$ and $\gamma_{ijke} = 0$. For applications, see Ullah and Srivastava (1994), Ullah (2002), and Srivastava and Maekawa (1995), among others. These results are useful in developing the moments of various econometric statistics under nonnormal errors; see also Lieberman (1997).

A.8 Distribution of Quadratic Forms

Let yN(μ, Σ) be an n × 1 normal vector with Ey = μ and V(y) = Σ. Further consider N to be an n × n nonstochastic matrix and b and c to be constants. Then

\[
y^* = y'Ny + 2b'y + c \sim \chi^2_r(\theta)
\]
if and only if $r = \operatorname{rank}(N\Sigma)$, $\Sigma N\Sigma N\Sigma = \Sigma N\Sigma$, $\Sigma(b + N\mu) = \Sigma N\Sigma(b + N\mu)$, and $\theta = c + 2b'\mu + \mu'N\mu$. Here $\chi^2_r(\theta)$ represents the noncentral chi-square distribution with $r$ d.f. and noncentrality parameter $\theta$. For $\theta = 0$, $\chi^2_r(\theta) = \chi^2_r$ becomes a central chi-square with $r$ d.f.; see Srivastava and Khatri (1979: 64), Rao (1973), and Mathai and Provost (1992).

A necessary and sufficient condition for

\[
y^* = y'Ny \sim \chi^2_r(\theta)
\]
is that $\Sigma N\Sigma N\Sigma = \Sigma N\Sigma$, with d.f. $r = \operatorname{rank}(N\Sigma)$ and $\theta = \mu'N\mu$. For $\mu = \theta = 0$, $y^* \sim \chi^2_r$, which is a central chi-square.

If $|\Sigma| \neq 0$ then the above necessary and sufficient condition becomes $N\Sigma N = N$, with d.f. $r = \operatorname{rank}(N\Sigma)$. This implies that $N\Sigma$, or $\Sigma^{1/2}N\Sigma^{1/2}$, is an idempotent matrix of rank $r$, where $\Sigma = \Sigma^{1/2}\Sigma^{1/2}$; see Rao (1973: 188) and Srivastava and Khatri (1979: 64).

A necessary and sufficient condition that the vector N 1 y and the quadratic form (y − μ)′N(y − μ) are statistically independent is ΣNΣN 1 = 0, or NΣN 1 = 0 if |Σ| ≠ 0, where N 1 is an n × n nonstochastic matrix. Similarly (y − μ)′N 1(y − μ) and (y − μ)′N(y − μ) are independent if ΣN 1ΣNΣ = 0, or N 1ΣN = 0 if |Σ| ≠ 0.

A.8.1 Density and Moments of a Noncentral Chi-square Variable

Let yN(μ, Σ). Then

\[
y^* = y'\Sigma^{-1/2}N\Sigma^{-1/2}y \sim \chi^2_r(\theta),
\]
where $N$ is assumed to be an idempotent matrix of rank $r$, $\theta = (1/2)\mu'\Sigma^{-1/2}N\Sigma^{-1/2}\mu$, and $\Sigma^{-1/2}y \sim N(\Sigma^{-1/2}\mu, I)$. The density function of the noncentral $\chi^2$ variable $y^*$ is
\[
f(y^*) = e^{-\theta}\sum_{i=0}^{\infty}\frac{\theta^i}{i!}\,\frac{(y^*)^{((r+2i)/2)-1}\,e^{-y^*/2}}{2^{(r+2i)/2}\,\Gamma\big((r+2i)/2\big)}.
\]
Further, the $s$th inverse moment of $y^*$ is
\[
E(y^*)^{-s} = \int_0^{\infty}(y^*)^{-s}f(y^*)\,dy^*,
\]
which gives, for $s = 1, 2, \ldots,$
\[
E(y^*)^{-s} = 2^{-s}\,\frac{\Gamma\big((r/2) - s\big)}{\Gamma(r/2)}\,e^{-\theta}\,{}_1F_1\Big(\frac{r}{2} - s;\ \frac{r}{2};\ \theta\Big),
\]
see Ullah (1974).
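The inverse-moment formula can be checked directly (a sketch assuming SciPy, not from the text). Note that SciPy parameterizes the noncentral chi-square by nc, with Poisson weight nc/2, so nc $= 2\theta$ in the notation above.

```python
import numpy as np
from scipy.special import gamma, hyp1f1
from scipy.stats import ncx2

r, s, theta = 8, 1, 1.5
formula = (2.0 ** (-s) * gamma(r / 2 - s) / gamma(r / 2)
           * np.exp(-theta) * hyp1f1(r / 2 - s, r / 2, theta))

draws = ncx2(df=r, nc=2 * theta).rvs(size=2_000_000, random_state=0)
print(formula, np.mean(draws ** (-s)))
```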

Using the derivatives of the confluent hypergeometric function in Slater (1960: 15, eq. 2.1.8) we have

\[
\frac{d^m E(y^*)^{-s}}{d\theta^m} = 2^{-s}(-1)^m\,\frac{\Gamma(s+m)}{\Gamma(s)}\,\frac{\Gamma\big((r/2) - s\big)}{\Gamma\big((r/2) + m\big)}\,e^{-\theta}\,{}_1F_1\Big(\frac{r}{2} - s;\ \frac{r}{2} + m;\ \theta\Big)
\]
for m = 1,2,….

When $\mu = 0$ so that $\theta = 0$, the density of $y^*$ given above reduces to the density of a central $\chi^2_r$. Further,

\[
E(y^*)^{-s} = 2^{-s}\,\frac{\Gamma\big((r/2) - s\big)}{\Gamma(r/2)}.
\]

We note that the distribution of $y'Ny = y'\Sigma^{-1/2}\Sigma^{1/2}N\Sigma^{1/2}\Sigma^{-1/2}y$ is not a noncentral $\chi^2$ distribution unless $\Sigma^{1/2}N\Sigma^{1/2}$ is idempotent. If $\Sigma^{1/2}N\Sigma^{1/2}$ is not idempotent, the $s$th inverse moment of $y'Ny$ is as given in Chapter 2.

A.8.2 Moment Generating Function and Characteristic Function

Let yN(μ, Σ), where Σ = PP′. Let Let N 1, N 2, …, N m be m symmetric n × n matrices. Then the joint moment generating function of yN 1 y, …, yN m y is

M ( t 1 , , t m ) = E exp  ( t 1 y N 1 y + ċċċ + t m y N m y ) , = | I - 2 C | - 1 / 2 exp { - 1 2 μ 0 μ 0 } exp  { - 1 2 μ 0 ( I - 2 C ) - 1 μ 0 } ,
where $C = P'(t_1N_1 + \cdots + t_mN_m)P$ and $\mu_0 = P^{-1}\mu$; see Magnus (1986) and Mathai and Provost (1992). For $m = 2$ this result is given in Chapter 2, and it has been used in Sawa (1972), Magnus (1986), and Mathai and Provost (1992) to obtain the moments of the product and ratio of quadratic forms. For example, if $y_1^* = y'N_1y$ and $y_2^* = y'N_2y$, then
\[
E\big((y_1^*)^{s_1}(y_2^*)^{s_2}\big) = \frac{\partial^{s_1+s_2}M(t_1, t_2)}{\partial t_1^{s_1}\,\partial t_2^{s_2}}\bigg|_{t_1=0,\,t_2=0}.
\]
For the moments of the ratio of y 1*/y 2*, see Chapter 2.

Now we consider the vector $y$ distributed nonnormally with $Ey = \mu$ and $V(y) = \Sigma$, which is a diagonal matrix of $\sigma_i^2$, $i = 1, \ldots, n$. If we let $\phi(y_i\mid\mu_i, \sigma_i^2)$ be a normal density with mean $\mu_i$ and variance $\sigma_i^2$, then the Edgeworth or Gram–Charlier series expansion of the density $f(y_i)$ in Section A.2 can be written as (Davis 1976)

\[
f(y_i) = \exp\Big[\sum_{r=3}^{\infty}c_r\,\phi^{(r)}(y_i\mid\mu_i, \sigma_i^2)\Big] = E_{z_i}\big[\phi(y_i\mid\mu_i + z_i, \sigma_i^2)\big],
\]
where $E_{z_i}(z_i^r) = (-1)^r r!\,c_r$ and $c_r$ is as in (A.10).

Since y|zN(μ + z, Σ), therefore using the Davis (1976) technique the characteristic (c.f.) or moment generating function (m.g.f.) under the {Edgeworth} density can be obtained in two steps. First find the c.f. for the normal case. Second consider the expectation of this c.f. with respect to z. With this approach Knight (1985) provided the c.f. of a linear form ay and the quadratic form yNy with corrections for skewness and kurtosis, that is the first four terms of the Edgeworth expansion.

These results on the c.f. and m.g.f. provide the moments of the products and ratio of quadratic forms under the Edgeworth density of y. For applications, see Knight (1985, 1986) for the moments and distribution of the 2SLS estimator and Peters (1989) for the moments of the LS estimator in a dynamic regression model.

A.8.3 Density Function Based on Characteristic Function

When the absolute value of the c.f. ψ(t) = ψ(t 1, …, t n) is integrable then the density function f(y) exists and is continuous for all y, and it is given by

\[
f(y) = \frac{1}{(2\pi)^n}\int_{-\infty}^{\infty}e^{-it'y}\psi(t)\,dt.
\]

This is known as the uniqueness theorem or inversion theorem for the c.f., see Cramér (1946).

Next consider the variable q, which is the ratio of two random variables Y and X, q = Y/X. Let ψ(t 1, t 2) be the c.f. of (Y,X). Then the density of q is given by

\[
f(q) = \frac{1}{2\pi i}\int_{-\infty}^{\infty}\bigg[\frac{\partial\psi(t_1, t_2)}{\partial t_2}\bigg]_{t_2 = -qt_1}dt_1,
\]
see Cramér (1946). Phillips (1985) generalizes this result to matrix quotients $Q = X^{-1}Y$, where $X$ is a $k \times k$ positive definite matrix whose expectation of the determinant exists and $Y$ is a $k \times l$ matrix. As an application of this result, Phillips (1985) shows that the LS regression coefficient matrix for a multivariate normal sample has a matrix t-distribution.

A.9 Hypergeometric Functions

Here we present well-known power series that are used in the text.

A power series

\[
{}_1F_1(a; c; x) = \frac{\Gamma(c)}{\Gamma(a)}\sum_{i=0}^{\infty}\frac{\Gamma(a+i)}{\Gamma(c+i)}\frac{x^i}{i!} = 1 + \frac{a}{c}x + \frac{a(a+1)}{c(c+1)}\frac{x^2}{2!} + \cdots,\qquad c > 0,\ |x| < \infty,
\]
is known as the confluent hypergeometric function or Kummer's series; see Slater (1960: 2). It has been used extensively in finite sample econometrics; see Ullah (1974), Sawa (1972), and Phillips (1983), among others. Also see Abadir (1999) for an introduction to hypergeometric functions for economists.

Another power series, the hypergeometric function, is written as

\[
{}_2F_1(a, b; c; x) = \frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}\sum_{i=0}^{\infty}\frac{\Gamma(a+i)\Gamma(b+i)}{\Gamma(c+i)}\frac{x^i}{i!}
\]
for $|c| > 0$ and $|x| < 1$; see Slater (1960) and Ullah and Nagar (1974).
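Both series are available in SciPy. The sketch below (an illustration assuming SciPy, not from the text) compares the definitions above, truncated at a fixed number of terms, with the built-in routines.

```python
import numpy as np
from scipy.special import gamma, factorial, hyp1f1, hyp2f1

def series_1f1(a, c, x, terms=60):
    i = np.arange(terms)
    return gamma(c) / gamma(a) * np.sum(gamma(a + i) / gamma(c + i) * x ** i / factorial(i))

def series_2f1(a, b, c, x, terms=60):
    i = np.arange(terms)
    return (gamma(c) / (gamma(a) * gamma(b))
            * np.sum(gamma(a + i) * gamma(b + i) / gamma(c + i) * x ** i / factorial(i)))

print(series_1f1(1.5, 3.0, 2.0), hyp1f1(1.5, 3.0, 2.0))
print(series_2f1(1.0, 2.0, 3.5, 0.4), hyp2f1(1.0, 2.0, 3.5, 0.4))
```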

Now we consider the integral of a function that has a power series expansion in terms of hypergeometric functions; see Sawa (1972). This is, for $0 \le k < 1$ and $p > 1$,
\[
G(k, \theta; p, q) = \int_{-\infty}^{0}g(x; k, \theta, p, q)\,dx = e^{-\theta}\,\frac{\Gamma(p-1)}{\Gamma(q)}\sum_{j=0}^{\infty}\frac{\Gamma(q+j)}{\Gamma(p+j)}\,k^j\,{}_1F_1(p-1;\ p+j;\ \theta),
\]
where
\[
g(x; k, \theta, p, q) = \frac{2}{(1-2x)^{p-q}\,[1 - 2(1-k)x]^{q}}\exp\Big[-\theta + \frac{\theta}{1-2x}\Big].
\]

For $k = 1$ and $p - q > 1$,

\[
G(1, \theta; p, q) = e^{-\theta}\,\frac{\Gamma(p-q-1)}{\Gamma(p-q)}\,{}_1F_1(p-q-1;\ p-q;\ \theta).
\]

To derive the above result we can first use the change of variable $t = 1/(1-2x)$. Then
\[
G(k, \theta; p, q) = e^{-\theta}\int_0^1 t^{p-2}\,[1 - k(1-t)]^{-q}\,e^{\theta t}\,dt.
\]
Using the binomial expansion of $[1 - k(1-t)]^{-q}$ and integrating term by term gives the above result; see Sawa (1972).
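A numerical check of this representation (a sketch assuming SciPy, not from the text): the series is compared with direct quadrature of the transformed integral on $(0, 1)$ for arbitrary admissible values of $k$, $\theta$, $p$, and $q$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp1f1

def G_series(k, theta, p, q, terms=80):
    j = np.arange(terms)
    return (np.exp(-theta) * gamma(p - 1) / gamma(q)
            * np.sum(gamma(q + j) / gamma(p + j) * k ** j * hyp1f1(p - 1, p + j, theta)))

def G_quad(k, theta, p, q):
    f = lambda t: t ** (p - 2) * (1 - k * (1 - t)) ** (-q) * np.exp(theta * t)
    return np.exp(-theta) * quad(f, 0, 1)[0]

k, theta, p, q = 0.3, 2.0, 4.0, 1.5
print(G_series(k, theta, p, q), G_quad(k, theta, p, q))
```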

A.9.1 Asymptotic Expansion

For a, c > 0, and x > 0 we have

\[
{}_1F_1(a; c; x) = \frac{\Gamma(c)\,e^x\,x^{-(c-a)}}{\Gamma(a)\,\Gamma(c-a)\,\Gamma(1-a)}\Bigg[\sum_{j=0}^{r-1}\frac{\Gamma(c-a+j)\,\Gamma(1-a+j)}{j!\,x^{j}} + O(x^{-r})\Bigg];
\]
see Copson (1948: 265), Erdelyi (1956), Slater (1960), and Sawa (1972). For large $x$ this gives the asymptotic expansion up to $O(x^{-(r-1)})$. Using this in the $G$ function we get the asymptotic expansion, up to $O(\theta^{-4})$, as
\[
\begin{aligned}
G(k, \theta; p, q) &= \frac{1}{\theta} + (qk - p + 2)\frac{1}{\theta^2} + \big[q(q+1)k^2 - 2q(p-2)k + (p-2)(p-3)\big]\frac{1}{\theta^3}\\
&\quad + \big[q(q+1)(q+2)k^3 - 3q(q+1)(p-2)k^2 + 3q(p-2)(p-3)k - (p-2)(p-3)(p-4)\big]\frac{1}{\theta^4}.
\end{aligned}
\]

A.10 Order of Magnitudes (Small o and Large O)

Here we define the order of magnitude of a particular sequence, say $\{X_n\}$. The order of magnitude is determined by the behavior of $X_n$ for large $n$.

Definition 1 The sequence $\{X_n\}$ of real numbers is said to be at most of order $n^k$, denoted by
\[
X_n = O(n^k), \quad\text{if } \frac{X_n}{n^k} \to c
\]
as $n \to \infty$, for some constant $c > 0$. Further, if $\{X_n\}$ is a sequence of random variables then it is said to be at most of order $n^k$ in probability,
\[
X_n = O_p(n^k),
\]
if, as $n \to \infty$,
\[
\frac{X_n}{n^k} - c_n \to 0 \ \text{ in probability},
\]
where c n is a nonstochastic sequence.

Definition 2 The sequence $\{X_n\}$ of real numbers is said to be of smaller order than $n^k$, denoted by
\[
X_n = o(n^k), \quad\text{if } \frac{X_n}{n^k} \to 0
\]
as $n \to \infty$. Further, if $\{X_n\}$ is stochastic then
\[
X_n = o_p(n^k)
\]
if
\[
\frac{X_n}{n^k} \to 0 \ \text{ in probability}.
\]

In the above definitions k can take any real value (positive or negative). Also the order of magnitude is almost sure if the convergence of the sequence is almost sure.

As an example, consider a stochastic sequence

\[
X_n = \frac{\sqrt{n}(\bar X - \mu)}{\sigma} = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{(X_i - \mu)}{\sigma},
\]
where $EX_i = \mu$ and $V(X_i) = \sigma^2$. Then, using Chebyshev's inequality, the sequence $X_n$ is bounded in probability in the sense that $P[|X_n| > \varepsilon] \le 1/\varepsilon^2$ for every $n$. Thus $X_n = O_p(1)$ and $\bar X - \mu = O_p(n^{-1/2})$.
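The boundedness in probability can be seen in a small simulation (a sketch assuming NumPy, not from the text): the mean absolute error of $\bar X$ shrinks roughly like $n^{-1/2}$, while the distribution of $X_n$ stays stable as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, reps = 1.0, 2.0, 1_000
for n in (10, 100, 1_000, 10_000):
    xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    Xn = np.sqrt(n) * (xbar - mu) / sigma
    print(n, np.mean(np.abs(xbar - mu)), np.quantile(np.abs(Xn), 0.99))
```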

The orders of magnitude satisfy the following properties.

If X n = O(n k) and Y n = O(n l) then

  1. $X_nY_n = O(n^{k+l})$.

  2. $X_n^r = O(n^{rk})$.

  3. $X_n + Y_n = O(n^{l_0})$, $l_0 = \max(k, l)$.

The same results hold with small $o$ in place of capital $O$. Further, if $X_n = O(n^k)$ and $Y_n = o(n^l)$, then

  1. $X_n + Y_n = O(n^k)$.

  2. $X_nY_n = O(n^{k+l})$.