
Computational Statistics & Data Analysis 30 (1999) 13–17

The lower bound method in probit regression

Dankmar Böhning *

Department of Epidemiology, Free University Berlin, Fabeckstr. 60-62, Haus 562, 14195 Berlin, Germany

Received 1 April 1996; received in revised form 1 July 1998; accepted 30 November 1998

Abstract

The lower bound principle consists of replacing the second derivative matrix of the log-likelihood by a global lower bound in the Loewner ordering. This bound is then used in the Newton–Raphson iteration instead of the Hessian matrix, leading to a monotonically converging sequence of iterates. In this note we apply this principle to probit regression as an example of a generalized linear model for which the observed and expected information matrices do not coincide. This note also points out that it is not sufficient to use a bound for the expected value of the Hessian matrix, as has been suggested recently in the context of generalized linear models. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Probit link function and probit regression; Generalized linear model; Lower bound method; Quadratic approximation of log-likelihood

1. Introduction

Consider a log-likelihood function L(θ) with p-dimensional parameter θ = (θ_1, …, θ_p)^T and associated second-order Taylor expansion around θ0:

L(θ) = L(θ0) + ∇L(θ0)^T(θ − θ0) + ½(θ − θ0)^T ∇²L(θ*)(θ − θ0),   (1)

where θ* = (1 − α)θ + αθ0 for some α ∈ (0, 1).

∗ Corresponding author.



The lower bound method¹ (Böhning and Lindsay, 1988; Böhning, 1989, 1992, 1993) requires the existence of a lower bound matrix B for the second derivative matrix ∇²L(θ) with elements ∇²L(θ)_jk = (∂²L/∂θ_j ∂θ_k)(θ):

∇²L(θ) ≥ B,   (2)

where "≥" denotes the Loewner ordering: for symmetric matrices A and B, A ≥ B if the difference A − B is non-negative definite. Then θ_LB = θ0 − B⁻¹∇L(θ0) maximizes the quadratic form Q_B(θ) = L(θ0) + ∇L(θ0)^T(θ − θ0) + ½(θ − θ0)^T B(θ − θ0), and, since Q_B(θ) is a global lower bound for the log-likelihood L(θ), θ_LB leads to a monotonic increase of the log-likelihood:

L(θ_LB) − L(θ0) ≥ Q_B(θ_LB) − L(θ0)
= −∇L(θ0)^T B⁻¹∇L(θ0) + ½ (B⁻¹∇L(θ0))^T B (B⁻¹∇L(θ0))
= −½ ∇L(θ0)^T B⁻¹∇L(θ0) ≥ 0,

since B, as a lower bound of ∇²L, is also negative definite. These results have been developed previously in Böhning and Lindsay (1988) and Böhning (1989, 1992, 1993) and applied favourably in the context of logistic regression and Cox regression. For example, in logistic regression it was shown that B = −¼ ∑_{i=1}^n x_i x_i^T, where x_i is the ith vector of covariates. For this case, in addition, a comparison of the lower bound method with the conventional Newton–Raphson method has been undertaken (Böhning, 1993), showing some benefit in overall computational efficiency for the lower bound method. The similarity to the EM algorithm (Dempster et al., 1977) is rather striking: there, too, an approximating form, the expected complete-data log-likelihood, serves as a lower bound for the incomplete-data log-likelihood.
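To make the iteration concrete, here is a minimal sketch, not from the paper, of the lower bound step for logistic regression with the bound B = −¼ ∑_{i=1}^n x_i x_i^T quoted above. It assumes numpy and scipy are available and that the design matrix has full column rank; the function name, tolerance and iteration cap are illustrative choices. The point of the method is visible in the code: the bound matrix is factorized once, outside the loop.

```python
import numpy as np
from scipy.special import expit  # logistic cdf

def lower_bound_logistic(X, y, tol=1e-8, max_iter=500):
    """Lower bound iteration for logistic regression (illustrative sketch).

    Uses the global bound B = -(1/4) X^T X, so the update is
    theta <- theta - B^{-1} grad L(theta) = theta + 4 (X^T X)^{-1} grad L(theta).
    """
    p = X.shape[1]
    theta = np.zeros(p)
    chol = np.linalg.cholesky(X.T @ X / 4.0)    # factorize -B once, reuse below
    for _ in range(max_iter):
        probs = expit(X @ theta)                # fitted probabilities
        grad = X.T @ (y - probs)                # score vector grad L(theta)
        step = np.linalg.solve(chol.T, np.linalg.solve(chol, grad))
        theta = theta + step                    # monotone increase of L guaranteed
        if np.linalg.norm(step) < tol:
            break
    return theta
```

Each pass through the loop costs only a matrix–vector product and two triangular solves, which is where the computational advantage over refactorizing a Hessian at every step comes from.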

2. The result

Let Y be a binary variable with Pr{Y = 1} = p and Pr{Y = 0} = 1 − p, p ∈ (0, 1). Also, let x be a p-dimensional vector of covariates, which is connected to E(Y) by a generalized linear model with link function g: p = E(Y) = g(η), η being the linear predictor η = θ^T x (McCullagh and Nelder, 1989). For convenience, only one observation is considered. The likelihood is p^y(1 − p)^{1−y} and the log-likelihood is L(θ) = y log(p) + (1 − y) log(1 − p). Straightforward computation yields

∂L/∂θ_j = dL/dη × ∂η/∂θ_j = dL/dη × x_j

and

∂²L/(∂θ_j ∂θ_k) = d²L/dη² × (∂η/∂θ_j)(∂η/∂θ_k) = d²L/dη² × x_j x_k,

¹ The lower bound method was originally introduced in Böhning and Lindsay (1988), where it was abbreviated as LB-method. In the sequel, other authors understood this abbreviation as a short form of Lindsay–Böhning method, which, of course, was not intended.


Fig. 1. Observed and expected information for the probit link function.

or, in vector notation

∇L(θ) = dL/dη(η) × x,   ∇²L(θ) = d²L/dη²(η) × xx^T.

The existence of a lower bound depends solely on the existence of a lower bound for

d²L/dη²(η) = y[g″(η)g(η) − g′(η)²]/g(η)² − (1 − y)[g″(η)(1 − g(η)) + g′(η)²]/[1 − g(η)]²,

where g′(η) = dg/dη and g″(η) = d²g/dη². It is tempting to take expected values, because a simple algebraic form can be reached for the expected information:

E{d²L/dη²} = [g″(η)g(η) − g′(η)²]/g(η) − [g″(η)(1 − g(η)) + g′(η)²]/[1 − g(η)]
= −g′(η)²/{g(η)(1 − g(η))},

where the first line follows from replacing y by E(y) = g(η). This is a simple function, for which bounds can easily be found, depending on the form of the link function. Note that the results above hold independently of the form of the link function. Let us consider the probit link function, that is, g(η) = Φ(η), Φ being the cumulative distribution function of the standard normal distribution and φ its density: φ(η) = exp(−½η²)/√(2π). Fig. 1 shows −E{d²L/dη²} for the probit link function, and it is easily seen that the maximum is attained at η = 0, for which −E{d²L/dη²(0)} = 4/(2π) = 2/π ≈ 0.6366. This approach has been taken by Devidas and George (1995), and bounds could be provided for a whole class of link functions. However, it must be pointed out that this approach, though intuitive and appealing, suffers from the weakness of no longer guaranteeing the aforementioned monotonicity property, since it works with the approximating log-likelihood L(θ0) + ∇L(θ0)^T(θ − θ0) + ½(θ − θ0)^T E[∇²L(θ*)](θ − θ0), for which a global lower bound is found. This lower bound, however, might not be a global lower bound for L(θ), and the mathematical argument of replacing the log-likelihood by a lower bound collapses. It is necessary to find a lower bound for the observed information, which is provided in the following theorem.

Theorem. Let g(η) = Φ(η) be the probit link function. Then the parameter-dependent part of the observed information matrix is bounded above by 1:

−∇²L(θ) = −d²L/dη²(η) × xx^T ≤ xx^T   (for one observation),

−∇²L(θ) = −∑_{i=1}^n d²L_i/dη_i²(η_i) × x_i x_i^T ≤ ∑_{i=1}^n x_i x_i^T   (for a sample of size n; the index i refers to the ith contribution to the likelihood).

Proof. It is sufficient to consider

d²L/dη²(η) = y[−ηφ(η)Φ(η) − φ(η)²]/Φ(η)² − (1 − y)[−ηφ(η)(1 − Φ(η)) + φ(η)²]/[1 − Φ(η)]².

Note that one of the two terms is always 0.

Case 1: Let y = 1. Then d²L/dη²(η) = −ηφ(η)/Φ(η) − [φ(η)/Φ(η)]² (see Fig. 1). If η approaches +∞, d²L/dη²(η) tends to 0. If η approaches −∞, the rule of l'Hôpital provides clarity:

lim_{η→−∞} {[−ηφ(η)]′/Φ′(η) − [φ′(η)/Φ′(η)]²} = lim_{η→−∞} {[−φ(η) + η²φ(η)]/φ(η) − [−ηφ(η)/φ(η)]²} = lim_{η→−∞} [−1 + η² − η²] = −1.

Case 2: The case y = 0 can be treated analogously, which ends the proof.
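As a quick numerical cross-check of the two curves in Fig. 1 (not part of the original paper), the following sketch, assuming numpy and scipy are available, evaluates the observed-information term −d²L/dη² for y = 0 and y = 1 and the expected-information term φ(η)²/{Φ(η)(1 − Φ(η))} on a grid of η values: the former stays below the bound 1 of the theorem, while the latter peaks at 2/π ≈ 0.6366 at η = 0.

```python
import numpy as np
from scipy.stats import norm

def neg_obs_info(eta, y):
    """-d^2L/d eta^2 for one observation under the probit link."""
    phi, Phi = norm.pdf(eta), norm.cdf(eta)
    if y == 1:
        return eta * phi / Phi + (phi / Phi) ** 2
    return -eta * phi / (1.0 - Phi) + (phi / (1.0 - Phi)) ** 2

def neg_exp_info(eta):
    """-E{d^2L/d eta^2} = phi(eta)^2 / {Phi(eta)(1 - Phi(eta))}."""
    phi, Phi = norm.pdf(eta), norm.cdf(eta)
    return phi ** 2 / (Phi * (1.0 - Phi))

grid = np.linspace(-6.0, 6.0, 2001)
obs = [neg_obs_info(e, y) for e in grid for y in (0, 1)]
exp_info = neg_exp_info(grid)
print(max(obs) < 1.0)                         # True: observed term bounded by 1
print(round(exp_info.max(), 4), 2 / np.pi)    # 0.6366 vs. 2/pi = 0.6366...
```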

3. Discussion

The result leads to a particularly simple form of iteration: θ_LB = θ0 − B⁻¹∇L(θ0) = θ0 + [X^T X]⁻¹∇L(θ0), where X^T X = ∑_{i=1}^n x_i x_i^T. Because X^T X needs to be inverted only once, the lower bound method should also compare favourably in this case to the Newton–Raphson method as well as to Fisher scoring, which requires the inversion of E(∇²L(θ0)) at each iteration: θ_FS = θ0 − E(∇²L(θ0))⁻¹∇L(θ0). This fact, in connection with the guaranteed monotonicity, makes the lower bound method attractive. The convergence rate is only linear, but the overall numerical complexity is rather low in comparison to Newton–Raphson or Fisher scoring, as has been demonstrated for logistic regression in Böhning (1993).

Acknowledgements

This research is currently supported by the German Research Foundation.

References

Böhning, D., 1989. Likelihood inference for mixtures: geometrical and other constructions of monotone step-length algorithms. Biometrika 76, 375–383.

Böhning, D., 1992. Multinomial logistic regression algorithm. Ann. Inst. Statist. Math. 44, 197–200.

Böhning, D., 1993. Construction of reliable maximum likelihood algorithms with application to logistic and Cox regression. In: Rao, C.R. (Ed.), Handbook of Statistics, vol. 9. North-Holland, Amsterdam, pp. 409–422.


Böhning, D., Lindsay, B.G., 1988. Monotonicity of quadratic approximation algorithms. Ann. Inst. Statist. Math. 40, 641–663.

Dempster, A.P., Laird, N.M., Rubin, D.B., 1977. Maximum likelihood estimation from incomplete data via the EM algorithm (with discussion). J. Roy. Statist. Soc. B 39, 1–38.

Devidas, M., George, E.O., 1995. Monotonic algorithms for computing maximum likelihood estimates in generalized linear models. Preprint, Division of Biostatistics, University of Mississippi Medical Center.

McCullagh, P., Nelder, J.A., 1989. Generalized Linear Models, 2nd ed. Chapman & Hall, London.