A COMPARISON BETWEEN THE LOG-LINEAR AND THE PARAMETERIZED EXPECTATIONS METHODS*

Ilaski Barañano, Amaia Iza and Jesús Vázquez†

Universidad del País Vasco

July 3, 2001

Short running title: PEA versus Log-linear

* We are grateful for useful conversations and comments from Jaime Alonso, Omar Licandro, Alfonso Novales, Luis Puch, Sergio Restrepo, José Victor Ríos-Rull, and seminar participants at the Universidad de Alicante, Universidad del País Vasco, the 1997 ASSET meeting, the II Workshop on Dynamic Macroeconomics and the XXII Simposio de Análisis Económico. We are also grateful for suggestions from two anonymous referees that have been of great help in improving the paper. Financial support from the Ministerio de Educación, the Gobierno Vasco and the UPV through projects PB97-0620, GV PI-1998/86 and UPV 035.321-HB067/96, respectively, is gratefully acknowledged.

† Correspondence to: Jesús Vázquez, Departamento de Fundamentos del Análisis Económico, Universidad del País Vasco, Av. Lehendakari Aguirre 83, 48015 Bilbao, Spain. Phone: (34) 94-601-3779, Fax: (34) 94-601-3774, e-mail: [email protected]


ABSTRACT

This paper compares the performance of a log-linear method and a parameterized expectations method in solving a dynamic general equilibrium endogenous growth model with human capital. Quantitative evaluation based on second moment statistics shows that the results provided by the two numerical methods are very similar in this framework whenever the propagation mechanism of technology shocks is weak. However, the cross correlations of some relevant variables in the RBC literature obtained from the two methods are significantly different when the model exhibits a strong propagation mechanism. The parameterized expectations method captures the sensitivity of second moment statistics to the curvature of the utility function while the log-linear method does not.

Key words: RBC features, approximation solution methods, endogenous growth
JEL Classification: C63, E32, O41


1 Introduction

The need to solve dynamic stochastic general equilibrium models (DSGEM), which do not typically have an analytic solution, has given rise to the development and application of numerical approximation methods over the last decade.1

There are actually at least ten alternative approximation solutions for solving DSGEM in the literature. Taylor and Uhlig (1990) and Christiano (1990) are the first papers which compare alternative approximation methods using the basic (exogenous) growth model as the benchmark model. Dotsey and Mao (1992) also compare the accuracy of some non-refining approximations (linear and log-linear methods) taking a numerical method that can be refined (discrete-state space approximation) as a benchmark method in the context of an exogenous growth model with distortionary taxes. Novales and Pérez (1999) extend this analysis to consider other exogenous growth models. The question they pose is how to choose among different non-refining methods. Our goal is different. We analyze what specific features of the model studied are relevant when deciding what approximation method is more appropriate for solving the model. In particular, we analyze the potential of two approximation methods, Uhlig's (1995, 1999) log-linear method (LLM)2 and Marcet's (1988) parameterized expectations algorithm (PEA),3 in capturing the real business cycle (RBC) features displayed by an endogenous growth model with human capital. Our results point out that the choice of an approximation method may be driven by the model's assumptions.

Endogenous growth models, whose engine of growth is human capital accumulation, have been adopted by many researchers aiming to explain business cycle features (Bean (1990), King et al. (1988b), Gomme (1993) and Ozlu (1996)), long-run growth (among many others, Lucas (1988), Ladrón-de-Guevara et al. (1997)), the effects of uncertainty and the curvature of the utility function on long-run growth and the second moments of stationary variables (Jones et al. (2000)), transitional dynamics (Mulligan and Sala-i-Martin (1993)), and the money-growth relationship (Gomme (1993)). Moreover, we consider this framework because, as pointed out by Singleton (1988) among others, growth and cyclical components in data may both be determined endogenously by optimal economic decision making. Therefore, we relax the presumption underlying standard RBC models that growth components are determined by different factors from those causing business cycles. In particular, this article considers a

1 A good deal of effort appears in the Winter 1990 issue of the Journal of Business and Economic Statistics. More recently, a book edited by Ramon Marimon and Andrew Scott, Computational Methods for the Study of Dynamic Economies (1999), collects a set of articles that describe how to implement in different contexts most of the numerical methods proposed in the literature.

2 As is discussed by Uhlig (1995), subject to applicability, all the log-linear approximation methods find the same recursive equilibrium law of motion since the linear space approximating a nonlinear differentiable function is unique and robust to differentiable transformations of the parameter space.

3 PEA has been applied, among others, by den Haan (1990), Marshall (1992), and Marcet and Marimon (1992).


stochastic version of the Uzawa-Lucas model with two modifications. On the one hand, physical capital is included as a production factor in the human capital sector, as suggested by King et al. (1988b), Gomme (1993) and Ozlu (1996) among others. Human capital accumulation is assumed to be a non-market activity.4 On the other hand, following Becker's (1965) idea, we introduce qualified leisure as an argument of the utility function using the specification suggested by Ladrón-de-Guevara et al. (1997). In this way we ensure the existence of a single balanced growth path. Results reveal that when a higher share of physical capital in the human capital accumulation process is considered, the model provides not only a stronger propagation mechanism but also a quantitative improvement in the results obtained for labor market fluctuations without considering additional sources of uncertainty.5 In particular, the introduction of physical capital into the human capital production process increases the volatility of hours relative to productivity, and also reduces the cross correlations between productivity and output, and between hours and productivity. The introduction of physical capital into human capital production amplifies the effect of the shock since not only the allocation of time between these two sectors, but also the allocation of physical capital will change. In addition, technology shocks will shift not only the labor demand curve but also the labor supply curve. In fact, physical capital works as a shock that affects human capital accumulation and shifts the labor supply curve. Once this model specification is considered, results are sensitive to the curvature of the utility function. The higher the relative risk aversion parameter is, the lower the above mentioned correlations are.

This paper illustrates how LLM and PEA accommodate the role of the share of physical capital in the human capital production function, θ, and the curvature of the utility function measured by the relative risk aversion parameter, σ, when the generalized Uzawa-Lucas model is considered. This paper shows that if one evaluates the model quantitatively based on some second moment statistics, as RBC researchers do, one will find that the RBC features provided by the two approximation solution methods are very similar in the context of an endogenous growth model that does not include physical capital as an input in the human capital production function (i.e. θ = 0). However, in the context of an endogenous growth model in which the share of physical capital in the human capital accumulation process is positive (i.e. θ > 0), some second moment statistics obtained from the two approximation methods studied differ significantly. In particular, the cross correlations between consumption and output, productivity and output, and hours and productivity, are different depending on the approximation method used. Moreover, once physical capital is included as an additional argument in the human capital production

4 The introduction of a non-market (home production) sector competing with market activity to explain aggregate fluctuations has already been used in the RBC literature, for example by Benhabib et al. (1991), Greenwood and Hercowitz (1991), Gomme (1993) and Ozlu (1996).

5 Summers (1986) criticizes standard exogenous RBC models due to the weakness of the propagation mechanism of innovative progress, which attributes a disproportionate weight to technology shocks in order to characterize the business cycle.


function, the results show that certain correlations may be very sensitive to the choice of the value for the relative risk aversion parameter, σ, depending on the numerical method used. On the one hand, the cyclical features obtained from LLM do not change when alternative parameterizations of the relative risk aversion parameter are considered, since this method, by construction, removes the nonlinearities associated with high values of this parameter. On the other hand, some correlations obtained from PEA change significantly with σ, since this method takes into account the nonlinearities induced by a value of σ sufficiently larger than one. Thus the choice of the numerical solution method may be determined by the values of θ and σ. The higher these parameter values are, the higher the nonlinearities induced by the model will be, and as a consequence PEA will accommodate the role played by those nonlinearities better than LLM. The sensitivity of the second moment properties displayed by the model to alternative parametrizations of θ and σ captured by PEA is consistent with the sensitivity results found in a recent paper by Jones et al. (2000), which considers a discrete-space approximation for solving a similar generalized Uzawa-Lucas model.

In order to gain more confidence in the RBC implications one can draw from this model, we also compare the two alternative solution procedures considered by looking at two other dimensions. One dimension is the analysis of impulse response functions, which allow us to study how propagation mechanisms are modified as a result of the model's assumptions. The other dimension studied is the fulfillment of the rational expectations hypothesis. In particular, in order to test the rational expectations hypothesis in numerical solutions we carry out the accuracy test suggested by den Haan and Marcet (1994) to check whether the Euler residuals are orthogonal to current and past values of variables included in the agents' information set. The results from this test show that PEA performs better than LLM since it is a refining method.

The rest of the paper is organized as follows. Section 2 sets up a simple endogenous growth model with human capital. Section 3 compares some basic statistics analyzed in the RBC literature under the two solution methods; moreover, the den Haan-Marcet accuracy test is implemented. Section 4 concludes.

2 A growth model with human capital

This paper analyzes a stochastic discrete time version of the generalized Uzawa-Lucas framework. One of the modifications, used by Bean (1990), King et al. (1988b) and Gomme (1993), is that physical capital is included as an input in the human capital production function. The second modification is that leisure is assumed to have a positive effect on agents' welfare. The economy is inhabited by a large number of identical households. The size of the population is assumed to be constant.


The representative household maximizes

$$E_0 \sum_{t=0}^{\infty} \beta^t U(c_t, l_t h_t^{\lambda}), \qquad (1)$$

where E_0 denotes the conditional expectation operator, 0 < β < 1 is the discount factor, c_t is consumption, and l_t h_t^λ is qualified leisure, which captures Becker's (1965) idea that the utility of a given amount of leisure increases with the stock of human capital. In particular, we assume that the preferences of the representative household are described by the following utility function:

$$U(c_t, l_t h_t^{\lambda}) = \frac{\left[c_t^{\omega}\,(l_t h_t^{\lambda})^{1-\omega}\right]^{1-\gamma} - 1}{1-\gamma}, \qquad (2)$$

where 0 ≤ ω ≤ 1, γ > 0 and 0 ≤ λ ≤ 1. This type of utility function guarantees the existence of a balanced growth path for the economy, where the fraction of time allocated to each activity remains constant and all per-capita variables grow at the same rate.6
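To make the functional form in (2) concrete, the period utility can be evaluated directly. The sketch below is our own illustration (the function and argument names are not from the paper), with the γ = 1 case handled as the usual logarithmic limit:

```python
import math

def period_utility(c, l, h, omega, lam, gamma):
    """Period utility U(c, l*h^lam) from equation (2):
    ([c^omega * (l*h^lam)^(1-omega)]^(1-gamma) - 1) / (1-gamma),
    with the log limit used when gamma == 1."""
    # Composite bundle of consumption and qualified leisure
    x = (c ** omega) * ((l * h ** lam) ** (1.0 - omega))
    if gamma == 1.0:
        return math.log(x)  # limiting (log) case of the CRRA form
    return (x ** (1.0 - gamma) - 1.0) / (1.0 - gamma)
```

For instance, at c = l = h = 1 the bundle is 1 and utility is 0 for any γ ≠ 1, which is a quick sanity check on the normalization in (2).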

There are two productive activities in this economy: the production of the final good (market sector) and the accumulation of human capital (human capital sector). At any point in time, a household has to decide what portion of its time is allocated to each of these activities, apart from the time allocated to leisure. The production function of the representative household exhibits constant returns to scale with respect to physical capital and efficient labor. Formally,

$$y_t = F^m(\phi_t k_t, n_t h_t, z_t) = A_m Z_t (\phi_t k_t)^{\alpha}(n_t h_t)^{1-\alpha}, \qquad (3)$$

where A_m is a technology parameter, φ_t is the fraction of the physical capital stock allocated to the market sector, n_t is the fraction of time allocated to the market sector, h_t denotes the stock of human capital at the beginning of time t (therefore, n_t h_t denotes efficient labor), α is the share of physical capital in final good production, and Z_t is a technology shock which follows a first-order autoregressive process log(Z_t) − log(Z) = ρ[log(Z_{t−1}) − log(Z)] + ε_t, where 0 ≤ ρ ≤ 1, Z denotes the unconditional mean of the random variable Z_t and ε_t is a white noise with standard deviation σ_ε. The law of motion for physical capital is

$$k_{t+1} + c_t = A_m Z_t (\phi_t k_t)^{\alpha}(n_t h_t)^{1-\alpha} + (1-\delta_k)k_t, \qquad (4)$$

where δ_k is the depreciation rate of physical capital. The human capital sector is characterized as follows:

$$h_{t+1} = F^h[(1-\phi_t)k_t,\,(1-l_t-n_t)h_t] + (1-\delta_h)h_t = A_h[(1-\phi_t)k_t]^{\theta}[(1-l_t-n_t)h_t]^{1-\theta} + (1-\delta_h)h_t, \qquad (5)$$

6 See King et al. (1988a, pp. 201-202) for an exposition of the conditions one should impose in order to guarantee a constant growth rate in a steady state.


where A_h is a technology parameter, θ is the share of the physical capital stock in human capital production and δ_h is the rate of depreciation of human capital.

As is well known, the competitive equilibrium can be characterized through the first-order conditions derived from a benevolent social planner's problem in the absence of externalities and public goods. The social planner maximizes (1) subject to (4)-(5) with k_0 > 0 and h_0 > 0 given. In the steady state, the variables c_t, k_t and y_t grow at a constant rate which is equal to the rate of accumulation of human capital, and n_t, l_t and φ_t are constant. Therefore, the time series c_t, k_t and y_t obtained from the first-order conditions characterizing the social planner's problem are non-stationary. In order to facilitate the use of computational techniques, it is convenient to write the first-order conditions in terms of the ratios c̃_t = c_t/h_t and k̃_t = k_t/h_t, thus reducing the number of state variables. The necessary and sufficient first-order conditions for an (interior) optimum are then given by

$$U_1(\tilde c_t, l_t) = \beta\left(\frac{h_{t+1}}{h_t}\right)^{\tau} E_t\left\{U_1(\tilde c_{t+1}, l_{t+1})\left[F^m_1(\phi_{t+1}\tilde k_{t+1}, n_{t+1}) + 1 - \delta_k\right]\right\}, \qquad (6)$$

$$U_1(\tilde c_t, l_t) = \frac{U_2(\tilde c_t, l_t)}{F^m_2(\phi_t \tilde k_t, n_t)}, \qquad (7)$$

$$\frac{U_2(\tilde c_t, l_t)}{F^h_2[(1-\phi_t)\tilde k_t,\, 1-l_t-n_t]} = \beta\left(\frac{h_{t+1}}{h_t}\right)^{\tau} E_t\left\{\frac{U_2(\tilde c_{t+1}, l_{t+1})}{F^h_2[(1-\phi_{t+1})\tilde k_{t+1},\, 1-l_{t+1}-n_{t+1}]}\left[1-\delta_h+(1-l_{t+1}+\lambda l_{t+1})\,F^h_2\big((1-\phi_{t+1})\tilde k_{t+1},\, 1-l_{t+1}-n_{t+1}\big)\right]\right\}, \qquad (8)$$

$$\frac{F^m_1}{F^m_2} = \frac{F^h_1}{F^h_2}, \qquad (9)$$

$$\frac{h_{t+1}}{h_t} = A_h[(1-\phi_t)\tilde k_t]^{\theta}(1-l_t-n_t)^{1-\theta} + 1 - \delta_h, \qquad (10)$$

$$\tilde c_t + \tilde k_{t+1}\frac{h_{t+1}}{h_t} = A_m Z_t(\phi_t \tilde k_t)^{\alpha} n_t^{1-\alpha} + (1-\delta_k)\tilde k_t, \qquad (11)$$

$$\lim_{t\to\infty} E_t\, \beta^t\, U_1\, \tilde k_{t+1}\frac{h_{t+1}}{h_t} = 0, \qquad \lim_{t\to\infty} E_t\, \beta^t\, \frac{U_2\, h_t^{\lambda-1}}{F^h_2}\,\frac{h_{t+1}}{h_t} = 0,$$

where τ = [ω + λ(1 − ω)](1 − γ) − 1.


2.1 Model Calibration

Next, we assign values to the parameters in the model. This step is needed to solve, simulate and quantitatively evaluate the model. Following the seminal paper by Kydland and Prescott (1982), we choose, on the one hand, the structural parameter values based on the existing empirical evidence obtained from micro data sets. On the other hand, the steady state values of the variables are approached by averaging the corresponding U.S. time series for the period 1954-1989.

Looking at the market sector, the value α = 0.36 is chosen so that it equals the average share of physical capital in U.S. GNP over the period. Since we are using quarterly data, the rate of depreciation of physical capital, δ_k, has been fixed at 0.025, which is equivalent to the 10% annual rate used by Kydland and Prescott (1982).

Given that the human capital sector is a non-market activity, the calibration of the parameters involved is no trivial matter. We have chosen parameter values in such a way that they guarantee values for steady state variables that reproduce the averages of U.S. time series. In particular, the values for A_m and A_h have been chosen so that the growth rate of output in a steady state matches the average annual growth rate of per capita U.S. GNP, 1.4%. On the other hand, we consider two alternative values for the share of physical capital stock in human capital production, θ: θ = 0 (the Uzawa-Lucas model, implying a weak propagation mechanism of technology shocks) and θ = 0.2 (the generalized Uzawa-Lucas model, inducing a strong propagation mechanism). The highest value of θ considered in this paper is relatively low compared to the value considered by Gomme (1993), who assumes α = θ = 0.36.

The derivation of values for the parameters characterizing household preferences follows standard procedures. The discount factor, β, is chosen so that the annual real interest rate is equal to 4%. The value for β is derived from the following equation, which characterizes the steady state given the homogeneity properties of the utility function:

$$1.01\,\beta\left(\frac{h_{t+1}}{h_t}\right)^{\tau} = 1.$$
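This calibration step can be checked mechanically: given ω, λ and γ, compute τ, form the quarterly steady-state growth factor from the 1.4% annual growth rate, and back β out of the steady-state condition. The helper below is our own sketch (the 1% quarterly gross rate and the parameter names are assumptions taken from the surrounding text, not the authors' code):

```python
def calibrate_beta(omega, lam, gamma, annual_growth=0.014, quarterly_gross_rate=1.01):
    """Solve 1.01 * beta * g^tau = 1 for beta, where g = h_{t+1}/h_t is the
    quarterly steady-state growth factor and
    tau = [omega + lam*(1-omega)]*(1-gamma) - 1."""
    tau = (omega + lam * (1.0 - omega)) * (1.0 - gamma) - 1.0
    g = (1.0 + annual_growth) ** 0.25  # quarterly growth factor from annual rate
    return 1.0 / (quarterly_gross_rate * g ** tau)
```

Note, for example, that with log utility (γ = 1) we get τ = −1, so β = g/1.01, slightly below the usual no-growth value 1/1.01.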

Since the utility function is multiplicatively separable, we have that U(c, lh^λ) = u(c)v(lh^λ), where u(c) is homogeneous of degree 1 − σ, and σ is the relative risk aversion parameter. Most of the studies reviewed by Mehra and Prescott (1985) establish that a reasonable value for the relative risk aversion parameter, σ, lies in the interval [1, 2]. We consider two alternative values for σ: σ = 1.3 and σ = 2.⁷ In order to derive the parameter values for ω and γ, we make use of the homogeneity properties of the utility function as well as the suggestion of Gomme (1993) and Greenwood and Hercowitz (1991) that the fraction of

7 Auerbach and Kotlikoff (1987) consider that reasonable values for σ lie in the range of 2 to 10. By choosing the two values of σ considered in this paper, we would like to stress the point that even relatively small changes in the curvature of the utility function matter in characterizing the RBC features displayed by an endogenous growth model.


time allocated to the market sector is 0.24, which is the fraction of time spent working by the working-age population in the U.S. The choice of a parameter value for λ is not straightforward because there is no empirical evidence. This paper shows the results obtained using λ = 1 (qualified leisure).8

Based on first moments of the Solow residual, we follow the suggestion of Prescott (1986) for ρ: ρ = 0.95. Moreover, as we can see in Tables 1 and 2, the standard deviation of the innovation in the first-order autoregressive process for the technology shock, σ_ε, has been adjusted so that the standard deviation of per-capita U.S. GNP is close to the standard deviation of the simulated time series for output using both approximation methods. This implies, as will be noted below, that the value chosen for σ_ε changes with the value assigned to θ.
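The exogenous driving process behind these simulations, the AR(1) in logs defined in (3), is straightforward to simulate. The sketch below is a minimal illustration of that process only (the function name and seeding choice are ours, not the paper's):

```python
import random

def simulate_log_technology(rho, sigma_eps, n_periods, log_z0=0.0, seed=1):
    """Simulate x_t = log(Z_t) - log(Z), following x_t = rho * x_{t-1} + eps_t,
    with eps_t drawn i.i.d. from N(0, sigma_eps^2)."""
    rng = random.Random(seed)
    x, path = log_z0, []
    for _ in range(n_periods):
        x = rho * x + rng.gauss(0.0, sigma_eps)
        path.append(x)
    return path
```

With σ_ε = 0 the path decays geometrically at rate ρ from the initial deviation, which makes the persistence implied by ρ = 0.95 easy to see directly.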

3 Real business cycle features

In this section, we derive and compare the RBC features of the model obtained from the implementation of LLM and PEA. Appendix 1 provides a brief description of these two approximation methods. We calculate some basic statistics studied in the RBC literature and their associated confidence intervals.9 One way of seeing whether two approximation solution methods deliver very different business cycle properties is to check whether the intersection between the two confidence intervals for a statistic obtained from the two alternative solution methods considered is empty. Another way of detecting different dynamic features is to check whether the statistic obtained from a particular approximation method lies in the confidence interval obtained from the other solution method. The confidence interval of a statistic x is defined by E(x) ± √var(x). The results are displayed in Tables 1 and 2. Table 1 shows the standard deviation of some basic variables analyzed in the RBC literature.
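The interval-based comparison just described is easy to operationalize: compute E(x) ± √var(x) for each statistic across repeated simulations under each method, then check whether the two intervals intersect. The sketch below is our own shorthand for that check, not the authors' code:

```python
from math import sqrt
from statistics import mean, pvariance

def confidence_interval(draws):
    """Interval E(x) +/- sqrt(var(x)) for a statistic x, computed across
    repeated simulations (one draw of the statistic per simulation)."""
    m, s = mean(draws), sqrt(pvariance(draws))
    return (m - s, m + s)

def intervals_overlap(a, b):
    """True if the two (low, high) intervals intersect."""
    return a[0] <= b[1] and b[0] <= a[1]
```

An empty intersection between the two methods' intervals for the same statistic then flags a meaningful difference in the business cycle properties they deliver.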

(Insert Table 1 and Table 2)

First of all, comparing the value of σ_ε in the first line of both panels (that is, the size of the technology shock, which is chosen so that the standard deviation of the simulated time series for output is close to the standard deviation of per-capita U.S. GNP), we observe that the propagation mechanism of the technology shock is much stronger when θ = 0.2 than when θ = 0. If θ = 0 is chosen,

8 As shown by Ladrón-de-Guevara et al. (1997), a value of λ = 1 guarantees the existence of a single balanced growth path.

9 Many RBC researchers (Kydland and Prescott (1982) and Hansen (1985) among others) calculate the second moment statistics from the model based on simulated time series of the same length as U.S. quarterly time series data (roughly, 150 observations). These time series have been previously filtered through a Hodrick-Prescott filter. Other RBC researchers (for instance King et al. (1988a)) evaluate their models based on population moments calculated from unfiltered data. In this paper, we follow the former approach. We are aware that following the latter approach might change some of the conclusions reached in this paper. This analysis is left for future research.


we have considered the standard deviation used by traditional RBC models, i.e. 0.007. However, for θ = 0.2 (regardless of the method used or the value of σ), the standard deviation of the technology shock required to mimic the volatility of output is roughly three and a half times smaller than 0.007, due to the higher intensity of the propagation mechanism. The intuition is simple. Consider, for example, the case of a favorable technology shock. In this case, not only the allocation of time will change, inducing a movement from the accumulation of human capital to the production sector, but also the allocation of physical capital between the two sectors will change, thus amplifying the effect of the shock. This effect induced by changes in θ reinforces the substitution or relative wage effect stressed by Mulligan and Sala-i-Martin (1993). This effect is captured by either method (PEA or LLM). In Figure 1 we show the impulse response functions obtained from PEA for θ = 0 and θ = 0.2.10 According to this effect, a higher value of θ implies a higher intersectoral substitution, magnifying the effects of the technology shock on work effort. This magnifying effect on work effort is shown in Table 1, since the standard deviation of working hours increases with θ. On the other hand, a higher θ will decrease the standard deviation of real wages (productivity), as reflected in Table 1. This result is consistent with the different behavior of productivity as θ increases, as shown in the impulse response function of productivity, w, displayed in Figure 1.

(Insert Figure 1 and Figure 2)

Looking at the first panel in Table 2, we see that when analyzing the contemporaneous cross-correlations, the results for LLM and PEA are very similar when θ = 0. However, the second panel in Table 2 shows that for the contemporaneous correlations of output with consumption, ρ_cy, and with productivity, ρ_wy, and the correlation of hours with productivity, ρ_nw, there are major differences between the two numerical solutions studied in this paper when θ = 0.2. Moreover, when using LLM the second moment statistics are not affected by the choice of the value for the intertemporal elasticity of substitution (1/σ), regardless of the value chosen for θ. As stated above, this result is quite intuitive since the log-linear approximation removes the nonlinearities introduced by assuming a higher curvature of the utility function (that is, σ = 2). However, the standard deviations of consumption and productivity, the cross-correlations between consumption and output, between productivity and output, and between hours and productivity, and the impulse response functions of real wages and hours (see Figure 2) are sensitive to the choice of the curvature of the utility function (σ) when PEA is used in the generalized Uzawa-Lucas model (θ = 0.2). In short, PEA takes into account the nonlinearities introduced by a high value of σ when

10 The impulse response functions obtained from LLM are very similar to those illustrated in Figure 1. These impulse response functions are available upon request.


the propagation mechanism is sufficiently strong, which implies that certain second moment statistics are highly sensitive to the choice of σ. LLM, by removing these nonlinearities, may however suggest that those correlations displayed by the model are not sensitive to the choice of σ when in fact they should be sensitive. These results suggest that special care should be taken at some points (for instance, the choice of the approximation method) when considering the generalized Uzawa-Lucas model, since the choice of the approximation method should be determined by the values chosen for certain parameters. These results are relevant if one is interested in studying the labor market implications of an RBC model, in the sense that one might conclude that the model with θ = 0.2 does a better job of reproducing the features observed in U.S. data than the model with θ = 0, but this conclusion is based on a particular value chosen for σ and on a particular approximation method. As has been pointed out by Watson (1993, p. 1036) and many others, these cross-correlations are particularly important because 'these are the features in which the basic RBC model is typically thought to fail'.

The sensitivity of the RBC features exhibited by the model to alternative parameterizations of θ and the intertemporal elasticity of substitution (1/σ) captured by PEA is consistent with the results obtained in a recent paper by Jones et al. (2000), which considers a discrete-space approximation for solving a similar generalized Uzawa-Lucas model.11 They argue that, in contrast with the exogenous growth models used in the RBC literature, in which the curvature of the utility function plays a minor role in characterizing both growth and cycles, the features of growth and cyclical variables displayed by endogenous growth models are crucially determined by the relative risk aversion parameter. Thus, one should expect the RBC features displayed by an endogenous growth model to be sensitive to alternative parameterizations of the utility function curvature. In particular, one should expect a higher curvature of the utility function, and therefore a lower intertemporal elasticity of substitution, to imply a lower standard deviation for consumption, σ_c, reflecting a stronger desire to smooth the consumption profile. The effect of a higher σ on σ_c captured by PEA

11 Jones et al. (2000) study different versions (by assuming elastic or inelastic labor supply, incomplete or complete capital depreciation, correlated shocks or i.i.d. shocks) of a one-sector model with physical and human capital being perfect substitutes as in Gomme (1993) (that is, using our notation, they assume α = θ). Their primary aim is to analyze how long-run growth is determined by the relative risk aversion parameter characterizing agents' preferences and the size of technology (policy) shocks. But they also quantify how sensitive certain second moment statistics displayed by the model are to changes in the relative risk aversion parameter. In comparing the second moment statistics obtained by Jones et al. (2000) with those obtained in this paper, the reader should notice that the two papers differ in many aspects. Among other differences, we highlight the following. First, we analyze an endogenous growth model where physical and human capital are not perfect substitutes (that is, α > θ). Second, when analyzing the sensitivity of second moment statistics to changes in the curvature of the utility function we do not keep the other parameters constant. More in the RBC tradition, when changing σ we re-calibrate other parameters such as the discount factor β and the size of technology shocks σ_ε in order to match a 4% annual real interest rate and the observed standard deviation of per-capita U.S. GNP. Third, we assume qualified leisure entering the utility function.


is also characterized by the corresponding impulse response function displayed in Figure 2. Moreover, the impulse response functions for productivity (w) and working hours (n) also show that a higher σ decreases the volatility of productivity whereas the volatility of working hours increases when θ = 0.2. These effects captured by PEA are consistent with the changes in the standard deviations of w and n displayed in the second panel of Table 1 when the curvature of the utility function increases. Furthermore, Figure 2 also shows that a higher risk aversion parameter alters the impulse response functions associated with the relevant variables in different ways. Thus, the responses of consumption and productivity to a shock decrease whereas the responses of output and working hours to a shock increase when σ increases. Given these different responses of the alternative variables depending on the value of σ, one may expect that, as shown in the second panel of Table 2, the cross-correlations between consumption and output, productivity and output, and productivity and working hours decrease with increases in σ.

In short, this section has stressed that when endogenous growth models are considered, a model's assumptions are crucial in deciding which type of approximation method is appropriate for characterizing the RBC features displayed by the model studied.

In order to gain more confidence in the RBC implications one can draw from this model, we also compare the two alternative solution procedures considered by implementing the accuracy test suggested by den Haan and Marcet (1994). A description of this test is given in Appendix 2. This test, stated briefly, studies whether a numerical solution preserves the orthogonality restrictions imposed by the rational expectations hypothesis; that is, whether the Euler equation residuals obtained from the first-order conditions describing the equilibrium of the economy are orthogonal to any variable included in the agent's information set.
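In the simplest configuration (a single Euler residual series and the constant as the only instrument), a den Haan-Marcet style statistic reduces to a Wald-type check that the residual mean is zero. The sketch below is our own simplified single-equation illustration of that idea, not the test as implemented in the paper, which covers the model's full set of Euler residuals:

```python
from statistics import mean, pvariance

def dhm_statistic(euler_residuals):
    """Simplified den Haan-Marcet style statistic with the constant as the
    only instrument: T * mean(e)^2 / var(e). Under the rational expectations
    null (mean-zero residuals) it is asymptotically chi-squared with one
    degree of freedom."""
    T = len(euler_residuals)
    e_bar = mean(euler_residuals)
    return T * e_bar * e_bar / pvariance(euler_residuals)
```

In practice the statistic is computed over many simulated series, and the fractions of statistics falling in the upper and lower critical regions of the reference chi-squared distribution are compared with the nominal size, which is how Table 3 is read.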

(Insert Table 3)

Table 3 shows the results of this accuracy test. Regardless of the value chosen for the relative risk aversion parameter, σ, and regardless of the share of physical capital in human capital production, θ, PEA performs better than LLM in the sense that the test results obtained with the former method, with respect to both critical regions, are closer to the true χ²(2) distribution than those obtained with the latter. In short, PEA preserves the orthogonality conditions imposed by the rational expectations hypothesis far better than LLM. This result is not surprising: after all, the PEA solution has been refined precisely in order to satisfy those orthogonality conditions. One may argue that the results of this accuracy test for the PEA solution are not impressive. The fact that our model requires a rather small standard deviation for the exogenous shock, due to the strong propagation mechanism induced by a large value of θ, explains the difficulties found in refining the PEA solution by increasing the degree of the polynomial. The reason is that if the noise ε introduced into the system is small, then the newly added terms will be highly correlated with the terms of lower polynomial degree. Moreover, in this scenario the accuracy test does not help to choose the order of the polynomial in implementing PEA because, as we mention in Appendix 2, we use the constant as the only instrument.12 One may wonder whether other approximation methods that are not prone to the multicollinearity problems of PEA, such as the weighted residual methods suggested by Judd (1992) or, more recently, the PEA version suggested by Christiano and Fisher (1997) that uses Chebyshev instead of ordinary polynomials to approximate conditional expectations, would give a clearer picture of what part of the differences found between the LLM and PEA results is due to the interaction between uncertainty and nonlinearities and what part comes from the problems of PEA in approximating the solution. The comparison of further approximation methods exceeds the scope of this paper. Nonetheless, the sensitivity of certain second moment statistics to uncertainty and nonlinearities found by Jones et al. (2000) using a discrete-space approximation points in the same direction as the sensitivity results found in this paper when using PEA.13
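The multicollinearity problem described above is easy to illustrate numerically: when the state variable moves in a narrow band around its steady state, as it does here given the small shock variance, successive powers of the state are nearly collinear, whereas a Chebyshev basis on the same data stays well conditioned. The following sketch is only illustrative (the simulated state and degrees are hypothetical, not the paper's computations):

```python
import numpy as np

# Hypothetical state variable confined to a narrow band around 1,
# mimicking a simulated capital series under a small shock variance.
rng = np.random.default_rng(0)
k = 1.0 + 0.01 * rng.standard_normal(500)

# Ordinary polynomial regressors: 1, k, k^2, k^3, k^4.
X_ord = np.vander(k, 5, increasing=True)

# Chebyshev regressors T_0,...,T_4 with k mapped onto [-1, 1].
x = (k - k.min()) / (k.max() - k.min()) * 2.0 - 1.0
X_cheb = np.polynomial.chebyshev.chebvander(x, 4)

# The ordinary basis is dramatically worse conditioned, which is what
# makes the nonlinear least squares step of PEA fragile in this setting.
print(np.linalg.cond(X_ord), np.linalg.cond(X_cheb))
```

The condition number of the ordinary-polynomial design matrix explodes because all powers of a variable near 1 are almost identical, while the rescaled Chebyshev columns remain nearly orthogonal.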

4 Conclusions

A general conclusion can be drawn from this paper: the ability of alternative numerical methods to approximate the solution depends, in some contexts, on the model's assumptions. The use of alternative solution methods, far from adding more confusion to the analysis of DSGEM, may then help us to better understand the dynamic features of the model considered and the relevance of some of its assumptions. In particular, this paper shows that PEA and LLM characterize the same RBC features when the model displays a weak propagation mechanism (absence of intersectoral substitution). Since it is definitely less costly to implement LLM than PEA, we can conclude that LLM gives a sufficiently good approximation for analyzing RBC features in this context.

12 Den Haan and Marcet (1994, p. 11) also notice this shortcoming of PEA. They consider a standard deviation for the technology shock ε equal to 0.01 as low. The standard deviation we have to consider for the technology shock is even smaller (0.007 or around 0.00205, depending on the calibration used for θ). In this paper, a fourth-order polynomial is chosen for PEA. The results are qualitatively unchanged by varying the order of the polynomial between 4 and 6.

13 The rational expectations hypothesis can be tested using alternative tests. For instance, it implies that the Euler residuals, as a proxy for the expectational residuals, should not be serially correlated, so another accuracy test can be carried out by checking for autocorrelation in the Euler residuals. Moreover, the rational expectations hypothesis also implies that the Euler residuals should not show any particular structure. One may thus consider the ARCH test based on the Lagrange multiplier principle suggested by Engle (1982, p. 1000) as another accuracy test. The results obtained from these tests, which are not shown for reasons of space, confirm the results of the den Haan-Marcet accuracy test: PEA performs, in general, better than LLM. The results from these alternative tests are available upon request.


However, in models displaying a strong propagation mechanism (i.e. a high θ, implying high intersectoral substitution), PEA and LLM deliver different dynamic features. LLM is not able to capture the different cyclical properties induced by alternative parameterizations of the relative risk aversion parameter σ because this method, by construction, removes the nonlinearities induced by high values of σ; PEA, by contrast, takes these nonlinearities into account when the propagation mechanism is strong.

APPENDIX 1

This appendix describes the two numerical methods considered in this paper. The methods have some steps in common; nevertheless, for the sake of completeness we briefly summarize all the steps of each method.

Uhlig's Log-Linear Method (LLM)

In recent papers, Uhlig (1995, 1999) proposes a simple log-linear method to solve the dynamics of a nonlinear DSGEM. His procedure can be summarized in the following four steps:
Step 1: Obtain the necessary conditions that characterize the equilibrium.
Step 2: Choose the parameter values of the model and find the steady-state values.
Step 3: Log-linearize the first-order conditions characterizing the equilibrium of the model, so that all the equations become approximately linear in the log-deviations from the steady state.
Step 4: Solve for the recursive equilibrium law of motion using the method of undetermined coefficients suggested by Uhlig (1999), which is simple and of general applicability (that is, it can be implemented in models with more endogenous state variables than expectational equations). Uhlig's method of undetermined coefficients is closely related to Blanchard and Kahn's (1980) approach for solving linear difference equations.14

Steps 1-3 involve many tedious, though simple, computations (see the mathematical workings). After Step 3, it is convenient to rewrite the system of log-linearized first-order conditions in matrix form. Using Uhlig's notation, we have a matrix system with a vector of endogenous state variables x_t (size m×1), a vector containing the other endogenous variables y_t (size n×1), and a vector of exogenous stochastic variables z_t (size k×1):

A x_t + B x_{t-1} + C y_t + D z_t = 0,    (12)

E_t[F x_{t+1} + G x_t + H x_{t-1} + J y_{t+1} + K y_t + L z_{t+1} + M z_t] = 0,    (13)

z_{t+1} = N z_t + ε_{t+1},    (14)

where E_t(ε_{t+1}) = 0. It is assumed that C is of size l×n with l ≥ n and rank n, where l is the number of deterministic equations (i.e., the number of equations involved in (12)), F is of size (m+n−l)×m, and N has only stable eigenvalues. In our model we have two endogenous state variables: the log-deviations from the steady-state values of physical capital and of the human capital growth rate, denoted by k_t and h_{t+1}/h_t, respectively;15 one exogenous variable, z_t; and four other (non-state) endogenous variables: the log-deviations from the steady-state values of the consumption-human capital ratio, c_t, the fraction of the capital stock allocated to the market sector, φ_t, the fraction of time allocated to market production, n_t, and leisure, l_t.

14 For more details on the relation between the two approaches, see Theorem 3 and Appendix A in Uhlig's (1999) article.

The log-linear solution method seeks a recursive equilibrium law of motion of the following form:

x_t = P x_{t-1} + Q z_t,    (15)

y_t = R x_{t-1} + S z_t.    (16)

In our case l = n, so we can apply Corollary 1 of Uhlig (1999) to find P, Q, R and S such that the equilibrium described by these rules is stable.
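For intuition, the undetermined-coefficients step can be sketched in the scalar case with no "jump" block: with one endogenous state and one exogenous shock, (13)-(14) reduce to E_t[F x_{t+1} + G x_t + H x_{t-1} + L z_{t+1} + M z_t] = 0 and z_{t+1} = N z_t + ε_{t+1}. Substituting the guess x_t = P x_{t-1} + Q z_t and matching coefficients gives F P² + G P + H = 0 (of which the stable root is taken) and Q = −(L N + M)/(F P + F N + G). The sketch below uses illustrative coefficient values, not the paper's calibration:

```python
import numpy as np

def solve_scalar_uc(F, G, H, L, M, N):
    """Undetermined coefficients for the scalar expectational equation
    E_t[F x_{t+1} + G x_t + H x_{t-1} + L z_{t+1} + M z_t] = 0
    with z_{t+1} = N z_t + eps_{t+1}.
    Returns (P, Q) in the law of motion x_t = P x_{t-1} + Q z_t."""
    roots = np.roots([F, G, H])           # solves F P^2 + G P + H = 0
    stable = [r for r in roots if abs(r) < 1]
    if not stable:
        raise ValueError("no stable root: no stable equilibrium law of motion")
    P = float(np.real(stable[0]))
    Q = -(L * N + M) / (F * P + F * N + G)
    return P, Q

# Illustrative coefficients (hypothetical, not taken from the paper).
P, Q = solve_scalar_uc(F=1.0, G=-2.3, H=0.9, L=0.0, M=-0.5, N=0.95)
print(P, Q)
```

The full method of Uhlig (1999) generalizes this to matrix quadratic equations via a generalized eigenvalue decomposition; the scalar case only conveys the matching-coefficients logic.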

Parameterized Expectations Algorithm (PEA)

Den Haan and Marcet (1990) provide a complete description of PEA. The procedure can be summarized as follows:
Step 1: Find the necessary conditions characterizing the equilibrium. Without loss of generality, we can write these conditions as

f(ω_t) = E[Φ(ω_{t+1}, ω_{t+2}, ...) | Ω_t],    (17)

where ω_t is the set of variables characterizing the whole economy; some components of ω_t can be exogenous. Given the parameters of the model, f : R^r → R^p and Φ : R^r × R^∞ → R^p. E[· | Ω_t] denotes the expectation operator conditional on the information set available at time t, Ω_t, which includes a subset of current and past values of ω_t.
Step 2: Choose the parameter values of the model.
Step 3: Let ψ be a class of functions (e.g., polynomial functions) that can approximate, in principle, any arbitrary function by increasing the order of the polynomial. Pick as many polynomial functions as there are expectational equations characterizing the equilibrium and replace the expectations with these polynomial functions. These polynomials depend on the state variables of the model and on some parameters, which are estimated by nonlinear least squares.
Step 4: Simulate long time series (in this exercise we use 40,000 observations) for all the variables of the model (consumption, physical capital, human capital, etc.) and find, for each expectational equation, the argument δ which minimizes the mean squared error. Formally,

S(δ) = argmin_δ̄ E[Φ(ω_{t+1}(δ), ω_{t+2}(δ), ...) − ψ(k_t(δ), Z_t; δ̄)]².

15 h_{t+1}/h_t is treated as a state variable in order to satisfy the requirement l ≥ n imposed by the log-linear approximation, although it is not a state variable in the proper sense because it does not appear in the policy rules (see matrices P and R below). In other words, h_{t+1}/h_t is not known at time t, but its law of motion is known and given by (10).


The aim of PEA is to find a fixed point δ_F such that δ_F = S(δ_F). One way of finding the fixed point is to start with an initial value of δ, δ^0, obtained analytically from the deterministic steady state. Next, run a nonlinear least squares regression of Φ(ω_{t+1}(δ^0), ω_{t+2}(δ^0), ...) on ψ(k_t(δ^0), Z_t).16 This is our approximation of S(δ^0). As is well known, the result of the nonlinear least squares regression converges to S(δ^0) as T goes to infinity. Next, δ^1, δ^2, ... are given by

δ^v = (1 − μ)δ^{v−1} + μ S(δ^{v−1}),

where v = 1, 2, ... and 0 < μ ≤ 1. We use μ = 1 in this study.
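As an illustration of Steps 3-4 and the damped fixed-point iteration, the sketch below applies PEA to the textbook stochastic growth model with log utility and full depreciation, a far simpler model than the one in this paper, chosen because its exact solution c_t = (1−αβ) z_t k_t^α is known. The conditional expectation in the Euler equation is parameterized as exp(δ_0 + δ_1 ln k_t + δ_2 ln z_t), so the least-squares step becomes linear in logs. The parameter values, sample length, damping μ = 0.5, and the initialization (deliberately off the known solution, rather than the steady-state initialization used in the paper) are all illustrative assumptions:

```python
import numpy as np

# Stochastic growth model: max E sum_t beta^t ln(c_t),
#   k_{t+1} = z_t k_t^alpha - c_t,  ln z_{t+1} = rho ln z_t + eps_{t+1}.
# Euler equation: 1/c_t = beta E_t[alpha z_{t+1} k_{t+1}^(alpha-1) / c_{t+1}].
# PEA: approximate E_t[.] by exp(delta_0 + delta_1 ln k_t + delta_2 ln z_t).
alpha, beta, rho, sige = 0.33, 0.95, 0.9, 0.01
T, mu = 5000, 0.5                        # sample length, damping parameter
rng = np.random.default_rng(1)

lnz = np.zeros(T)
for t in range(T - 1):
    lnz[t + 1] = rho * lnz[t] + sige * rng.standard_normal()
z = np.exp(lnz)

kss = (alpha * beta) ** (1.0 / (1.0 - alpha))   # deterministic steady state
# Closed-form solution implies delta* = (-ln(beta(1-alpha*beta)), -alpha, -1);
# start the iteration deliberately away from that point.
delta = 0.9 * np.array([-np.log(beta * (1 - alpha * beta)), -alpha, -1.0])

for v in range(200):
    k = np.empty(T + 1)
    k[0] = kss
    c = np.empty(T)
    for t in range(T):                   # simulate under the current delta
        psi = np.exp(delta[0] + delta[1] * np.log(k[t]) + delta[2] * lnz[t])
        y = z[t] * k[t] ** alpha
        c[t] = min(1.0 / (beta * psi), 0.99 * y)  # keep consumption feasible
        k[t + 1] = y - c[t]
    # Realized value inside the conditional expectation, dated t = 0..T-2.
    phi = alpha * z[1:] * k[1:T] ** (alpha - 1.0) / c[1:]
    X = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), lnz[:T - 1]])
    S, *_ = np.linalg.lstsq(X, np.log(phi), rcond=None)   # S(delta)
    new_delta = (1 - mu) * delta + mu * S                 # damped update
    if np.max(np.abs(new_delta - delta)) < 1e-9:
        delta = new_delta
        break
    delta = new_delta

print(delta)   # should approach the closed-form coefficients above
```

Because the exponential-in-logs family nests the exact conditional expectation of this toy model, the iteration converges to the closed-form coefficients; in richer models, such as the one in this paper, the approximating family does not nest the truth and the quality of the fixed point must be checked with an accuracy test.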

APPENDIX 2

Den Haan and Marcet (1994) suggest a test to analyze the accuracy of numerical methods for solving rational expectations models. Stated briefly, this test studies whether a numerical solution preserves the orthogonality restrictions imposed by the rational expectations hypothesis through the first-order conditions describing the equilibrium of the economy. This accuracy test has nice properties: (i) it is easy to implement, (ii) it has low computational costs, (iii) it can be carried out without knowing the true solution of the model, and (iv) it is applicable to any numerical solution method.

Formally, the idea of the den Haan-Marcet test can be described as follows. Given the necessary conditions describing the equilibrium, say equation (17), the Euler equation residuals

u_{t+1} = Φ(ω_{t+1}, ω_{t+2}, ...) − f(ω_t),    (18)

satisfy

E[u_{t+1} ⊗ g(s_t) | Ω_t] = 0,    (19)

where s_t is a k×1 vector included in Ω_t and g : R^k → R^q. Hence, the test analyzes whether (19) is close to being satisfied for synthetic time series {ω̄_t} obtained from a given numerical method. The test procedure can be explained as follows:
Step 1: Generate long time series (say 3,000 observations) {ω̄_t}, t = 1, ..., T, from the model. The bar over a variable denotes a simulated variable.
Step 2: Calculate

B_T = (1/T) Σ_{t=1}^{T} ū_{t+1} ⊗ g(s̄_t),

where ū_{t+1} and g(s̄_t) are calculated from ω̄_t. Given the low variability of the shock imposed in our model, the constant is the only instrument included in

16 See the appendix in den Haan and Marcet (1990) for a description of the approximation used to run nonlinear least squares. Moreover, given that we are approximating two expectational functions, we apply a generalized (nonlinear) least squares estimator in order to obtain more efficient estimates of δ_F.


g(·) (see den Haan and Marcet (1994, p. 11) for details). Moreover, g(·) is here taken to be the identity function.
Step 3: Check whether or not B_T is close to zero. One possibility for performing this test is to calculate

T B_T' A_T^{-1} B_T → χ²(qp),    (20)

where p is the number of first-order conditions involving conditional expectations (see equation (17)) and A_T is some consistent estimator of

S_W = Σ_{i=-∞}^{∞} E{[u_{t+1} ⊗ g(s_t)][u_{t+1-i} ⊗ g(s_{t-i})]'}.

Since only a constant is included in g(·), a consistent estimator of S_W is given by

A_T = (1/T) Σ_{t=1}^{T} ū_{t+1} g(s̄_t)² ū'_{t+1}.

Step 4: Repeat the test many times (say 500 times) for different realizations of the exogenous variables.
Step 5: Report the percentage of the statistic (20) falling in the upper and lower 5% critical regions of a χ²(qp). We also look at the lower critical region to convince the reader that a non-rejection of the null hypothesis is not due to a lucky draw from the exogenous processes. An accurate algorithm should therefore have roughly a 10% rejection rate, with 5% in the lower tail and 5% in the upper tail.
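With g(·) equal to a constant (q = 1) and a single expectational equation (p = 1), the statistic in (20) collapses to T·B_T²/A_T, which under the null is asymptotically χ²(1). The sketch below illustrates Steps 2-5 using artificial white-noise residuals in place of model-generated Euler residuals, so the rejection rates should land near the nominal 5% in each tail:

```python
import numpy as np

def dm_statistic(u):
    """den Haan-Marcet statistic for scalar Euler residuals u with the
    constant as the only instrument: T * B_T^2 / A_T ~ chi2(1) under H0."""
    T = len(u)
    B = u.mean()            # B_T = (1/T) sum_t u_{t+1} * 1
    A = (u ** 2).mean()     # A_T = (1/T) sum_t u_{t+1}^2 * 1
    return T * B ** 2 / A

rng = np.random.default_rng(0)
reps, T = 500, 3000
stats = np.array([dm_statistic(rng.standard_normal(T)) for _ in range(reps)])

# chi2(1) critical values: 5% lower tail = 0.00393, 5% upper tail = 3.841.
lower = (stats < 0.00393).mean()
upper = (stats > 3.841).mean()
print(lower, upper)   # each should hover around 0.05 for accurate residuals
```

For a model-generated solution, the residuals ū would come from (18) rather than a random number generator; rejection rates far from 5% in either tail then signal an inaccurate approximation.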


References

[1] Auerbach, A., Kotlikoff, L. (1987) Dynamic Fiscal Policy. Cambridge University Press, Cambridge, MA

[2] Bean, C.R. (1990) Endogenous Growth and the Procyclical Behavior of Productivity. European Economic Review 34: 355-363

[3] Becker, G.S. (1965) A Theory of the Allocation of Time. Economic Journal 75: 493-517

[4] Benhabib, J., Rogerson, R., Wright, R. (1991) Homework in Macroeconomics: Household Production and Aggregate Fluctuations. Journal of Political Economy 99: 1166-1187

[5] Blanchard, O.J., Kahn, C.M. (1980) The Solution of Linear Difference Models under Rational Expectations. Econometrica 48: 1305-1311

[6] Christiano, L.J. (1990) Linear-Quadratic Approximation and Value Function Iteration: A Comparison. Journal of Business and Economic Statistics 8: 99-113

[7] Christiano, L.J., Fisher, J.D.M. (1997) Algorithms for Solving Dynamic Models with Occasionally Binding Constraints. NBER Technical Working Paper 218

[8] Den Haan, W.J. (1990) The Optimal Inflation Path in a Sidrauski Model with Uncertainty. Journal of Monetary Economics 25: 389-410

[9] Den Haan, W.J., Marcet, A. (1990) Solving the Stochastic Growth Model by Parameterized Expectations. Journal of Business and Economic Statistics 8: 31-34

[10] Den Haan, W.J., Marcet, A. (1994) Accuracy in Simulations. Review of Economic Studies 61: 3-17

[11] Dotsey, M., Mao, C.S. (1992) How Well Do Linear Approximation Methods Work? The Production Tax Case. Journal of Monetary Economics 29: 25-58

[12] Engle, R.F. (1982) Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation. Econometrica 50: 987-1007

[13] Fairise, X., Langot, F. (1995) An RBC Model for Explaining Cyclical Labor Market Features. In: Henin, P. (ed.) Advances in Business Cycle Research. Springer-Verlag, Berlin

[14] Gomme, P. (1993) Money and Growth Revisited: Measuring the Costs of Inflation in an Endogenous Growth Model. Journal of Monetary Economics 32: 51-77

[15] Greenwood, J., Hercowitz, Z. (1991) The Allocation of Capital and Time over the Business Cycle. Journal of Political Economy 99: 1188-1214

[16] Hansen, G. (1985) Indivisible Labor and the Business Cycle. Journal of Monetary Economics 16: 309-327

[17] Hodrick, R.J., Prescott, E.C. (1997) Post-War U.S. Business Cycles: An Empirical Investigation. Journal of Money, Credit, and Banking 29: 1-16

[18] Jones, L.E., Manuelli, R.E., Stacchetti, E. (2000) Technology (and Policy) Shocks in Models of Endogenous Growth. Research Department Staff Report 281, Federal Reserve Bank of Minneapolis

[19] Judd, K.L. (1992) Projection Methods for Solving Aggregate Growth Models. Journal of Economic Theory 58: 410-452

[20] King, R., Plosser, C., Rebelo, S. (1988a) Production, Growth and Business Cycles I. The Basic Neoclassical Model. Journal of Monetary Economics 21: 195-232

[21] King, R., Plosser, C., Rebelo, S. (1988b) Production, Growth and Business Cycles II. New Directions. Journal of Monetary Economics 21: 309-341

[22] Kydland, F.E., Prescott, E.C. (1982) Time to Build and Aggregate Fluctuations. Econometrica 50: 1345-1370

[23] Kydland, F.E., Prescott, E.C. (1990) Business Cycles: Real Facts and a Monetary Myth. Federal Reserve Bank of Minneapolis Quarterly Review (Spring): 3-18

[24] Ladrón-de-Guevara, A., Ortigueira, S., Santos, M. (1997) Equilibrium Dynamics in Two-Sector Models of Endogenous Growth. Journal of Economic Dynamics and Control 21: 115-143

[25] Lucas, R.E. (1988) On the Mechanics of Economic Development. Journal of Monetary Economics 22: 3-42

[26] Marcet, A. (1988) Solution of Nonlinear Models by Parameterizing Expectations. Mimeo, GSIA, Carnegie Mellon University

[27] Marcet, A., Marimon, R. (1992) Communication, Commitment and Growth. Journal of Economic Theory 58: 219-249

[28] Marshall, D. (1992) Inflation and Asset Returns in a Monetary Economy. Journal of Finance 47: 1315-1342

[29] Mehra, R., Prescott, E.C. (1985) The Equity Premium: A Puzzle. Journal of Monetary Economics 15: 145-161

[30] Mulligan, C.B., Sala-i-Martin, X. (1993) Transitional Dynamics in Two-Sector Models of Endogenous Growth. Quarterly Journal of Economics 108: 739-773

[31] Novales, A., Pérez, J.J. (1999) An Evaluation of Some Solution Methods for Non-Linear Rational Expectations Models. Working Paper 9809, Instituto Complutense de Análisis Económico

[32] Ozlu, E. (1996) Aggregate Economic Fluctuations in Endogenous Growth Models. Journal of Macroeconomics 18: 27-47

[33] Prescott, E.C. (1986) Theory Ahead of Business Cycle Measurement. Carnegie-Rochester Conference Series on Public Policy 25: 11-44

[34] Singleton, K.J. (1988) Econometric Issues in the Analysis of Equilibrium Business Cycle Models. Journal of Monetary Economics 21: 361-386

[35] Summers, L.H. (1986) Some Skeptical Observations on Real Business Cycle Theory. Federal Reserve Bank of Minneapolis Quarterly Review (Fall): 23-27

[36] Taylor, J.B., Uhlig, H. (1990) Solving Nonlinear Stochastic Growth Models: A Comparison of Alternative Solution Methods. Journal of Business and Economic Statistics 8: 1-17

[37] Uhlig, H. (1995) A Toolkit for Analyzing Nonlinear Dynamic Stochastic Models Easily. Discussion Paper 101, Federal Reserve Bank of Minneapolis

[38] Uhlig, H. (1999) A Toolkit for Analyzing Nonlinear Dynamic Stochastic Models Easily. In: Marimon, R., Scott, A. (eds.) Computational Methods for the Study of Dynamic Economies. Oxford University Press, Oxford

[39] Uzawa, H. (1965) Optimum Technical Change in an Aggregative Model of Economic Growth. International Economic Review 6: 18-31

[40] Watson, M. (1993) Measures of Fit for Calibrated Models. Journal of Political Economy 101: 1011-1041


TABLE 1. Standard Deviation Statistics of Some Relevant Variables in RBC Models

θ = 0      US Data  LLM(σ=1.3)   PEA(σ=1.3)   LLM(σ=2)     PEA(σ=2)
σ_ε                 0.007        0.007        0.007        0.007
σ_y        1.70     1.66         1.64         1.61         1.59
                    (1.47,1.85)  (1.45,1.82)  (1.43,1.80)  (1.41,1.77)
σ_c        0.85     0.46         0.46         0.44         0.44
                    (0.40,0.52)  (0.40,0.52)  (0.38,0.50)  (0.38,0.50)
σ_i        5.35     4.74         4.65         4.60         4.51
                    (4.19,5.28)  (4.13,5.17)  (4.10,5.13)  (4.02,5.02)
σ_n        1.77     1.21         1.18         1.14         1.11
                    (1.07,1.34)  (1.06,1.32)  (1.01,1.26)  (0.99,1.24)
σ_w        0.85     0.53         0.53         0.56         0.56
                    (0.46,0.60)  (0.43,0.60)  (0.48,0.63)  (0.48,0.65)
σ_c/σ_y    0.50     0.28         0.28         0.27         0.28
σ_i/σ_y    3.15     2.86         2.85         2.86         2.84
σ_n/σ_y    1.04     0.73         0.73         0.71         0.70
σ_w/σ_y    0.50     0.32         0.32         0.35         0.35

θ = 0.2    US Data  LLM(σ=1.3)   PEA(σ=1.3)   LLM(σ=2)     PEA(σ=2)
σ_ε                 0.002        0.00226      0.002        0.001775
σ_y        1.70     1.72         1.70         1.72         1.71
                    (1.54,1.90)  (1.52,1.87)  (1.54,1.90)  (1.54,1.87)
σ_c        0.85     0.16         0.18         0.15         0.14
                    (0.13,0.18)  (0.15,0.20)  (0.12,0.17)  (0.11,0.16)
σ_i        5.35     2.59         2.63         2.60         2.58
                    (2.32,2.86)  (2.28,2.80)  (2.32,2.86)  (2.33,2.84)
σ_n        1.77     1.64         1.57         1.63         1.66
                    (1.47,1.80)  (1.41,1.74)  (1.46,1.80)  (1.50,1.82)
σ_w        0.85     0.16         0.19         0.16         0.15
                    (0.14,0.19)  (0.16,0.21)  (0.14,0.19)  (0.13,0.18)
σ_c/σ_y    0.50     0.09         0.11         0.09         0.08
σ_i/σ_y    3.15     1.51         1.49         1.52         1.51
σ_n/σ_y    1.04     0.95         0.92         0.95         0.97
σ_w/σ_y    0.50     0.09         0.11         0.09         0.09

Notes: US data statistics are taken from Gomme (1993). The E(x) ± √var(x) confidence intervals are in parentheses.


TABLE 2. Cross-Correlation Coefficients of Some Relevant Variables in RBC Models

θ = 0      US Data  LLM(σ=1.3)      PEA(σ=1.3)      LLM(σ=2)        PEA(σ=2)
σ_ε                 0.007           0.007           0.007           0.007
ρ_cy       0.75     0.90            0.89            0.91            0.91
                    (0.88,0.91)     (0.88,0.91)     (0.90,0.92)     (0.89,0.92)
ρ_iy       0.89     0.992           0.993           0.993           0.994
                    (0.989,0.994)   (0.991,0.995)   (0.990,0.995)   (0.993,0.996)
ρ_ny       0.88     0.98            0.98            0.977           0.976
                    (0.975,0.986)   (0.975,0.985)   (0.971,0.983)   (0.971,0.982)
ρ_wy       0.16     0.90            0.896           0.902           0.902
                    (0.88,0.91)     (0.88,0.91)     (0.885,0.919)   (0.885,0.918)
ρ_nw       0.11     0.80            0.79            0.79            0.79
                    (0.75,0.83)     (0.75,0.83)     (0.75,0.83)     (0.75,0.83)

θ = 0.2    US Data  LLM(σ=1.3)      PEA(σ=1.3)      LLM(σ=2)        PEA(σ=2)
σ_ε                 0.002           0.00226         0.002           0.001775
ρ_cy       0.75     0.73            0.81            0.68            0.55
                    (0.69,0.76)     (0.77,0.84)     (0.64,0.71)     (0.51,0.59)
ρ_iy       0.89     0.9994          0.9997          0.9994          0.9997
                    (0.9992,0.9996) (0.9996,0.9998) (0.9992,0.9996) (0.9996,0.9998)
ρ_ny       0.88     0.997           0.996           0.996           0.996
                    (0.996,0.997)   (0.995,0.997)   (0.995,0.998)   (0.995,0.997)
ρ_wy       0.16     0.55            0.70            0.55            0.35
                    (0.51,0.59)     (0.66,0.74)     (0.51,0.59)     (0.31,0.38)
ρ_nw       0.11     0.48            0.64            0.48            0.27
                    (0.43,0.53)     (0.60,0.69)     (0.43,0.53)     (0.21,0.30)

Notes: US data statistics are taken from Gomme (1993), except the value for ρ_nw, which is taken from Fairise and Langot (1995). The E(x) ± √var(x) confidence intervals are in parentheses.


TABLE 3. Den Haan and Marcet Accuracy Test

θ = 0.0    Low (σ=1.3)   Up (σ=1.3)   Low (σ=2)   Up (σ=2)
LLM        0.0           100          0.0         100
PEA        5.4           3.6          3.2         11.8

θ = 0.2    Low (σ=1.3)   Up (σ=1.3)   Low (σ=2)   Up (σ=2)
LLM        8.6           28.8         0.0         23.8
PEA        5.2           1.2          7.8         1.0

Notes: A second-order and a fourth-order polynomial are used to derive the PEA solution when θ equals 0.0 and 0.2, respectively. Low (Up) denotes the lower (upper) 5% tail. Each entry shows the percentage of the den Haan-Marcet statistic lying in the lower (upper) 5% tail of a χ²(2). Therefore, a value close to 5 means that we cannot reject the null hypothesis that the Euler residuals are white noise.


Figure 1:


Figure 2:
