
Combining Value Estimates to Increase Accuracy


Kenton K. Yee

Financial Analysts Journal, Vol. 60, No. 4 (July-August 2004), pp. 23-28. Published by CFA Institute. Stable URL: http://www.jstor.org/stable/4480584



Corporate executives, tax authorities, investment bankers, and other practitioners are constantly pressed to assess value (see Rappaport and Mauboussin 2002).

Even if you believe the market is efficient in the long run, you may not want to accept daily prices at face value if you suspect the market is bubbly or noisy in the short term (Black 1986). In contrast, corporate executives seek to exploit market conditions for advantageous stock repurchases, public offerings, and merger and acquisition activities. Similarly, tax authorities and regulators must appraise estates, assess the fairness of transfer prices, and assess the viability of employee stock option plans. When disputes arise, their appraisals are litigated in court, where a judge is charged with choosing a final appraisal, perhaps after weighing conflicting testimony from adversarial financial analysts.

In theory, valuing a company is not rocket science. According to finance theory, value is the sum of expected cash flows suitably discounted. One way to estimate equity value is the discounted cash flow method. DCF analysis is implemented by forecasting expected cash inflows from operations, netting them against forecasted payments to creditors, and then discounting. The discount factor, which depends on interest rates, market risk, and leverage, can be estimated from the capital asset pricing model (CAPM) and the weighted-average cost of capital (Kaplan and Ruback 1995). In a world of certainty with perfect markets, DCF analysis would be cut-and-dried. But because of uncertainty in prospective cash flows and the need to adopt a value for the elusive equity risk premium when applying the CAPM (Arnott and Bernstein 2000; Asness 2000), DCF analysis inevitably leads to an imprecise answer.
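To make the mechanics concrete, here is a minimal flows-to-equity sketch discounted at a CAPM cost of equity (a WACC-based enterprise variant is equally common). The function names and every number below are hypothetical illustrations, not figures from the article.

```python
def capm_cost_of_equity(risk_free, beta, equity_risk_premium):
    """CAPM: required return = risk-free rate + beta * equity risk premium."""
    return risk_free + beta * equity_risk_premium


def dcf_equity_value(cash_flows_to_equity, cost_of_equity, terminal_growth):
    """Discount forecast cash flows to equity (operating inflows net of
    forecasted payments to creditors) and add a constant-growth terminal value."""
    r = cost_of_equity
    pv = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows_to_equity, start=1))
    terminal = cash_flows_to_equity[-1] * (1 + terminal_growth) / (r - terminal_growth)
    return pv + terminal / (1 + r) ** len(cash_flows_to_equity)


# Hypothetical inputs; the imprecision described in the text enters mainly
# through the cash flow forecasts and the elusive equity risk premium.
r_e = capm_cost_of_equity(risk_free=0.04, beta=1.2, equity_risk_premium=0.05)
value = dcf_equity_value([80.0, 86.0, 92.0, 97.0, 101.0],
                         cost_of_equity=r_e, terminal_growth=0.03)
```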

As things are, different valuation procedures applied to the same company often yield disparate answers, and no single procedure is conclusively the most precise and accurate in all situations. In this sense, estimated value, like beauty, lies in the eye of the beholder. Accordingly, the financial analysis industry has spawned a creative menu of estimation techniques and even variations on each technique. New valuation textbooks (for instance, Damodaran 2001; English 2001; Penman 2001) promulgate several techniques. No technique universally dominates the others in all contexts, and multiple techniques are typically applied in any given situation.

Therefore, financial analysts frequently run through more than one methodology when asked to value a company. For instance, the joint AOL Time Warner proxy statement (SEC Form S-4, 19 May 2000) advising shareholders about the attractiveness of their proposed merger described the results of three valuation methodologies applied to a "high" value and a "low" value scenario. As indicated in Table 1, the analysis provided six distinct value estimates for Time Warner and four distinct value estimates for AOL.1 The AOL valuations reflect a 300 percent difference between the lowest and the highest estimates. And the Time Warner valuations reflect a 33 percent difference.

The AOL Time Warner proxy statement is silent with respect to the relative credibility or accuracy of these value estimates. Indeed, presenting a range of values without commenting on their relative credibility is common practice. But if every value estimate is an incremental piece of information, shouldn't it be possible to form a better estimate by combining individual estimates into an aggregate estimate? If so, how should an analyst combine several value estimates into a superior value estimate? The extant literature offers surprisingly little guidance on this issue.

This article aims to help fill the void and inspire further research on this question by proposing simple rules for combining estimates. To this end, I draw from the Delaware Block Method (a legal precedent honed by decades of contentious appraisal litigation), Bayesian decision theory, and forecasting science.

Kenton K. Yee is assistant professor of accounting at Columbia Business School, New York City.


Table 1. Value Estimates for Time Warner and AOL Shares

                          Time Warner Shares        AOL Shares
Methodology                 Low       High         Low       High
Public market             $69.58     $81.46      $45.47    $136.28
Private market             83.47      92.37
Discounted cash flow       71.45      78.16       50.93     107.24

Notes: Immediately prior to announcement of the merger proposal, Time Warner had been trading at $64.75 a share and AOL at $73.75 a share. The proxy statement defined the methodologies as follows: A "public market analysis reviews a business' operating performance and outlook relative to a group of publicly traded peer companies.... A private market analysis provides a valuation range based upon financial information of the companies involved in selected recent business transactions . . . that have been publicly announced and which are in the same or similar industries.... A discounted cash flow analysis derives the intrinsic value of a business based on the net present value of the future free cash flow anticipated to be generated by the assets of the business."

These bodies of knowledge and experience suggest five lessons:
1. Estimate value as a linear weighted average of all available value estimates, including current market price, if it is available.
2. Take advantage of the benefits of diversification by incorporating as many bona fide value estimates as available.
3. If you believe some of the estimates are more accurate and precise than others, assign greater weight to the more accurate and precise estimates.
4. Take an equally weighted "simple" average of all available estimates. In practice, this approach usually works just as well as more sophisticated weighting procedures.
5. Perhaps try statistical back testing to peer-group or historical data, but be careful. Back testing may help determine the optimal weights, but it comes with its own set of caveats.

The remainder of this article provides the motivation for and explanation of these lessons.

Delaware Block Method

Surveys of legal proceedings and U.S. IRS tax-related appraisal cases (Beatty, Riffe, and Thompson 1999) show that courts consistently appraise "fair value" as a weighted average of valuation estimates arrived at by the traditional methodologies, such as DCF analysis and the "method of comparables" (in which an asset's value is estimated by comparing its valuation metrics with the metrics of similar assets with known prices). A reading of judicial opinions suggests that judges may believe that averaging over several value estimates diversifies away idiosyncratic errors or biases and, therefore, improves the reliability of the final appraisal.

The Delaware Block Method nicely illustrates Lesson 1: Estimate value as a linear weighted average of all available bona fide value estimates, including market price, if it is available. In the years following the Great Depression, the Delaware Court of Chancery, probably the most influential corporate law jurisdiction in the United States, developed a valuation procedure that by the late 1970s had become known as the Delaware Block Method. By 1983, virtually all U.S. courts routinely used the Block Method for appraising equity value. Although judges do not cast the Block Method in algebraic terms, it can be presented in a simple formula. The Block Method prescribes estimating three value measures:
* the contemporaneous market price, P,
* the net asset value of equity, b, and
* the five-year trailing earnings average, e, capitalized by φ, a number estimated from P/Es of comparable companies.

Then, the fair value of equity at date t, Vt, is estimated as the weighted average of the three value measures:

Vt = (1 - Kb - Ke)P + Kb·b + Ke·φe.

The weights (1 - Kb - Ke), Kb, and Ke are numbers between 0 and 1 that are adjudicated on a case-by-case basis. The earnings capitalization factor, φ, which is also determined on a case-by-case basis, typically ranges between 5 and 15.
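For concreteness, here is a minimal sketch of the Block Method arithmetic. The function name is mine; all inputs other than the $64.75 pre-announcement Time Warner price quoted in the notes to Table 1 are hypothetical, and the weights echo the Table 2 averages.

```python
def delaware_block_value(price, net_asset_value, trailing_eps, phi, k_b, k_e):
    """Delaware Block weighted average of market price P, net asset value b,
    and capitalized five-year trailing earnings, phi * e.

    k_b and k_e weight net asset value and capitalized earnings; the market
    price receives the residual weight 1 - k_b - k_e.
    """
    k_p = 1.0 - k_b - k_e
    assert 0.0 <= k_b <= 1.0 and 0.0 <= k_e <= 1.0 and 0.0 <= k_p <= 1.0
    return k_p * price + k_b * net_asset_value + k_e * phi * trailing_eps


# Hypothetical net asset value, earnings, and capitalization factor; the
# weights are roughly the Table 2 averages (0.27, 0.46, 0.27).
fair_value = delaware_block_value(price=64.75, net_asset_value=70.00,
                                  trailing_eps=5.00, phi=12.0,
                                  k_b=0.46, k_e=0.27)
```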

A notable feature of the Delaware Block Method is that it treats market price as just another value estimate, on the same footing as net asset value and capitalized earnings. Thus, the courts evidently do not believe in the strong form of market efficiency. Rather, the courts are essentially consistent with Black's contention:

All estimates of value are noisy, so we can never know how far away price is from value. However, we might define an efficient market as one in which price is within a factor of 2 of value.... By this definition, I think almost all markets are efficient almost all of the time. (p. 533)

Black's assertion implies that market price is a noisy estimate of value, not necessarily better or worse than any other estimate of value. Price may deviate from intrinsic value for many reasons. If a security is thinly traded, for example, illiquidity may cause temporary price anomalies.2 Alternatively, "animal spirits" may cause the prices of hotly traded issues to swing inexplicably. For such reasons, the level of price efficiency may fluctuate over time.


How can you discern how much weight to give P, b, and e in the Delaware Block Method? Should you weight market value more than net asset value? Should you weight net asset value more than earnings value? The simplest choice might be to take an equally weighted average, but even this parsimonious rule is ad hoc and unjustified.

On this issue, the IRS valuation rules, although honed by decades of spirited, high-stakes litigation and expensive expert testimony, do not lend any insight. Revenue Ruling 59-60 advises that "where market quotations are either lacking or too scarce to be recognized," the tax appraiser should set 1 - Kb - Ke = 0 and focus on earnings and dividend-paying capacity.3 For the method of comparables, the tax appraiser should use the "market price of corporations engaged in the same or similar line of business." Upon recognizing that "determination of the proper capitalization rate presents one of the most difficult problems," the ruling advises that "depending upon the circumstances in each case, certain factors may carry more weight than others ... valuations cannot be made on the basis of a prescribed formula."

Similarly, the Delaware courts have not articulated a guiding principle for picking the weights. Indeed, the articulated rule is:

No rule of thumb is applicable to weighting of asset value in an appraisal ... ; rather, the rule becomes one of entire fairness and sound reasoning in the application of traditional standard and settled Delaware law to the particular facts of each case.4

Nonetheless, an examination of the litigation outcomes reveals a definite trend: Net asset value is usually weighted almost twice as much as market value or earnings value. In the list of frequently cited judicial appraisals shown in Table 2, net asset value, Kb, is given 46 percent of the weight, on average, almost twice the average weight given to market value and earnings value. The weights vary tremendously from case to case, however, as shown by the standard deviation of Kb of 26 percent, more than half the average value. The market value and earnings weights also have substantial standard deviations.5

Table 2. Weights Given to Variables in Court Cases

Case                                        Year    1 - Kb - Ke    Kb     Ke
Levin v. Midland-Ross                       1963        0.25      0.50   0.25
In re Delaware Racing Association           1965        0.40      0.25   0.35
Swanton v. State Guarantee Corp.            1965        0.10      0.60   0.30
In re Olivetti Underwood Corp.              1968        0.50      0.25   0.25
Poole v. N.V. Deli Maatschappij             1968        0.25      0.50   0.25
Brown v. Hedahl's-Q B&R                     1971        0.25      0.50   0.25
Tome Land & Improvement Co. v. Silva        1972        0.40      0.60   0.00
Gibbons v. Schenley Industries              1975        0.55      0.00   0.45
Santee Oil Co. v. Cox                       1975        0.15      0.70   0.15
In re Creole Petroleum Corp.                1978        0.00      1.00   0.00
In re Valuation of Common Stock of Libby    1979        0.40      0.20   0.40
Bell v. Kirby Lumber Corp.                  1980        0.00      0.40   0.60
Average                                                 0.27      0.46   0.27
Standard deviation                                      0.18      0.26   0.17

Notes: Seligman (1984) identified these cases as representative Block Method cases. Although Brown, Libby, Santee Oil, and Tome Land are not Delaware court cases, they all used the Delaware Block Method.

Guidance from Bayesian Theory

Bates and Granger (1969) worked out a Bayesian framework for combining forecasts, such as earnings or revenue forecasts. With careful reinterpretation, their approach may be adapted into a Bayesian framework for combining value estimates (Yee 2003). The result provides formal support for Lesson 2 (because each bona fide value estimate provides incremental information, incorporate every available estimate with nonzero weight) and Lesson 3 (weight more credible estimates more and less credible estimates less).

Lesson 2 advises against throwing out bona fide value estimates. Every value estimate, even if it disagrees with what the appraiser expects or desires, is a piece of incremental information about intrinsic value. Hence, in the absence of persuasive justification to the contrary, you should not omit any bona fide value estimate. Systematically discarding value estimates based on your expectations or desires introduces subjectivity and, therefore, selection bias.

July/August 2004 www.cfapubs.org 25

This content downloaded from 185.44.78.190 on Wed, 18 Jun 2014 00:35:04 AMAll use subject to JSTOR Terms and Conditions

Page 5: Combining Value Estimates to Increase Accuracy

Financial Analysts Journal

Lesson 3 recognizes that some value estimates may have more credibility than others and thus deserve more weight. For instance, when you have extremely reliable cash flow forecasts, you are quite justified in placing more weight on the DCF value. Depending on how much relative confidence you have in the result of DCF analysis and the method of comparables, you may decide to assess the final value as a weighted average, with the DCF value getting, for instance, a 70 percent weight and the method of comparables getting a 30 percent weight.

Formal Bayesian theory provides mathematical formulas for the optimal weights based on Bayes' rule from probability theory (Yee). The formulas are difficult to use, however, because they depend on parameters that are themselves difficult to estimate, such as the standard error sizes of the individual value estimates. For this reason, as described in the next section, studies show that Bayesian recipes for combination do not do any better than straightforward recipes, such as using the equally weighted average. Ultimately, you are probably better off expending effort on improving the individual value estimates than on developing more intricate procedures to finely tune the relative weights on each estimate. Additional research into combining value estimates is needed, however, to address this issue.
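To make the dependence on standard errors concrete, the sketch below shows inverse-variance (precision) weighting in the spirit of Bates and Granger, under the simplifying assumptions that the estimates are unbiased and their errors uncorrelated. The function name and all numbers are illustrative, and the standard errors it requires are exactly the parameters the text warns are hard to estimate.

```python
def precision_weights(std_errors):
    """Combination weights proportional to 1 / variance (precision weighting).

    Assumes the value estimates are unbiased and their errors uncorrelated.
    With equal standard errors this collapses to the equally weighted average.
    """
    precisions = [1.0 / s ** 2 for s in std_errors]
    total = sum(precisions)
    return [p / total for p in precisions]


# Hypothetical standard errors (dollars per share) for a DCF estimate,
# a comparables estimate, and the market price; none come from the article.
weights = precision_weights([8.0, 12.0, 10.0])
estimates = [75.0, 90.0, 81.0]
combined = sum(w * v for w, v in zip(weights, estimates))
```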

In summary, Lessons 2 and 3 encapsulate the most useful implications of Bayesian decision theory: Take weighted averages of all available value estimates and give more weight to the more credible estimates.

A Lesson from Forecasting Research

Armstrong (1989), Clemen (1989), and Diebold (1989) summarized the research into combining forecasts that was inspired by Bates and Granger's seminal work. This research, which is part theoretical, part experimental, and part behavioral, has two major conclusions, one expected and one surprising. The expected conclusion, which reaffirms Lesson 1, is that combining forecasts reduces forecast error (relative to the average error of the individual forecasts). The surprising conclusion is that in experimental settings with human test subjects, an equally weighted average performs as well as any of the formal, "more sophisticated" Bayesian formulas. This conclusion leads to Lesson 4: A simple equally weighted average performs as well as more sophisticated statistical approaches.

Committing at the outset to the use of an equally weighted average has several advantages. First, it avoids the need to estimate weights, a process that may itself introduce additional estimation error. Second, precommitting to an equally weighted average reduces the subjective degrees of freedom. In a variable-weight average, the appraiser has subjective freedom to discount undesirable estimates, either purposely or subconsciously, by biasing the weights in the weight estimation process.

As reported by Armstrong, Clemen, and Diebold, research shows that combining forecasts by using preset mechanical rules consistently, on average, beats individual forecasts or forecasts in which the weights are subjectively adjusted after the fact on the basis of "judgment" or "intuition." In light of this research, to minimize the chances of subjective bias, you should commit to a weighting rule before obtaining the individual value estimates. To do so, start with an equally weighted average as the default benchmark. Then ask yourself whether you have reason to trust a particular valuation method more than the others in the given context. If so, raise its weight and make compensating adjustments to the other weights so they still sum to 1. Repeat this process until you are satisfied with all the weights. Finally, commit to these weights before performing the individual estimates.
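Read mechanically, that procedure amounts to tilting equal weights and then renormalizing. The sketch below is one way to encode it, assuming that "raising a weight" means applying a multiplicative trust factor before renormalizing; the function name and the tilt value are hypothetical.

```python
def precommitted_weights(methods, trust=None):
    """Start from equal weights, tilt toward methods trusted more in this
    context, and renormalize so the weights still sum to 1.

    methods -- list of method names
    trust   -- optional dict of multiplicative tilts (1.0 means no tilt)
    """
    trust = trust or {}
    raw = {m: trust.get(m, 1.0) for m in methods}
    total = sum(raw.values())
    return {m: w / total for m, w in raw.items()}


# Commit to these weights *before* producing the individual value estimates.
weights = precommitted_weights(["dcf", "comparables", "market_price"],
                               trust={"dcf": 1.5})  # hypothetical tilt
```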

Back Testing and Large-Sample Studies

If you have an appropriate and sufficiently large sample of historical or peer-group data, the optimal weights may be computed empirically by using statistical techniques. This is the final rule, Lesson 5: If appropriate data are available, determine weights by back testing to historical or peer-group data.

For example, given a sample of peer companies whose market prices, p, are efficient (or at least more efficient than the market price of the subject company) and given DCF estimates, VDCF, and method-of-comparables estimates, Vcomp, for each of the peers, you can run the regression

p = (1 - α)VDCF + αVcomp + noise

for the peer sample to determine the weights 1 - α and α for the DCF and method-of-comparables estimates. Alternatively, you can try to estimate optimal weights empirically by using historical time-series data for the subject company.
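Here is a minimal sketch of that peer-group regression, assuming NumPy is available and rewriting the model as p - VDCF = α(Vcomp - VDCF) + noise so that ordinary least squares through the origin recovers α. The function name and the peer data are hypothetical.

```python
import numpy as np


def estimate_alpha(prices, v_dcf, v_comp):
    """Least-squares weight alpha on the comparables estimate (1 - alpha on
    the DCF estimate), fitted to peer-company market prices by rewriting the
    regression as p - V_DCF = alpha * (V_comp - V_DCF) + noise."""
    prices, v_dcf, v_comp = map(np.asarray, (prices, v_dcf, v_comp))
    y = prices - v_dcf
    x = v_comp - v_dcf
    return float(np.dot(x, y) / np.dot(x, x))  # OLS through the origin


# Hypothetical peer data: one entry per peer company.
alpha = estimate_alpha(prices=[50.0, 62.0, 41.0, 80.0],
                       v_dcf=[48.0, 60.0, 45.0, 75.0],
                       v_comp=[55.0, 66.0, 40.0, 85.0])
weights = (1.0 - alpha, alpha)  # (DCF weight, comparables weight)
```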

Peer testing and back testing, however, are "buyer beware" endeavors. If the optimal weights change over time, back testing reveals only the historical weights, which may no longer apply. Similarly, determining weights from peer-group data requires identification of the subject company's peer group, a task that is fraught with issues and gray areas (Bhojraj and Lee 2002). If the peer group is misidentified, the weights determined may be irrelevant or, worse, misleading.


Numerous academic studies have run statistical horse races between DCF analysis and various versions of the method-of-comparables analysis on large samples to determine whether any particular method dominates and deserves 100 percent weight (Kaplan and Ruback; Gilson, Hotchkiss, and Ruback 2000; Lie and Lie 2002; Liu, Nissim, and Thomas 2002; Yoo 2002). The results have been largely negative, although Liu et al. and Yoo showed that forward earnings, when consensus analyst forecasts are available, consistently deserve more weight than other method-of-comparables estimates.6 The Liu et al. and Yoo studies did not, however, race forward earnings against DCF results.

Kaplan and Ruback estimated valuations based on price to earnings before interest, taxes, depreciation, and amortization (EBITDA) and also DCF valuations for a sample of highly leveraged transactions (HLTs). For their sample of 51 HLTs between 1983 and 1989, they found that price-to-EBITDA and DCF analyses gave comparable levels of accuracy and precision for predicting the HLT transaction price. In particular, about half of the value estimates fell within 15 percent of the actual HLT transaction price. Many will argue, however, that this glass is half empty. Differences exceeding 15 percent in the other half of the Kaplan and Ruback sample demonstrate that price-to-EBITDA and DCF results are inconsistent about half the time.

Gilson et al. compared DCF analysis with the method-of-comparables analysis for companies emerging from bankruptcy. Using EBITDA multiples based on the industry median, they found that about 21 percent of the value estimates fell within 15 percent of market values. The authors conjectured that Kaplan and Ruback's HLT valuations were more precise than the bankruptcy ones because of greater information efficiency in the HLT setting.

Using 1998 financial data from Compustat, Lie and Lie found that the method of comparables based on asset values yields more precise and less biased estimates than estimates based on sales and earnings multiples, especially for financial firms. As did Liu et al. and Yoo (whose studies were based on a much longer time period), Lie and Lie concluded that forward P/Es are more accurate and precise estimators than trailing ones. Finally, Lie and Lie reported that the accuracy and precision of all the method-of-comparables estimates varied greatly according to company size, profitability, and the extent of intangible value. The message is clear: The optimal weights are context specific.

Summary Remarks

Combining value estimates makes sense because every bona fide estimate provides information, so relying on only one estimate ignores information. Combining takes advantage of the benefits of diversification by averaging away idiosyncratic biases and errors in the individual estimates. Drawing from the Delaware Block Method, Bayesian decision theory, and forecasting research, this article proposed five rules of thumb for combining two or more value estimates into a superior value estimate.

As a procedure, however, combining has unresolved issues. Bayesian theory says that value estimates should be combined as a linear weighted average. But without a reliable peer group of efficiently priced comparable companies, determining what the weights should be is usually difficult. Moreover, there is no way to evaluate whether the weights, once chosen, are correct. If the optimal weights vary over time, estimating them by back testing to historical time series is inappropriate.

Nevertheless, despite these problems, combining promises enough benefits to warrant much more attention from practitioners, as well as academic researchers, than it has attracted in the past.

Notes

1. I thank Russell Lundholm for calling my attention to this proxy statement. The proxy statement also describes several additional value estimates in a postmerger "synergies" scenario that I do not go into here.

2. For example, a large block trade of microcap shares may cause temporary price volatility. In contrast, nonsynchronous trading may cause apparent excessive price stability, as seen in emerging equity markets.

3. The Internal Revenue Code addresses the valuation of closely held securities in Section 2031(b). The standard of value is "fair market value," the price at which willing buyers or sellers with reasonable knowledge of the facts would be willing to transact. Revenue Ruling 59-60 (1959-1 C.B. 237) sets forth the IRS's interpretation of IRC Section 2031(b).

4. Bell v. Kirby Lumber Corp., Del. Supr., 413 A.2d 137 (1980).

5. For comparison, a random number homogeneously distributed between 0 and 1 has an average value of 50 percent and a standard deviation of 28.9 percent. Because the market value and earnings value weights have a significantly smaller spread and their means differ from 50 percent, these weights are apparently not homogeneously distributed random variables.
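For reference, those benchmark figures are simply the first two moments of a uniform random variable on [0, 1]:

```latex
\mu = \int_0^1 x \, dx = \frac{1}{2}, \qquad
\sigma = \left[ \int_0^1 \left( x - \tfrac{1}{2} \right)^2 dx \right]^{1/2}
       = \sqrt{\tfrac{1}{12}} \approx 0.289 .
```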

6. In a large-scale study of Compustat companies, Yoo empirically examined whether a linear combination of several univariate method-of-comparables estimates would achieve better value estimates than any comparables estimate alone. He concluded that the forward P/E essentially beats the trailing ratios or a combination of trailing ratios so much that combining the forward P/E with other benchmark estimates does not help.


References

AOL Time Warner, Inc. 2000. Amendment No. 4 to Form S-4, as filed with the U.S. SEC, Registration No. 333-30184 (19 May).

Armstrong, J. 1989. "Combining Forecasts: The End of the Beginning or the Beginning of the End?" International Journal of Forecasting, vol. 5, no. 4:585-588.

Arnott, R., and P. Bernstein. 2000. "What Risk Premium Is 'Normal'?" Financial Analysts Journal, vol. 56, no. 2 (March/April):64-84.

Asness, C. 2000. "Stocks versus Bonds: Explaining the Equity Risk Premium." Financial Analysts Journal, vol. 56, no. 2 (March/April):96-113.

Bates, J., and C. Granger. 1969. "The Combination of Forecasts." Operational Research Quarterly, vol. 20:451-468.

Beatty, R., S. Riffe, and R. Thompson. 1999. "The Method of Comparables and Tax Court Valuations of Private Firms: An Empirical Investigation." Accounting Horizons, vol. 13, no. 3 (September):177-199.

Bhojraj, S., and C. Lee. 2002. "Who Is My Peer? A Valuation-Based Approach to the Selection of Comparable Firms." Journal of Accounting Research, vol. 40, no. 2 (May):407-439.

Black, F. 1986. "Noise." Journal of Finance, vol. 41, no. 3 (July):529-542.

Clemen, R. 1989. "Combining Forecasts: A Review and Annotated Bibliography." International Journal of Forecasting, vol. 5, no. 4:559-583.

Damodaran, A. 2001. The Dark Side of Valuation. Upper Saddle River, NJ: Prentice Hall.

DeAngelo, L. 1990. "Equity Valuation and Corporate Control." Accounting Review, vol. 65, no. 1 (January):93-112.

Diebold, F. 1989. "Forecasting Combination and Encompassing: Reconciling Two Divergent Literatures." International Journal of Forecasting, vol. 5, no. 4:589-592.

English, J. 2001. Applied Equity Analysis. New York: McGraw-Hill.

Gilson, S.C., E.S. Hotchkiss, and R.S. Ruback. 2000. "Valuation of Bankrupt Firms." Review of Financial Studies, vol. 13, no. 1 (Spring):43-74.

Kaplan, S.N., and R. Ruback. 1995. "The Valuation of Cash Flow Forecasts: An Empirical Analysis." Journal of Finance, vol. 50, no. 4 (September):1059-1093.

Lie, E., and H. Lie. 2002. "Multiples Used to Estimate Corporate Value." Financial Analysts Journal, vol. 58, no. 2 (March/April):44-53.

Liu, J., D. Nissim, and J. Thomas. 2002. "Equity Valuation Using Multiples." Journal of Accounting Research, vol. 40, no. 1 (March):135-172.

Penman, S. 2001. Financial Statement Analysis and Security Valuation. Boston, MA: McGraw-Hill.

Rappaport, A., and M. Mauboussin. 2002. "Valuation Matters." Harvard Business Review (March):24-25.

Seligman, J. 1984. "Reappraising the Appraisal Remedy." George Washington Law Review, vol. 52, no. 3:829-871.

Yee, K. 2003. "Combining Valuation Estimates: A Bayesian Framework." Working paper, Columbia University.

Yoo, Y. 2002. "Equity Valuation Using Combination of Multiples." Working paper, Columbia University.
