Effectiveness of Earnings Per Share Forecasts
Author(s): Timothy E. Johnson and Thomas G. Schmitt
Source: Financial Management, Vol. 3, No. 2 (Summer, 1974), pp. 64-72
Published by: Wiley on behalf of the Financial Management Association International
Stable URL: http://www.jstor.org/stable/3665292
Accessed: 16/06/2014



EFFECTIVENESS OF EARNINGS PER SHARE FORECASTS

TIMOTHY E. JOHNSON and THOMAS G. SCHMITT

Dr. Johnson is Assistant Professor of Finance at the University of Cincinnati and received his PhD from the University of Illinois. His teaching and research have been in the areas of corporate finance and investments, and he is the author of several articles and books in the area of financial management. Mr. Schmitt is manager of Inventory and Production Control for The G. A. Gray Company, a division of Warner and Swasey.

Because a company's stock price will probably be affected by its future earning power, the investment community as a whole, and security analysts in particular, are keenly interested in forecasting this earning power. An educated forecast of earnings per share should consider such causative factors as product demand, costs of operation, interest rates, and shares outstanding, to name but a few. These data have traditionally been segregated into internal information and publicly disseminated external information that allows rational investors equal access.

Numerous studies have attempted to use mathematical formulas on external information to predict future earnings. Similarly, other studies have tested compiled earnings predictions made by professional analysts. The consensus is that one cannot effectively use external information in forecasting earnings. This may be due to the lack of rigor of the analytical techniques employed, inappropriate techniques of reporting financial data, or randomness inherent in the data.

While a wealth of external information is available on companies listed on the major exchanges, internal information is generally limited to the company's management because of legal constraints. Many persons feel that inside information can provide a distinct comparative advantage if used to make investment decisions, and there is great interest in publishing financial forecasts.

This study augments previous research using well-known mathematical methods that utilize external information. The results are rigorously analyzed and the effectiveness of the forecast models is determined. If the results of our mathematical forecasting are reliable, thus conflicting with previous research, then the need for more published insider information and better financial reporting techniques may diminish.

Systematic methods of deriving earnings per share forecasts may be classified into five categories [3]. Mechanical models are forecasts based upon mathematical extrapolations of the history of earnings per share; included are random methods, averages, trend projections, exponential smoothing, autocorrelations, and harmonic analysis. Leading index methods utilize economic series that are highly correlated with, and tend to lead, the earnings per share to be forecast. The third category, comparative pressures, utilizes relationships between the earnings per share series and other data that may be more accurately predicted. Fourth is opinion polls, which are compilations of forecasts made by people who presumably have access to information and techniques not directly available to the polltaker. Finally, econometric models are concerned with the dynamic influence of the various economic factors that combine to determine earnings per share.

This study is limited to the first and last categories. A number of mechanical forecasting models are tested on the earnings per share of various corporations to determine the most effective of these long-range forecasting devices.

An econometric model is used to examine the collective effects of certain factors, other than historical earnings per share, on forecast effectiveness. These components are tested to determine whether mechanical forecasts can be used independently of other factors in making reliable earnings per share predictions.

Alternative Mechanical Models

Selected for study are moving averages, linear and exponential trends, and single, double, and triple exponential smoothing [4]. Since these models contain parameters that must be optimized, alternative methods of choosing them are also examined in three of the techniques. In addition, a naive forecast model containing no mathematical derivation is introduced as a control.

Naive Models

For this model even the most elementary mathematical computation is not required because the forecast is simply the last piece of available data.

If At represents the reported earnings per share for the present period (t), then the forecast for the kth future period (Ft+k) is:

Ft+k = At

All data except the most recently reported information is ignored.

Moving Average Model

Often the naive model is unreliable in forecasting time series because point estimates usually contain large random components. The moving average [3, Chap. 3] provides a simple method of mathematically removing this irregular component by smoothing the random fluctuations over several successive periods. The moving average forecast (Ft+k) is equal to the arithmetic mean of the historical earnings per share reported over the last N periods:

Ft+k = (At−N+1 + At−N+2 + … + At) / N

In this method equal weights are afforded to a series of past data. The base (N) is a parameter which must be optimized in order to compare the moving average with other forecasting techniques. If N is large, irregular fluctuations will be smoothed over more data, thus removing more of the random influence. However, less weight is placed on the most recent observations, so the model is less sensitive to both temporary and permanent changes in the data. Here N is varied between a three-year average and an eight-year average, inclusive.
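As a concrete illustration, the moving-average forecast can be sketched in a few lines of Python; the function name and the sample figures are ours, for illustration only:

```python
def moving_average_forecast(history, n):
    """Forecast every future period k as the arithmetic mean of the
    last n reported earnings-per-share figures."""
    if len(history) < n:
        raise ValueError("need at least n observations")
    return sum(history[-n:]) / n

# With a three-year base, temporary jumps are smoothed into the mean.
eps = [1.00, 1.10, 1.25, 1.40, 1.55]
forecast = moving_average_forecast(eps, 3)   # mean of the last three values
```

Note that the same value serves as the forecast for k = 1, 2, and 3 periods ahead; the model carries no trend.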

Linear Trend Projection

Linear trend projection [8, Chap. 13] is a method of detecting and forecasting a constant amount of growth or loss in data, which is characterized by the slope (b) of the regression line. A line is fitted to a given number (N) of past data points by using the least squares criterion. The point at which the line intersects the earnings per share axis may be represented by (a) in the model:

Ft+k = a + b(N + k).

The problem with the moving average model is also inherent in the trend model: the trend projection depends upon the number of previous years (N) considered. The first method of solving for the parameter is the same procedure outlined in the average model (moving a fixed base over time). These forecasts are then compared with the actual earnings per share to determine the optimal fixed base for the linear trend model. The second method is to fit the trend line to all available past data; an 8- through a 19-year trend projection may be used, depending on the amount of information available at the time of the forecast.
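The least squares fit and extrapolation can be sketched as follows (function name and test series are our constructions):

```python
def linear_trend_forecast(history, k):
    """Fit A_i = a + b*i over i = 1..N by least squares, then
    extrapolate the line to period N + k."""
    n = len(history)
    xs = list(range(1, n + 1))
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    b = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
         / sum((x - x_mean) ** 2 for x in xs))
    a = y_mean - b * x_mean
    return a + b * (n + k)

# A perfectly linear earnings series is extrapolated exactly.
f = linear_trend_forecast([1.0, 1.2, 1.4, 1.6], 1)
```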

Exponential Trend Projection

Many series, particularly economic data, contain a trend component increasing or decreasing exponentially, the compound interest banks pay to their depositors being an example [8, Chap. 13].

In the following equation let (a) represent a multiplicative constant and (b) represent the slope of the exponential line.

Ft+k = a·b^(N+k)


Again, the base of the model (N) is a parameter which must be optimized. The method of solving for N is to fit the exponential model to all available past data (variable base method). The fixed moving base method of solving for N is not used because the exponential model is very sensitive to random influences and tends to oscillate if these random effects are not dampened. By considering as many data points as possible, random effects are effectively removed.

Limitations on the slope (b) of the linear and exponential trend models are that the linear slope cannot exceed plus or minus 25%, while the exponential slope must be at least .75 and no greater than 1.25. Excessive slopes have been found to cause random oscillation and poor forecast performance.

The least squares criterion (with logarithmic transformation) is used to fit the exponential model to the data.
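A sketch of the log-transformed fit, including the slope clamp described above; it assumes strictly positive earnings (the log transform is undefined otherwise), and the re-centering of the intercept after clamping is our choice:

```python
import math

def exponential_trend_forecast(history, k):
    """Fit F = a * b**i by least squares on log(A_i) over i = 1..N,
    clamp the growth factor b to [0.75, 1.25], and extrapolate to N + k."""
    n = len(history)
    logs = [math.log(y) for y in history]
    x_mean = (n + 1) / 2                      # mean of i = 1..n
    y_mean = sum(logs) / n
    slope = (sum((i - x_mean) * (y - y_mean) for i, y in enumerate(logs, 1))
             / sum((i - x_mean) ** 2 for i in range(1, n + 1)))
    b = min(1.25, max(0.75, math.exp(slope)))
    a = math.exp(y_mean - math.log(b) * x_mean)  # intercept through the mean
    return a * b ** (n + k)
```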

Single Exponential Smoothing

The average and trend models assume that all previous data in the base (N) have an equal effect on the value of future earnings per share, an assumption that is often invalid because the most recent information usually has more effect on the future response than older information. Logically this dilemma might be resolved by compromising between rapidly adapting to the response and smoothing random behavior over time: an exponentially decreasing weight is placed on the data as it becomes successively older, such that the sum of these weights is equal to 100%. The single exponential smoothing model [2] is actually a weighted average, with the weights following an exponential distribution. If St−1 is the single smoothed mean for the previous time period (t−1) and a represents the smoothing weight, then the single smoothed mean for the present period (St) is:

St = aAt + (1 − a)St−1,

and the forecast for the kth period equals:

Ft+k = St

The weight (a) may vary between 1.0, where the forecast equals the last piece of data in a series, and 0.0, where the forecast equals the original piece of data in a series. As long as the parameter (a) is strictly between 0 and 1, no data are completely ignored. For example, if the value of the smoothing constant is a = 0.3, then:

Time period           Weight
t (present period)     .3000
t−1                    .2100
t−2                    .1470
t−3                    .1029
(all others)           .2401
                      ------
                      1.0000

The forecaster is again presented with the problem of optimizing a parameter, for which two methods are used. In the first, values of a are varied between 0 and 1 by .05 increments on all previously available data. For example, if 1962 is the year to be forecast (forecast for the next year, k = 1) for a given company, then the values of alpha are varied until the mean absolute percent error between the forecasts for the prior years (1952-61) and the actual values is minimized. (To maintain consistency, the mean absolute percent error criterion, shown in research to provide forecasts superior to minimizing squared forecast errors and mean absolute error, is used both in optimizing the parameter and in evaluating the overall performance of the model.) This optimum factor is then used to forecast the 1962 earnings per share. The weighting factor is reoptimized for each successive year, until the 1971 data are forecast from the previous 19 years (1952-70). This process is repeated on each company, until forecasts are determined for all those in the sample.
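The simulation method just described can be sketched as below; the helper names and toy series are ours, and initializing the smoothed mean on the first observation is an assumption the paper does not spell out:

```python
def single_smooth_forecast(history, alpha):
    """S_t = alpha*A_t + (1 - alpha)*S_{t-1}; the forecast for any k is S_t.
    The smoothed mean is initialized on the first observation (our choice)."""
    s = history[0]
    for a_t in history[1:]:
        s = alpha * a_t + (1 - alpha) * s
    return s

def optimize_alpha(history):
    """Grid-search alpha over 0.00, 0.05, ..., 1.00, minimizing the mean
    absolute percent error of one-step forecasts over the available history."""
    best_alpha, best_mape = 0.0, float("inf")
    for step in range(21):
        alpha = step * 0.05
        errors = [abs((history[t] - single_smooth_forecast(history[:t], alpha))
                      / history[t])
                  for t in range(1, len(history))]
        mape = sum(errors) / len(errors)
        if mape < best_mape:
            best_alpha, best_mape = alpha, mape
    return best_alpha
```

On a steadily growing series the search settles on alpha = 1.0, collapsing to the naive model; on noisy, level data smaller weights tend to win.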

Trigg and Leach Adaptive Single Smoothing

The second method of solving for alpha in the previous model does not require more than one iteration per forecast. This method [15] is based upon the principle that if the forecast is tracking the actual earnings per share properly, then the weighted sum of the forecast errors (Et) should be close to 0 (positive errors cancelling negative errors). However, if the forecast begins to stray off track, the weighted sum (Et) should increase so that its absolute value approaches the weighted sum of the absolute forecast errors (Dt), causing alpha to increase towards unity. In other words, there has been a significant change in the data, and alpha has increased since only the most recent information is representative of this change.

Et = δ(At − St) + (1 − δ)Et−1

Dt = δ|At − St| + (1 − δ)Dt−1

a = |Et / Dt|


There is a new parameter (δ) introduced, appropriately known as the adaptive constant. Delta is not as sensitive as alpha in the sample and may be chosen arbitrarily from values between 0.1 and 0.9 with only negligible changes in the earnings per share forecasts; it is assigned the value .5 in this study. The Trigg and Leach model reduces the amount of computation required by the simulation model by about 80%.
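A sketch of the adaptive scheme, using the recursive smoothed-error form of the Trigg and Leach tracking signal; initializing the error sums at zero, and taking the one-step error as At − St, are our reading of the model:

```python
def trigg_leach_forecasts(history, delta=0.5):
    """Adaptive single smoothing: each period, alpha is reset to |E_t / D_t|,
    where E_t and D_t are exponentially weighted sums of the signed and the
    absolute one-step errors. Returns the sequence of smoothed forecasts."""
    s = history[0]
    e_sum = d_sum = 0.0
    forecasts = []
    for a_t in history[1:]:
        err = a_t - s
        e_sum = delta * err + (1 - delta) * e_sum
        d_sum = delta * abs(err) + (1 - delta) * d_sum
        alpha = abs(e_sum / d_sum) if d_sum > 0 else 1.0
        s = alpha * a_t + (1 - alpha) * s
        forecasts.append(s)
    return forecasts
```

When the series shifts persistently in one direction, the signed and absolute error sums coincide, alpha is driven to unity, and the model locks onto the most recent datum, exactly the behavior described above.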

Double Exponential Smoothing

If there is a trend component in the data, single exponential smoothing forecasts will tend to lag the actual values because averages, weighted or otherwise, do not account for growth in data. In the double exponential smoothing model [2], the periodic amount of growth in the series (St − St−1) is first determined; then, the linear slope (Bt) is computed by taking an exponentially weighted sum of these periodic changes:

Bt = a(St − St−1) + (1 − a)Bt−1

Finally, the weighted slope (Bt) is added to compensate for the trend lag:

Ft+k = St + ((1 − a)/a)Bt + kBt.
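A compact sketch of the double smoothing forecast; the zero initial slope is our assumption, and alpha must be strictly positive for the trend correction term:

```python
def double_smooth_forecast(history, alpha, k):
    """Single-smooth the series, smooth the period-to-period changes of the
    smoothed mean into a slope B_t, then forecast
    F_{t+k} = S_t + ((1 - alpha)/alpha)*B_t + k*B_t.  Requires 0 < alpha <= 1."""
    s = history[0]
    b = 0.0
    for a_t in history[1:]:
        s_prev = s
        s = alpha * a_t + (1 - alpha) * s_prev
        b = alpha * (s - s_prev) + (1 - alpha) * b
    return s + ((1 - alpha) / alpha) * b + k * b
```

At alpha = 1 the trend term reproduces the last observed change, so a perfectly linear series is extrapolated exactly.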

Chow's Adaptive Double Smoothing

The first method used in determining the appropriate alpha factor is the simulation process, which requires 20 iterations per forecast and is described in the single smoothing model. The second method is Chow's Adaptive Double Smoothing model [5], which reduces the iterations required to three per forecast and is based upon the concept that the smoothing constant should not change significantly in a series over time. An initial alpha is selected, and then two other alpha values are chosen that are plus and minus .05 away from the initial smoothing constant. The alpha value that minimizes the mean absolute percent error of the three forecasts becomes the new initial smoothing constant for the next period. After repeating this process for several periods of reported earnings per share, the smoothing constant should theoretically settle to its appropriate value.

Triple Exponential Smoothing

This model computes a weighted second order trend (Ct) in combination with a weighted linear trend (Bt) and weighted average (St). With Dt the double smoothed mean, Tt represents the triple smoothed mean and Et the triple smoothed forecast at the present time. The forecast (Ft+k) extrapolates Et into the kth period by means of a quadratic adjustment [2].

Tt = aDt + (1 − a)Tt−1

Et = 3(St − Dt) + Tt

Ft+k = Et + k(Bt + 2Ct) + k²Ct

If there is a significant non-linear trend in the data, the second order function should improve the forecast. The method of determining the alpha factor is the simulation process used in the single and double smoothing models.

Description of the Sample

Earnings per share data for a sample of 150 selected industrial companies were compiled from the 1972 edition of Standard and Poor's Compustat Tapes. Every company listed on the tapes is considered, provided that earnings per share are recorded for the full 20 years (1952-71) and that there are at least five companies within the industry. Fifty-two industries and 433 companies are eligible for selection. A final sample of 30 industries with five companies each is randomly selected using the two-stage cluster sampling technique and may be biased in favor of the relatively more mature companies.

The period 1952-61 is used as a base in order to initially fit the models to the earnings per share data. Forecasts are then generated for the years 1962-71 from the previous data for each company and for each of the ten mechanical models. The observations are selected sequentially by the forecast models so that the parameters of each equation are based only upon data that would have been available at the time of the forecast. The parameters are reoptimized and the forecasts are recomputed with each new year of available data.

A sample of 1500 data points (forecast errors) is generated for each model: by utilizing the ten mechanical models on each of the 150 sample companies, earnings per share forecasts are computed for the next year, for the second year following, and for the third year following for each of the years 1962-71.
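The sequential protocol can be sketched as follows; the interface (a `fit` function returning a forecaster) and the toy series are our constructions:

```python
def walk_forward_errors(series, fit, base, horizon):
    """For each target period, refit on only the data available beforehand,
    forecast `horizon` steps ahead, and record the absolute percent error."""
    errors = []
    for t in range(base, len(series) - horizon + 1):
        model = fit(series[:t])            # parameters use history up to t only
        forecast = model(horizon)
        actual = series[t + horizon - 1]
        errors.append(abs((actual - forecast) / actual))
    return errors

# The naive model as a fitted forecaster: every horizon gets the last datum.
naive_fit = lambda history: (lambda k: history[-1])
errs = walk_forward_errors([1.0, 1.1, 1.2, 1.3, 1.4, 1.5], naive_fit,
                           base=3, horizon=1)
```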

The annual earnings per share series of each com- pany is adjusted for non-recurring gains and losses, stock splits and stock dividends.


Evaluative Measures

Explanations of the four evaluative measures used in assessing the effectiveness of the ten mechanical models follow.

Accuracy

A number of criteria have been suggested for determining the accuracy of the forecast values, including maximizing the probability of being exactly right and the probability that errors do not exceed a specified threshold [9, p. 220], and minimizing the standard deviation of forecast errors [15], the current absolute deviation [5], the mean absolute deviation [3, Chap. 3], and the mean absolute percent deviation [14]. The last was selected for use here because percentages readily allow comparison of data with different base values.

To calculate the mean absolute percent deviation, the percent deviation is first derived by dividing the forecast error, actual E.P.S. less forecasted E.P.S. (Ai − Fi), by the actual E.P.S. (Ai). Then, the mean absolute percent deviation statistic (MA%D) is computed by taking an average of the absolute values of the percent deviations over the number of observations in the sample.

         1500
MA%D = (  Σ  |(Ai − Fi)/Ai| ) / 1500
         i=1

This deviation is a relative measure of the magnitude of the forecast errors; if this value is large, a great deal of inaccuracy results.

Bias

A predictive device which tends to over- or under-forecast the actual data is said to be biased. The mean percent deviation (M%D) [15] is used in evaluating the bias of the forecasts.

        1500
M%D = (  Σ  (Ai − Fi)/Ai ) / 1500
        i=1

If a model is unbiased, the deviation equals zero; if the mean is positive (negative), the forecasts tend to understate (overstate) the actual values.
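The two deviation statistics are easily computed side by side; the sample numbers below are invented for illustration:

```python
def ma_pct_dev(actuals, forecasts):
    """Mean absolute percent deviation: average of |(A_i - F_i)/A_i|."""
    return sum(abs((a - f) / a) for a, f in zip(actuals, forecasts)) / len(actuals)

def m_pct_dev(actuals, forecasts):
    """Mean percent deviation: positive values mean the model under-forecasts."""
    return sum((a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actual   = [2.00, 2.50, 2.00]
forecast = [1.80, 2.75, 2.00]
accuracy = ma_pct_dev(actual, forecast)   # percent errors of +10%, -10%, 0%
bias     = m_pct_dev(actual, forecast)    # the signed errors nearly cancel
```

The example shows why both measures are needed: the signed deviations offset to a bias near zero even though each individual forecast misses by 10%.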

Consistency

It is desirable not only to ascertain the mechanical model whose mean forecast errors indicate accuracy and lack of bias, but also to determine the one whose distribution of forecast errors is consistently accurate and unbiased. Mean absolute percent and mean percent deviation are measures of central tendency and do not indicate the degree of dispersion of the percent errors about the mean. If there is a great deal of dispersion in the error distributions, the forecast model may be unreliable to the investor, regardless of the means. The Wilcoxon Matched-Pairs Signed-Ranks test [13] is used to evaluate the consistency of the forecast accuracy and lack of bias.

Computational Efficiency

It is apparent from the descriptions of the mechanical models that there is great variance with regard to technical complexity and the time and cost required to derive the various forecasts. If two models exhibit comparable effectiveness, based upon the other criteria, the most computationally efficient forecasting device should be selected. The .001 level is used in all test statistics; due to the great variance in computational efficiency of the models, it is desirable to use this highly discriminatory significance level.

Performance of the Alternative Mechanical Models

The mean absolute percent and mean percent deviations of each forecast model are shown in Exhibit 1. The percentages can be directly compared to determine the most accurate and unbiased model over the sample companies. Comparatively, the naive control seems to have performed quite well. However, it is possible that the model whose mean is optimal in this sample may not be so over the universe of companies. Since it is not desirable at this point to assume that the distributions of absolute and actual percent errors are normal about the mean, two nonparametric statistics are utilized in testing the significance of the results.

The Friedman Two-Way Analysis of Variance by Ranks [13] is used in testing the hypothesis that the ten samples of percent forecast errors are drawn from the same population. This nonparametric test indicates whether or not the accuracy and bias of all of the models are the same in forecasts of one, two, and three years in the future. The figures shown in Exhibit 2 indicate that they are not the same, since the computed Chi-square values are greater than the critical Chi-square values in each case.
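For reference, the Friedman chi-square statistic of the kind reported in Exhibit 2 can be computed as below; this bare-bones version ranks the models within each block (company-year) and omits the correction for ties:

```python
def friedman_chi_square(*samples):
    """Friedman two-way analysis of variance by ranks for k related samples
    observed over n blocks: chi2 = 12/(n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1),
    where R_j is the rank sum of sample j. No correction for ties."""
    k = len(samples)
    n = len(samples[0])
    rank_sums = [0.0] * k
    for block in zip(*samples):
        order = sorted(range(k), key=lambda j: block[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)
```

The statistic is referred to the critical chi-square on k − 1 degrees of freedom; values above the critical point reject the hypothesis that all models draw their errors from one population.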


Exhibit 1. Earnings Per Share Forecast Effectiveness

                                            Accuracy (MA%D)            Bias (M%D)
Forecast model                          1 year  2 years  3 years  1 year  2 years  3 years
 1. Naive                                29.3%   38.4%    43.7%    2.7%    5.0%     8.0%
 2. Average (3 yr.)*                     34.2%   41.3%    45.3%    5.5%    8.9%    12.7%
 3. Linear trend (6 yr.)*                29.3%   39.8%    47.5%   -0.7%   -2.4%    -1.1%
 4. Linear trend (variable)              36.9%   42.4%    46.7%    6.3%    8.8%    11.0%
 5. Exponential trend (variable)         38.2%   43.2%    47.3%    5.1%    3.5%     4.6%
 6. Single smoothing (Trigg and Leach)   31.1%   38.8%    43.3%    2.4%    3.1%     7.4%
 7. Single smoothing (simulation)        31.7%   40.3%    45.6%    4.6%    9.6%    11.6%
 8. Double smoothing (Chow)              30.9%   39.6%    45.6%    0.9%    4.1%     5.8%
 9. Double smoothing (simulation)        32.6%   40.8%    46.8%    1.8%    4.4%     4.5%
10. Triple smoothing (simulation)        34.2%   42.0%    47.9%    1.9%    6.1%     5.2%

*Three years was found to provide the best base in the average model. Six years was the optimal base in the linear trend model.

Exhibit 2. Performance Similarity of Mechanical Model Forecasts (Friedman Analysis of Variance by Rank)

               Computed Chi-square               Critical
            1 Year     2 Years    3 Years     Chi-square
Accuracy    700.28      223.24     127.30        27.88
Bias      1,114.21    1,143.52   1,312.50        27.88

The Wilcoxon test [13] is used in determining which, if any, of the models have the same accuracy and/or bias. This is done by a limited number of Wilcoxon tests on the differences between absolute and actual percent forecast errors of the various models. Consistency is also evaluated because this test is sensitive to any kind of difference (central tendency, dispersion, skewness, etc.) in the sample distributions. Models optimal from the standpoint of consistency and lack of bias are listed in Exhibit 3. Those groups containing more than one model indicate that the Wilcoxon test shows no significant difference in the error distributions among models at the .001 level.

The models which contain trend components, particularly the six-year linear trend, seem to be the most unbiased forecasting techniques. However, from the standpoint of accuracy, the naive model performs as well as the more sophisticated mechanical models, which suggests that there is a strong random component [1, Chap. 8] in the data. Furthermore, earnings per share forecasts which are expected to be in error by at least 29% (Exhibit 1) are of little value to investors. It is possible that there are factors other than historical earnings per share that foretell future earnings.

An Econometric Approach

To test for the existence of other important variables, a simple econometric model is constructed. Factorial analysis of variance experiments are conducted using the absolute and the actual percent forecast errors as the response in the respective experiments. Considered are the forecast time horizon, the year to be forecast, the mechanical model type, and the company and industry to be forecast. Two-, three-, and four-way interactions of these factors are also included.

The time horizon and the mechanical model types have been examined in previous experiments. It is evident in Exhibit 1 that the forecast effectiveness of each model degenerates in accuracy and bias as the forecast time horizon increases. In addition, Exhibit 3 indicates that some models are significantly more effective than others so that the time horizon and the type of forecast model should be included in the experiments.


Exhibit 3. Optimal Forecast Models (Wilcoxon Matched-Pairs Signed-Ranks Test)

Accuracy (Absolute Percent Deviation)

1 Year                          2 Year                            3 Year
Naive                           Naive                             Naive*
Linear trend (6 yr.)*           Linear trend (6 yr.)**            Linear trend (6 yr.)**
Double smoothing (Chow)**       Single smoothing                  Linear trend (variable)*
                                  (Trigg and Leach)               Single smoothing
                                Double smoothing (Chow)**           (Trigg and Leach)
                                Double smoothing (simulation)*    Double smoothing (Chow)
                                                                  Double smoothing (simulation)*

Bias (Actual Percent Deviation)

1 Year                          2 Year                  3 Year
Linear trend (6 yr.)            Linear trend (6 yr.)    Linear trend (6 yr.)
Double smoothing (Chow)
Double smoothing (simulation)*
Triple smoothing (simulation)

*significant at the .05 level.
**significant at the .25 level.

The year of the forecast is included because it is assumed, a priori, that there are systematic influences, such as economic conditions, affecting earnings per share. Industry and company variables are inserted into the experiments because, presumably, there are homogeneous effects unique to each industry and company that could be helpful in forecasting earnings per share.

Certain assumptions not required in the nonparametric tests must be met in order for these analyses of variance experiments to be valid. Namely, the residual errors of the experiments must be drawn from a normal population with a constant variance, and the main and interaction effects must be additive. Examinations of residual plots of the two experimental designs reveal no contradictions of these assumptions [10, Chap. 3]. In addition, the factors and interactions included explain most of the systematic effects in the experiments, since 95% of the variation in the absolute and actual percent deviations is explained by the designs.

The results of the experimental designs are shown in Exhibit 4. If the computed F value is greater than the critical F value, then the corresponding main or interaction term has a significant effect on the forecast.

According to the factorial designs, the choice of the mechanical model has a significant effect on both the accuracy and bias of the forecasts, which is consistent with the results of the Friedman test. In fact, all of the main effects are significant at the .001 level. More importantly, almost every interaction term is significant in both experiments, which indicates that the included variables are meaningful and are dynamically interdependent.

The empirical findings are particularly noticeable in the accuracy experiment. The grand mean of all of the mean absolute percent errors (MA%D), shown in Exhibit 1, is 39.7%. After removing the effects of the included variables, the resulting grand mean, more commonly known as the mean residual error, has been reduced to 2.1%, which is clearly acceptable for investment purposes. The improvement in the mean percent error of only 0.7% in the bias experiment, while statistically significant, has little practical benefit.


Exhibit 4. Factorial Analysis of Variance Experiments

Calculated F

Degrees of (1) (2) freedom Accuracy Bias

Block 1 Time horizon

Block 2 Industry Block 3 Company Block 4 Mechanical models

Block 5 Year of forecast

Interaction 1,2 Interaction 1,3 Interaction 1,4 Interaction 1,5 Interaction 2,3 Interaction 2,4 Interaction 2,5 Interaction 3,4 Interaction 3,5 Interaction 4,5 Interaction 1,2,3 Interaction 1,2,4 Interaction 1,2,5 Interaction 1,3,4 Interaction 1,3,5 Interaction 1,4,5 Interaction 2,3,4 Interaction 2,3,5 Interaction 2,4,5 Interaction 3,4,5 Interaction 1,2,3,4 Interaction 1,2,3,5 Interaction 1,2,4,5 Interaction 1,3,4,5 Interaction 2,3,4,5

Residual

Total

Mean residual

Grand mean

Multiple coefficient

of determination (R2)

*Not significant at the .001 level.

One logical inference from the findings of the factorial designs is that professional analysts, who carefully consider a wealth of factors rather than just historical earnings per share, should be able to make earnings predictions which are more accurate than mechanical extrapolations.

Such a hypothesis is overwhelmingly refuted by the evidence presented in three different studies [6, 11, 12]. Each concludes that analysts are unable to forecast earnings per share more accurately than specified mechanical techniques. These results appear to contradict the findings of this study, since the analysts undoubtedly evaluate economic conditions, industry differences, and company prospects. However, it is important to note that there is no method of determining what specific forces within three factors of the experimental designs (year of forecast, industry, and company) are affecting the response. Any of several explanations could account for the analysts' inability to utilize external information in forecasting earnings per share. First, important variables may be inaccessible, and/or the analyst may not be able to define and forecast important variables; second, the interrelationships between variables may be intricate (as evidenced by the significant four-way interaction terms) and indefinable; and finally, the forecasts may be prepared without an objective and systematic methodology.
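The comparison those studies make amounts to scoring each forecast source against realized earnings with a common error measure. A minimal sketch, assuming hypothetical EPS series and mean absolute percentage error as the yardstick (the cited studies apply their own error measures):

```python
def mape(actual, forecast):
    """Mean absolute percentage error between paired series."""
    return sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical data: actual EPS, an analyst's forecasts, and a naive
# mechanical extrapolation (each year predicted to equal the prior year).
actual  = [2.00, 2.20, 2.50, 2.40, 2.80]
analyst = [1.90, 2.30, 2.45, 2.60, 2.70]
naive   = [1.80, 2.00, 2.20, 2.50, 2.40]   # lagged actuals

print(f"analyst MAPE:    {mape(actual, analyst):.3f}")
print(f"mechanical MAPE: {mape(actual, naive):.3f}")
```

Whichever source yields the smaller average error is the more accurate forecaster over that sample; the studies cited above find the mechanical techniques at least the equal of the analysts.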

A sophisticated econometric model may be able to identify many of these effectual forces and significantly improve the forecasts. A quantitative approach, suggested by Crane and Crotty [7], would be to develop a multiple regression model, including mechanical forecasts, actual data, leading indexes and variable interactions of specific items, that could be sequentially tested on corporate earnings per share over time. The forecast effectiveness of the multiple regression model could be examined by applying the evaluative techniques presented in this study.
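Such a two-stage scheme can be sketched in outline: a first-stage exponential smoothing forecast is calibrated against realized earnings by a second-stage regression. The EPS figures, the smoothing constant, and the single-regressor form below are illustrative assumptions, not Crane and Crotty's specification:

```python
def smooth(series, alpha=0.3):
    """One-step-ahead simple exponential smoothing.

    Returns a list whose first len(series)-1 entries forecast
    series[1:], and whose last entry forecasts the next, unseen period.
    """
    level = series[0]
    preds = []
    for x in series[1:]:
        preds.append(level)                    # forecast made before seeing x
        level = alpha * x + (1 - alpha) * level
    preds.append(level)                        # forecast for the coming period
    return preds

def ols(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

# Hypothetical annual EPS history.
eps = [1.50, 1.65, 1.70, 1.90, 2.05, 2.10, 2.35]
preds = smooth(eps)
a, b = ols(preds[:-1], eps[1:])    # stage 2: calibrate smoothing vs. actuals
next_eps = a + b * preds[-1]       # second-stage forecast for the coming year
print(round(next_eps, 2))
```

In a fuller version the second stage would add the leading indexes and interaction terms as regressors and would be re-estimated sequentially as each year's earnings are realized.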

Conclusions

The failure of the methodology in this study to substantially improve earnings forecasts through the use of mechanical models, together with the inconclusive results of the econometric model, does not negate the demand for published forecasts and improved financial reporting.

Until some systematic method of effectively forecasting earnings per share can be developed, the average investor must rely upon his judgment and experience, with questionable assistance from mechanical techniques, in making predictions of future earnings per share. In the future, provision of more inside information by management may significantly aid the investor in more accurately predicting earnings per share.

REFERENCES

1. R. A. Brealey, An Introduction to Risk and Return from Common Stocks, Cambridge, Mass., M.I.T. Press, 1969.

2. Robert G. Brown, Smoothing, Forecasting and Prediction, Englewood Cliffs, N.J., Prentice-Hall, Inc., 1963.

3. Robert G. Brown, Statistical Forecasting for Inventory Control, New York, McGraw-Hill Book Company, 1959, Ch. 3.

4. John C. Chambers, Satinder K. Mullick and Donald D. Smith, "How to Choose the Right Forecasting Technique," Harvard Business Review (July-August 1971), pp. 45-74.

5. Wen M. Chow, "Adaptive Control of Exponential Smoothing Constants," Journal of Industrial Engineering (September-October 1965), pp. 314-317.

6. John Cragg and Burton Malkiel, "The Consensus and Accuracy of Some Predictions of the Growth in Corporate Earnings," Journal of Finance (March 1968), pp. 67-84.

7. Dwight B. Crane and James R. Crotty, "A Two-Stage Forecasting Model: Exponential Smoothing and Multiple Regression," Management Science (April 1967), pp. B501-B507.

8. F. E. Croxton and D. J. Cowden, Applied General Statistics, 3rd edition, Englewood Cliffs, N.J., Prentice-Hall, Inc., 1967.

9. W. B. Davenport and W. L. Root, An Introduction to the Theory of Random Signals and Noise, New York, McGraw-Hill Book Company, Inc., 1958.

10. N. R. Draper and H. Smith, Applied Regression Analysis, New York, John Wiley and Sons, Inc., 1966.

11. Edwin J. Elton and Martin J. Gruber, "Earnings Estimates and the Accuracy of Expectational Data," Management Science (April 1972), pp. B409-B424.

12. Victor Niederhoffer and Donald Regan, summarized in Barron's Magazine, December 18, 1972, pp. 9-

13. Sidney Siegel, Nonparametric Statistics, New York, McGraw-Hill Book Company, Inc., 1956.

14. Albert J. Simone, "Statistical Techniques" (working paper, Department of Quantitative Analysis, University of Cincinnati).

15. D. W. Trigg and A. G. Leach, "Exponential Smoothing with an Adaptive Response Rate," Operational Research Quarterly (1967), pp. 53-59.

Financial Management 72
