RESEARCH METHODOLOGY - Shodhgangashodhganga.inflibnet.ac.in/bitstream/10603/93070/12... · RESEARCH METHODOLOGY ... and publication of quarterly earnings reports are discussed here


RESEARCH METHODOLOGY

3.1 Introduction

The importance of any empirical study is generally judged by its database and methodology. The analysis of the database, in general, includes 1) the choice and selection of the field of study, 2) definition and identification of the variables/parameters on which the data are to be collected, 3) methods and sources of data collection and finally 4) problems and limitations of the collected data. In line with the objectives and hypotheses of the study mentioned earlier, the researcher has chosen the stocks listed on the National Stock Exchange (NSE) as the field of analysis, and the period of analysis is from 1st April 2009 to 31st March 2014. Further, this study depends on secondary data collected from the Capitaline package. The types, sources and limitations of the data, if any, used for the analysis of the weak form and semi-strong form of stock market efficiency, based on publicly available information regarding historical stock prices and the publication of quarterly earnings reports, are discussed here. Apart from covering the database of the study, this chapter also briefly describes its methodology section-wise.

3.2 Data

The Indian stock market is one of the oldest stock markets in Asia, with a history dating back about 200 years. Over time, the Indian securities market has become one of the most dynamic, modern and efficient securities markets in Asia. It now conforms to some of the best international standards and practices, both in terms of operating efficiency and structure. The profile of the investors, issuers and intermediaries approaching Indian markets has also changed significantly. Technological upgradation and online trading have modernized the stock exchanges. The bullish run of the Indian stock market can well be associated with the steady growth of the country's GDP by around 4.8% and the emergence of many Indian companies as MNCs.

With the introduction of the New Economic Policy (NEP) in 1991, the Indian financial market underwent widespread changes. Though the Indian stock market started its operation in Mumbai in 1875, up to the 1980s trading was through open outcry, settlements


were paper-based, regulations were ineffective and the disclosure norms of companies were inadequate. The NSE has been at the forefront of change in the Indian securities market since it was set up in 1992. Since then, remarkable changes have been seen in the markets, from how capital is raised and traded to how businesses are organized and developed. The market has developed in scale and scope in a way which could not have been expected at the time. Till the early 1980s there was no index to measure the movements of stock prices. The BSE came out in 1986 with the Sensex, a basket of 30 highly traded stocks across different sectors with the base year 1978-79, computed initially on the full market capitalization and finally (from 2003) on the free-float market capitalization of the constituent stocks. The NSE was launched with firm disclosure norms and also marked the onset of electronic trading; it launched the S&P CNX Nifty in 1996, a basket of fifty stocks with base year 1995, using the market-capitalization-weighted method. Several other popular and sectoral indices were also launched, with changing constituents to reflect liquidity and market capitalization. The Depositories Act (1996) paved the way for setting up depositories for trading in dematerialized form, while the acceptance of the recommendations of the L. C. Gupta Committee (1996) on derivatives trading extended the definition of securities to include derivatives (1999). Thus, trading in index futures and options on individual stocks commenced from 2000 onwards, which was expected to improve the liquidity and efficiency of the market. Stock exchanges with screen-based trading systems were allowed to expand their trading terminals to different locations of the country, and with the introduction of internet trading, physical boundaries ceased to exist.

Average daily trading volumes have increased from 17 Crore in 1994-95, when the NSE commenced its Cash Market segment, to 29.37 Crore in 2012-13. Similarly, the market capitalization of listed companies went up from 363,350 Crore at the end of March 1995 to 49,28,331.78 Crore at end March 2012-14. Now, Indian equity markets are among the most vibrant and deep markets in the world. The NSE offers a wide range of products for multiple markets, including equity shares, Exchange Traded Funds (ETFs), mutual funds, debt instruments, index futures and options, stock futures and options, currency futures and interest rate futures. The Exchange has more than 1,582 companies listed in the Capital Market, and more than 92% of these companies are actively traded. The debt


market has 5,846 securities available for trading. Index futures and options trade on four different indices, and stock futures and options on 190 stocks, as on 31st March 2013. Currency futures contracts are traded in four currency pairs. Interest Rate Futures (IRF) contracts based on a 10-year 7% Notional GOI Bond are also available for trading.

The CNX Nifty is owned and managed by India Index Services and Products Ltd. (IISL), India's first specialized company focused on the index as a core product. The CNX Nifty Index represents approximately 68.99% of the free-float market capitalization of the stocks listed on the NSE as on 31st December 2013. The total traded value, for the last six months ending December 2013, of all index constituents is about 59.01% of the traded value of all NSE stocks. The CNX Nifty impact cost for a portfolio size of Rs. 50 lakhs is approximately 0.06% for December 2013. The CNX Nifty is professionally maintained and is ideal for derivatives trading. It is a well-diversified index of 50 stocks accounting for 22 sectors of the economy, and it is used for various purposes including benchmarking index-based derivatives, fund portfolios and index funds.

The study examines 7 sectors out of the 22 sectors, and 7 sectoral indices, of the CNX Nifty of the NSE. The sectors were selected on the basis of available data and include the FMCG, IT, Petroleum/Oil and Gas, Telecom, Construction Material, Banks and Finance, and Automobile sectors of the economy. The debate in the branch of financial economics seems to be unending and is still continuing, especially in regard to the efficiency of the stock market.

3.2.1 Database of Weak Form Efficiency Tests

Stock market efficiency is an important parameter for examining the nature of the financial system in any country. It assumes extreme significance especially in developing countries such as India. The study of the efficiency of the Indian stock market, especially the leading stock exchange of India (the NSE), attracts the attention of researchers and analysts in view of recent fluctuations in portfolio investment levels and the global financial turmoil. The efficiency tests conducted by researchers so far have produced contradictory results, and it is difficult to comment on Indian stock market efficiency with


precision and definitiveness. The present study is a modest attempt to make a conclusive remark on stock market efficiency in India. The stocks selected for this analysis are those listed on the CNX Nifty, a well-diversified index of 50 stocks accounting for 22 sectors of the economy, and also the sectoral indices.

The stock market is affected by several macro and micro factors. Macro factors can be attributed to various economic and political events. Several studies have been done in the Indian as well as the international context to assess the impact of macroeconomic and political events on the stock market. It has been found that macroeconomic events have much more surprise value for the stock market than political events. This may be because political events usually arise through a build-up mechanism, whereas economic events arise because of sudden policy changes prepared under prior confidentiality. Consequently, to gauge the impact of macroeconomic events on stock market efficiency in India, the period of study has been selected to fall after the financial crisis, because the crisis affected stock prices and entailed the most radical financial-sector reforms in Indian history. These have led to enormous and sweeping changes in all facets of the market economy, thereby radically transforming the Indian stock market by expanding its breadth and width. On the basis of the above considerations, the period of study for examining stock market efficiency has been selected, as noted earlier, as 1st April 2009 to 31st March 2014.

From the price data collected for this study period (1st April 2009 to 31st March 2014) for the 50 constituent companies of the CNX Nifty, it was found that only 29 firms were listed continuously throughout the five-year study period, along with 7 sectors of the CNX Nifty index. The twenty-nine firms were selected from the 7 selected sectors according to the available data, together with 7 sectoral indices. The 29 firms comprise 2 firms of the consumer goods sector (ITC Ltd. and HINDUNILVR Ltd.), 5 firms of the IT sector (Tata Consultancy Services Ltd., Infosys Ltd., HCL Technologies Ltd., Wipro Ltd., and Tech Mahindra Ltd.), 5 firms of the petroleum (oil) and gas sector (Oil & Natural Gas Corporation Ltd., Reliance Industries Ltd., GAIL (India) Ltd., Cairn India Ltd., and Bharat Petroleum Corporation Ltd.), 1 company in the telecom sector (Bharti Airtel Ltd.), 1 firm in construction material (Larsen & Toubro Ltd.), 10 of the banks and finance sector (HDFC Bank Ltd., State Bank of


India, ICICI Bank Ltd., Housing Development Finance Corporation Ltd., Axis Bank Ltd., Kotak Mahindra Bank Ltd., Bank of Baroda, Punjab National Bank, IndusInd Bank Ltd., and IDFC Ltd.), and 5 firms of the automobile sector (Tata Motors Ltd., Maruti Suzuki India Ltd., Mahindra and Mahindra Ltd., Bajaj Auto Ltd., and Hero MotoCorp Ltd.). Data were also collected for this study from 7 sectoral indices of the CNX Nifty: CNX Auto, CNX FMCG, CNX Finance, CNX Bank, CNX Energy, CNX IT and the CNX Pharma Index. The list of the firms along with their industry-wise classification and the sectoral indices are given in Tables 3.2.1.1 and 3.2.1.2.

Table 3.2.1.1
List of Constituent Firms

Sl. No. | Sector | Capitalization (Cr) | Firms | Total
1 | Consumer Goods | 2,073,579 | ITC Ltd. & HINDUNILVR Ltd. | 2
2 | IT | 2,058,606 | Tata Consultancy Services Ltd., Infosys Ltd., HCL Technologies Ltd., Wipro Ltd., & Tech Mahindra Ltd. | 5
3 | Telecom | 1,658,121 | Bharti Airtel Ltd. | 1
4 | Construction Material | 1,540,702 | Larsen & Toubro Ltd. | 1
5 | Petroleum / Oil and Gas | 1,379,263 | Oil & Natural Gas Corporation Ltd., Reliance Industries Ltd., GAIL (India) Ltd., Cairn India Ltd., & Bharat Petroleum Corporation Ltd. | 5
6 | Banks and Finance | 1,259,366 | HDFC Bank Ltd., State Bank of India, ICICI Bank Ltd., Housing Development Finance Corporation Ltd., Axis Bank Ltd., Kotak Mahindra Bank Ltd., Bank of Baroda, Punjab National Bank, IndusInd Bank Ltd., & IDFC Ltd. | 10
7 | Automobile | 803,748 | Tata Motors Ltd., Maruti Suzuki India Ltd., Mahindra and Mahindra Ltd., Bajaj Auto Ltd., & Hero MotoCorp Ltd. | 5
Total | 7 Sectors | | | 29 Firms


Table 3.2.1.2
List of Sectoral Indices of NSE

Sl. No. | 1 | 2 | 3 | 4 | 5 | 6 | 7
Index | AUTO | FMCG | Finance | Bank | Energy | IT | PHARMA

The daily closing prices of the above-mentioned 29 constituent firms from 7 sectors of the CNX Nifty were collected from the Capitaline software package for the study period of 1st April 2009 to 31st March 2014 (omitting those days when there was no trading of the stock or the stock exchange was closed). All the firms whose scrips comprise the sample paid dividends during the period of study, and some of them also issued new shares. However, the data have been adjusted for the payment of dividends and the issue of bonus and rights shares as per the in-built formulae of Capitaline. Further, as daily prices have extremely high short-term volatility, the researcher also collected the weekly closing price of each stock, taken as the closing price on the last working day of the week, which is normally a Friday, or the preceding day if Friday is a holiday. The price data have been transformed by taking natural logs and then their first differences. This is nothing but the transformation of the price data into return data:

ln(Pt / Pt−1) = ln(Pt) − ln(Pt−1) = rt    (3-1)

Where,

rt is the continuously compounded rate of change in the stock price, i.e., the return for period t;
Pt is the price of the stock for period t;
Pt−1 is the same for the preceding period t−1;
ln is the natural logarithm.
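The transformation in equation (3-1) can be sketched in a few lines of Python; the price series below is purely illustrative, not taken from the study's data.

```python
import math

# Hypothetical closing prices for one stock over four periods.
prices = [100.0, 102.5, 101.0, 104.0]

# Equation (3-1): r_t = ln(P_t / P_{t-1}) = ln(P_t) - ln(P_{t-1}).
returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

print(returns)
```

A convenient property of this definition is that log returns add across periods: the sum of the three returns equals ln(P3 / P0).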

The change in prices, which is nothing but the return for the period, is closely approximated by the difference between the natural logs of successive prices for small


changes in price (Osborne, 1959). The transformation of the price data has been justified

by Granger and Morgenstern (1970) on the following basis:

a) The distribution of prices is bounded from below at 0 but unbounded from

above. The logarithmic transformation results in a distribution which is

symmetrically unbounded.

b) The transformed data is more stable and stationary in terms of mean and

variance.

The actual tests are not performed on the prices themselves but on the first differences of their natural logarithms (Fama, 1969). The variable of interest is:

ut+1 = ln Pt+1 − ln Pt    (3-2)

Where Pt+1 is the price of the security at the end of day t+1, and Pt is the end-of-day-t price.

There are three main reasons for using changes in log price rather than simple price

changes:

a) The change in log price is the yield, with continuous compounding, from

holding the security for that day.

b) Moore (1962) has shown that the variability of simple price changes for a

given stock is an increasing function of the price level of the stock. His work

indicated that taking logarithms seemed to neutralize most of that price level

effect.

c) For changes less than ±15 per cent the change in log price is very close to the

percentage price change, and for many purposes it is convenient to look at the

data in terms of percentage price changes.
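Point (c) can be checked numerically: for moves under about 15 per cent the log change and the simple percentage change nearly coincide, while for larger moves they diverge. The price moves below are illustrative.

```python
import math

# Compare the change in log price, ln(1 + x), with the simple
# percentage change x for moves of increasing size.
for pct in [0.01, 0.05, 0.15, 0.50]:
    log_change = math.log(1.0 + pct)
    print(f"move {pct:.0%}: log change {log_change:.4f}, gap {pct - log_change:.4f}")
```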

3.2.2 Database of Semi–Strong Form Efficiency Tests on Information Effect of

Earnings Announcement Reports

A sample of all the stocks listed on the CNX Nifty index for the period of five years from 1st April 2009 to 31st March 2014 was collected from the issues of The Economic Times


over this period. That provided a list of 21 firms from 5 sectors. Three sets of data have been used. The first set of data consists of the quarterly earnings announcements made by the sample companies, including the dates on which the Board of Directors met and approved the quarterly financial results of each firm. These were obtained from the websites of the respective companies and the NSE. The second set of data consists of the weekly closing prices and the actual quarterly earnings per share of the continuously listed stocks on the CNX Nifty index for the period of this study. These data were collected from the Prowess database of the Centre for Monitoring Indian Economy (CMIE) and the Capitaline package. The third set of data consists of the CNX Nifty fifty index prices compiled and published by the NSE for the five-year study period, collected from the NSE website and the financial daily The Economic Times.

Out of the twenty-seven firms, companies were omitted if they failed to satisfy one or more of the following criteria:

a) availability of the dates of board meetings or publication of quarterly earnings reports; and

b) availability of weekly stock prices and actual quarterly EPS throughout the period of this study.

Firms were also omitted for reporting quarters of uneven length, or if they reported only half-yearly for the whole or a large part of the period of this study. The latter could occur because the Securities and Exchange Board of India made the publication of quarterly earnings announcements compulsory only from 1998 onwards, while earlier announcements used to be half-yearly. This left a final list of 21 firms from five sectors. The list of the companies along with their industry-wise classification is given in Table 3.2.2.1.


Table 3.2.2.1
List of the Constituent Firms of NSE Quarterly Earnings Announcements

Sl. No. | Sector | Capitalization (Cr) | Firms | Total
1 | Consumer Goods | 2,073,579 | ITC Ltd. | 1
2 | IT | 2,058,606 | Tata Consultancy Services Ltd., Infosys Ltd., HCL Technologies Ltd., Wipro Ltd., & Tech Mahindra Ltd. | 5
3 | Petroleum / Oil & Gas | 1,379,263 | Oil & Natural Gas Corporation Ltd., Reliance Industries Ltd., Cairn India Ltd., & Bharat Petroleum Corporation Ltd. | 4
4 | Banks and Finance | 1,259,366 | HDFC Bank Ltd., ICICI Bank Ltd., Housing Development Finance Corporation Ltd., Axis Bank Ltd., Kotak Mahindra Bank Ltd., Bank of Baroda, Punjab National Bank, & IDFC Ltd. | 8
5 | Automobile | 803,748 | Maruti Suzuki India Ltd., Mahindra and Mahindra Ltd., & Hero MotoCorp Ltd. | 3
Total | 5 Sectors | | | 21 Firms

The selection of the sample of continuously listed stocks on the CNX Nifty index introduces a potential bias, since firms that were delisted during the study period were not included in the sample. Joy, Litzenberger and McEnally (1977) and Young (1968) found that the pre-delisting average return on delisted NYSE stocks did not differ significantly from the average return for all NYSE stocks over their respective periods of study. To take cognizance of this potential post-selection bias, Joy, Litzenberger and McEnally used a market index consisting of an equal dollar-weighted index of all the listed stocks in the sample. However, an alternate analysis of the data performed using the S&P 425 Stock Index found the average bias between the equal-weighted index of the listed stocks and the S&P 425 Stock Index to be extremely small (4/10 of one per cent). Consequently, instead of calculating a market index based on only the twenty-one continuously listed stocks, the researcher has used the S&P CNX Nifty Fifty Index as a whole. For each earnings observation, weekly rates of return are observed over the period from thirteen weeks prior to thirteen weeks subsequent to the announcement week. These are calculated from the weekly closing prices; that is, the weekly return data are computed from the weekly closing prices of the individual stocks as well as of the Nifty index.


Defining Pjt as the price of the jth security in the tth week, the weekly return of the jth security can be calculated as:

Rjt = ln Pjt − ln Pj,t−1    (3-3)

and for the index as:

Rmt = ln Pt − ln Pt−1    (3-4)

Where,

Pt is the Nifty index value in the tth week.
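Equations (3-3) and (3-4) share the same form and can be computed together; the weekly prices below are hypothetical stand-ins for a stock j and the Nifty index, not values from the study's data.

```python
import math

# Illustrative weekly closing prices for one security (j) and the index.
stock_prices = [250.0, 255.0, 248.0, 260.0]
index_prices = [5000.0, 5050.0, 4980.0, 5100.0]

def weekly_log_returns(prices):
    """Equations (3-3)/(3-4): R_t = ln(P_t) - ln(P_{t-1})."""
    return [math.log(p1) - math.log(p0) for p0, p1 in zip(prices, prices[1:])]

r_j = weekly_log_returns(stock_prices)   # R_jt for the security
r_m = weekly_log_returns(index_prices)   # R_mt for the index
print(r_j)
print(r_m)
```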


3.3 Research Methodology

3.3.1 Methodology of Weak Form Efficiency

First of all, it is necessary to point out the link between random walks and weak-form efficient markets. As seen before, a weak-form efficient financial market is a market in which all past price information is fully reflected in stock prices. Therefore, it is impossible to predict future prices based on past price information. The random walk model says precisely the same as this weak-form efficiency condition: the stock price of tomorrow is unpredictable, as there is no way of predicting the arbitrary drift term. This parallel between the weak-form market efficiency condition and the random walk made it attractive for researchers to test weak-form market efficiency indirectly, by testing whether stock returns follow a random walk.

In the early writings, dating back to the beginning of the last century, empirical justification was sought for the initial form of the EMH, namely the Random Walk Hypothesis (Bachelier, 1900). The random walk hypothesis, which contends that successive price changes are independent of each other, occupied a significant proportion of research till the late 1960s (Cowles & Jones, 1937; Kendall, 1953; Osborne, 1959; Granger & Morgenstern, 1963; Cootner, 1962, 1964; Moore, 1962).


There was plenty of empirical evidence, but what was lacking was a formal theory. This gap was filled by a more general model based on the concept of the efficiency of the markets in which shares are traded: the Efficient Market Hypothesis (EMH) (Fama, 1965). In most cases, a hypothesis is postulated whose validity is later empirically tested; if the results confirm the hypothesis, it graduates to a theory. The essence of these tests is briefly discussed below.

3.3.1.1 Types of Random Walk Model

The researcher briefly describes the types and classifications of the random walk as follows:

a) The IID Increments Random Walk Model (Random Walk 1 model, or RW1)

Perhaps the simplest version of the random walk hypothesis is the independently and identically distributed (IID) increments case, in which the dynamics of Pt are given by the following equation:

Pt = μ + Pt−1 + εt ,    εt ~ IID(0, σ²)    (3-5)

Where μ is the drift, or the expected price change, and IID(0, σ²) denotes that εt is independently and identically distributed with mean 0 and variance σ². The independence of the increments εt implies that the random walk is a fair game, but in a much stronger sense than the martingale: independence implies not only that the increments are uncorrelated, but also that any nonlinear functions of the increments are uncorrelated. This version is called the Random Walk 1 model, or RW1.

To develop some intuition for RW1, consider its conditional mean and variance at date t, conditional on some initial value P0 at date 0:

E[Pt | P0] = P0 + μt    (3-6)

Var[Pt | P0] = σ²t    (3-7)

which follow from recursive substitution of lagged Pt in equation (3-5) and the IID increments assumption. From equations (3-6) and (3-7) it is apparent that the random walk is non-stationary and that its conditional mean and variance are both linear in time.
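A short simulation can illustrate equations (3-6) and (3-7); the drift, variance and horizon below are illustrative choices, not estimates from the study's data.

```python
import random
import statistics

random.seed(0)

# RW1 (equation 3-5): P_t = mu + P_{t-1} + eps_t, eps_t ~ IID N(0, sigma^2).
mu, sigma, p0 = 0.1, 1.0, 100.0
horizon, n_paths = 50, 5000

finals = []
for _ in range(n_paths):
    p = p0
    for _ in range(horizon):
        p += mu + random.gauss(0.0, sigma)
    finals.append(p)

# Equations (3-6) and (3-7) predict E[P_t | P_0] = P_0 + mu*t = 105
# and Var[P_t | P_0] = sigma^2 * t = 50 at t = 50.
print(statistics.mean(finals))
print(statistics.variance(finals))
```

Both sample moments grow linearly if the horizon is increased, which is exactly the non-stationarity noted above.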


These implications also hold for the two other forms of the random walk hypothesis (RW2 and RW3) described below.

Perhaps the most common distributional assumption for the innovations or increments εt is normality. If the εt's are IID N(0, σ²), then equation (3-5) is equivalent to an arithmetic Brownian motion sampled at evenly spaced unit intervals. This distributional assumption simplifies many of the calculations surrounding the random walk, but suffers from the same problem that afflicts normally distributed returns: violation of limited liability. If the conditional distribution of Pt is normal, then there is always a positive probability that Pt < 0.

To avoid violating limited liability, one may use the same device as in (3-5), namely, to assert that the natural logarithm of prices, pt ≡ ln Pt, follows a random walk with normally distributed increments; hence:

pt = μ + pt−1 + εt ,    εt ~ IID N(0, σ²)    (3-8)

This implies that continuously compounded returns are IID normal variates with mean μ and variance σ², which yields the lognormal model of Bachelier (1900) and Einstein (1905).

b) The Independent Increments Random Walk Model (Random Walk 2 model, or RW2)

Despite the elegance and simplicity of the Random Walk 1 model, the assumption of identically distributed increments is implausible for financial asset prices over long time spans. For example, over the two-hundred-year history of the New York Stock Exchange, there have been countless changes in the economic, institutional, social, technological and regulatory environment in which stock prices are determined. The assertion that the probability law of daily stock returns has remained the same over this two-hundred-year period is simply implausible. Therefore, the researcher relaxes the assumptions of RW1 to include processes with independent but not identically distributed (INID) increments; this is called the Random Walk 2 model, or RW2. The Random Walk 2 model clearly contains RW1 as a special case, but also contains


considerably more general price processes. For example, RW2 allows for unconditional heteroscedasticity in the εt's, a particularly useful feature given the time variation in the volatility of many financial asset return series.

While RW2 is weaker than RW1, it still retains the most interesting economic property of the IID random walk: any arbitrary transformation of future price increments is unforecastable using any arbitrary transformation of past price increments.

c) The Uncorrelated Increments Random Walk Model (Random Walk 3 model, or RW3)

An even more general version of the random walk hypothesis, the one most often tested in the recent empirical literature, may be obtained by relaxing the independence assumption of RW2 to include processes with dependent but uncorrelated increments. This is the weakest form of the random walk hypothesis, referred to as the Random Walk 3 model, or RW3, and it contains RW1 and RW2 as special cases. A simple example of a process that satisfies the assumptions of RW3 but not of RW1 or RW2 is any process for which Cov(εt, εt−k) = 0 for all k ≠ 0, but where Cov(εt², εt−k²) ≠ 0 for some k ≠ 0. Such a process has uncorrelated increments, but is clearly not independent since its squared increments are correlated.
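A sketch of such a process, using illustrative ARCH(1)-style parameters (not part of the original text), shows increments that are serially uncorrelated while their squares are not:

```python
import math
import random

random.seed(1)

# ARCH(1)-style increments: eps_t = sigma_t * z_t with
# sigma_t^2 = 0.5 + 0.5 * eps_{t-1}^2. The increments are uncorrelated,
# but their squares are correlated, so RW3 holds while RW1/RW2 fail.
n = 100_000
eps = [random.gauss(0.0, 1.0)]
for _ in range(n - 1):
    sigma_t = math.sqrt(0.5 + 0.5 * eps[-1] ** 2)
    eps.append(sigma_t * random.gauss(0.0, 1.0))

def lag1_corr(x):
    """Sample lag-1 autocorrelation."""
    m = sum(x) / len(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

print(lag1_corr(eps))                   # close to 0: uncorrelated increments
print(lag1_corr([e * e for e in eps]))  # clearly positive: dependent squares
```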

Since the assumptions of IID are so central to classical statistical inference, it should come as no surprise that tests for these two assumptions have a long and illustrious history in statistics, with considerably broader applications than the random walk. Because of their breadth and ubiquity, it is almost impossible to catalogue all tests of IID in any systematic fashion, and the researcher applies just a few of the most popular tests.

Since independence and identical distribution (IID) are characteristics of random variables that are not tied to particular parametric families of distributions, many of these tests fall under the rubric of nonparametric tests.


The random walk model is very popular in the literature. The starting point of this

analysis is the hypothesis of an efficient market, hence randomness of returns can be

assumed. Theory of random walk in stock prices involves two hypotheses:

a) The successive price changes are independent, and

b) The price changes conform to some probability distribution (Fama 1965).

In statistical literature, a "purely random" process refers to a process that produces independent and identically distributed (IID) samples. If an observed value in the sequence is influenced by its position in the sequence or by the observations which precede it, the process is not truly random. To illustrate the notion of a random walk, assume that the price of a stock at time t equals its price at time (t−1) plus a random shock εt, a white noise error term with zero mean and variance σ².

Pt = Pt−1 + εt (3-9)

Substituting backwards from Pt−1, Pt−2 etc leads to:

Pt = P0 + Σi=1…t εi (3-10)

Where,

P0 is some initial value of Pt .

The term Σi=1…t εi represents the stochastic trend in Pt. The mean of Pt is equal to its initial value P0 (a constant), but its variance tσ² increases indefinitely as t increases. Thus, the RWM without drift is a non–stationary process.

Introducing a drift parameter μ in the RWM gives the equation:

Pt = μ + Pt−1 + εt ,  εt ~ IID 𝒩(0, σ²) (3-11)

Here,

Mean of Pt, E(Pt), is P0 + μt, while var(Pt) is tσ².

Thus the RWM with drift has both mean and variance increasing over time, again violating the condition of stationarity. Hence the Random Walk Model, with or without drift, is a non–stationary stochastic process. An interesting feature of the RWM is the persistence of random

shocks since Pt is the sum of initial value P0 plus the sum of random shocks. So the

impact of a particular shock does not die away and consequently random walk is said to

have an infinite memory. It should be noted that though Pt is non–stationary, its first

difference is stationary:

∆Pt = Pt – Pt−1 = εt (3-12)

If a variable follows a random walk, then the regression of one variable against another can lead to a "spurious" result, because the Gauss–Markov theorem does not hold: the random walk does not have a finite variance and OLS does not yield a consistent estimator.
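The properties derived above (a level whose variance grows with t, and a stationary first difference) can be illustrated with a short simulation. This is an illustrative sketch with arbitrary parameters, not part of the study's data analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate many random walks P_t = P_0 + sum of shocks (equation 3-10)
n_paths, T = 2000, 500
shocks = rng.standard_normal((n_paths, T))   # epsilon_t ~ N(0, 1)
P = 100.0 + np.cumsum(shocks, axis=1)        # P_0 = 100

# Variance of the level grows roughly linearly with t (t * sigma^2) ...
var_t = P.var(axis=0)
print(var_t[49], var_t[499])   # near 50 and 500 respectively

# ... while the first difference Delta P_t = epsilon_t is stationary
dP = np.diff(P, axis=1)
print(dP.mean(), dP.var())     # near 0 and 1
```

The cross-sectional variance at t = 500 is about ten times the variance at t = 50, while the differenced series keeps a constant mean and variance, which is exactly the non-stationarity argument made in the text.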

Investigations of randomness of a given sequence often require statistical tools for

distribution comparison. Among them, goodness–of–fit tests and entropy estimates are

two well–understood concepts. However, when the distribution of the observed data is

unknown, the hypothesis is:

H0: Sequence is independent and identically distributed (random).

Ha: Sequence is not independent and identically distributed (non–random).

Then, the researcher has to resort to non–parametric tests, using some distribution–

invariant properties of random processes. For example, if the observations can be

transformed to some symbols that can reflect some properties of their relative positions or

magnitudes, then the pattern of the resulting symbol sequence can serve as a measure of

the randomness of the original process. The pattern of the symbols can be analyzed using

runs, or entropy estimators if the distribution of the symbols is known.

Four decades have passed since the formulation and initial test of market efficiency by

Fama (1965). The econometric techniques used to measure efficiency have since evolved

with developments in technology and statistical software packages. Some of these

methods are robust to several limiting features that frequently crop up in stock market

data, such as heteroscedasticity, volatility and non–normality. Among these techniques,

those that are commonly applied in the literature to assess informational efficiency


include the ordinary runs test, the sign test, the runs up and down test, the Mann–Kendall

test, the Bartels’ rank test and the tests based on entropy estimators, Serial Correlation

tests, Runs tests, Unit Root tests and Variance Ratio tests. However, the ordinary runs test

has been most prolifically used by researchers in testing the randomness of stock price time–series data. The analysis also concludes that most of the tests are

vulnerable to a certain set of sequences, which are deterministic but accepted as random

processes. Since different tests have different weaknesses, the decisions from a host of

different tests are combined to minimize the error probability of missing Ha. Thus along

with the ordinary runs test, the researcher has used another non–parametric test, namely,

the Kolmogorov–Smirnov Goodness–of–Fit test to check the efficiency of this study's stock

market data. Several pioneering studies, including Fama and French (1988), Lo and

MacKinlay (1988) and Poterba and Summers (1988), recommended the use of Variance

Ratio procedures to evaluate informational efficiency. Variance Ratio–type tests are now

used in the bulk of the contemporary empirical EMH literature and are widely regarded

by most researchers as some of the most powerful tools for assessing informational

efficiency. However, these tests, along with the serial correlation, runs and unit root tests

have several innate limitations, which may bias their overall results as to whether the

market is informationally efficient. A brief discussion of these mainstream econometric

techniques and their key limitation will be discussed in the subsequent sections.

3.3.1.2 Weak Form Efficiency Tests

3.3.1.2.1 Non–Parametric Tests

3.3.1.2.1.1 Runs Test

One of the common tests for RWl is the runs test, in which the number of sequences of

consecutive positive and negative returns, or runs, is contrasted and tabulated against its

sampling distribution under the random walk hypothesis. Runs test is a strong test for

randomness in investigating the serial dependence in stock price movements. The test is

non–parametric and independent of the constant variance and normality of the data. A run is defined as a sequence of like signs that is preceded and followed by a different sign or by no sign at all. That is, given a succession of observations, the runs test assesses whether the value of one observation influences the values taken by later observations. If there is no such influence (the observations are independent), the sequence is considered random. The null hypothesis of the test is that the observed series is random; when the expected number of runs differs significantly from the observed number of runs, the test rejects the null hypothesis. Too few runs indicate a tendency for high and low values to cluster, while too many runs indicate a tendency for high and low values to alternate. A lower than expected number of runs points to the market's overreaction to information, subsequently reversed, whereas a higher number of runs reflects a lagged response to information. Either condition would suggest an opportunity to earn excess returns (Poshakwale, 1996).

Under the null hypothesis of independence in share–price changes (share returns), the

total expected number of runs E(R) can be estimated as:

E(R) = [N(N + 1) − Σi ni²] / N (3-13)

Where N is the total number of observations (price changes or returns) and ni is the number of price changes (returns) in each sign category (N = Σi ni). For a large number of observations (N > 30), the sampling distribution of R is approximately normal, and the standard error of E(R) is given by:

σR = √{ [Σi ni² (Σi ni² + N(N + 1)) − 2N Σi ni³ − N³] / [N²(N − 1)] } (3-14)

Following Gujarati (2003), the null hypothesis that the series is random is accepted if the number of runs (R) lies in the following confidence interval, and rejected otherwise: Prob{E(R) − 1.96 σR ≤ R ≤ E(R) + 1.96 σR} = 0.95.

Alternatively, the researcher transforms the total number of runs into a z–statistic:

Z = [R ± 0.5 − E(R)] / σR (3-15)

Where R is the actual number of runs, E(R) is the expected number of runs, and 0.5 is the continuity adjustment (Wallis and Roberts, 1956), in which the sign of the continuity adjustment is


negative (−0.5) if R > E(R), and positive otherwise. Since there is evidence of dependence among share returns when R is too large or too small, the test is a two–tailed one. If the absolute z–value is greater than or equal to 1.96, the null hypothesis is rejected at the 5% level of significance (Sharma and Kennedy, 1977).

𝐇𝟎: The observed series are random (The number of expected runs is about the

same as the number of actual runs).

𝐇𝐚: The observed series are not random (significantly different counts of runs).

The runs test reports the cutting point, the number of runs, the number of cases below the cutting point, the number of cases greater than or equal to the cutting point, and the test statistic z with its observed level of significance.
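Equations (3-13) to (3-15) can be sketched in a few lines; the function and data below are illustrative, written for sign categories (+, −, and 0 if present) of daily returns:

```python
import numpy as np

def runs_test(returns):
    """Runs test z-statistic for a sequence of returns (eqs. 3-13 to 3-15).

    A run is a maximal block of consecutive identical signs.
    """
    signs = np.sign(returns)
    N = len(signs)
    # Observed number of runs R: number of sign changes plus one
    R = 1 + int(np.sum(signs[1:] != signs[:-1]))
    # Category counts n_i (over the distinct signs present)
    _, counts = np.unique(signs, return_counts=True)
    s2 = np.sum(counts ** 2)
    s3 = np.sum(counts ** 3)
    # Expected runs and standard error under independence
    E_R = (N * (N + 1) - s2) / N
    var_R = (s2 * (s2 + N * (N + 1)) - 2 * N * s3 - N ** 3) / (N ** 2 * (N - 1))
    sigma_R = np.sqrt(var_R)
    # Continuity adjustment pulls R toward E(R)
    adj = -0.5 if R > E_R else 0.5
    z = (R + adj - E_R) / sigma_R
    return R, E_R, z

# A strictly alternating sequence has far too many runs (negative dependence)
alternating = np.array([1.0, -1.0] * 15)
R, E_R, z = runs_test(alternating)
print(R, E_R, round(z, 2))   # 30 runs against an expectation of 16; z well above 1.96
```

Here R = 30 against E(R) = 16, so the null of randomness is rejected: too many runs, consistent with alternating (negatively dependent) price changes.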

3.3.1.2.1.2 Kolmogorov–Smirnov Test

The K–S test was first developed in the 1930s. The Kolmogorov–Smirnov (K–S) goodness–of–fit test is a non–parametric test used to establish how well a random sample of data fits a particular theoretical distribution (uniform, normal, Poisson, exponential). It is based on a comparison of the sample's empirical cumulative distribution function against the theoretical cumulative distribution function of the hypothesized distribution. It is a one–sample goodness–of–fit test that contrasts the cumulative distribution function of a variable with the normal or uniform distribution and tests whether the distributions are homogeneous. The researcher has used both the normal and the uniform distribution in testing.

𝐇𝟎: Distributions are homogeneous.

𝐇𝐚: Distributions are not homogeneous.
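As a sketch, the one-sample K–S test is available in SciPy. The data below are simulated for illustration; note that `kstest(..., "norm")` compares against the standard normal, so the sample is standardized first (which makes the normal-fit p-value conservative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Illustrative daily returns drawn from a normal distribution
returns = rng.normal(loc=0.0005, scale=0.012, size=500)

# Standardize, then test against the standard normal CDF
z = (returns - returns.mean()) / returns.std(ddof=1)
stat_norm, p_norm = stats.kstest(z, "norm")

# Test the raw returns against a uniform distribution fitted on [min, max]
lo, span = returns.min(), returns.max() - returns.min()
stat_unif, p_unif = stats.kstest(returns, "uniform", args=(lo, span))

print(p_norm, p_unif)   # H0 retained for the normal fit, rejected for the uniform fit
```

The large p-value for the normal fit and the tiny p-value for the uniform fit illustrate how the test discriminates between candidate distributions for the same sample.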


3.3.1.2.2 Parametric Tests

3.3.1.2.2.1 Unit Root Test

In order to test the efficiency hypothesis, the researcher uses the ADF unit root tests of Dickey and Fuller (1979) and Said and Dickey (1984). The Augmented Dickey–Fuller (ADF) test is based on the OLS t–statistic corresponding to the coefficient on the lagged level in the regression model. A test of stationarity that has become widely popular over the past several years is the unit root test. In a RWM, which resembles a Markov first–order autoregressive model:

yt = ρyt−1 + ut , t=1, 2… (3-16)

We know that if ρ = 1 then it becomes:

yt = yt−1 + ut (3-17)

which has a constant mean (equal to the initial value y0) but a time–varying variance (tσ²), i.e., it is non–stationary.

However, if |ρ| < 1, then it can be shown that the above series is stationary. Assuming that the initial value of y (= y0) is zero, |ρ| < 1, and ut is white noise and distributed normally with zero mean and unit variance, it follows that:

E (yt) = 0

and,

var (yt) = 1 / (1 − ρ²)

Since both of these are constants, yt is weakly stationary. For theoretical reasons, we manipulate the above equation and instead estimate the equation:

∆yt = σyt−1 + ut (3-18)

Where,

σ = ρ – 1


The researcher tests the null hypothesis that:

𝐇𝟎: σ = 0

against the alternative hypothesis:

𝐇𝐚: σ ≠ 0

If σ = 0, then ρ = 1, i.e., we have a unit root, implying that the time–series is non–

stationary. As the random process may have no drift, or it may have drift or it may have

both deterministic and stochastic trends, so to allow for the various possibilities, the unit

root test is done in the following three forms:

1) When yt is a random walk without drift and trend:

∆yt = σyt−1 + ut (3-19)

2) When yt is a random walk with drift but no trend:

∆yt = β1 +σyt−1 + ut (3-20)

3) When yt is a random walk with drift around a deterministic trend:

∆yt = β1 + β2t +σyt−1 + ut (3-21)

Where,

t is the trend or time variable,

yt is the daily return of the particular stock, and

∆ is the difference operator.

In conducting the above unit root tests the implicit assumption is that the error term is

uncorrelated. But in case the ut's are correlated, Dickey and Fuller have developed another test, known as the Augmented Dickey–Fuller test (ADF test). The ADF approach controls for higher–order correlation by adding lagged difference terms of the dependent variable to the regression (i.e., as explanatory variables). Thus the fourth specification is:

∆yt = β1 + β2t + σyt−1 + α1∆yt−1 + α2∆yt−2 + ut (3-22)


The hypotheses for testing stationarity are as before. The significance of the slope coefficient is tested by using the tau–statistic at the 5% and 1% levels.
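The Dickey–Fuller regression of equation (3-20) can be sketched directly with an OLS fit. This illustrative version uses only the simple form (drift, no augmentation lags) and reports the tau statistic, i.e., the t-ratio on the lagged level, which must be compared against Dickey–Fuller critical values rather than the normal table:

```python
import numpy as np

def df_tau(y):
    """Tau statistic from the regression dy_t = b1 + sigma*y_{t-1} + u_t (eq. 3-20)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])   # intercept and lagged level
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])       # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])               # t-ratio on y_{t-1}

rng = np.random.default_rng(0)
e = rng.standard_normal(500)

walk = np.cumsum(e)                    # unit root: tau near zero, H0 not rejected
ar = np.empty(500)                     # stationary AR(1) with rho = 0.5
ar[0] = e[0]
for t in range(1, 500):
    ar[t] = 0.5 * ar[t - 1] + e[t]

print(df_tau(walk), df_tau(ar))   # the AR(1) tau falls far below the 5% DF critical value (about -2.87)
```

The random walk produces a tau close to zero (unit root not rejected), while the stationary AR(1) produces a strongly negative tau, which is the contrast the unit root test exploits.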

3.3.1.2.2.2 Auto–Correlation Test and Q–Statistic

The autocorrelation test has been almost universally used by researchers in checking the stationarity of time–series stock price data. The ACF at lag k is denoted by:

ρk = γk / γ0 (3-23)

i.e., the covariance at lag k divided by the variance. In practice, we only have a realization, i.e., a sample of a stochastic process. Hence the researcher computes the sample autocorrelation function:

ρ̂k = γ̂k / γ̂0 (3-24)

Where,

γ̂k = sample covariance at lag k, and

γ̂0 = sample variance.

Then the SACF (sample autocorrelation coefficients) measures the amount of linear dependence between observations in a time–series that are separated by lag k. If the price changes of the stocks are independently distributed, ρ̂k will be 0 for all time lags. The statistical significance of any ρ̂k is judged by its standard error. Bartlett (1946) has shown that if a time–series is purely random, i.e., it exhibits white noise, the sample autocorrelation coefficients are approximately ρ̂k ~ 𝒩(0, 1/n), i.e., in large samples, the sample autocorrelation coefficients are normally distributed with zero mean and variance equal to one over the sample size. Dividing the estimated value of any ρ̂k by the standard error 1/√n, for sufficiently large n, the researcher obtains the standard z–value whose probability is checked from the standard normal table.

In addition to testing the statistical significance of an individual autocorrelation

coefficient, the joint hypothesis that all the ρk up to certain lags is simultaneously equal


to zero has also been tested. This has been done by using the Q–statistic developed by Box and Pierce (1970), defined as:

Q = n Σk=1…m ρ̂k² (3-25)

where,

n = sample size, and

m = lag length.

In large samples, Q is approximately distributed as the chi–square distribution with m d.f.

If the computed Q exceeds the critical Q value from the chi–square distribution at the

chosen level of significance, then the researcher rejects the null hypothesis that all ρk are zero; at least some of them must be non–zero. A variant of the Box–Pierce (1970) Q–statistic is the Ljung–Box (LB) (1978) statistic, defined as:

LB = n(n + 2) Σk=1…m [ρ̂k² / (n − k)] ~ χ²m (3-26)

Although in large samples both the Q and LB statistics follow the chi–square distribution, for small samples the LB statistic is more appropriate than the Q–statistic.

The population correlogram, i.e., a plot of ρk against lag k, gives a graphical method of testing the independence of the stock price changes. In practice, however, the researcher draws conclusions about the population correlogram from a plot of ρ̂k against k (the sample correlogram). For a purely white noise process or a stationary time–series process, the autocorrelations at various lags hover around zero. Thus, if the correlogram of an actual time–series resembles the correlogram of a white noise time–series, the researcher can conclude that the time–series is stationary. The typical correlogram of a non–stationary time–series depicts very high values of the autocorrelation coefficients at initial lags, which decline very slowly towards zero as the lag lengthens, i.e., there is a linear decay of the ACF.
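The sample ACF and the Q and LB statistics of equations (3-23) to (3-26) can be sketched as follows; the data are simulated for illustration, and the chi-square p-value comes from SciPy:

```python
import numpy as np
from scipy import stats

def sample_acf(x, m):
    """Sample autocorrelations rho_hat_k for k = 1..m (eq. 3-24)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    g0 = x @ x / len(x)                                    # sample variance
    return np.array([x[:-k] @ x[k:] / len(x) / g0 for k in range(1, m + 1)])

def q_and_lb(x, m):
    """Box-Pierce Q (eq. 3-25) and Ljung-Box LB (eq. 3-26), with the LB p-value."""
    n = len(x)
    r = sample_acf(x, m)
    Q = n * np.sum(r ** 2)
    LB = n * (n + 2) * np.sum(r ** 2 / (n - np.arange(1, m + 1)))
    return Q, LB, stats.chi2.sf(LB, df=m)

rng = np.random.default_rng(1)
e = rng.standard_normal(500)

ar = np.empty(500)                  # strongly autocorrelated AR(1), rho = 0.9
ar[0] = e[0]
for t in range(1, 500):
    ar[t] = 0.9 * ar[t - 1] + e[t]

_, _, p_white = q_and_lb(e, m=10)   # white noise: joint null of zero ACF retained
_, _, p_ar = q_and_lb(ar, m=10)     # AR(1): joint null decisively rejected
print(p_white, p_ar)
```

The white-noise series gives a large p-value (all ten autocorrelations jointly indistinguishable from zero), while the AR(1) series gives an essentially zero p-value.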


3.3.1.2.2.3 Variance Ratio

Lo and MacKinlay (1988) argued that the variance of the increments of a random walk is linear in the sampling interval. Exploiting this property, they suggested that, when carrying out the analysis of a time series with a one–day lag, the researcher computes two variances:

1) Variance of series rt , and

2) Variance of series rt + rt+1.

It is interesting to observe that the individual variances of series rt and series rt+1 would be the same when the time series consists of a large number of values. If series rt and rt+1 are

independent as assumed under random walk hypothesis, the co–variance between series

rt and rt+1 would be zero. In other words, the variance of series rt + rt+1 would be

simply two times the variance of series rt (or rt+1), when rt and rt+1 are independent.

In that event, the ratio:

VR = Var(rt + rt+1) / [2 × Var(rt)] (3-27)

If the ratio is less than 1 or more than 1, it implies a case of dependence between the series rt and rt+1, because the covariance between the two series would be either negative or positive. Hence, when the variance ratio defined above approaches 1, it implies a weakly efficient stock market; otherwise, the market is inefficient even in the weak form.

The above analysis also holds good for determining the interrelationship of a time series with multiple lags. For instance, consider a series of daily data. The variance estimated from three–day returns should be three times as large as the variance estimated from daily returns when the RWH holds good.

In general, the variance of the q–period return should be q times as large as the variance of the one–period return. Hence,


VR(q) = Var[rt(q)] / [q × Var(rt)] = 1 + 2 Σk=1…q−1 (1 − k/q) ρ(k) (3-28)

Where,

rt(q) = rt + rt−1 + … + rt−q+1, and ρ(k) is the kth order autocorrelation coefficient of rt.

When the RWH holds good, VR(q) = 1 and the term 2 Σk=1…q−1 (1 − k/q) ρ(k) reduces to zero. When VR(q) is not equal to one, this term assumes a negative or positive value, depending upon the aggregate interrelationship among the returns at multiple lags, and the RWH is rejected.

The statistical significance of the variance ratio is examined by calculating two different

values of Z–statistic depending upon the following assumptions:

1) Homoscedasticity, where all of the individual observations in the time series

would be random values with an equal amount of variance.

2) Heteroscedasticity, where each of the individual observations in the time

series would be random values with differing amounts of variance.

1) Under Homoscedasticity:

Z(q) = [VR(q) − 1] / Φ(q)^(1/2) (3-29)

Where,

Φ(q) = 2(2q − 1)(q − 1) / [3q·nq] (3-30)

Here,

nq is the number of observations, and Φ(q) is the asymptotic variance of the variance ratio.

2) Under Heteroscedasticity:


Z*(q) = [VR(q) − 1] / Φ*(q)^(1/2) ~ 𝒩(0, 1) (3-31)

Where, the standard error term is:

Φ*(q) = Σk=1…q−1 [2(q − k)/q]² δ(k) (3-32)

and,

δ(k) = { nq Σj=k+1…nq (ρj − ρj−1 − μ̂)² (ρj−k − ρj−k−1 − μ̂)² } / { [Σj=1…nq (ρj − ρj−1 − μ̂)²]² } (3-33)

Where,

δ(k) is the heteroscedasticity–consistent estimator, ρj is the (log) price of the security at time j, and μ̂ is the average return.

When Z(q) or Z*(q) is negative, it is a case of negative correlation, and when it is positive, it is a case of positive correlation.

Whether it is Z(q) or Z*(q), we examine the statistical significance of the variance ratio by finding out whether its absolute value is greater than or less than the critical value for a given significance level. For instance, when the given significance level is 5%, we find out whether the absolute value is less than or greater than 1.96. If |Z(q)| or |Z*(q)| < 1.96, it implies that VR(q) = 1 and hence we accept the RWH. On the other hand, if |Z(q)| or |Z*(q)| > 1.96, it implies that VR(q) ≠ 1 and hence we reject the RWH.
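The variance ratio and its homoscedastic z-statistic (equations 3-28 to 3-30) can be sketched with overlapping q-period sums; the data below are simulated and the function name is illustrative:

```python
import numpy as np

def variance_ratio(r, q):
    """VR(q) and the homoscedastic z-statistic Z(q) (eqs. 3-28 to 3-30)."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    var1 = np.var(r, ddof=1)
    # Overlapping q-period returns r_t(q) = r_t + ... + r_{t-q+1}
    rq = np.convolve(r, np.ones(q), mode="valid")
    varq = np.var(rq, ddof=1)
    vr = varq / (q * var1)
    phi = 2 * (2 * q - 1) * (q - 1) / (3 * q * n)   # asymptotic variance (eq. 3-30)
    z = (vr - 1) / np.sqrt(phi)
    return vr, z

rng = np.random.default_rng(3)
iid = rng.standard_normal(1000)        # IID increments: VR(q) should be near 1

ar = np.empty(1000)                    # positively autocorrelated returns
ar[0] = iid[0]
for t in range(1, 1000):
    ar[t] = 0.5 * ar[t - 1] + rng.standard_normal()

vr_iid, z_iid = variance_ratio(iid, q=2)
vr_ar, z_ar = variance_ratio(ar, q=2)
print(vr_iid, z_iid, vr_ar, z_ar)
```

For the IID series VR(2) is close to 1 and |Z| stays below 1.96 (RWH retained); for the AR(1) series VR(2) is near 1 + ρ(1) ≈ 1.5 and Z is far above 1.96 (RWH rejected).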

3.3.2 Methodology of Semi–Strong Form Efficiency

According to the semi–strong form of the efficient market hypothesis, security prices reflect all publicly available information. In this state, the market reflects even such information as the announcement of a firm's most recent dividend forecast, and the corresponding adjustments will already have taken place in security prices. In a semi–strong form efficient market, investors will find it impossible to earn returns on portfolios based on publicly available information in excess of the return commensurate with the risk. Many empirical


studies have been made to test the semi–strong form of the Efficient Market Hypothesis. In a semi–strong form market, any new announcement concerning a company would bring an immediate price reaction. This reaction could even occur prior to the announcement, caused by additional information not anticipated by, or not yet disclosed to, the stock exchange participants. Under the semi–strong form of the efficient market hypothesis, any change in the price of the securities is adjusted immediately by the market participants, and in this way the participants remove any possibility of abnormal returns in the future.

The Financial Accounting Standards Board (FASB) and the Securities and Exchange Commission strive to set reporting regulations so that financial statements and related

information releases are informative about the value of the firm. In setting standards, the

information content of the financial disclosures is of interest. Event studies provide an

ideal tool for examining the information content of the disclosures.

Event study methodology was pioneered by Fama et al. (1969) and since then has

become the preferred method to measure the security price reaction to a large variety of

events. These events can be macroeconomic events, such as changes in interest rates,

unemployment rates, and inflation, or corporate events such as M&A, earnings, and

directors’ dealings announcements.

3.3.2.1 Event Study Methodology

In this section, the event study methodology is described for a particular type of disclosure, the quarterly earnings announcement, and the objective is to investigate its information content. In other words, the goal is to see whether there is any relationship between the observed change in the market value of the company and the released information.

The event study is an important research tool in economics and finance. This tool is

widely followed in capital market arena to assess the effect of an event on stock prices.

So an event study is a statistical method in order to measure the impact of a specific event

on the value of a firm. The event study methodology is designed to investigate the effect


of an event on a specific dependent variable. A commonly used dependent variable in event studies is the stock price of the company. Such an event study may be defined as "a study of the changes in stock price beyond expectation (abnormal returns) over a period".

The event study methodology seeks to determine whether there is an abnormal stock

price effect associated with an event. From this, the researcher can infer the significance

of the event. The key assumption of the event study methodology is that the market must

be efficient.

MacKinlay (1997) outlined an event study methodology involving the following steps:

a) Identification of the event of interest,

b) Definition of the event window,

c) Selection of the sample set of firms to be included in the analysis,

d) Prediction of a "normal" return during the event window in the absence of the event,

e) Estimation of the "abnormal" return within the event window, where the abnormal return is defined as the difference between the actual return and the return predicted in the absence of the event, and

f) Testing whether the abnormal return is statistically different from zero.

Furthermore, MacKinlay (1997) noted that event studies have been used in a wide range of settings, including accounting and finance. As an example in finance, researchers have used event studies to examine the market effect of mergers and acquisitions. Additional examples in accounting include whether accounting disclosures contain information, based on whether the stock market reacts to the disclosure of information events. In general, in virtually any discipline, the basic methodology remains the same: there is an event and a test to determine whether the stock market reacts to the event. However, the motivation and the theories used to generate expectations are likely to differ across disciplines.

The types of events and their motivations for information systems usually differ from

those events used in accounting and finance. The events related to information systems

typically relate to the adoption, implementation, purchase, or use of information systems


technology. Generally, these event studies focus on all of the costs and benefits the

technologies offer.

Hence, event studies generally follow a certain structure of analysis. First, the event of

interest needs to be defined. Second, the time period during which the reaction of stock prices to the event is measured (the event window) is defined. In short–horizon event studies, the event

window typically includes the day of the announcement and several post–event days.

Third, sample selection criteria are determined. This includes the sample period, any

restrictions due to data availability, and other filters to ensure the integrity of the data set.

Fourth, abnormal returns for the event window are computed. Other common terms for abnormal returns are excess returns, prediction errors, or residuals. Abnormal returns are

given by the actual ex–post return of a security minus the return that would have been

expected if the event had not occurred. Thus, a security pricing model for expected

returns has to be chosen. Popular choices are the market model and the constant mean

return model. Depending on the normal performance model, the estimation window has

to be specified. The estimation window is used to estimate the parameters of the

securities return model.

Figure 3.3.2.1.1: Exemplary Event Study Time Line

Typically, the estimation period falls before the event date and does not include the event

window in order to prevent the event from influencing the estimation of the asset return

model. Figure 3.3.2.1.1 demonstrates the typical time sequence of event studies. Fifth, a testing framework for the estimated excess returns has to be developed. This involves the

formulation of a null hypothesis, the aggregation of abnormal returns, and the

specification of appropriate test statistics. Finally, the results are presented, analyzed, and

interpreted.
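The steps above can be sketched with the market model as the normal-performance model: alpha and beta are fitted on the estimation window, and abnormal returns in the event window are actual minus predicted returns. All data, window lengths, and the injected event effect below are simulated assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated market and stock returns: 200-day estimation window,
# then an 11-day event window (-5..+5) with a +3% shock on the event day
n_est, n_evt = 200, 11
rm = rng.normal(0.0005, 0.01, n_est + n_evt)             # market returns
alpha, beta = 0.0002, 1.2
ri = alpha + beta * rm + rng.normal(0, 0.005, n_est + n_evt)
ri[n_est + 5] += 0.03                                     # event-day abnormal return

# Estimate the market model on the estimation window only
X = np.column_stack([np.ones(n_est), rm[:n_est]])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, ri[:n_est], rcond=None)
resid_sd = np.std(ri[:n_est] - (a_hat + b_hat * rm[:n_est]), ddof=2)

# Abnormal returns and cumulative abnormal return over the event window
ar = ri[n_est:] - (a_hat + b_hat * rm[n_est:])
car = ar.cumsum()
t_stat = ar[5] / resid_sd                                 # event-day t-statistic
print(round(ar[5], 4), round(car[-1], 4), round(t_stat, 2))
```

The injected 3% shock shows up as a large event-day abnormal return with a t-statistic well above 2, while the remaining event-window abnormal returns hover near zero, which is the pattern an event study looks for.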


3.3.2.2 Event Study Structure

Dickgieber (2010) also explained the event study methodology as follows. Given an efficient market, the effects of the event would be reflected immediately in the stock prices of the company. That would allow the researcher to observe the economic effect of the event over a relatively short period. In his study, he also considered the structure of an event study, shown in Figure 3.3.2.2.1 as follows:

Figure 3.3.2.2.1: Event Study Structure

3.3.2.2.1 Cross–Sectional Regression Analysis

In many event studies, estimating and testing abnormal returns are only the first steps.

Usually, it is not only of interest whether an event has a significant effect on stock prices,

but also how excess returns are related to firm characteristics. This allows for a test of


related economic hypotheses. For this purpose, early event studies mainly relied on

subsamples. Events were sorted into portfolios according to the value of the firm–specific

variable of interest. For these subsamples, CAARs were calculated and compared. While

this approach is still employed today, cross–sectional regressions with excess returns as

dependent variables have become more common. While coefficients can be estimated by

the OLS method, standard errors require more attention. If the latter are uncorrelated in the cross–section and homoscedastic, the standard OLS errors can be used to draw statistical inferences. If this is not the case, heteroscedasticity–consistent standard errors,

as proposed by White (1980), should be computed. In addition, biases in the cross–

sectional regression analysis may arise if a relationship between firm characteristics and

the extent to which the event is anticipated exists. Karafiath (1994), however, found that a

standard OLS estimation was unbiased if the sample size exceeded 50 and certain

conditions were met (Greenwald, 1983).

3.3.2.2.2 Asset Pricing Models

As noted above, asset pricing models, or normal performance models, are essential to any

event study, since they allow for the calculation of abnormal returns caused by the event

of interest. Several asset pricing models have been developed. The different models can generally be grouped into economic and statistical models. This section provides an

overview of the available models and their most important properties.

3.3.2.2.2.1 Economic Models

Economic models are based on assumptions regarding the behavior of investors. Like

statistical models, however, economic models rely on statistical properties to be applied

in practice. As a result, economic models can be viewed as statistical models with

additional economic restrictions imposed on them. The most important economic models

are the Capital Asset Pricing Model (CAPM), the Arbitrage Pricing Theory (APT), and

Multifactor Pricing Models.


3.3.2.2.2.1.1 Capital Asset Pricing Model (CAPM)

The CAPM was developed by Sharpe (1964), Lintner (1965), and Mossin (1966), and

"marked the birth of asset pricing theory." The CAPM was originally built on the works of Markowitz (1952). Black (1972) derived a more general CAPM that did

not require risk–free borrowing and lending. In addition, Merton (1973) developed an

inter–temporal, and Lucas (1978) and Breeden (1979) a consumption–based version of

the CAPM. Today, the CAPM remains one of the most widely used tools in practice and

is the cornerstone of most finance courses. As with any other asset pricing model, the most useful property of the CAPM is that it enables economists to quantify risk and the required compensation for bearing it. The main implication of the CAPM is that the expected return of a security is linearly related to the covariance of its returns with the return of the market portfolio. Specifically, if the existence of risk-free lending and borrowing is assumed, the expected return of a security takes the form:

E(ri) = rf + βi(rm − rf) (3-34)

βi = cov(Ri, Rm) / var(Rm) (3-35)

Where,

E(ri): Expected return of security i,

rf: Risk-free rate,

βi: Beta coefficient of security i,

rm: Market return, and

(rm − rf): Market risk premium.

Thus, the expected return of a security depends primarily on its beta factor, which measures the security's systematic risk. Systematic risk is the part of the security's return variance that cannot be eliminated by portfolio diversification. As a result, the CAPM establishes a relationship that rewards only the bearing of systematic, not idiosyncratic, risk.
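As an illustrative sketch (not drawn from the thesis data), the computations in Eqs. (3-34) and (3-35) can be written in Python; the function names and input figures below are hypothetical.

```python
def capm_beta(security_returns, market_returns):
    """Eq. (3-35): beta as cov(Ri, Rm) / var(Rm), using population moments."""
    n = len(market_returns)
    mean_s = sum(security_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(security_returns, market_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var_m

def capm_expected_return(rf, beta, rm):
    """Eq. (3-34): E(ri) = rf + beta * (rm - rf)."""
    return rf + beta * (rm - rf)

# Hypothetical inputs: 5% risk-free rate, 10% market return, beta of 1.2.
expected = capm_expected_return(0.05, 1.2, 0.10)
```

A security whose returns move exactly twice as much as the market would obtain a beta of two from `capm_beta`, and its required return would rise accordingly.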

3.3.2.2.2.1.2 Arbitrage Pricing Theory (APT)

Economists have pointed out several shortcomings of the CAPM. In general, the cross-section of average stock returns does not seem to be explained very well by the CAPM's beta factor alone. As a result, researchers have looked for additional risk factors suited to explain expected returns. Ross (1976) developed the Arbitrage Pricing Theory

(APT), which allowed for an unlimited number of factors. The APT does not, however,

specify which kind of factors these should be. Chen, Roll, and Ross (1986) identified four

macroeconomic factors as significant in the return–generating process: (a) changes in the

risk premium, (b) surprises in the yield curve, (c) unanticipated changes in inflation, and

(d) changes in expected inflation. Additional macroeconomic factors may be short–term

interest rates, commodity prices, and diversified market indices such as the S&P 500.

Similar to the CAPM, the multiple risk factors are linearly related to the expected return:

E(ri) = λi,0 + λi,1F1 + λi,2F2 + … + λi,KFK (3-36)

Where,

E (ri): Expected return of security i,

λi,K : Factor sensitivity of security i to factor k,

Fk : Realization of factor k, and

K: Number of risk factors.
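The linear factor relation in Eq. (3-36) amounts to a weighted sum over the K factor realizations, which can be sketched as follows; the sensitivities and factor values below are hypothetical.

```python
def apt_expected_return(lambda0, sensitivities, factors):
    """Eq. (3-36): E(ri) = lambda0 + sum over k of lambda_k * F_k."""
    return lambda0 + sum(l * f for l, f in zip(sensitivities, factors))

# Two hypothetical factors, e.g. a yield-curve surprise and an inflation surprise.
apt_return = apt_expected_return(0.02, [1.0, 0.5], [0.03, 0.02])
```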

The underlying assumptions of the APT are that markets are competitive and frictionless,

so that no arbitrage opportunities exist. Like the CAPM, the APT argues that only

systematic risk, and not total risk, matters. Unlike the CAPM, however, the APT does not

require that all investors behave alike. It also does not claim that the capital–weighted

market portfolio is the only risky asset that will be held. In addition, the APT does not

require the identification of the market portfolio, which can be an extremely difficult

undertaking. While one of the APT's advantages compared to the CAPM is that the APT is based on fewer assumptions, it has been questioned whether the APT is testable at all.

Furthermore, the empirical relevance of the APT has been challenged. In the German

stock market, Steiner and Nowak (1994) found that the explanatory power of the APT did

not dominate that of the CAPM.

3.3.2.2.2.2 Statistical Models

In contrast to economic models, statistical models do not rely on assumptions regarding

the behavior of investors. Instead, statistical models are based on statistical assumptions

concerning the behavior of asset returns. In particular, statistical models assume that

financial returns are distributed jointly multivariate normal, as well as independently and

identically throughout time. While this assumption is often violated, especially in the case

of daily returns, MacKinlay (1997) and Brown and Warner (1985) argued that this did not lead to model misspecification in practice. The most important statistical models are

the mean adjusted return model, the market adjusted return model, the market model, and

multifactor pricing models.

3.3.2.2.2.2.1 Mean Adjusted Return Model

The mean adjusted return model is perhaps the simplest statistical model. It estimates expected returns from past returns and does not adjust for any other risk factors, market-related or otherwise. In particular, the expected return for any day of the event window is the mean return over the estimation period, and is given by:

E(Ri,t) = (1/T) Σ(τ=1 to T) Ri,τ (3-37)

Where,

E(Ri,t): Expected return of security i for time t, and

T: Number of time periods of the estimation window.

Estimated abnormal returns are given by:

ARi,t = Ri,t − (1/T) Σ(τ=1 to T) Ri,τ (3-38)


Hence, for any given security i, the expected return is constant and does not change over time. Assuming that the security's beta factor and the efficient frontier are constant over time, the mean adjusted return model is consistent with the CAPM, which would also predict a constant return under these assumptions. For short-term event studies using daily financial data, nominal returns are generally used. When event studies are based on monthly returns, the model can also be computed with real returns that have been adjusted for the risk-free rate. Although basic, the model often performs as well as more sophisticated models, since the advantage of the latter depends on how much they reduce the variance of abnormal returns.
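Eqs. (3-37) and (3-38) reduce to a single subtraction, sketched below with hypothetical estimation-window returns.

```python
def mean_adjusted_abnormal_return(event_return, estimation_returns):
    """Eqs. (3-37)-(3-38): abnormal return as the event-day return minus
    the mean return over the estimation window."""
    expected = sum(estimation_returns) / len(estimation_returns)
    return event_return - expected

# Hypothetical estimation-window returns and a 5% event-day return.
ar = mean_adjusted_abnormal_return(0.05, [0.01, 0.02, 0.03])
```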

3.3.2.2.2.2.2 Market Adjusted Return Model

The market adjusted return model equates the ex–ante expected return of securities with

the return of the market portfolio. In contrast to the mean adjusted return model, the

market adjusted return model thus allows for time–varying expected returns, which are

given by:

E(Ri,t) = Rm,t (3-39)

Where,

Rm,t denotes the period-t return of the market index.

Thus, the estimated expected return is identical across securities. Estimated abnormal

returns are given by:

ARi,t = Ri,t − Rm,t (3-40)

If all securities have the same beta factor of one, the market adjusted return model is consistent with the CAPM. The model can also be viewed as a special case of the market model with the estimation parameters αi and βi constrained to zero and one, respectively. While the market adjusted return model adjusts for market risk, it does not incorporate any firm-specific determinants, which can lead to biases. An advantage of the model, however, is that it does not require any estimation period or historic return data. MacKinlay (1997) pointed out that this property is useful in event studies related to initial public offerings.
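As a minimal sketch of Eq. (3-40), with hypothetical return series:

```python
def market_adjusted_abnormal_returns(stock_returns, market_returns):
    """Eq. (3-40): ARi,t = Ri,t - Rm,t for each period t. Equivalent to the
    market model with alpha = 0 and beta = 1, so no estimation window is
    required."""
    return [ri - rm for ri, rm in zip(stock_returns, market_returns)]

# Hypothetical two-period stock and market returns.
ars = market_adjusted_abnormal_returns([0.05, 0.01], [0.02, 0.03])
```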

3.3.2.2.2.2.3 Market Model

The market model has its origins in the single index model developed by Sharpe (1963)

and assumes a linear relationship between a security’s return and the market return. For

any security i, the market model equals:

Ri,t = αi + βiRm,t + εi,t (3-41)

E(εi,t) = 0

var(εi,t) = σεi²

Where,

Ri,t : Period–t return of security i,

αi : Constant component of the return of security i,

βi : Beta factor measuring the systematic risk of security i,

Rm,t: Period-t return of the market portfolio,

εi,t: Period-t disturbance term of security i.

Thus, the return-generating process for each security i consists of a systematic as well as an unsystematic component. The beta factor captures the sensitivity of the return of security i to the market.

The residual return εi,t is entirely firm-specific and has an expected value of zero.

Additionally, the disturbance terms are uncorrelated across securities and with the market return. Therefore, the firm-specific portion of the return variance is fully diversifiable. To obtain the market model parameters αi and βi, an ordinary least squares regression is run. In particular, the security returns Ri,t are regressed on the market return Rm,t over the estimation period ... . Given the market model parameters, the ex-post expected returns can be calculated as below:

E (Ri,t) = αi + βiRm ,t (3-42)

Abnormal returns are given by:

ARi,t = Ri,t − (αi + βiRm,t) (3-43)

The market model represents a potential improvement over other models, since the stock return variance is attributed to market as well as firm-specific factors. The relative gain in detecting abnormal returns depends upon the reduction in the variance of abnormal returns. This, in turn, depends upon the goodness of fit of the market model regression, as measured by R²:

R² = βi²σm² / σi² (3-44)

Where,

σm²: Variance of the market return Rm,t, and

σi²: Variance of the return of security i, Ri,t.

The larger the value of R², the larger the portion of return variance that is due to systematic risk, and the larger the gain from employing the market model. While many assumptions of the market model may not hold in practice, this model is a common choice of asset pricing model in event studies. MacKinlay (1997) pointed out that the use of the CAPM in event studies has almost ceased because of the unrealistic restrictions it imposes on the market model. In addition, the extra factors of the APT offer few benefits, since most of the variance is explained by the market factor. As a result, the market model has become the most common choice of normal performance model, alongside the constant mean return model.
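The estimation in Eqs. (3-41) to (3-43) can be sketched as a simple OLS regression; the return series below are hypothetical, standing in for the weekly returns over the estimation period.

```python
def estimate_market_model(stock_returns, market_returns):
    """OLS estimates of alpha and beta for the market model, Eq. (3-41)."""
    n = len(market_returns)
    mean_s = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    beta = (sum((s - mean_s) * (m - mean_m)
                for s, m in zip(stock_returns, market_returns))
            / sum((m - mean_m) ** 2 for m in market_returns))
    alpha = mean_s - beta * mean_m
    return alpha, beta

def abnormal_return(ri, rm, alpha, beta):
    """Eq. (3-43): ARi,t = Ri,t - (alpha + beta * Rm,t)."""
    return ri - (alpha + beta * rm)

# Hypothetical data: the stock return is exactly 0.01 + 2 * market return,
# so the regression recovers alpha = 0.01 and beta = 2.
market = [0.01, 0.02, 0.03, 0.04]
stock = [0.01 + 2 * m for m in market]
alpha, beta = estimate_market_model(stock, market)
```

Any realized return above `alpha + beta * rm` in the event window then shows up as a positive abnormal return.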


3.3.2.2.2.2.4 Multifactor Pricing Models

Other statistical models are predominantly factor models. The motivation behind factor models is to reduce the variance of abnormal returns by explaining more of the variation in expected returns. The market model, for example, is a single-factor model based on the market return. Multifactor models commonly add factors for industry classification. In addition, Fama and French (1993) developed a three-factor model that adds firm-specific risk factors to the market beta. Thus, in contrast to the APT, the Fama-French three-factor model tries to enhance the CAPM by accounting for company-specific risk factors (firm size and value) rather than macroeconomic ones. The SMB factor accounts for firm size and stands for small minus big. It measures the additional return investors have

historically captured by investing in small companies. In practice, the factor is calculated

as the mean return of the smallest 30% of all stocks minus the average return of the

largest 30% of stocks. The HML factor stands for high minus low and has been

constructed to measure the historical value premium associated with investing in value

stocks, as proxied by a low market-to-book value ratio. It is computed as the average

return of the 50% of stocks with the lowest market–to–book ratio minus the mean return

of the 50% of stocks with the highest market–to–book ratio. Thus, the Fama–French

three–factor model is given by

E(ri) = rf + βi(rm − rf) + si·SMB + hi·HML (3-45)

Where,

E (ri): Expected return of security i,

rf: Risk–free rate,

βi: Beta coefficient of security i,

rm : Market return,

(rm − rf): Market risk premium,

si : Exposure to SMB factor of security i,


hi : Exposure to HML factors of security i.

Fama and French (1993) argued that size and the market–to–book ratio performed as

proxies for the role of leverage and financial distress in companies and, thus, were a

better measure of risk than the beta factor alone. While the interpretation of SMB

as a risk factor was appealing, since small firms might be more sensitive to outside

influences because of their less–diversified operations, the inclusion of the HML factor

was controversial. In general, the discussion as to why the two outlined factors in

particular accurately capture fundamental risk is still ongoing. In addition, many

economists believe that the development of the three–factor model has been primarily

motivated to uphold the EMH. Not surprisingly, Fama and French (1996) found that the three-factor model explains the anomalies associated with earnings/price, cash flow/price, sales growth, and long-term past returns. For Germany, Ziegler et al. (2007) found that the three-factor model has greater explanatory power with respect to expected returns than the one-factor CAPM.
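Eq. (3-45) extends the CAPM sum by the two factor premia; a sketch with hypothetical loadings and premia:

```python
def ff3_expected_return(rf, beta, rm, s_i, smb, h_i, hml):
    """Eq. (3-45): E(ri) = rf + beta*(rm - rf) + si*SMB + hi*HML."""
    return rf + beta * (rm - rf) + s_i * smb + h_i * hml

# Hypothetical: 3% risk-free rate, 8% market return, modest SMB/HML exposures.
ff3 = ff3_expected_return(0.03, 1.0, 0.08, 0.5, 0.02, 0.3, 0.04)
```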

3.3.2.3 Corporate Earning Announcements Methodology

When any firm releases its earnings report, analysts compare the reported actual earnings

to their pre–determined estimates for the quarter. The estimates are based on the firm’s

previous performance, recent good or bad news about the firm and any outside effects

(i.e., economic conditions) that may affect the firm’s performance. Earnings reports

higher or lower than estimates constitute earnings surprises which can significantly affect

the stock prices. A positive earnings surprise announcement typically drives up the firm's

stock price by sending a positive signal to investors about the firm’s future cash flows.

Conversely, a negative earnings surprise announcement exerts downward pressure on

stock prices sending a negative signal about the firm’s future. This section describes the

tests applied in this study to investigate whether an investor can achieve an above normal

return by acting on public announcement of earnings surprises.

The researcher has used a two-stage approach in order to test the stock price response to quarterly earnings announcements. In the first stage, two naïve expectation models are employed to measure the estimated earnings per share (EPS), and parameters such as beta and alpha are estimated from the ex-post returns on stocks with the help of the market model. In the second stage, these estimated parameters are utilised to measure abnormal returns around the announcement date, which are then averaged and cumulated across securities and over time.

For this purpose, two models have been used to compute the estimated EPS: the martingale (Model 1) and the martingale with non-constant drift (Model 2). If Ej,q denotes the reported EPS for firm j in quarter q and Êj,q denotes the expected EPS for firm j in quarter q, then Model 1 forecasts the expected EPS for the qth quarter as equal to the actual EPS of the same quarter of the previous year.

a) Model 1: Martingale

Êj,q = Ej,q−4 (3-46)

Though the simple martingale is an inadequate characterization of the time-series behavior of quarterly earnings, it involves a comparison that closely resembles the actual presentation of quarterly earnings announcements published in certain financial journals.

Naïve expectation Model 2 forecasts the change in the quarter’s earnings from the same

quarter of the previous year as the average change in the prior three quarters' earnings from

the corresponding quarters of the previous year.

b) Model 2: Martingale with Non–constant Drift

Êj,q = Ej,q−4 + 1/3 [(Ej,q−1 − Ej,q−5) + (Ej,q−2 − Ej,q−6) + (Ej,q−3 − Ej,q−7)] (3-47)
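The two naive expectation models amount to simple index arithmetic on the quarterly EPS history; a sketch with a hypothetical EPS series:

```python
def martingale_eps(eps, q):
    """Model 1, Eq. (3-46): expected EPS for quarter q is the reported EPS
    of the same quarter of the previous year (index q - 4)."""
    return eps[q - 4]

def drift_martingale_eps(eps, q):
    """Model 2, Eq. (3-47): Model 1 plus the average year-over-year change
    of the prior three quarters."""
    drift = ((eps[q - 1] - eps[q - 5]) + (eps[q - 2] - eps[q - 6])
             + (eps[q - 3] - eps[q - 7])) / 3.0
    return eps[q - 4] + drift

# Hypothetical EPS history, chronological, one entry per quarter.
eps_history = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
```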

In this study, the method employed for the categorisation of companies into portfolios depends upon whether the reported quarterly EPS is greater than, equal to, or less than the expected quarterly EPS. Therefore, to examine the information contained in the reported quarterly EPS, three portfolios of companies are formed, named favorable, neutral and unfavorable, depending on whether Ej,q > Êj,q, Ej,q = Êj,q, or Ej,q < Êj,q, respectively.


To examine the information contained in the magnitude of an unanticipated earnings change, or earnings surprise, comparisons of reported and expected EPS are classified using three schemes. The first classification, the Absolute Residual Scheme (A), categorizes an earnings report as favorable if Ej,q > Êj,q, neutral if Ej,q = Êj,q, and unfavorable if Ej,q < Êj,q.

The second classification, the 20% Residual Scheme (B), categorizes an earnings report as favorable if Ej,q > 1.2 Êj,q, neutral if 1.2 Êj,q ≥ Ej,q ≥ 0.8 Êj,q, and unfavorable if Ej,q < 0.8 Êj,q.

The third classification, the 40% Residual Scheme (C), categorizes an earnings report as favorable if Ej,q > 1.4 Êj,q, neutral if 1.4 Êj,q ≥ Ej,q ≥ 0.6 Êj,q, and unfavorable if Ej,q < 0.6 Êj,q.
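The three schemes differ only in the width of the neutral band around the expected EPS, which suggests a single parameterized sketch; the function name is hypothetical, and the band logic assumes a positive expected EPS.

```python
def classify_earnings(reported, expected, band=0.0):
    """Residual classification of an earnings report: band=0.0 gives
    Scheme A, band=0.2 Scheme B, band=0.4 Scheme C. Assumes expected > 0."""
    if reported > (1 + band) * expected:
        return "favorable"
    if reported < (1 - band) * expected:
        return "unfavorable"
    return "neutral"
```

Under Scheme B, for instance, a reported EPS of 1.1 against an expected 1.0 is neutral, while under Scheme A it is favorable.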

Two earnings expectation models and three residual classifications result in six combinations (1A, 1B, 1C, 2A, 2B and 2C), where the numeral denotes the earnings expectation model and the letter denotes the residual classification. It is to be noted that if an earnings announcement date cannot be identified, if reported EPS is unavailable, or if there is missing data such that Êj,q cannot be calculated, that observation has been omitted.

Next, for each earnings observation, weekly rates of return rjw are observed for the

thirteen weeks prior to and thirteen weeks after the announcement week. For each

quarter, the week in which the quarterly earnings announcement is made is denoted as

week 0. Actual weekly returns are calculated from weekly closing prices of each stock

for –13 to +13 weeks corresponding to each earnings announcement by employing the

following formula:

Rj,t = (Pjt − Pjt−1) / Pjt−1 (3-48)

Where,

Rj,t is the return of the jth stock in the tth week;


Pjt is the closing price of the jth stock in the tth week; and

Pjt−1 is the closing price of the jth stock in the (t − 1)th week.

The weekly return on the Market Index (proxied by the S&P CNX Nifty Index) is

calculated by a similar formula:

Rm,t = (Pt − Pt−1) / Pt−1 (3-49)

Where,

Rm ,t is the actual return on the market index in the tth week;

Pt and Pt−1 are the Nifty closing values in the tth and (t − 1)th week.
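Eqs. (3-48) and (3-49) apply the same simple-return formula to stock and index closes alike; a sketch with hypothetical weekly closing prices:

```python
def weekly_returns(closes):
    """Eqs. (3-48)-(3-49): simple returns (Pt - Pt-1) / Pt-1 from a series
    of consecutive weekly closing prices."""
    return [(closes[t] - closes[t - 1]) / closes[t - 1]
            for t in range(1, len(closes))]

# Hypothetical closes: up 10% in the first week, down 10% in the second.
rets = weekly_returns([100.0, 110.0, 99.0])
```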

Expected Weekly Stock Returns for each earnings announcement over a period of (–13)

to (+13) are calculated using the market model:

E(rj,t) = αj + βjRm,t + ej,t (3-50)

Where,

E(rj,t) = expected return for stock j in week t;

αj and βj = parameters which are the alpha coefficient and beta coefficient of the jth security;

ej,t = error term with a zero mean and constant standard deviation during week t.

To calculate the expected weekly security returns, the corresponding weekly values of the parameters (αj and βj) are computed for each earnings announcement of a stock. This is done by regressing the weekly security rates of return on the weekly market rates of return for a period of 108 weeks (i.e., two years) prior to each announcement week for the particular security. Thus, corresponding to each given security there will be at most twenty-four (6 years × 4 quarters) estimates of α and β (some of the parameter estimates could not be obtained because of the unavailability of the weekly returns data


or the earnings announcement data). This method has the advantage of continuous

quarterly updating of the parameter estimates, using a substantially long period (108

weeks or two years) of weekly return data to calculate the α's and β's. The parameter

estimates of the jth stock for the six years of this study are presented in Table 3.2.1.1.

Using the earnings announcement–wise parameter estimates for each stock, the

corresponding expected weekly return of the stock is calculated using the following

market model for twenty six weeks, thirteen weeks on either side of the announcement

week:

r̂j,t = αj + βjRm,t (3-51)

Thereafter, the residual rate of return for security (j) in calendar week w is calculated as:

ujw = rjw − r̂jw (3-52)

These residuals are used to construct both Abnormal Performance Index (API) and

Cumulative Average Performance Index (CAPI) for the portfolios of stocks earlier

constructed on the basis of unanticipated favorable earnings reports, unanticipated neutral

earnings reports and unanticipated unfavorable earnings reports using both the martingale

and martingale with non–constant drift expectation models.

For each of the twenty-one sector portfolios of stocks (1A, 1B, 1C, 2A, 2B and 2C, each further categorized into favorable, neutral and unfavorable), the API and CAPI are calculated around the announcement week w as:

APIw = (1/N) Σ(j=1 to N) Π(t=−13 to w) (1 + ujt(i)); w = −13, …, +13 (3-53)

CAPIw = (1/N) Σ(j=1 to N) Σ(t=−13 to w) ujt(i); w = −13, …, +13 (3-54)

Where,

(i) is a superscript denoting a particular portfolio category,

(j) denotes a firm whose earnings announcement belongs to a given earnings portfolio category,

(N) is the total number of announcements in a given earnings portfolio category, and

APIw, for the portfolio of stocks in a given category, measures the average price adjustment, net of the effects of market movements, from thirteen weeks prior to the announcement to w weeks prior to or after the announcement.

Since the security's overall reaction to the event of a quarterly earnings announcement will not be captured instantaneously in the behaviour of the average abnormal return for one specific week, it is necessary to accumulate the abnormal returns over a longer period. Thus, CAPIw gives an idea of the average stock price behaviour over time, accumulated up to week w. Generally, if the market is efficient, the API will be around one while the CAPI will be close to zero.
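The two indices can be sketched as follows, compounding residuals for the API and summing them for the CAPI before averaging across announcements; the input series below are hypothetical.

```python
from math import prod

def api_capi(residuals):
    """Sketch of Eqs. (3-53)-(3-54) for one portfolio category: compound
    (1 + u) week by week per announcement for the API, sum the u's for the
    CAPI, then average across the N announcements. `residuals` holds one
    residual-return series per announcement (weeks -13 to +13)."""
    n = len(residuals)
    weeks = len(residuals[0])
    api, capi = [], []
    for w in range(weeks):
        api.append(sum(prod(1.0 + u for u in s[:w + 1]) for s in residuals) / n)
        capi.append(sum(sum(s[:w + 1]) for s in residuals) / n)
    return api, capi

# Two hypothetical announcements with two weeks of residuals each.
api_vals, capi_vals = api_capi([[0.1, 0.0], [0.3, 0.0]])
```

With zero residuals throughout, the API stays at one and the CAPI at zero, matching the efficient-market benchmark described above.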

Empirical results will be presented for both performance indices. The API and CAPI results will also be analyzed graphically. In the statistical testing, however, the CAPI results will be given more importance because of the parametric properties of that variable. The main reason for this is that the weekly (1 + ujw(i)) values are multiplied together in constructing the API, while the weekly ujw(i) values are summed in constructing the CAPI. If the ujw(i)'s are multivariate normally distributed, then the CAPI will also be normally distributed, but the API will not be. Under the assumptions that the weekly security rates of return are multivariate normal and serially independent, a parametric significance test will be applied to the estimated CAPI results. The related test statistic is as below:

t0 = CAPI / S.E.(CAPI) (3-55)

Where,

S.E.(CAPI) is the standard error of the Cumulative Abnormal Performance Index,

S.E.(CAPI) = σ/√n (3-56)


Where,

σ is the standard deviation of the CAPI, and

n is the number of firm-quarters.
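The test statistic of Eqs. (3-55) and (3-56) can be sketched as follows, using the sample standard deviation across hypothetical firm-quarter CAPI values.

```python
import math

def capi_t_statistic(capi_values):
    """Eqs. (3-55)-(3-56): t0 = mean(CAPI) / (sigma / sqrt(n)), where sigma
    is the sample standard deviation across the n firm-quarter CAPI values."""
    n = len(capi_values)
    mean = sum(capi_values) / n
    variance = sum((x - mean) ** 2 for x in capi_values) / (n - 1)
    return mean / (math.sqrt(variance) / math.sqrt(n))
```

A t0 far from zero would indicate that the cumulated abnormal performance differs significantly from the efficient-market benchmark of zero.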