



CLASSICAL AND BAYESIAN INFERENTIAL

PROCEDURES FOR SOME PROBABILISTIC

MODELS USEFUL IN RELIABILITY ANALYSIS

THESIS

SUBMITTED TO THE

KUMAUN UNIVERSITY, NAINITAL

BY

SANJAY KUMAR

FOR THE AWARD OF THE DEGREE OF

DOCTOR OF PHILOSOPHY

IN

STATISTICS

UNDER THE SUPERVISION OF

Dr. NEERAJ TIWARI

SENIOR LECTURER AND CAMPUS HEAD

DEPARTMENT OF STATISTICS

KUMAUN UNIVERSITY, S. S. J. CAMPUS, ALMORA-263601

UTTARAKHAND (INDIA)

2007


CERTIFICATE

This is to certify that the thesis entitled “CLASSICAL AND BAYESIAN

INFERENTIAL PROCEDURES FOR SOME PROBABILISTIC MODELS

USEFUL IN RELIABILITY ANALYSIS” submitted to the Kumaun University,

Nainital for the degree of DOCTOR OF PHILOSOPHY IN STATISTICS is a

record of bonafide work carried out by Mr. Sanjay Kumar, under my guidance and

supervision. I hereby certify that he has completed the research work for the full

period as required in the ordinance 6. He has put in the required attendance in the

Department and signed in the prescribed register during the period. I also certify

that no part of this thesis has been submitted for any other degree or diploma.

(Dr. Neeraj Tiwari)

Department of Statistics

Soban Singh Jeena Campus

Almora (Uttarakhand)


ACKNOWLEDGEMENT

I am very grateful to my supervisor Dr. Neeraj Tiwari, Senior Lecturer and

Campus head, Department of Statistics, Soban Singh Jeena Campus, Almora for

superbly guiding and constantly inspiring me to carry out my research work on

“CLASSICAL AND BAYESIAN INFERENTIAL PROCEDURES FOR SOME

PROBABILISTIC MODELS USEFUL IN RELIABILITY ANALYSIS”.

I am extremely grateful to Dr. Ajit Chaturvedi, Reader, Department of

Statistics, University of Delhi for his encouragement, constructive suggestions and

help in carrying out the work.

I would like to express my gratitude to all the staff members and research

scholars of Department of Statistics at S. S. J. Campus, Almora and D. S. B.

Campus, Nainital.

I am very fortunate in having constant moral support from my local

guardian Ms.Basanti Kandpal, C.S.Kankpal, Anju Kandpal, Ayush, Suyesh and

Shepher at Almora for their co-operation in carrying out the work.

I am very thankful to my friend Mr. Girish Chandra Kandpal, Assistant

Professor, Central Agricultural University, Gangtok for his constant support in

carrying out the work. I am also thankful to Mr. Lalit Mohan Joshi, Girja Shankar

Pandey, Pankaj Pandey, Hemant Joshi, Rakesh Pant, Negi Thakur , Himanshu

Tiwari, Charuda, Vinay Kumar Pandey, I.S.S., Kanishk Kant Shrivastav, I.S.S. and

Dr. Meholika Sah for their co-operation in completing my research work.

I convey my sincere gratitude to my colleagues Mr. B.N.Joshi, B.S. Burfal,

D.S.Rawat, R.C.Pant, K.D.Pandey, D.C.Upreti, J.N.Mishra, K.S.Kathait,

D.P.Singh, B.K.Singh, D.K.Joshi, N.D.Pandey and Anil Joshi at the Sub Regional

Office of Field Operations Division (FOD) of the National Sample Survey

Organization (NSSO) at Jakhan Devi, Almora, and sincere thanks go to the

probationers of the XXVIIIth batch of I.S.S. for their co-operation in completing

the work.

I find no words to express my feelings for the constant encouragement, blessings

and inspiration that I received from my father Mr. C.S.Pant, mother Basanti Pant,

uncle H.C.Pant, brother Y.K.Pant and lovely sister Meena Nainwal.

Help received from Central Science Library (CSL) of University of Delhi,

Banaras Hindu University (BHU) and Indian Statistical Institute (ISI), Delhi

Centre is also gratefully acknowledged. The computer assistance received from

Institute of Economic Growth (IEG), Delhi, Institute for Integrated Learning in

Management (IILM), Delhi and Computer Centre of Ministry of Statistics and

Programme Implementation, Govt. of India, New Delhi is also acknowledged.

Last but not least, I am much grateful to the almighty God for giving me the

talent. I express my deepest sense of gratitude to the People of ‘Dev Nagari’

Almora and my hometown Bhatronjkhan for their constant moral support.

Date: (Sanjay Kumar)


CONTENTS

CHAPTERS PAGE NO.

Chapter I: THE GROWTH AND DEVELOPMENT OF THE CLASSICAL

AND BAYESIAN INFERENTIAL PROCEDURES USEFUL IN

RELIABILITY ANALYSIS. 1-38

1.1 A Brief Historical Development of Reliability Analysis 1

1.2 Some Basic Definitions 4

1.3 Various Probabilistic Models Useful in Reliability Analysis 12

1.4 Classical Inferential Procedures in Reliability Analysis 26

1.5 Bayesian Inferential Procedures in Reliability Analysis 32

1.6 Contents of the Thesis 35

Chapter II: CLASSICAL AND BAYESIAN RELIABILITY

ESTIMATION OF BINOMIAL AND POISSON

DISTRIBUTIONS 39-61

2.1 Introduction 39

2.2 The Hazard-Rates of Binomial and Poisson Distributions and the

Set-up of the Estimation Problems 42

2.3 The UMVUE of the Powers of θ, R(t₀) and ‘P’ for Binomial

Distribution 46


2.4 The Bayesian Estimation of the Powers of θ, R(t₀) and ‘P’ for

Binomial Distribution 50

2.5 The UMVUE of the Powers of θ, R(t₀) and ‘P’ for Poisson

Distribution 55

2.6 The Bayesian Estimation of the Powers of θ, R(t₀) and ‘P’ for

Poisson Distribution 58

Chapter III: SEQUENTIAL POINT ESTIMATION PROCEDURES FOR

THE GENERALIZED LIFE DISTRIBUTIONS 62-79

3.1 Introduction 62

3.2 The Generalized Life Distributions 64

3.3 The Set-Up of the Estimation Problem 66

3.4 The Sequential Estimation Procedure and Second Order

Approximations 68

3.5 Condition for Negative Regret and an Improved Estimator

for θ 75

Chapter IV: BAYESIAN ESTIMATION PROCEDURES FOR A FAMILY

OF LIFE TIME DISTRIBUTIONS UNDER SELF AND GELF

80-115

4.1 Introduction 80

4.2 The Family of Lifetime Distributions 83

4.3 Bayes Estimators of Powers of θ, R(t) and ‘P’ Under SELF 84

4.4 Bayes Estimators of Powers of θ, R(t) and ‘P’ Under GELF 101

4.5 Bayes Estimators of the Parameters under SELF and GELF When

Parameters are Unknown 112


Chapter V: TWO STAGE POINT ESTIMATION PROCEDURE FOR THE

MEAN OF A NORMAL POPULATION WITH KNOWN

COEFFICIENT OF VARIATION 116-135

5.1 Introduction 116

5.2 Minimum Risk Point Estimation 120

5.3 Second Order Approximations to E(N) and R_g(c) 124

5.4 Bounded Risk Point Estimation 131

5.5 Second Order Approximations to E(N), E(N²) and R_N(A) 132

Chapter VI: SHRINKAGE–TYPE BAYES ESTIMATOR OF THE

PARAMETER OF A FAMILY OF LIFETIME

DISTRIBUTIONS 136-152

6.1 Introduction 136

6.2 The Set-up of the Estimation Problem 139

6.3 Shrinkage Estimator versus the UMVUE 141

6.4 Shrinkage Estimator versus the Minimum Mean Squared Error

Estimator 144

6.5 Lindley Approximation 147

Chapter VII: SUMMARY 153-157


CHAPTER-I

THE GROWTH AND DEVELOPMENT OF THE CLASSICAL

AND BAYESIAN INFERENTIAL PROCEDURES USEFUL IN

RELIABILITY ANALYSIS

1.1 A BRIEF HISTORICAL DEVELOPMENT OF RELIABILITY

ANALYSIS

The word ‘reliable’ means able to be trusted or to do what is expected. It is

used in various contexts in our daily life such as reliable friend, reliable news,

reliable service centre etc. The concept of reliability is as old as man himself. He

has long been concerned with the questions about the products he used, such as:

‘Will this function satisfactorily?’, ‘Will that last long?’, ‘Will this be more

reliable than other?’ etc.

The growth and development of reliability theory has strong links with

quality control and its development. Shewhart (1931) and Dodge and Romig

(1929) laid down the theoretical basis for utilizing statistical methods in quality

control of industrial products.


The science of reliability is new and still growing. During the First World

War, reliability was measured as the number of accidents per hour of the flight

time. During World War II, a group headed by rocket engineer Wernher Von

Braun was developing the V-1 missiles in Germany. After the war, it was reported

that the first ten missiles had all failed. In spite of high quality parts

and careful attention, all these missiles either exploded on the launching pad or

landed in the English Channel. A mathematician named Robert Lusser was called

in as a consultant for analyzing the missile system. He quickly derived the law for

the reliability of the components, which are connected in series. According to this

law, the reliability of a system consisting of a large number of components is

equal to the product of the reliabilities of the individual components that made up

the system. If the system comprises a large number of components connected in

series, the system reliability may be rather low. For example, if a system has three

components connected in series having reliabilities 0.4, 0.5 and 0.6, then the

reliability of the whole system is 0.12, which is even lower than 0.4 (the

reliability of the weakest component).
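As a quick illustration (a small Python sketch added here for exposition, not part of the original text), the series law is simply a product of the component reliabilities, using the hypothetical values from the example above:

```python
# Reliability of a series system: the product of the component reliabilities.
def series_reliability(component_reliabilities):
    result = 1.0
    for r in component_reliabilities:
        result *= r
    return result

# The example from the text: components with reliabilities 0.4, 0.5 and 0.6.
print(series_reliability([0.4, 0.5, 0.6]))  # 0.12, lower than the weakest component (0.4)
```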

The development continued thereafter throughout the whole world in the field

of engineering and technology. The estimation of reliability has become a demand

in the context of modern technology. With automation, the need for complicated

control and safety of systems became steadily more pressing for researchers. The

growing need of the concepts of reliability in the fields of Statistics, Mathematics

and engineering sciences has made it a prominent topic of research. Investigations


in reliability analysis began in the 1950’s and the growth of the theory began to

gain momentum after a decade. The first major committee on reliability was set up

by the Department of Defence, USA in 1950. Now almost all countries have

started to take keen interest in the applications of reliability principles.

The reliability analysis has a wide scope in the application areas. It is often

used in risk and safety analysis to evaluate the availability and applicability of

safety systems. Reliability analysis is also useful for environmental protection. Many

industries have realized that most of the pollution caused by their plants is due to

production irregularities, and controlling these irregularities is the most important factor in reducing pollution.

Many industries like aerospace, automobile and aviation have adopted reliability

principles in design process. Reliability has a wide field of application in quality

and reliability management also, since it is considered as a quality characteristic.

Nowadays, the applications of reliability principles have made remarkable

progress in many firms.

1.2 SOME BASIC DEFINITIONS

In this sub-section, we define some important terms used in reliability theory.

(a) Failures: Let us consider a system or a unit under some sort of stress. It may

be a steel beam under a load, a fuse inserted into a circuit or an electric device put

into service. The steel beam may crack or break, the fuse may burn out or the

electronic device may fail to function, all these undesirable states are defined as


“Failure”. A failure is the partial or total loss or change in the property of unit or

system in such a way that its functioning is partially or completely stopped. In the

reliability analysis, failure means that the system is incapable of performing its

required function.

The penalties of failures are paid by people in terms of money, time and

even life itself. A failure in a single unit of a system may cause the complete

breakdown in the industrial plant. A failure in the network of railways may cause

the delay of trains. A failure in the brake system of a metro train in Japan was the

cause of its collision with another train, resulting in the deaths of hundreds of

people. A failure that occurred in the Union Carbide plant at Bhopal resulted in a

leakage of methyl isocyanate (MIC) and became the cause of the death of

thousands of people. A failure in the Columbia space shuttle of NASA caused the

death of seven astronauts, including Kalpana Chawla of India.

(b) Lifetime/ Failure time/ Survival time: Lifetime is the time until the failure of

the unit occurs i.e., it is the length of the failure free time. Lifetime is often

expressed in hours of operation. Mathematically, a lifetime is simply a non-negative-valued

random variable. Survival time and failure time are alternative terms for

lifetime that are frequently used in reliability theory.

(c) Reliability and Reliability Function: The reliability of a unit or system is

expressed as the probability that it will perform satisfactorily at least for a stated


period under the given operational and environmental conditions, such as,

temperature, humidity, vibration etc. The reliability stresses four elements namely

probability, intended function, time and environmental conditions.

Mathematically, if X denotes the random variable (rv) representing the lifetime of

a unit, the reliability function at time t is defined as

R(t) = P(X > t)

     = 1 − P(X ≤ t)

     = 1 − F(t),

where F(t) is the distribution function of X at the specified time t, known as the failure

distribution and sometimes referred to as the unreliability function.

In terms of the pdf of X, say f(x), the reliability function at time t is

R(t) = ∫_t^∞ f(x) dx.

R is called the reliability or survival function and is always a function of time. Also,

R(0) = 1 and R(∞) = 0; hence R(t) is a non-increasing function taking values between 0 and 1.

(d) Mean Time to Failure (MTTF): The mean time to failure (MTTF) is the

expected time during which the component will perform successfully. It is given

as the mathematical expectation of the lifetime of the component. Thus

MTTF = E(X).

E(X) is also referred to as the expected life.


(e) Mean Time Between Failures (MTBF): If the system under consideration is

renewed through maintenance and repair, E(X) is known as the Mean Time Between

Failures (MTBF).
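The following Python sketch (illustrative only, with hypothetical simulated lifetimes) shows how R(t) and the MTTF are estimated empirically as the surviving fraction at time t and the sample mean, respectively:

```python
import numpy as np

rng = np.random.default_rng(0)
lifetimes = rng.exponential(scale=1000.0, size=10_000)  # hypothetical lifetimes (hours)

def empirical_reliability(lifetimes, t):
    """R(t) = P(X > t), estimated by the fraction of units still surviving at time t."""
    return np.mean(lifetimes > t)

mttf = lifetimes.mean()                           # MTTF = E(X), estimated by the sample mean
print(empirical_reliability(lifetimes, 500.0))    # close to exp(-500/1000) ≈ 0.61 here
print(mttf)                                       # close to 1000
```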

(f) Hazard-rate: Hazard rate is a measure of instantaneous speed of failures. It is

denoted by h(t) and given as

h(t) = lim_{Δt→0} {N_S(t) − N_S(t + Δt)} / {N_S(t) · Δt}

     = f(t)/R(t),

where N_S(t) denotes the number of components surviving after time t. If in any

situation, we have some qualitative information about hazard-rate, we can utilize

this information in selecting a lifetime model. Some frequently used hazard-rates

are constant hazard-rate, monotonic increasing hazard-rate, monotonic decreasing

hazard-rate etc. In actuarial science, it is known as ‘force of mortality’ and in

extreme-value theory, it is called the ‘intensity function’. In economics, the reciprocal

of hazard rate is called ‘Mill’s ratio’.

One of the most popular hazard-rates is the ‘bathtub-shaped’ hazard-rate, which

is appropriate when individuals in the population are followed right from actual

birth to death. This pattern is also observed in human populations. Up to the age of

about 10 years, a child has high probability of dying due to birth defects or infant


diseases, called the initial failure. Between the ages of 10 to 30 years, the deaths

are assumed due to accidents. This period exhibits a constant failure rate and such

failure is called chance failure. After the age of 30 years, the death rate increases

with age and such failure is termed as wear out failure.

A mathematical relationship between h(t) and R(t) can easily be derived:

R(t) = exp{−∫₀ᵗ h(u) du}.

One more useful function related to hazard-rate, called the ‘Cumulative

Hazard Function’ is defined as

H(t) = ∫₀ᵗ h(x) dx.

The probabilistic models with increasing hazard-rate are very common and

frequently used. Models with constant hazard-rates are important and simple also.

Models with decreasing hazard-rates are less common but are sometimes used.

Non-monotone hazard-rates other than the bathtub-shaped one are less common, but

possible.
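A small illustrative sketch (assuming a linear hazard-rate h(t) = ct, chosen only for demonstration) verifies the relation R(t) = exp{−∫₀ᵗ h(u) du} numerically:

```python
import numpy as np
from scipy.integrate import quad

c = 0.5
hazard = lambda u: c * u                      # a simple increasing (wear-out type) hazard-rate

def reliability_from_hazard(t):
    """R(t) = exp(-H(t)), with H(t) the cumulative hazard obtained by numerical integration."""
    H, _ = quad(hazard, 0.0, t)
    return np.exp(-H)

t = 2.0
print(reliability_from_hazard(t))             # numerical value of exp(-H(t))
print(np.exp(-c * t**2 / 2))                  # closed form for this hazard: both ≈ 0.3679
```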

(g) Censored sampling: In several situations, it is neither possible nor desirable to

record the failure time of all the items under test, since life testing experiments are

usually destructive in nature. Suppose n items are kept on a test and the

experiment is terminated when a pre-assigned number of items, say r (< n) have

failed. Such a sampling is known as ‘failure-censored sampling’ or ‘type II


censored sampling’. On the other hand, if the experiment is terminated after a

pre-assigned time, say t, the sampling is known as ‘time-censored sampling’ or

‘type I censored sampling’.

For type I censored sampling the length of the experiment is fixed, while

the number of observations obtained before time t is a random variable. With type

II censored sampling the number of observations is fixed but the length of the

experiment is random.
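The two censoring schemes can be illustrated with a short simulation sketch (hypothetical exponential test data; the sample size, test time and failure count below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
lifetimes = np.sort(rng.exponential(scale=100.0, size=n))  # hypothetical life-test data

# Type I (time-censored): stop the test at a pre-assigned time t0.
t0 = 120.0
type1_failures = lifetimes[lifetimes <= t0]    # the number of observed failures is random

# Type II (failure-censored): stop after a pre-assigned number of failures r < n.
r = 10
type2_failures = lifetimes[:r]                 # the test length (the r-th failure time) is random

print(len(type1_failures), t0)                 # random count, fixed duration
print(len(type2_failures), type2_failures[-1]) # fixed count, random duration
```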

(h) Loss Function: Suppose θ̂ is an estimator of a parameter θ; then a loss

function, denoted by L(θ̂, θ), is a real-valued function such that:

L(θ̂, θ) ≥ 0, for every θ̂,

L(θ̂, θ) = 0 when θ̂ = θ.

The expected value of loss function is known as risk function.

(i) Squared-Error Loss Function (SELF): If a parameter θ is estimated by θ̂, the

squared-error loss function (SELF) is given by

L(θ̂, θ) = (θ̂ − θ)².


(j) Absolute Error Loss Function: If a parameter θ is estimated by θ̂, the

absolute error loss function is given by

L(θ̂, θ) = |θ̂ − θ|.

(k) LINEX (Linear in Exponential) Loss Function: A symmetric loss function

assumes that overestimation and underestimation are equally serious. However, in

some estimation problems such an assumption may be inappropriate. In estimating

reliability, overestimation is usually more serious than underestimation. For

such situations, when overestimation and underestimation are not equally serious,

Varian (1975) suggested the LINEX loss function. If a parameter θ is

estimated by θ̂, the LINEX loss function is given by

L(θ̂, θ) = b[exp{a(θ̂ − θ)} − a(θ̂ − θ) − 1],  a ≠ 0, b > 0.

(l) General Entropy Loss Function (GELF): If a parameter θ is estimated by θ̂,

the general entropy loss function (GELF) is defined as

L(θ̂, θ) = (θ̂/θ)^a − a log(θ̂/θ) − 1,  a ≠ 0.
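For illustration, the loss functions above can be coded directly (an illustrative Python sketch; the constants a and b below are arbitrary choices, not values used in the thesis):

```python
import numpy as np

def self_loss(est, theta):
    """Squared-error loss (SELF): (est - theta)**2."""
    return (est - theta) ** 2

def absolute_loss(est, theta):
    return np.abs(est - theta)

def linex_loss(est, theta, a=1.0, b=1.0):
    """LINEX loss: b*[exp(a*(est - theta)) - a*(est - theta) - 1], a != 0, b > 0."""
    d = est - theta
    return b * (np.exp(a * d) - a * d - 1.0)

def gelf_loss(est, theta, a=1.0):
    """General entropy loss (GELF): (est/theta)**a - a*log(est/theta) - 1, a != 0."""
    ratio = est / theta
    return ratio ** a - a * np.log(ratio) - 1.0

# Asymmetry of LINEX with a > 0: overestimating by 0.5 costs more than underestimating by 0.5.
print(linex_loss(1.5, 1.0), linex_loss(0.5, 1.0))
```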

(m) Bayes Estimator: The Bayes estimator θ̂_B of a parameter θ is defined as the

estimator which minimizes the posterior expected loss

E_{θ|x}[L(θ̂_B, θ)] = ∫_Θ L(θ̂_B, θ) p(θ|x) dθ,

where Θ is the parameter space of θ.
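A minimal numerical sketch of this definition (hypothetical exponential data and an assumed inverse-gamma-type prior, chosen only for illustration): under SELF the Bayes estimator is the posterior mean, computed here on a parameter grid.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.exponential(scale=2.0, size=15)      # hypothetical lifetimes with unknown mean theta

theta = np.linspace(0.05, 10.0, 4000)           # grid over the parameter space
dtheta = theta[1] - theta[0]
likelihood = np.prod(np.exp(-data[:, None] / theta) / theta, axis=0)
prior = theta ** (-2.0) * np.exp(-1.0 / theta)  # assumed prior, for illustration only
posterior = likelihood * prior
posterior /= posterior.sum() * dtheta           # normalize numerically

bayes_self = np.sum(theta * posterior) * dtheta # posterior mean = Bayes estimator under SELF
print(bayes_self)
```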

(n) Shrinkage Estimator: Thompson (1968, a & b) introduced the concept of

the shrinkage estimator. The shrinkage estimation procedure is an interesting

procedure in which it is assumed that prior knowledge about the parameter is

available in the form of a prior point estimate or in the form of an interval

containing the parameter. Thompson suggested a shrinkage estimator of a parameter

obtained by giving suitable weights to the usual estimator and the prior point estimate.

(o) Minimum Mean Square Estimator (MMSE): Minimum mean square

estimator is an estimator that minimizes the mean square error of the estimator.

(p) Lindley Approximation: In many situations, Bayes estimators are obtained

as a ratio of two integral expressions and cannot be expressed in a closed form.

However, these estimators can be numerically approximated using complex

computer programming.


Lindley (1980) suggested an asymptotic approximation to the ratio of two

integrals. The basic idea behind it is to obtain the Taylor series expansion of the

functions involved in the integrals about the maximum likelihood estimator.

1.3 VARIOUS PROBABILISTIC MODELS USEFUL IN RELIABILITY

ANALYSIS

Some frequently used probability distributions in reliability analysis are

given below. For detailed description of the properties of these distributions, one

may refer to Johnson and Kotz (1969, 1970).

(a) Exponential Distribution: A distribution fundamental to reliability

analysis is the exponential distribution. It is widely used in lifetime models. Davis

(1952) observed that the exponential distribution appears to fit most of the data

related to reliability analysis. Epstein (1958) remarks that the exponential

distribution plays an important role in life testing experiments. The reason behind

the wide applicability of this distribution is the availability of simple statistical

methods and its suitability to represent the lifetime of many items.

The pdf of the one-parameter exponential distribution is

f(x; θ) = (1/θ) exp(−x/θ),  x > 0, θ > 0.

The reliability function and the hazard-rate for this distribution are given by

R(t) = exp(−t/θ)

and

h(t) = 1/θ,  where θ > 0.

For the two-parameter exponential distribution the pdf is given by

f(x; μ, θ) = (1/θ) exp{−(x − μ)/θ},  x ≥ μ, −∞ < μ < ∞, θ > 0,
           = 0,  otherwise.

This yields

R(t) = exp{−(t − μ)/θ},  t > μ,
     = 1,  t ≤ μ,

and

h(t) = 1/θ,  where θ > 0.

Hence, the hazard-rate for exponential distribution is constant. It is more

appropriate for a situation where the failure rate appears to be more or less

constant.
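An illustrative sketch (hypothetical θ, using scipy.stats.expon) evaluates R(t) and h(t) and confirms numerically that the hazard-rate is constant:

```python
import numpy as np
from scipy.stats import expon

theta = 500.0                                   # hypothetical mean life
t = np.array([100.0, 300.0, 900.0])

R = expon.sf(t, scale=theta)                    # R(t) = exp(-t/theta)
h = expon.pdf(t, scale=theta) / R               # hazard-rate h(t) = f(t)/R(t)

print(R)            # matches np.exp(-t/theta)
print(h)            # constant, equal to 1/theta = 0.002 at every t
```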

(b) Weibull Distribution: The Weibull Distribution is also a widely used

distribution in reliability analysis. This distribution has been named after the

Swedish scientist Weibull. Weibull (1951) showed that it is useful in describing


wear-out failures. It has also been used as a model for vacuum tubes, ball bearings,

tumors in human beings, etc.

The pdf of the two-parameter Weibull distribution is given as

f(x; θ, p) = (p/θ) x^(p−1) exp(−x^p/θ),  x, p, θ > 0,

where p is referred to as the shape parameter and θ as the scale parameter of the

distribution. It reduces to the exponential distribution for p = 1 and to the Rayleigh

distribution for p = 2.

The reliability function and the hazard-rate for this distribution are given by

R(t) = exp(−t^p/θ)

and

h(t) = (p/θ) t^(p−1),  where p, θ > 0.

The Weibull distribution has increasing failure rate (IFR) for p>1,

decreasing failure rate (DFR) for 0 < p< 1 and constant failure rate for p=1.
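The IFR/DFR behaviour can be checked numerically with a short sketch (hypothetical parameter values; note that scipy's weibull_min uses a scale parameter, so scale = θ^(1/p) under the parameterization above):

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_hazard(t, p, theta):
    """h(t) = f(t)/R(t) for the Weibull pdf (p/theta)*t**(p-1)*exp(-t**p/theta)."""
    scale = theta ** (1.0 / p)
    return weibull_min.pdf(t, p, scale=scale) / weibull_min.sf(t, p, scale=scale)

t = np.array([0.5, 1.0, 2.0, 4.0])
print(weibull_hazard(t, p=2.0, theta=1.0))   # increasing in t  (IFR, p > 1)
print(weibull_hazard(t, p=0.5, theta=1.0))   # decreasing in t  (DFR, p < 1)
print(weibull_hazard(t, p=1.0, theta=1.0))   # constant         (exponential case)
```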

(c) Gamma Distribution: The gamma distribution is also sometimes used as a

lifetime distribution. Its pdf is given by

f(x; α, β) = {1/(β^α Γ(α))} x^(α−1) exp(−x/β),  x, α, β > 0,

where α and β are the shape and scale parameters, respectively.

The reliability function and hazard-rate for the gamma distribution are

R(t) = {Γ(α) − Γ_{t/β}(α)} / Γ(α)

and

h(t) = t^(α−1) exp(−t/β) / {β^α [Γ(α) − Γ_{t/β}(α)]},

where Γ_b(a) is the well-known incomplete gamma function, given by

Γ_b(a) = ∫₀^b e^(−y) y^(a−1) dy,  a > 0.

There is no closed-form expression for R(t) and h(t) for this distribution.

However, R(t) and h(t) have been extensively studied and tabulated. It can be shown

that the gamma distribution has IFR for α > 1 and DFR for 0 < α < 1. For α = 1, the gamma

distribution coincides with the exponential distribution and yields a constant hazard-rate.

The generalized gamma distribution is a three-parameter distribution with pdf

f(x; λ, β, k) = {βλ(λx)^(βk−1) / Γ(k)} exp{−(λx)^β},  x, λ, β, k > 0.

This model includes many lifetime distributions as special cases and was

introduced by Stacy (1962).
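An illustrative sketch (hypothetical α and β) evaluates R(t) and h(t) for the gamma distribution; the reliability function coincides with the regularized upper incomplete gamma function:

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import gammaincc   # regularized upper incomplete gamma

alpha, beta = 2.5, 100.0              # hypothetical shape and scale
t = np.array([50.0, 150.0, 400.0])

R = gamma.sf(t, alpha, scale=beta)    # R(t) = [Gamma(alpha) - Gamma_{t/beta}(alpha)]/Gamma(alpha)
h = gamma.pdf(t, alpha, scale=beta) / R

print(np.allclose(R, gammaincc(alpha, t / beta)))   # True: same quantity
print(h)                                            # increasing: IFR, since alpha > 1
```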


(d) Normal Distribution: Davis (1952) has shown that the normal distribution

gives quite a good fit to failure-time data in the context of life testing and

reliability analysis. The support of the normal distribution is (−∞, ∞), but it can serve

as a lifetime model by taking the mean μ to be sufficiently large and positive and the

standard deviation σ to be sufficiently small relative to μ.

The pdf of the normal distribution with location parameter μ (mean) and scale

parameter σ (standard deviation) is given by

f(x; μ, σ) = {1/(σ√(2π))} exp{−(x − μ)²/(2σ²)},  −∞ < x < ∞, σ > 0.

The reliability function and hazard-rate for this distribution are given by

R(t) = 1 − Φ((t − μ)/σ)

and

h(t) = φ((t − μ)/σ) / {σ[1 − Φ((t − μ)/σ)]},

where φ(·) is the pdf of the standard normal variate (SNV) and Φ(z) is the cumulative

distribution function (cdf) of the SNV, given by

Φ(z) = {1/√(2π)} ∫_{−∞}^z exp(−u²/2) du.


Although we cannot obtain h(t) for this distribution in closed form, it

can be shown that it has IFR.
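A short numerical sketch (hypothetical μ and σ) shows the monotonically increasing hazard-rate of the normal lifetime model:

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 1000.0, 100.0              # mean life large, spread small relative to mu
t = np.array([800.0, 1000.0, 1200.0])

R = norm.sf(t, loc=mu, scale=sigma)                 # R(t) = 1 - Phi((t - mu)/sigma)
h = norm.pdf(t, loc=mu, scale=sigma) / R            # no closed form, but easy numerically
print(h)                                            # strictly increasing: IFR
```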

(e) Log-Normal Distribution: In the context of life testing and reliability

problems, the lognormal distribution answers a criticism sometimes raised against

the use of the normal distribution, whose support (−∞, ∞) does not match a failure time,

which must range over (0, ∞). The lognormal distribution is appropriate when the

hazard-rate is decreasing for large values of t. Goldthwaite (1961) justified its use

as a failure-time distribution.

The pdf of the lognormal distribution is given by

f(x; μ, σ) = {1/(xσ√(2π))} exp{−(log x − μ)²/(2σ²)},  0 < x < ∞, −∞ < μ < ∞, σ > 0.

The reliability function and hazard-rate for the lognormal distribution are

given as

R(t) = 1 − Φ((log t − μ)/σ)

and

h(t) = φ((log t − μ)/σ) / {tσ[1 − Φ((log t − μ)/σ)]}.


It should be noted that the hazard-rate initially increases with time and

then decreases as time increases further; thus the lognormal distribution serves as a model

when the failure rate is rather high initially.
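The rise-then-fall shape of the lognormal hazard-rate can be seen numerically (illustrative sketch with hypothetical μ = 0, σ = 1):

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma = 0.0, 1.0                         # hypothetical log-scale parameters
dist = lognorm(sigma, scale=np.exp(mu))      # scipy's lognormal with log-mean mu, log-sd sigma

t = np.linspace(0.05, 20.0, 400)
h = dist.pdf(t) / dist.sf(t)                 # hazard-rate on a grid
peak = t[np.argmax(h)]
print(peak)          # the hazard increases up to roughly this time, then decreases
```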

(f) Inverse Gaussian Distribution: Chhikara and Folks (1974, a) proposed this

distribution as a lifetime model and suggested its applications for studying

reliability aspects where the initial failure rate is high. They also developed

inferential procedures for the inverse Gaussian distribution after studying its properties.

The pdf of this distribution is given by

f(x; μ, λ) = {λ/(2πx³)}^(1/2) exp{−λ(x − μ)²/(2μ²x)},  x, μ, λ > 0,

where λ is the shape parameter.

The reliability function and hazard-rate are given as

R(t) = Φ[√(λ/t) (1 − t/μ)] − e^(2λ/μ) Φ[−√(λ/t) (1 + t/μ)]

and

h(t) = {λ/(2πt³)}^(1/2) exp{−λ(t − μ)²/(2μ²t)} / {Φ[√(λ/t) (1 − t/μ)] − e^(2λ/μ) Φ[−√(λ/t) (1 + t/μ)]}.

Chhikara and Folks (1974, a) showed that this distribution has IFR for t < t_m, where

t_m is the mode of the distribution, and DFR for sufficiently large t.
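An illustrative sketch (hypothetical μ and λ) evaluates the closed-form reliability function above and cross-checks it by integrating the pdf numerically:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, lam = 1.0, 3.0                     # hypothetical mean and shape parameters

def ig_pdf(x):
    return np.sqrt(lam / (2 * np.pi * x**3)) * np.exp(-lam * (x - mu)**2 / (2 * mu**2 * x))

def ig_reliability(t):
    """Closed form: Phi(sqrt(lam/t)*(1 - t/mu)) - exp(2*lam/mu)*Phi(-sqrt(lam/t)*(1 + t/mu))."""
    a = np.sqrt(lam / t)
    return norm.cdf(a * (1 - t / mu)) - np.exp(2 * lam / mu) * norm.cdf(-a * (1 + t / mu))

t = 1.5
print(ig_reliability(t))                    # closed form
print(quad(ig_pdf, t, np.inf)[0])           # numerical check by integrating the pdf: same value
```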


(g) The Maxwell and Generalized Maxwell Distribution: Tyagi and

Bhattacharya (1989, a & b) considered the Maxwell distribution as a failure

model. Chaturvedi and Rani (1998) considered the generalized form of Maxwell

distribution and termed it as ‘generalized Maxwell failure distribution’.

The pdf of the generalized Maxwell failure distribution is given by

f(x; θ, k) = {2x^(2k−1)/(θ^k Γ(k))} exp(−x²/θ),  x, θ, k > 0.

Its reliability function and hazard-rate are given as

R(t) = {1/Γ(k)} ∫_{t²/θ}^∞ y^(k−1) e^(−y) dy

and

h(t) = (2t/θ) [∫₀^∞ {1 + θs/t²}^(k−1) e^(−s) ds]^(−1).

It is observed that this distribution has IFR.

(h) Negative Binomial Distribution: Kumar and Bhattacharya (1989) considered

negative binomial distribution as a lifetime model. The negative binomial

distribution has probability mass function (pmf)

p(x; r, θ) = C(x + r − 1, x) θ^x (1 − θ)^r,  0 < θ < 1, r > 0, x = 0, 1, 2, ....


They studied the behaviour of the hazard-rate and showed that it has IFR

for r >1 and DFR for r <1. For r = 1, it leads to geometric distribution and yields

constant hazard-rate.
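The stated IFR/DFR behaviour can be checked with a short sketch of the discrete hazard-rate h(x) = P(X = x)/P(X ≥ x) (hypothetical r and θ; scipy's nbinom uses p = 1 − θ under the parameterization above):

```python
import numpy as np
from scipy.stats import nbinom

def nb_hazard(x, r, theta):
    """Discrete hazard h(x) = P(X = x)/P(X >= x) for pmf C(x+r-1, x)*theta**x*(1-theta)**r."""
    return nbinom.pmf(x, r, 1 - theta) / nbinom.sf(x - 1, r, 1 - theta)

x = np.arange(0, 10)
print(nb_hazard(x, r=3.0, theta=0.4))   # increasing: IFR for r > 1
print(nb_hazard(x, r=0.5, theta=0.4))   # decreasing: DFR for r < 1
print(nb_hazard(x, r=1.0, theta=0.4))   # constant: geometric case
```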

Kyriakoussis and Papadopoulos (1993) derived the Bayes estimator of the

reliability function for the zero-truncated negative binomial distribution.

Chaturvedi and Sharma (2007) considered this distribution as reliability model.

The justification behind its use as a reliability model is based on the behaviour of

its hazard-rate.

The zero-truncated negative binomial distribution has the probability mass

function (pmf)

p(x; s, θ) = C(x + s − 1, x) θ^x (1 − θ)^s / [1 − (1 − θ)^s],  0 < θ < 1, s > 0, x = 1, 2, ....

The reliability function and the hazard-rate for this distribution are given as:

R(t) = Σ_{x=t}^∞ C(x + s − 1, x) θ^x (1 − θ)^s / [1 − (1 − θ)^s]

and

h(t) = C(s + t − 1, t) / [Σ_{x=0}^∞ C(s + t + x − 1, t + x) θ^x].

This distribution has IFR for s<1, DFR for s>1 and constant failure rate for s=1.


(i) Binomial Distribution: Chaturvedi, Tiwari and Kumar (2007) introduced

binomial distribution as a lifetime model. The binomial distribution has pmf

p(x; r, θ) = C(r, x) θ^x (1 − θ)^(r−x),  0 < θ < 1, x = 0, 1, 2, ..., r.

They studied the behaviour of the hazard-rate and showed that the binomial

distribution has IFR. [For detailed discussion, one may refer to Chapter II of the

thesis].

(j) Poisson Distribution: Chaturvedi, Tiwari and Kumar (2007) considered

Poisson distribution as a lifetime model. The Poisson distribution has pmf

p(x; θ) = e^(−θ) θ^x / x!,  x = 0, 1, 2, ....

They showed that the Poisson distribution can serve as a lifetime model

in situations with IFR. [For detailed discussion, one may refer to Chapter II of the

thesis].
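A short sketch of the discrete hazard-rate (hypothetical parameter values) illustrates the IFR behaviour of both distributions:

```python
import numpy as np
from scipy.stats import binom, poisson

def discrete_hazard(dist, x):
    """h(x) = P(X = x)/P(X >= x); P(X >= x) = sf(x - 1) for integer-valued lifetimes."""
    return dist.pmf(x) / dist.sf(x - 1)

x = np.arange(0, 8)
print(discrete_hazard(binom(10, 0.3), x))   # increasing in x: IFR
print(discrete_hazard(poisson(3.0), x))     # increasing in x: IFR
```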

(k) Families of Lifetime Models: Various authors have discussed different

families of lifetime models, which contain several lifetime distributions as

particular cases. We have used the following two families of lifetime distributions

for classical and Bayesian inferential procedures discussed in the thesis.

Let us consider a family of lifetime models originated by Chaturvedi et al.

(2002, 2003, a) with probability density function (pdf) given by


f(x; δ, θ) = {g′(x) (g(x))^(δ−1) / (θ^δ Γ(δ))} exp{−g(x)/θ},  x > a; g(x), θ, δ > 0,

where ‘a’ is known and δ and θ are parameters. Here, g(x) is a real-valued, strictly

increasing function of x with g(a) = 0, and g′(x) denotes the first derivative of

g(x).

The above family, known as the generalized life distributions, includes the

following life distributions useful in reliability analysis as particular cases:

(a) For g(x) = x, a = 0 and δ = 1, we obtain the one-parameter exponential

distribution.

(b) For g(x) = x and a = 0, we get the gamma distribution; for δ taking

integer values, it is known as the Erlang distribution.

(c) For g(x) = x^p and a = 0, the family gives the generalized gamma

distribution.

(d) For g(x) = x^p, a = 0 and δ = 1, it leads to the Weibull distribution.

(e) For g(x) = x², a = 0 and δ = 1/2, it represents the half-normal

distribution.

(f) For g(x) = x², a = 0 and δ = 1, we obtain the Rayleigh distribution.

(g) For g(x) = x²/2, a = 0 and δ = a/2, it turns out to be the chi-distribution.

(h) For g(x) = x²/2, a = 0 and δ = 3/2, we get the Maxwell distribution, and

for g(x) = x², we obtain the generalized Maxwell distribution.

(i) For g(x) = log(1 + x^b), a = 0 and δ = 1, we obtain the Burr distribution.

(j) For g(x) = log x, a = 1 and δ = 1, it represents the Pareto distribution.

The reliability function and hazard-rate for this family are given by:

R(t) = {1/Γ(δ)} ∫_{g(t)/θ}^∞ y^(δ−1) e^(−y) dy,  where y = g(x)/θ,

and

h(t) = {g′(t)/θ} [∫₀^∞ {1 + θs/g(t)}^(δ−1) e^(−s) ds]^(−1),  where s = {g(x) − g(t)}/θ.

The behaviour of the hazard-rate depends upon g′(t). If g′(t) is constant

(the exponential distribution, and the Weibull distribution for p = 1), h(t) is also constant. If

g′(t) is monotonically increasing in t (the Weibull distribution for p > 1, the Rayleigh

distribution and the chi-distribution for a = 2), h(t) is monotonically increasing in t. If

g′(t) is monotonically decreasing in t (the Weibull distribution for p < 1, the Burr

distribution and the Pareto distribution), h(t) is monotonically decreasing in t.
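An illustrative sketch (assuming the choice g(x) = x^p with hypothetical p, δ and θ) evaluates the pdf, reliability function and hazard-rate of this family, the reliability function being a regularized upper incomplete gamma function:

```python
import numpy as np
from scipy.special import gamma as gamma_fn, gammaincc

p, delta, theta = 2.0, 1.5, 4.0        # hypothetical choices of g, delta and theta (a = 0)

g  = lambda x: x ** p
dg = lambda x: p * x ** (p - 1)

def family_pdf(x):
    return dg(x) * g(x) ** (delta - 1) * np.exp(-g(x) / theta) / (theta ** delta * gamma_fn(delta))

def family_reliability(t):
    # R(t) = (1/Gamma(delta)) * integral from g(t)/theta to infinity of y**(delta-1)*exp(-y) dy
    return gammaincc(delta, g(t) / theta)

def family_hazard(t):
    return family_pdf(t) / family_reliability(t)

t = np.array([0.5, 1.0, 2.0])
print(family_reliability(t), family_hazard(t))   # hazard increasing here, since g'(t) is increasing
```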

We consider one more family of lifetime distributions, proposed by Moore

and Bilikam (1978). Let the random variable X follow the distribution given by

the pdf

f(x; β, θ) = (β/θ) g′(x) {g(x)}^(β−1) exp{−g^β(x)/θ},  x, β, θ > 0.

This family is known as family of lifetime distributions since it includes the

following probabilistic distributions useful in reliability analysis as particular

cases:

(i) For g(x) = x and β = 1, we get the exponential distribution.

(ii) For g(x) = x, it yields the Weibull distribution.

(iii) For g(x) = log(1 + x^b), b > 0, and β = 1, we obtain the Burr distribution.

(iv) For g(x) = log(x/a) and β = 1, it leads to the Pareto distribution.

(v) For g(x) = x and β = 2, it gives the Rayleigh distribution.

The reliability function and hazard-rate are given by

R(t) = exp{−g^β(t)/θ}

and

h(t) = (β/θ) g′(t) {g(t)}^(β−1).

The behaviour of the hazard-rate depends upon g′(t). If g′(t) is constant

(the exponential distribution, and the Weibull distribution for β = 1), h(t) is also constant; h(t) is

monotonically decreasing in t if g′(t) is monotonically decreasing in t (the Burr and

Pareto distributions, and the Weibull distribution for β < 1); and h(t) is monotonically increasing in t

if g′(t) is monotonically increasing in t (the Rayleigh distribution, and the Weibull distribution for

β > 1).

1.4 CLASSICAL INFERENTIAL PROCEDURES IN RELIABILITY

ANALYSIS

Some of the classical inferential procedures commonly used in reliability

analysis are described below.

The classical inferential procedures have been introduced in the field of

reliability analysis for deriving maximum likelihood estimators (MLE’s) and

uniformly minimum variance unbiased estimators (UMVUE’s) of the reliability

and other parametric functions.

In the case of censoring from the right for the one-parameter exponential distribution,

Epstein and Sobel (1953) derived the MLE of the scale parameter. Pugh (1963)

obtained the UMVUE of the reliability function of the exponential distribution.

Sinha (1972) discussed the behavior of UMVUE of reliability function of the

exponential distribution when a spurious observation may be present. Epstein and

Sobel (1954) and Epstein (1960) extended these results to the two-parameter

exponential distribution.

The UMVUE of the reliability function of the two-parameter exponential

distribution with complete sample was obtained by Tate (1959), Laurent (1963)

and Sathe and Varde (1969). Basu (1964) derived the UMVUE of reliability


function for exponential, gamma and Weibull distributions under type II

censoring. Patil and Wani (1966) derived the UMVUE of reliability function for

the gamma and normal distribution. Harter (1969) obtained the numerical

approximations to the MLE’s for the parameters of generalized gamma

distribution. Feldman and Fox (1968) derived the UMVUEs of the reliability function

of normal distribution. The reliability function of the inverse Gaussian distribution

was obtained by Roy and Wasan (1968) and Chhikara and Folks (1974, b).

Chaturvedi and Rani (1998) considered the UMVUE of the reliability function for

generalized Maxwell distribution. Chaturvedi and Rani (1997) developed a family

of lifetime distributions and obtained UMVUE of reliability function and

moments. Chaturvedi and Tomer (2002; 2003, a) obtained the UMVUE of

reliability function for negative binomial distribution and generalized life

distributions.

Another measure of reliability under stress-strength set-up is the probability

P=P(X>Y), which represents the reliability of performance of an item of strength

X subject to stress Y. Owen, Craswell and Hanson (1964), Church and Harris

(1970) and Downton (1973) discussed the estimation of P when X and Y are

normally distributed. Tong (1974) and Kelly, Kelley and Schucany (1976)

considered the case when X and Y are exponentially distributed. Tong (1975) also

considered the case when X and Y follow gamma distribution. Simonoff,

Hochberg and Reiser (1985) discussed various estimation procedures for ‘P’ in

discretized data. The MLE and UMVUE of P when both X and Y follow gamma


distribution with unequal scale and shape parameters were considered by

Constantine, Karson and Tse (1986). Chaturvedi and Surinder (1999) derived

UMVUE of P for the exponential case under type I and type II censoring by using a

simpler technique of deriving UMVUEs. Using the same technique Chaturvedi

and Tomer (2002) obtained the MLE and UMVUE of ‘P’ for negative binomial

distribution.
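As a simple illustration of this measure (a Monte Carlo sketch with hypothetical independent exponential stress and strength, not tied to any of the cited procedures):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
strength = rng.exponential(scale=3.0, size=n)   # X: strength of the item (hypothetical)
stress   = rng.exponential(scale=1.0, size=n)   # Y: stress applied to it (hypothetical)

p_hat = np.mean(strength > stress)              # Monte Carlo estimate of P = P(X > Y)
print(p_hat)                                    # close to 3/(3 + 1) = 0.75 for these exponentials
```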

The pioneering work in the field of sequential analysis was due to Wald

(1947), who developed sequential probability ratio test (SPRT) for testing a simple

null hypothesis against a simple alternative hypothesis. He also obtained

expressions for the operating characteristic (OC) and average sample number

(ASN) for the proposed sequential test. Epstein and Sobel (1955) considered

sequential life test in exponential case to test the simple null hypothesis against a

simple alternative hypothesis. They derived approximate formulae for OC and

ASN functions. Several authors contributed to this direction. For a brief review

one may refer to Epstein (1960), Woodall and Kurkjian (1962), Aroian (1976) and

Bryant and Schmee (1979). Phatarfod (1971) proposed a sequential test for a

composite hypothesis for the shape parameter of gamma distribution. Joshi and

Shah (1990) developed SPRT for inverse Gaussian distribution. Chaturvedi,

Kumar and Kumar (2000) proposed SPRT for testing simple and composite

hypotheses for the parameter of generalized life distributions.

The robustness of the SPRT for different distributions has been discussed by

Harter and Moore (1976), Montagne and Singpurwalla (1985), Chaturvedi, Kumar


and Chauhan (1998), Chaturvedi, Kumar and Kumar (1998) and Chaturvedi,

Tiwari and Tomer (2002).

Dantzig (1940) proved the non-existence of a test of Student’s hypothesis

having a power function independent of the variance of a normal population.

Consequently, one cannot construct a confidence interval of pre-assigned width

and coverage probability for the mean of a normal population when variance is

unknown. To deal with this problem, Stein (1945) proposed a two-stage procedure

determining the sample size as a random variable. Ruben (1961) studied the

properties of Stein’s two-stage procedure. This procedure is easy to apply since it

requires only two stages. However, it has some drawbacks. Firstly, it is not

‘asymptotically efficient’ from the viewpoint of Chow and Robbins (1965). According to

Chow and Robbins (1965), a sequential procedure is asymptotically efficient, if

the ratio of the average sample size to the optimal fixed sample size converges to

unity. Secondly, this procedure does not utilize the second-stage sample size for

the purpose of the estimation of nuisance parameter. Moreover, the ‘cost of

ignorance’ of the nuisance parameter does not remain asymptotically bounded.

These drawbacks can be removed by sequential upgrading of the

observations. An important contribution to this direction was made by Starr

(1966). Mukhopadhyay (1980) proposed a ‘modified’ two-stage procedure in

order to make Stein’s two-stage procedure ‘asymptotically efficient’. Anscombe

(1949) provided a large sample theory for sequential estimation. Robbins (1959)

considered the problem of minimum risk point estimation of the mean of a normal


population under absolute error loss function and linear cost of sampling. Starr and

Woodroofe (1969) introduced another measure of the optimality of a sequential

point estimation procedure, known as ‘regret’. Regret of a sequential procedure is

defined as the difference between the risks of the sequential procedure and that of

the optimal fixed sample size procedure. A sequential procedure is ‘optimal’ if its

‘regret’ is asymptotically bounded.

Woodroofe (1977) introduced the concept of ‘second order approximations’

in the area of sequential estimation. In this theory, one may be able to study the

behavior of the remainder terms after the optimum position achieved by the fixed

sample size procedure. Chaturvedi (1988) generalized the results of Woodroofe

(1977) by obtaining the second order approximations for the regret of the

sequential procedure for the minimum risk point estimation of the mean of a

normal population by taking a family of loss functions and a general cost function.

Chaturvedi, Tiwari and Pandey

(1992) developed a class of sequential procedures for the point estimation of the

parameters of an absolutely continuous population in the presence of an unknown

scalar nuisance parameter. They also derived second-order approximations for the

expected sample size and the regret of the sequential procedure.

Hall (1981, 1983) proposed three-stage and ‘accelerated’ sequential

procedure. Chaturvedi, Tiwari and Pandey (1993) further analyzed the problem of

constructing a confidence interval of pre-assigned width and coverage probability

considered by Constanza, Hamdy and Son (1986). They utilized several multi-stage

(purely sequential, accelerated sequential, three-stage and two-stage)

estimation procedures to deal with the same estimation problem. Kumar and

Chaturvedi (1993) and Chaturvedi and Rani (1999) proposed the classes of two-

stage procedures to construct fixed width confidence intervals and point

estimation. Chaturvedi and Tiwari (2002) developed a class of three-stage

estimation procedures taking into consideration the common distributional

properties of the estimators of the parameters to be estimated under different

continuous probabilistic models and those of nuisance parameters involved

therein. They also considered the problem of constructing fixed size confidence

regions as well as point estimation. They also presented the asymptotic properties

of the proposed class. Chaturvedi and Tomer (2003, b) considered the three-stage

and accelerated sequential procedures for the mean of a normal population with

known coefficient of variation.

1.5 BAYESIAN INFERENTIAL PROCEDURES IN RELIABILITY

ANALYSIS

The Bayesian ideas in reliability analysis were introduced for the first time

by Bhattacharya (1967), who considered the Bayesian estimation of reliability

function for one-parameter exponential distribution under uniform and beta priors.

Bhattacharya and Kumar (1986) and Bhattacharya and Tyagi (1988) obtained

Bayes estimators of the reliability function with other priors. The Bayes estimators


for the reliability function of exponential and Weibull distributions using uniform

and gamma priors have been obtained by Harris and Singpurwalla (1968). Canfield

(1970) considered an asymmetric loss function for the Bayesian estimation of the

reliability function under beta priors. Soland (1969) derived the Bayes estimator of

reliability function for the Weibull distribution using a discrete prior distribution

for the shape parameter. Using Monte-Carlo simulation, Tsokos (1972, b) and

Canavos and Tsokos (1973) showed that the Bayes estimators of the reliability function

in case of uniform, exponential and gamma priors have uniformly smaller mean

squared-error than minimum variance unbiased estimators (MVUEs). Lian (1975)

and Martz and Lian (1977) obtained the Bayes estimator of reliability of Weibull

distribution using a piecewise linear prior distribution. Canavos and Tsokos (1971)

obtained Bayes estimator of reliability function for gamma distribution, restricting

the scale parameter as integer valued function. Padgett and Tsokos (1977) studied

the mean squared-error performance of Bayes estimator of reliability function

compared to MLE for lognormal distribution. Tyagi and Bhattacharya (1989, b)

considered the Bayesian estimation of reliability function of Maxwell distribution.

Chaturvedi and Rani (1998) extended the results of Tyagi and Bhattacharya (1989,

a & b) for the generalized Maxwell distribution. Chaturvedi, Tiwari and Kumar

(2007) obtained the Bayes estimator of the reliability function for binomial and

Poisson distributions [For detailed discussion, one may refer to Chapter II of the

thesis].


The pioneering work on the Bayesian estimation of ‘P’ has been done by

Enis and Geisser (1971), who derived Bayes estimator of ‘P’ when X and Y

follow normal distributions. Basu and Tarmast (1987) also considered the problem

of Bayesian estimation of ‘P’. Basu and Ebrahimi (1991) obtained the Bayes

estimator of ‘P’ for the exponential case in complete sample. Chaturvedi and

Tomer (2002) considered Bayes estimation of ‘P’ for negative binomial

distribution. Chaturvedi, Tiwari and Kumar (2007) derived the Bayes estimator of

‘P’ for binomial and Poisson distributions [For detailed discussion, one may refer

to Chapter II of the thesis]. All these authors considered SELF, which is a

symmetrical loss function.

While estimating the reliability function, the use of a symmetric loss function is

inappropriate because overestimation is usually

more serious than underestimation [see Basu and Ebrahimi (1991) and

Calabria and Pulcini (1996) for a detailed discussion]. For the situations when

overestimation and underestimation are not equally serious, Varian (1975)

suggested LINEX (linear in exponential) loss function, which was further used by

Zellner (1986). Basu and Ebrahimi (1991) derived Bayes estimators of the mean

failure time, reliability function and ‘P’ of an exponential distribution for the

complete sample case considering both the SELF and LINEX loss functions. The

LINEX loss function is suitable for the estimation of a location parameter but not

for the estimation of a scale parameter and other parametric functions. Calabria and Pulcini

(1994) suggested the use of general entropy loss function (GELF) for estimating

these quantities.

1.6 CONTENTS OF THE THESIS

In Chapter 2 of the thesis, the binomial and Poisson distributions are

introduced as lifetime models. In Section 2.2, we study the behaviour of hazard-

rates of binomial and Poisson distributions and provide the set-up of the estimation

problems. In Sections 2.3 and 2.4, respectively, we obtain the classical and Bayes

estimators of powers of θ, the reliability function and ‘P’ for the binomial distribution.

The classical and Bayes estimators of powers of θ, the reliability function and ‘P’ for

Poisson distribution are obtained in Sections 2.5 and 2.6, respectively. In order to

obtain the estimators of these parametric functions, the basic role is played by the

estimators of the factorial moments of the two distributions.

In Chapter 3 of the thesis, sequential point estimation procedure for the

generalized life distributions, which covers several distributions useful in

reliability analysis, including Weibull and gamma distributions as particular cases,

is considered. The failure of the fixed sample size procedure is established and

minimum risk point estimation for the parameters associated

with the generalized life distributions under SELF is considered. In Section 3.2,

we discuss the generalized life distributions and consider the problem of minimum


risk point estimation. Section 3.3 describes the set-up of the problem. The

sequential estimation procedure and second-order approximations are obtained in

Section 3.4. In Section 3.5, the condition for the negative regret of the sequential

procedure is obtained and an improved estimator is proposed which dominates the

UMVUE.

In Chapter 4 of the thesis, Bayesian estimation procedures for powers of

parameter, reliability function and P(X>Y) for a family of lifetime distributions

under squared-error loss function (SELF) and general entropy loss function

(GELF) are considered. In Section 4.2, the family of life distributions is discussed.

This family includes several probabilistic distributions useful in reliability analysis

as discussed in Chapter I. In Section 4.3, the Bayes estimators of θ, R(t) (the reliability

function at a specified mission time t) and ‘P’ under SELF are discussed.

Bayes estimators of powers of θ, R(t) and ‘P’ under GELF are considered in Section

4.4. Throughout the chapter, we have assumed that the shape parameter is known,

while the scale parameter is unknown. Finally, in Section 4.5, we assume that both the

parameters are unknown and the Bayes estimators for both the parameters are

obtained after calculating the marginal posteriors in each case.

In Chapter 5 of the thesis, we develop a two-stage point estimation procedure

for the mean of a normal population when the population CV is known. Both the

minimum risk and the bounded risk estimation problems are considered. Second

order approximations are also considered for the proposed two-stage point

estimation procedure. In Section 5.2, we discuss the minimum risk estimation for


the parameters of normal distribution. In Section 5.3, we obtain second order

approximations for expected sample size, risk corresponding to two-stage point

estimation procedure [R_N(c)] and the regret of the procedure [R_g(c)]. In Section

5.4, the bounded risk point estimation case is considered. In Section 5.5 we obtain

second order approximations for the expected sample size, E(N²) and R_N(A).

In Chapter 6 of the thesis, we derive the shrinkage-type Bayes estimator

of the parameter of a family of lifetime distributions. In Section 6.2, the set-up of

the estimation problem is described and the desired shrinkage-type Bayes

estimators are obtained. The optimality in the sense of efficiency of the shrinkage-

type Bayes estimator over the UMVUE and the minimum mean squared error

estimator is established in the Section 6.3 and Section 6.4, respectively. Finally, in

Section 6.5, the Lindley approximation of the reliability function of the family of

lifetime distribution is considered.

In Chapter 7, a brief summary of the thesis is presented.


CHAPTER II

CLASSICAL AND BAYESIAN RELIABILITY ESTIMATION OF

BINOMIAL AND POISSON DISTRIBUTIONS

2.1 INTRODUCTION

A lot of work has been done in the literature for estimating various

parametric functions of several discrete distributions through classical and

Bayesian approaches. Halmos (1946) provided a necessary and sufficient

condition for the existence of an unbiased estimator. Kolmogorov (1950)

investigated for which functions of the success parameter of the binomial distribution

there exists an unbiased estimator. Blyth (1980) studied the expected absolute error

of UMVUE of the probability of success of binomial distribution. For a random

variable (rv) X following binomial distribution, Pulskamp (1990) has shown that

the UMVUE of P(X = x) is admissible under squared-error loss function when x =

0 or n. Cacoullos and Charalambides (1975) obtained MVUE for truncated

binomial and negative binomial distributions. Bayesian estimation of the

parameter of binomial distribution has been considered by Chew (1971). Barton


(1961) and Glasser (1962) obtained UMVUE of P(X = x) for Poisson distribution.

For MVUEs of generalized Poisson and decapitated generalized Poisson

distributions, one may refer to Patel and Jani (1977). Irony (1992) developed

Bayesian estimation procedures related to Poisson distribution. Guttman (1958)

and Patil (1963) provided UMVUEs of parametric functions of negative binomial

distribution. Patil and Wani (1966) obtained UMVUEs of distribution functions of

various distributions. Roy and Mitra (1957) considered the problem of minimum

variance unbiased estimation of a univariate power series distribution. Patel (1978)

generalized their results to multivariate modified power series distribution. Patil

and Bildikar (1966) derived MVUE for logarithmic series distribution. Extensive

tables concerning UMVUEs of different parametric functions of various

distributions are available in Voinov and Nikulin (1993, 1996).

Discrete distributions have played an important role in reliability theory.

Kumar and Bhattacharya (1989) considered negative binomial distribution as the

life-time model and obtained UMVUEs of the mean life and reliability function.

Another measure of reliability under the stress-strength set-up is the probability Pr{X ≤ Y}, where X is the stress variable and Y is the strength variable. Maiti (1995) considered the estimation of Pr{X ≤ Y} under the assumption that X and Y followed geometric distributions and derived the UMVUE and Bayes estimators.

Chaturvedi and Tomer (2002) considered classical and Bayesian estimation

procedures for the reliability function of the negative binomial distribution from

a different approach. Generalizing the results of Maiti (1995), they dealt with the problem of estimating Pr{X₁ + … + X_k ≤ Y}, where the rv's X₁, …, X_k and Y were assumed to follow negative binomial distributions.

In this chapter the problems of estimating the reliability function and Pr{X₁ + … + X_k ≤ Y} are considered. The random variables X₁, …, X_k and Y are assumed to

follow binomial and Poisson distributions. Classical as well as Bayes estimators

for these distributions are derived. In order to obtain the estimators of these

parametric functions, the basic role is played by the estimators of factorial

moments of the two distributions.

In order to obtain Bayes estimators of parameters and various parametric functions of different distributions, researchers have adopted the conventional technique, i.e. obtaining their posterior means. In the present discussion, we consider the binomial and Poisson distributions and, studying the behaviour of their hazard-rates, we investigate the situations in which these distributions can be recommended as life-time models. We consider the problems of estimating the reliability function and P = Pr{X₁ + … + X_k ≤ Y} from the Bayesian viewpoint. It is worth mentioning here that, contrary to the conventional approach, only the estimators of the factorial moments are needed to estimate these parametric functions, and no separate treatment is needed [see Remarks 1 and 2].

In Section 2.2, we study the behaviour of hazard-rates of binomial and

Poisson distributions and provide the set-up of the estimation problems. In Sections 2.3 and 2.4, respectively, we obtain the classical and Bayes estimators of powers of θ, the reliability function and 'P' for the binomial distribution. The classical and Bayes estimators of powers of λ, the reliability function and 'P' for the Poisson distribution are obtained in Sections 2.5 and 2.6, respectively.

2.2 THE HAZARD-RATES OF BINOMIAL AND POISSON

DISTRIBUTIONS AND SET-UP OF THE ESTIMATION

PROBLEMS

The rv X is said to follow the binomial distribution with parameters (r, θ) if its probability mass function (pmf) is given by

$$p(x; r, \theta) = \binom{r}{x}\theta^{x}(1-\theta)^{r-x}, \qquad x = 0, 1, 2, \ldots, r;\ 0 < \theta < 1. \tag{2.2.1}$$

Throughout the remaining part of this discussion, we assume that r is known but θ is unknown. The reliability function for a specified mission time, say, $t_0\,(\ge 0)$ cycles, is given by

$$R(t_0) = P(X \ge t_0) = \sum_{x=t_0}^{r}\binom{r}{x}\theta^{x}(1-\theta)^{r-x}. \tag{2.2.2}$$

From (2.2.1) and (2.2.2), the hazard-rate is

$$h(t_0) = \frac{p(t_0; r, \theta)}{R(t_0)} = \frac{\binom{r}{t_0}\theta^{t_0}(1-\theta)^{r-t_0}}{\sum_{x=t_0}^{r}\binom{r}{x}\theta^{x}(1-\theta)^{r-x}}
= \left[\sum_{x=0}^{r-t_0}\frac{\binom{r}{x+t_0}}{\binom{r}{t_0}}\left(\frac{\theta}{1-\theta}\right)^{x}\right]^{-1}. \tag{2.2.3}$$

Let

$$u(t_0) = \frac{\binom{r}{x+t_0}}{\binom{r}{t_0}}\left(\frac{\theta}{1-\theta}\right)^{x},$$

so that

$$\frac{u(t_0+1)}{u(t_0)} = 1 - \frac{x(r+1)}{(r-t_0)(x+t_0+1)} \le 1 \qquad \text{for all } t_0 \le r.$$

Thus, $u(t_0)$ is monotonically decreasing in $t_0$, and we conclude from (2.2.3) that the binomial distribution can be taken as a reliability model when we encounter an increasing failure rate.

Let $X_i$, i = 1, 2, …, k, be k independent rv's, where $X_i$ follows the binomial distribution (2.2.1) with parameters $(r_i, \theta)$, and let Y be another rv, independent of the $X_i$'s, following the binomial distribution with parameters (s, β). Using the additive property of the binomial distribution and denoting $X^{*} = \sum_{i=1}^{k}X_i$ and $r^{*} = \sum_{i=1}^{k}r_i$, we conclude that

$$P = \Pr\{X_1+\cdots+X_k \le Y\} = \sum_{x^{*}=0}^{r^{*}} p(x^{*}; r^{*}, \theta)\sum_{y=x^{*}}^{s} p(y; s, \beta). \tag{2.2.4}$$

The rv X follows the Poisson distribution with parameter λ if its pmf is

$$p(x; \lambda) = \frac{e^{-\lambda}\lambda^{x}}{x!}, \qquad x = 0, 1, 2, \ldots \tag{2.2.5}$$

The reliability function at a specified mission time, say, $t_0\,(\ge 0)$ cycles, is

$$R(t_0) = \sum_{x=t_0}^{\infty}\frac{e^{-\lambda}\lambda^{x}}{x!}. \tag{2.2.6}$$

Denoting $a^{(b)} = a(a-1)\cdots(a-b+1)$, from (2.2.5) and (2.2.6) the hazard-rate is

$$h(t_0) = \frac{e^{-\lambda}\lambda^{t_0}/t_0!}{\sum_{x=t_0}^{\infty} e^{-\lambda}\lambda^{x}/x!}
= \left[\sum_{x=0}^{\infty}\frac{\lambda^{x}}{(x+t_0)^{(x)}}\right]^{-1}. \tag{2.2.7}$$

Let

$$u(t_0) = \frac{\lambda^{x}}{(x+t_0)^{(x)}}.$$

Since

$$\frac{u(t_0+1)}{u(t_0)} = \frac{t_0+1}{x+t_0+1} < 1,$$

we conclude that $u(t_0)$ is monotonically decreasing in $t_0$ and, from (2.2.7), the Poisson distribution can represent a life-time model when we have an increasing failure rate.

Let $X_i$, i = 1, 2, …, k, be k independent rv's, where $X_i$ follows the Poisson distribution (2.2.5) with parameter $\lambda_i$, and let Y be another rv, independent of the $X_i$'s, following the Poisson distribution with parameter β. Denoting $X^{*} = \sum_{i=1}^{k}X_i$ and $\lambda^{*} = \sum_{i=1}^{k}\lambda_i$, from the additive property of the Poisson distribution,

$$P = \sum_{x^{*}=0}^{\infty} p(x^{*}; \lambda^{*})\sum_{y=x^{*}}^{\infty} p(y; \beta). \tag{2.2.8}$$

Our goal is to estimate $R(t_0)$ and 'P' for the binomial and Poisson distributions. In what follows, we derive classical and Bayes estimators of powers of θ and λ.

2.3 THE UMVUE OF THE POWERS OF θ, R(t₀) AND 'P' FOR BINOMIAL DISTRIBUTION

In the following theorem, we obtain the UMVUE of $\theta^{p}$ (p>0), which comes in the expression for the pth factorial moment about the origin.

Theorem 1: For p>0, the UMVUE of $\theta^{p}$ is

$$\tilde{\theta}^{p}_{U} = \binom{nr-p}{T-p}\Big/\binom{nr}{T}, \quad \text{if } p \le T; \qquad = 0, \ \text{otherwise}, \tag{2.3.1}$$

where $T = \sum_{i=1}^{n} X_i$.

Proof: Given a random sample $\underline{X} = (X_1, X_2, \ldots, X_n)$ from (2.2.1), it can be seen that T is complete and sufficient for the family of binomial distributions [see Patel, Kapadia and Owen (1976, p.157)]. Moreover, T follows the binomial distribution with parameters (nr, θ). Now we choose a function g(T) such that

$$E[g(T)] = \theta^{p},$$

i.e.

$$\sum_{t=0}^{nr} g(t)\binom{nr}{t}\theta^{t}(1-\theta)^{nr-t} = \theta^{p},$$

or

$$\sum_{t=p}^{nr} g(t)\binom{nr}{t}\theta^{t-p}(1-\theta)^{nr-t} = 1. \tag{2.3.2}$$

Equation (2.3.2) holds if we choose $g(T) = \tilde{\theta}^{p}_{U}$, as given by (2.3.1). Hence the theorem.
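The following short numerical sketch (not part of the thesis) illustrates (2.3.1); it assumes NumPy and SciPy are available, and the parameter values and sample size below are chosen only for demonstration.

```python
import numpy as np
from scipy.special import comb

def umvue_theta_power(T, n, r, p):
    """UMVUE of theta^p for a binomial(r, theta) sample of size n,
    based on T = sum(X_i) ~ Bin(nr, theta), as in (2.3.1):
    C(nr - p, T - p) / C(nr, T) when p <= T, else 0."""
    if p > T:
        return 0.0
    return comb(n * r - p, T - p) / comb(n * r, T)

# Quick unbiasedness check by simulation (theta, n, r, p are illustrative).
rng = np.random.default_rng(0)
n, r, theta, p = 5, 4, 0.3, 2
Ts = rng.binomial(r, theta, size=(20000, n)).sum(axis=1)
est = [umvue_theta_power(T, n, r, p) for T in Ts]
print(np.mean(est), theta ** p)   # the two values should be close
```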

In the following theorem, we provide the UMVUEs of $R(t_0)$ and 'P', given at (2.2.2) and (2.2.4), respectively. Given $n_i$ observations $X_{ij}$, i = 1, 2, ..., k; j = 1, 2, …, $n_i$ on $X_i$, i = 1, 2, ..., k, and m observations $Y_j$, j = 1, 2, …, m on Y, it is easy to see that $T_1 = \sum_{i=1}^{k}\sum_{j=1}^{n_i}X_{ij}$ and $T_2 = \sum_{j=1}^{m}Y_j$ are complete and sufficient for the families of binomial distributions $p(x^{*}; r^{*}, \theta)$ and $p(y; s, \beta)$, respectively. Furthermore, $T_1$ and $T_2$ follow binomial distributions with parameters $\left(\sum_{i=1}^{k}n_i r_i,\ \theta\right)$ and $(ms, \beta)$, respectively.

Theorem 2: The UMVUEs of $R(t_0)$ and 'P' are given, respectively, by $\tilde{R}_U(t_0)$ and $\tilde{P}_U$, where

$$\tilde{R}_U(t_0) = \sum_{x=t_0}^{r}\binom{r}{x}\binom{(n-1)r}{T-x}\Big/\binom{nr}{T} \tag{2.3.3}$$

and

$$\tilde{P}_U = \sum_{x^{*}=0}^{T_1}\sum_{y=x^{*}}^{T_2}\binom{r^{*}}{x^{*}}\binom{s}{y}\binom{\sum_{i=1}^{k}(n_i-1)r_i}{T_1-x^{*}}\binom{(m-1)s}{T_2-y}\Bigg/\left[\binom{\sum_{i=1}^{k}n_i r_i}{T_1}\binom{ms}{T_2}\right]. \tag{2.3.4}$$

Proof: We can write the pmf (2.2.1) as

$$p(x; r, \theta) = \binom{r}{x}\sum_{i=0}^{r-x}(-1)^{i}\binom{r-x}{i}\theta^{x+i}. \tag{2.3.5}$$

Using Lemma 1 of Chaturvedi and Tomer (2002) and Theorem 1, it follows from (2.3.5) that the UMVUE of $p(x; r, \theta)$, at a specified point 'x', is

$$\tilde{p}_U(x; r, \theta) = \binom{r}{x}\sum_{i=0}^{r-x}(-1)^{i}\binom{r-x}{i}\tilde{\theta}^{\,x+i}_{U}
= \binom{r}{x}\sum_{i=0}^{r-x}(-1)^{i}\binom{r-x}{i}\binom{nr-x-i}{T-x-i}\Big/\binom{nr}{T}. \tag{2.3.6}$$

Using a result of Feller (1960, p.62), namely

$$\sum_{j}(-1)^{j}\binom{a}{j}\binom{n-j}{k} = \binom{n-a}{k-a}, \qquad n, j, k \ \text{positive integers},$$

we obtain from (2.3.6) that

$$\tilde{p}_U(x; r, \theta) = \binom{r}{x}\binom{(n-1)r}{T-x}\Big/\binom{nr}{T}, \qquad x \le T. \tag{2.3.7}$$

Now from (2.2.2), we get

$$\tilde{R}_U(t_0) = \sum_{x=t_0}^{r}\tilde{p}_U(x; r, \theta).$$

Utilizing (2.3.7) in the above equation, (2.3.3) follows.

From arguments similar to those used in obtaining the UMVUE of $R(t_0)$, it can be shown that

$$\tilde{P}_U = \sum_{x^{*}=0}^{T_1}\sum_{y=x^{*}}^{T_2}\tilde{p}_U(x^{*}; r^{*}, \theta)\,\tilde{p}_U(y; s, \beta),$$

which, on substituting the analogue of (2.3.7) for each factor, yields the expression given in (2.3.4). Hence the result (2.3.4) follows.

REMARK 1: The method of obtaining the UMVUEs of the parametric functions discussed in Section 2.3 is very simple, as one can avoid the calculation of conditional distributions and the subsequent Rao–Blackwellization. The UMVUEs of powers of the parameter form the basis of the whole analysis.
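As an illustration of how (2.3.3) and (2.3.4) can be evaluated in practice, the sketch below (again assuming SciPy, with made-up data summaries) implements the two UMVUEs directly from the binomial-coefficient expressions.

```python
import numpy as np
from scipy.special import comb

def umvue_reliability_binomial(T, n, r, t0):
    """UMVUE of R(t0) = P(X >= t0) for Binomial(r, theta), via (2.3.3)."""
    xs = np.arange(t0, min(r, T) + 1)
    return comb(r, xs) @ comb((n - 1) * r, T - xs) / comb(n * r, T)

def umvue_P_binomial(T1, T2, n_list, r_list, m, s):
    """UMVUE of P = Pr{X1+...+Xk <= Y}, via (2.3.4); n_list, r_list give (n_i, r_i)."""
    r_star = sum(r_list)
    N1 = sum(n * r for n, r in zip(n_list, r_list))      # total binomial index of T1
    total = 0.0
    for x in range(0, T1 + 1):
        for y in range(x, T2 + 1):
            total += (comb(r_star, x) * comb(s, y)
                      * comb(N1 - r_star, T1 - x) * comb((m - 1) * s, T2 - y))
    return total / (comb(N1, T1) * comb(m * s, T2))

# Illustrative calls; all numbers are made up for demonstration.
print(umvue_reliability_binomial(T=12, n=5, r=4, t0=2))
print(umvue_P_binomial(T1=6, T2=8, n_list=[4, 3], r_list=[2, 2], m=5, s=3))
```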

2.4 THE BAYESIAN ESTIMATION OF THE POWERS OF θ, R(t₀) AND 'P' FOR BINOMIAL DISTRIBUTION

We first consider the estimation of powers of θ under the natural conjugate family of prior densities and SELF.

Given a random sample $\underline{X} = (X_1, X_2, \ldots, X_n)$ from (2.2.1), let $T = \sum_{i=1}^{n}X_i$. Denoting by $L(\theta \mid \underline{x})$ the likelihood of observing $\underline{X}$, we note that

$$L(\theta \mid t) \propto \theta^{t}(1-\theta)^{nr-t}. \tag{2.4.1}$$

Thus we consider the conjugate prior for θ to be beta with parameters (ν, µ), i.e.

$$g(\theta) \propto \theta^{\nu-1}(1-\theta)^{\mu-1} \qquad (\nu, \mu \ \text{positive integers}). \tag{2.4.2}$$

Combining (2.4.1) and (2.4.2) via Bayes' theorem, the posterior density of θ comes out to be

$$g^{*}(\theta \mid t) = k\,\theta^{t+\nu-1}(1-\theta)^{nr-t+\mu-1},$$

where the normalizing constant k is obtained from

$$k^{-1} = \int_0^1\theta^{t+\nu-1}(1-\theta)^{nr-t+\mu-1}\,d\theta = B(t+\nu,\ nr-t+\mu).$$

Hence, the posterior density of θ is

$$g^{*}(\theta \mid t) = \frac{\theta^{t+\nu-1}(1-\theta)^{nr-t+\mu-1}}{B(t+\nu,\ nr-t+\mu)}. \tag{2.4.3}$$

In the following theorem, we obtain the Bayes estimator of $\theta^{p}$ (p > 0), which comes in the expression for the pth factorial moment about the origin [see Kendall and Stuart (1958, p.122)].

Theorem 3: For p > 0, the Bayes estimator of $\theta^{p}$ is given by

$$\tilde{\theta}^{p}_{B} = \frac{B(t+\nu+p,\ nr-t+\mu)}{B(t+\nu,\ nr-t+\mu)}.$$

Proof: We know that, under the squared-error loss function, the Bayes estimator of any parametric function is its posterior mean. Thus

$$\tilde{\theta}^{p}_{B} = \frac{1}{B(t+\nu,\ nr-t+\mu)}\int_0^1\theta^{t+\nu+p-1}(1-\theta)^{nr-t+\mu-1}\,d\theta
= \frac{B(t+\nu+p,\ nr-t+\mu)}{B(t+\nu,\ nr-t+\mu)},$$

and the theorem follows.
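A minimal computational sketch of Theorem 3 follows (SciPy assumed; the prior values ν = µ = 1 used below are only an illustrative choice, corresponding to a uniform prior).

```python
import numpy as np
from scipy.special import betaln

def bayes_theta_power(t, n, r, p, nu=1, mu=1):
    """Bayes estimator of theta^p under SELF with a Beta(nu, mu) prior (Theorem 3):
    B(t + nu + p, nr - t + mu) / B(t + nu, nr - t + mu), computed on the log scale."""
    return np.exp(betaln(t + nu + p, n * r - t + mu) - betaln(t + nu, n * r - t + mu))

# With nu = mu = 1 the estimator of theta itself (p = 1) is (t + 1)/(nr + 2).
t, n, r = 12, 5, 4
print(bayes_theta_power(t, n, r, p=1), (t + 1) / (n * r + 2))
```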

In the following theorem, we provide the Bayes estimators of $R(t_0)$ and 'P', given at (2.2.2) and (2.2.4), respectively. Given $n_i$ observations $X_{ij}$, i = 1, 2, ..., k; j = 1, 2, …, $n_i$ on $X_i$, i = 1, 2, ..., k, and m observations $Y_j$, j = 1, 2, …, m on Y, let us define $T_1 = \sum_{i=1}^{k}\sum_{j=1}^{n_i}X_{ij}$ and $T_2 = \sum_{j=1}^{m}Y_j$. In order to estimate 'P', we choose independent beta priors for θ and β with parameters $(\nu_1, \mu_1)$ and $(\nu_2, \mu_2)$, respectively.

Theorem 4: The Bayes estimators of $R(t_0)$ and 'P' are given, respectively, by $\tilde{R}_B(t_0)$ and $\tilde{P}_B$, where

$$\tilde{R}_B(t_0) = \sum_{x=t_0}^{r}\binom{r}{x}\frac{B\!\left(t+\nu+x,\ (n+1)r+\mu-t-x\right)}{B\!\left(t+\nu,\ nr+\mu-t\right)} \tag{2.4.4}$$

and

$$\tilde{P}_B = \sum_{x^{*}=0}^{r^{*}}\sum_{y=x^{*}}^{s}\binom{r^{*}}{x^{*}}\binom{s}{y}\,
\frac{B\!\left(t_1+\nu_1+x^{*},\ \sum_{i=1}^{k}(n_i+1)r_i+\mu_1-t_1-x^{*}\right)B\!\left(t_2+\nu_2+y,\ (m+1)s+\mu_2-t_2-y\right)}
{B\!\left(t_1+\nu_1,\ \sum_{i=1}^{k}n_i r_i+\mu_1-t_1\right)B\!\left(t_2+\nu_2,\ ms+\mu_2-t_2\right)}. \tag{2.4.5}$$

Proof: Using Lemma 1 of Chaturvedi and Tomer (2002) (which holds good if we replace UMVUEs by Bayes estimators) and Theorem 3, from (2.3.5) the Bayes estimator of $p(x; r, \theta)$, at a specified point 'x', is

$$\tilde{p}_B(x; r, \theta) = \binom{r}{x}\sum_{i=0}^{r-x}(-1)^{i}\binom{r-x}{i}\tilde{\theta}^{\,x+i}_{B}
= \binom{r}{x}\sum_{i=0}^{r-x}(-1)^{i}\binom{r-x}{i}\frac{B(t+\nu+x+i,\ nr+\mu-t)}{B(t+\nu,\ nr+\mu-t)}. \tag{2.4.6}$$

Utilizing Lemma 2 of Chaturvedi and Tomer (2002), it follows from (2.4.6) that

$$\tilde{p}_B(x; r, \theta) = \binom{r}{x}\frac{B\!\left(t+\nu+x,\ (n+1)r+\mu-t-x\right)}{B\!\left(t+\nu,\ nr+\mu-t\right)}. \tag{2.4.7}$$

Now, from the fact that

$$\tilde{R}_B(t_0) = \sum_{x=t_0}^{r}\tilde{p}_B(x; r, \theta),$$

utilizing (2.4.7) we obtain the expression at (2.4.4), and the result (2.4.4) follows.

Similarly, we can write

$$\tilde{P}_B = \sum_{x^{*}=0}^{r^{*}}\sum_{y=x^{*}}^{s}\tilde{p}_B(x^{*}; r^{*}, \theta)\,\tilde{p}_B(y; s, \beta);$$

utilizing (2.4.7) for each factor, the result (2.4.5) follows.
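The expression (2.4.4) can be evaluated on the log scale for numerical stability. The sketch below is illustrative only; the data summary t and the Beta(1, 1) prior are assumptions made for the example.

```python
import numpy as np
from scipy.special import comb, betaln

def bayes_reliability_binomial(t, n, r, t0, nu=1, mu=1):
    """Bayes estimator of R(t0) under SELF with a Beta(nu, mu) prior, as in (2.4.4)."""
    log_den = betaln(t + nu, n * r + mu - t)
    xs = np.arange(t0, r + 1)
    terms = comb(r, xs) * np.exp(betaln(t + nu + xs, (n + 1) * r + mu - t - xs) - log_den)
    return terms.sum()

# Illustrative values only; nu = mu = 1 corresponds to a uniform prior on theta.
print(bayes_reliability_binomial(t=12, n=5, r=4, t0=2))
```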

2.5 THE UMVUE OF THE POWERS OF λ, R(t₀) AND 'P' FOR POISSON DISTRIBUTION

In the following theorem we derive the UMVUE of $\lambda^{p}$ (p>0) using the method discussed in Section 2.3.

Theorem 5: For p>0, the UMVUE of $\lambda^{p}$ is

$$\tilde{\lambda}^{p}_{U} = \frac{T^{(p)}}{n^{p}}, \quad \text{if } p \le T; \qquad = 0, \ \text{otherwise}, \tag{2.5.1}$$

where $T = \sum_{i=1}^{n}X_i$ and $T^{(p)} = T(T-1)\cdots(T-p+1)$.

Proof: Given a random sample $\underline{X} = (X_1, X_2, \ldots, X_n)$ from (2.2.5), it can be seen that T is complete and sufficient for the family of Poisson distributions [see Patel, Kapadia and Owen (1976, p.158)]. Moreover, T follows the Poisson distribution with parameter nλ. Now we choose a function g(T) such that

$$E[g(T)] = \lambda^{p},$$

i.e.

$$\sum_{t=0}^{\infty} g(t)\,\frac{e^{-n\lambda}(n\lambda)^{t}}{t!} = \lambda^{p},$$

or

$$\sum_{t=p}^{\infty} g(t)\,\frac{n^{t}\,e^{-n\lambda}\,\lambda^{t-p}}{t!} = 1. \tag{2.5.2}$$

Equation (2.5.2) holds if we choose $g(T) = \tilde{\lambda}^{p}_{U}$, as given by (2.5.1). Hence the theorem.
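A small simulation sketch of (2.5.1) follows (illustrative values of λ, n and p; NumPy assumed).

```python
import numpy as np

def umvue_lambda_power(T, n, p):
    """UMVUE of lambda^p for a Poisson(lambda) sample of size n, from (2.5.1):
    T^(p) / n^p, where T^(p) = T(T-1)...(T-p+1) is the falling factorial."""
    if p > T:
        return 0.0
    falling = np.prod(np.arange(T, T - p, -1, dtype=float))
    return falling / n ** p

# Simulation check of unbiasedness (lambda, n and p are illustrative).
rng = np.random.default_rng(1)
lam, n, p = 2.5, 6, 2
Ts = rng.poisson(lam * n, size=20000)   # T = sum of n Poisson(lam) variates
print(np.mean([umvue_lambda_power(T, n, p) for T in Ts]), lam ** p)
```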

In the following theorem, we provide the UMVUEs of $R(t_0)$ and 'P', given at (2.2.6) and (2.2.8), respectively.

Theorem 6: The UMVUEs of $R(t_0)$ and 'P' are given, respectively, by $\tilde{R}_U(t_0)$ and $\tilde{P}_U$, where

$$\tilde{R}_U(t_0) = \sum_{x=t_0}^{T}\binom{T}{x}\frac{(n-1)^{T-x}}{n^{T}} \tag{2.5.3}$$

and

$$\tilde{P}_U = \sum_{x^{*}=0}^{T_1}\sum_{y=x^{*}}^{T_2}\binom{T_1}{x^{*}}\binom{T_2}{y}\,
\frac{\left(\sum_{i=1}^{k}n_i-1\right)^{T_1-x^{*}}(m-1)^{T_2-y}}{\left(\sum_{i=1}^{k}n_i\right)^{T_1}m^{T_2}}. \tag{2.5.4}$$

Proof: We can write (2.2.5) as

$$p(x; \lambda) = \sum_{i=0}^{\infty}\frac{(-1)^{i}\lambda^{x+i}}{x!\,i!}. \tag{2.5.5}$$

Using Lemma 1 of Chaturvedi and Tomer (2002) and Theorem 5, it follows from (2.5.5) that the UMVUE of $p(x; \lambda)$, at a specified point 'x', is

$$\tilde{p}_U(x; \lambda) = \sum_{i=0}^{T-x}\frac{(-1)^{i}}{x!\,i!}\,\tilde{\lambda}^{\,x+i}_{U}
= \sum_{i=0}^{T-x}\frac{(-1)^{i}\,T!}{x!\,i!\,(T-x-i)!\;n^{x+i}}
= \binom{T}{x}\left(\frac{1}{n}\right)^{x}\left(1-\frac{1}{n}\right)^{T-x}. \tag{2.5.6}$$

Now from (2.2.6), we get

$$\tilde{R}_U(t_0) = \sum_{x=t_0}^{T}\tilde{p}_U(x; \lambda).$$

Utilizing (2.5.6), the result (2.5.3) follows.

From arguments similar to those used in obtaining the UMVUE of $R(t_0)$, it can be shown that

$$\tilde{P}_U = \sum_{x^{*}=0}^{\infty}\sum_{y=x^{*}}^{\infty}\tilde{p}_U(x^{*}; \lambda^{*})\,\tilde{p}_U(y; \beta),$$

which reduces to the expression given in (2.5.4). Hence the theorem.
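Since the right-hand side of (2.5.3) is simply the upper tail of a Binomial(T, 1/n) distribution, it can be computed in one line; the sketch below assumes SciPy and uses made-up values of T, n and t₀.

```python
from scipy.stats import binom

def umvue_reliability_poisson(T, n, t0):
    """UMVUE of R(t0) = P(X >= t0) for a Poisson life-time model, from (2.5.3).
    The estimator equals the upper tail of a Binomial(T, 1/n) distribution."""
    return binom.sf(t0 - 1, T, 1.0 / n)   # sum_{x=t0}^{T} C(T,x)(1/n)^x(1-1/n)^(T-x)

# Illustrative numbers: T is the total count observed from n sampled units.
print(umvue_reliability_poisson(T=30, n=6, t0=4))
```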

2.6 THE BAYESIAN ESTIMATION OF THE POWERS OF λ, R(t₀) AND 'P' FOR POISSON DISTRIBUTION

Denoting $T = \sum_{i=1}^{n}X_i$, the likelihood of observing a random sample $\underline{X} = (X_1, X_2, \ldots, X_n)$ from (2.2.5) is

$$L(\lambda \mid t) \propto e^{-n\lambda}\lambda^{t}. \tag{2.6.1}$$

We consider the conjugate prior for λ to be gamma with parameters (a, ν), i.e.

$$g(\lambda) \propto \lambda^{a-1}e^{-\nu\lambda} \qquad (a\ \text{a positive integer}). \tag{2.6.2}$$

From (2.6.1) and (2.6.2), the posterior density of λ is

$$g^{*}(\lambda \mid t) = \frac{(n+\nu)^{t+a}}{\Gamma(t+a)}\,\lambda^{t+a-1}e^{-(n+\nu)\lambda}.$$

In the following theorem, we give the Bayes estimator of $\lambda^{p}$ (p > 0), which appears in the expression for the pth factorial moment about the origin [see Johnson and Kotz (1969, p.91)].

Theorem 7: For p > 0, the Bayes estimator of $\lambda^{p}$ is

$$\tilde{\lambda}^{p}_{B} = \frac{\Gamma(t+a+p)}{\Gamma(t+a)\,(n+\nu)^{p}}.$$

The proof of the theorem is similar to that of Theorem 3.

In the following theorem, we provide the Bayes estimators of $R(t_0)$ and 'P'. In order to estimate 'P', we consider independent gamma priors for $\lambda^{*}$ and β with parameters $(a_1, \nu_1)$ and $(a_2, \nu_2)$, respectively.

Theorem 8: The Bayes estimators of $R(t_0)$ and 'P' are given, respectively, by $\tilde{R}_B(t_0)$ and $\tilde{P}_B$, where

$$\tilde{R}_B(t_0) = \left(\frac{n+\nu}{n+\nu+1}\right)^{t+a}\sum_{x=t_0}^{\infty}\binom{t+a+x-1}{x}(n+\nu+1)^{-x} \tag{2.6.3}$$

and

$$\tilde{P}_B = \sum_{x^{*}=0}^{\infty}\sum_{y=x^{*}}^{\infty}\binom{t_1+a_1+x^{*}-1}{x^{*}}\binom{t_2+a_2+y-1}{y}\,
\frac{\left(\sum_{i=1}^{k}n_i+\nu_1\right)^{t_1+a_1}(m+\nu_2)^{t_2+a_2}}
{\left(\sum_{i=1}^{k}n_i+\nu_1+1\right)^{t_1+a_1+x^{*}}(m+\nu_2+1)^{t_2+a_2+y}}. \tag{2.6.4}$$

Proof: We can write (2.2.5) as

$$p(x; \lambda) = \sum_{i=0}^{\infty}\frac{(-1)^{i}\lambda^{x+i}}{x!\,i!},$$

which, on using Theorem 7, gives the Bayes estimator of $p(x; \lambda)$ at a specified point 'x' as

$$\tilde{p}_B(x; \lambda) = \frac{1}{x!}\sum_{i=0}^{\infty}\frac{(-1)^{i}}{i!}\,\tilde{\lambda}^{\,x+i}_{B}
= \frac{1}{x!}\sum_{i=0}^{\infty}\frac{(-1)^{i}\,\Gamma(t+a+x+i)}{i!\,\Gamma(t+a)\,(n+\nu)^{x+i}}
= \binom{t+a+x-1}{x}\frac{(n+\nu)^{t+a}}{(n+\nu+1)^{t+a+x}}. \tag{2.6.5}$$

The results (2.6.3) and (2.6.4) follow, respectively, from (2.2.6) and (2.2.8), on using (2.6.5).

REMARK 2: Looking at the proofs of the above theorems, we conclude that, in the present approach, the classical and the Bayes estimators of $R(t_0)$ and 'P' can be obtained simply by using the estimators of the factorial moments, and no separate treatment is needed to estimate these parametric functions.
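The estimator (2.6.3) can be computed as the upper tail of a negative binomial distribution; the sketch below assumes SciPy and uses the illustrative prior choice a = ν = 1.

```python
from scipy.stats import nbinom

def bayes_reliability_poisson(t, n, t0, a=1, nu=1):
    """Bayes estimator of R(t0) under SELF with a Gamma(a, nu) prior on lambda,
    following (2.6.3): the upper tail at t0 of a negative binomial distribution
    with size t + a and success probability (n + nu)/(n + nu + 1)."""
    p = (n + nu) / (n + nu + 1.0)
    return nbinom.sf(t0 - 1, t + a, p)

# Illustrative numbers only.
print(bayes_reliability_poisson(t=30, n=6, t0=4))
```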


CHAPTER III

SEQUENTIAL POINT ESTIMATION PROCEDURES FOR THE

GENERALIZED LIFE DISTRIBUTIONS

3.1 INTRODUCTION

A lot of work has been done in the area of sequential point estimation for

the parameters associated with various probabilistic models useful in reliability

analysis. Robbins (1959) considered the problem of sequential point estimation of

the mean of a normal population under absolute error loss and linear cost. Starr

(1966) generalized these results considering a family of loss functions and cost

function of the general form. Starr and Woodroofe (1969) proved the bounded

nature of ‘regret’ of the sequential procedure of Starr (1966). Later on, Starr and

Woodroofe (1972) proposed sequential procedure for the point estimation of mean

of an exponential distribution, which is very useful in reliability analysis, and

proved the asymptotic bounded nature of ‘regret’. Several authors have obtained

similar results for different estimation problems. For a brief review, one may refer

to Wang (1973, 1980), Nago and Takada (1980), Chaturvedi (1986, a; 1987)


and Chaturvedi and Shukla (1990). Woodroofe (1977) introduced the concept of

‘second-order approximations’ in the area of sequential estimation and obtained

such approximations for the regret of the sequential procedure with the minimum

risk point estimation of the mean of gamma distribution. He considered UMVUE

at both the stopping and estimation stages. Chaturvedi (1986, b) obtained second-

order approximations for sequential procedure to estimate mean vector of a

multinormal population. Isogai and Uno (1995), through the bias-correction of

UMVUE, proposed another sequential estimator and showed its dominance over

the UMVUE in terms of having the smaller risk. Similar results for normal and

exponential distributions have been obtained by Isogai and Uno (1993, 1994) and

Mukhopadhyay (1994).

In the present chapter, we develop a sequential estimation procedure for the generalized distributions considered by Chaturvedi et al. (2002; 2003, a). In

Section 3.2, we discuss the generalized life distributions and consider the problem

of minimum risk point estimation. Section 3.3 describes the set-up of the problem.

The sequential estimation procedure and second-order approximations are

obtained in Section 3.4. In Section 3.5, the condition for the negative regret of the

sequential procedure is obtained and an improved

estimator is proposed which dominates the UMVUE.


3.2 THE GENERALIZED LIFE DISTRIBUTIONS

Let the random variable (rv) X follows the generalized life distributions

considered by Chaturvedi et.al. (2002; 2003, a) with probability density function

(pdf)

0d?,g(x),;ax;?

g(x)exp

G(d) d?

(x)'g(x)1dg?)d,(x;f >>

−= (3.2.1)

where ‘a’ is known and d and ? are parameters. Here, g (x) is real valued, strictly

increasing function of X with g (a) = 0 and (x)'g denotes the first derivative of g

(x).

The model (3.2.1) is called the generalized life distribution since it includes various life distributions useful in reliability analysis, as discussed in Chapter I.

The reliability function R(t) for a specified mission time t (t > 0) can be obtained as

$$R(t) = P(X > t) = \int_t^{\infty}\frac{g'(x)\,g^{\delta-1}(x)}{\Gamma(\delta)\,\theta^{\delta}}\exp\!\left(-\frac{g(x)}{\theta}\right)dx
= \frac{1}{\Gamma(\delta)}\int_{g(t)/\theta}^{\infty}y^{\delta-1}e^{-y}\,dy, \tag{3.2.2}$$

where $y = g(x)/\theta$.

The hazard-rate h(t) is defined as

$$h(t) = \frac{f(t; \delta, \theta)}{R(t)};$$

from (3.2.1) and (3.2.2), the above expression yields

$$h(t) = \frac{g'(t)/\theta}{\displaystyle\int_{g(t)/\theta}^{\infty}\left(\frac{\theta y}{g(t)}\right)^{\delta-1}\exp\!\left(-\left(y-\frac{g(t)}{\theta}\right)\right)dy}
= \frac{g'(t)/\theta}{\displaystyle\int_0^{\infty}\left(1+\frac{\theta s}{g(t)}\right)^{\delta-1}e^{-s}\,ds}, \qquad \text{where } s = y - \frac{g(t)}{\theta}.$$

The behaviour of the hazard-rate depends upon g'(t). If g'(t) is constant (the exponential distribution, and the Weibull distribution for p=1), h(t) is also constant. If g'(t) is monotonically increasing in t (the Weibull distribution for p>1, the Rayleigh distribution and the Chi distribution for a = 2), h(t) is monotonically increasing in t. If g'(t) is monotonically decreasing in t (the Weibull distribution for p<1, the Burr distribution and the Pareto distribution), h(t) is monotonically decreasing in t.

The classical and Bayesian inferential procedures for the model (3.2.1) are considered by Chaturvedi and Tomer (2003, a).


3.3 THE SET-UP OF THE ESTIMATION PROBLEM

Our aim is to estimate the parameter θ, assuming δ to be known. Given a random sample $\underline{X} = (X_1, X_2, \ldots, X_n)$ of size n observed from (3.2.1), the joint pdf of $\underline{X}$ is

$$f(x_1, x_2, \ldots, x_n; \theta) = \frac{\prod_{i=1}^{n}g'(x_i)\,g^{\delta-1}(x_i)}{\Gamma^{n}(\delta)\,\theta^{n\delta}}\exp\!\left(-\frac{1}{\theta}\sum_{i=1}^{n}g(x_i)\right). \tag{3.3.1}$$

It can be seen from (3.3.1) that $S = \sum_{i=1}^{n}g(x_i)$ is complete and sufficient for the model (3.2.1). Moreover, from the additive property of the gamma distribution, S follows a gamma distribution with parameters nδ and θ [see Johnson and Kotz (1970, p.170)].

The UMVUE of θ is $\hat{\theta}_n = S/(n\delta)$, with pdf

$$f(\hat{\theta}_n; \theta) = \frac{(n\delta)^{n\delta}}{\Gamma(n\delta)\,\theta^{n\delta}}\,\hat{\theta}_n^{\,n\delta-1}\exp\!\left(-\frac{n\delta\,\hat{\theta}_n}{\theta}\right).$$

Also,

$$E(\hat{\theta}_n) = \theta \qquad \text{and} \qquad V(\hat{\theta}_n) = \frac{\theta^{2}}{n\delta}.$$

Let the loss incurred in estimating θ by $\hat{\theta}_n$ under the squared-error loss function (SELF) and a linear cost of sampling be

$$L_n(A) = A\left(\hat{\theta}_n-\theta\right)^{2} + n, \tag{3.3.2}$$

where A is a known positive constant. The risk corresponding to the loss function (3.3.2) is

$$R_n(A) = E[L_n(A)] = A\,E\left(\hat{\theta}_n-\theta\right)^{2} + n = \frac{A\theta^{2}}{n\delta} + n. \tag{3.3.3}$$

Our aim is to minimize the risk (3.3.3) while estimating θ by $\hat{\theta}_n$. The value $n = n_0$ minimizing the risk (3.3.3) is the solution of the equation

$$\left.\frac{\partial R_n(A)}{\partial n}\right|_{n=n_0} = 0,$$

which yields

$$n_0 = \left(A\delta^{-1}\right)^{1/2}\theta \tag{3.3.4}$$

and the associated minimum risk is

$$R_{n_0}(A) = 2n_0,$$

since $A\theta^{2}/\delta = n_0^{2}$.

It is obvious from (3.3.4) that $n_0$ depends upon the unknown parameter θ. In the absence of any knowledge about θ, no fixed sample size procedure yields a solution to the problem. In this situation, we adopt the following sequential estimation procedure.
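As a quick numerical illustration of (3.3.3) and (3.3.4) (a minimal sketch; A, δ and θ take arbitrary values here, and in practice θ is unknown, which is precisely why the sequential rule of the next section is needed):

```python
import numpy as np

# Minimal sketch of the minimum-risk fixed-sample problem (3.3.3)-(3.3.4);
# A, delta and theta are illustrative values, not taken from the thesis.
A, delta, theta = 400.0, 2.0, 1.5

def risk(n):
    return A * theta**2 / (n * delta) + n          # R_n(A) in (3.3.3)

n0 = theta * np.sqrt(A / delta)                    # optimal sample size (3.3.4)
ns = np.arange(5, 60)
print("n0 =", n0, " min risk =", risk(n0), " ~ 2*n0 =", 2 * n0)
print("best integer n on grid:", ns[np.argmin(risk(ns))])
```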

3.4 THE SEQUENTIAL ESTIMATION PROCEDURE AND SECOND-ORDER APPROXIMATIONS

Let us start with a sample of size $m \ge 2$. Then, motivated by (3.3.4), the stopping time N = N(A) is defined by

$$N = \inf\left\{n \ge m : n \ge \left(A\delta^{-1}\right)^{1/2}\hat{\theta}_n\right\}. \tag{3.4.1}$$

When we stop, we estimate θ by $\hat{\theta}_N$. The risk associated with the estimator $\hat{\theta}_N$ is

$$R_N(A) = E[L_N(A)] = A\,E\left(\hat{\theta}_N-\theta\right)^{2} + E(N). \tag{3.4.2}$$

In the following theorem, we derive the second-order approximations for the expected sample size and the risk associated with the sequential procedure.

Theorem: For all $m\delta > 1$, as $A \to \infty$,

$$E(N) = n_0 + \nu - \delta^{-1} - 1 + o(1) \tag{3.4.3}$$

and

$$R_N(A) = 2n_0 + 3\delta^{-1} - 1 + o(1), \tag{3.4.4}$$

where ν is the constant specified in Woodroofe (1977).

Proof: Denoting $Z_i = g(x_i)/(\theta\delta)$ and $S_n = \sum_{i=1}^{n}Z_i$, the stopping rule (3.4.1) can be written as

$$N = \inf\left\{n \ge m : n \ge \left(A\delta^{-1}\right)^{1/2}\sum_{i=1}^{n}\frac{g(x_i)}{n\delta}\right\}
= \inf\left\{n \ge m : S_n \le \frac{n^{2}}{n_0}\right\}. \tag{3.4.5}$$

Comparing (3.4.5) with equation (1.1) of Woodroofe (1977), we can identify, in his notation, $c = n_0^{-1}$, $L(n) = 1$, $\mu = E(Z_i) = 1$ and $\tau^{2} = V(Z_i) = \delta^{-1}$, together with the corresponding values of his remaining constants. We then have, from Theorem 2.4 of Woodroofe (1977), for $m\delta > 1$,

$$E(N) = n_0 + \nu - \delta^{-1} - 1 + o(1),$$

and (3.4.3) follows.

It can easily be seen from (3.4.1) that N is of the form $t_c$ considered by Woodroofe (1977) with $X_i = Z_i$. Moreover, $\hat{\theta}_N/\theta = S_N/N$, so that

$$A\left(\hat{\theta}_N-\theta\right)^{2} = A\theta^{2}\left(\frac{S_N}{N}-1\right)^{2} = \delta\,n_0^{2}\,\frac{(S_N-N)^{2}}{N^{2}}
= \delta\left(n_0^{2}N^{-2}-1\right)(S_N-N)^{2} + \delta\,(S_N-N)^{2}. \tag{3.4.6}$$

Utilizing Theorem 1 of Chow et al. (1965), we obtain

$$E(S_N-N)^{2} = \delta^{-1}E(N). \tag{3.4.7}$$

On combining (3.4.2), (3.4.6) and (3.4.7),

$$R_N(A) = 2E(N) + \delta\,E\!\left[\left(n_0^{2}N^{-2}-1\right)(S_N-N)^{2}\right]. \tag{3.4.8}$$

The expectation on the right-hand side of (3.4.8) is now split into two terms, I and II say, which are estimated separately. Using (1.2) of Woodroofe (1977), the asymptotic normality

$$\frac{S_N-N}{\sqrt{N}} \xrightarrow{\ L\ } N\!\left(0,\ \delta^{-1}\right), \tag{3.4.9}$$

Theorem 8 of Chow et al. (1965), and Theorems 2.1 and 2.2 of Woodroofe (1977), which give $(N-n_0)/\sqrt{n_0} \xrightarrow{L} N(0, \delta^{-1})$ with $(N-n_0)^{2}/n_0$ uniformly integrable for all $m\delta > 1$, the limits of E(I) and E(II) can be evaluated term by term. Collecting the terms, we get

$$\delta\,E\!\left[\left(n_0^{2}N^{-2}-1\right)(S_N-N)^{2}\right] = 5\delta^{-1} + 1 - 2\nu + o(1). \tag{3.4.12}$$

Utilizing (3.4.3) and (3.4.12), we obtain from (3.4.8) that

$$R_N(A) = 2\left(n_0+\nu-\delta^{-1}-1\right) + 5\delta^{-1} + 1 - 2\nu + o(1) = 2n_0 + 3\delta^{-1} - 1 + o(1),$$

and the result (3.4.4) follows.


3.5 CONDITION FOR NEGATIVE REGRET AND AN IMPROVED ESTIMATOR FOR θ

Following Starr and Woodroofe (1969), we define the 'regret' of the sequential procedure (3.4.1) by

$$R_g(A) = R_N(A) - R_{n_0}(A) = 3\delta^{-1} - 1 + o(1).$$

Woodroofe (1977) concluded theoretically that $R_N(A) > 2n_0$ for all sufficiently large values of $n_0$.

We give below a theoretical justification of the numerical finding of Starr and Woodroofe (1972) by giving the condition under which the regret may be negative, i.e. $R_N(A) < 2n_0$.

Let us consider a stopping rule N such that $\hat{\theta}_N$ overestimates θ. Under such a condition, from (3.4.1), $N \ge \left(A\delta^{-1}\right)^{1/2}\hat{\theta}_N \ge \left(A\delta^{-1}\right)^{1/2}\theta = n_0$, so that (3.4.6) together with (3.4.2) yields

$$R_N(A) \le \delta\,n_0\,E\!\left[\frac{(S_N-N)^{2}}{N}\right] + E(N). \tag{3.5.1}$$

Utilizing (3.4.9), we obtain from (3.5.1) that

$$R_N(A) \le n_0 + E(N);$$

utilizing (3.4.3), we get

$$R_N(A) \le 2n_0 + \nu - \delta^{-1} - 1 + o(1).$$

Hence

$$R_N(A) - 2n_0 \le \nu - \delta^{-1} - 1 + o(1).$$

For the distributions having $\nu < \delta^{-1} + 1$, the regret will be negative. The generalized life distributions (3.2.1) include the exponential, Weibull, Rayleigh and Burr distributions, which have negative regret.

In what follows, we propose an improved estimator of θ. An improved estimator of θ, having smaller risk than $\hat{\theta}_N$ for the stopping rule (3.4.1), is

$$T_N = \hat{\theta}_N + k\,(A\delta)^{-1/2}, \tag{3.5.2}$$

where k is any scalar. Now we find the value of k for which the dominance of $T_N$ over $\hat{\theta}_N$ can be established.

Let $R_N^{0}(A)$ denote the risk associated with the improved estimator (3.5.2). Then

$$R_N^{0}(A) = A\,E\left(T_N-\theta\right)^{2} + E(N)
= R_N(A) + 2k\left(A\delta^{-1}\right)^{1/2}E\left(\hat{\theta}_N-\theta\right) + k^{2}\delta^{-1}
= R_N(A) + 2k\,\theta\left(A\delta^{-1}\right)^{1/2}E\!\left[\frac{S_N-N}{N}\right] + k^{2}\delta^{-1}. \tag{3.5.3}$$

Using Wald's lemma, it can be shown that

$$\left(A\delta^{-1}\right)^{1/2}\theta\,E\!\left[\frac{S_N-N}{N}\right] = -\delta^{-1} + o(1).$$

From (3.5.3),

$$R_N^{0}(A) = R_N(A) - 2k\delta^{-1} + k^{2}\delta^{-1}.$$

From the above expression it is clear that $R_N^{0}(A) \le R_N(A)$ provided $k \in [0, 2]$, and $R_N^{0}(A) = R_N(A)$ when either k = 0 or k = 2. Using the principle of minima and maxima, the optimum value of k, for which $T_N$ has minimum risk, is k = 1, and such an optimum estimator of θ is given by $T_N = \hat{\theta}_N + (A\delta)^{-1/2}$.

REMARK: The sequential procedure for the generalized life distributions considered in this chapter provides a solution where the fixed sample size procedure fails to do so when the parameter θ is unknown. The second-order approximations for the ASN and the risk associated with the proposed sequential procedure are derived. The condition for the negative regret of the sequential procedure is obtained, and it is found that the generalized life distribution considered in this chapter contains many distributions that have negative regret. An improved estimator of θ is also proposed, and it is found to have smaller risk than the traditional UMVUE.


CHAPTER-IV

BAYESIAN ESTIMATION PROCEDURES FOR A FAMILY OF LIFE

TIME DISTRIBUTIONS UNDER SELF AND GELF

4.1 INTRODUCTION

Bhattacharya (1967) introduced the Bayesian ideas in the reliability

analysis. He considered the problem of estimating the parameter and reliability

function of one-parameter exponential distribution under type II censoring and

SELF. Bhattacharya and Kumar (1986) and Bhattacharya and Tyagi (1988)

obtained Bayes estimators of the reliability function with other priors. The Bayes

estimators for the reliability function of exponential and Weibull distributions

using uniform and gamma priors have been obtained by Harris and Singpurwala

(1968). Chaturvedi, Tiwari and Kumar (2007) obtained the Bayes estimator of the

reliability function for binomial and Poisson distributions. Since then a lot of work

has been done in this direction. For a brief review, one may refer to Martz and

Waller (1982) and Sinha (1986).

Another measure of reliability under stress-strength set-up is the probability

P = P (X > Y), where the random variable X and Y represent strength and stress


respectively. For the case when X and Y both were assumed to follow normal

distributions, the Bayes estimator of ‘P’ under SELF was obtained by Enis and

Geisser (1971). Chaturvedi and Tomer (2002) considered the Bayes estimation of

‘P’ under SELF for negative binomial distribution. Chaturvedi, Tiwari and Kumar

(2007) derived the Bayes estimator of ‘P’ for binomial and Poisson distributions.

The use of symmetrical loss function is inappropriate while estimating

reliability function because the overestimation is usually more serious than the

underestimation. For the situation when overestimation and underestimation are

not equally serious, Varian (1975) has suggested LINEX (linear in exponential)

loss function. Zellner (1986) used LINEX loss function in Bayesian estimation and

prediction. Basu and Ebrahimi (1991) obtained Bayes estimators of reliability

function and ‘P’ for exponential distribution under both the SELF and LINEX loss

functions. The LINEX loss function is suitable for the estimation of location

parameter but not for the estimation of scale parameter and other parametric

functions. Calabria and Pulcini (1994) suggested the general entropy loss function

(GELF) for estimating these quantities.

In the present chapter, we consider a family of lifetime distributions as

considered by Moore and Bilikam (1978). They obtained Bayesian estimation

procedures for the parameter and reliability function under SELF and type II

censoring. We consider the Bayes estimators for the powers of the parameter,

reliability function and P = P (X > Y) both under SELF and GELF with complete

sample. Bayes estimators of powers of parameter are utilized to obtain Bayes


estimator of the pdf at a specified point. This estimator is now used to obtain the

Bayes estimators for the reliability function and P = P (X > Y).

In Section 4.2, the family of life distributions is discussed. This family

includes several probabilistic distributions useful in reliability analysis as

discussed in Chapter I. In Section 4.3, the Bayes estimators of θ, ρ(t) (the reliability function at a specified mission time t) and 'P' under SELF are discussed. Bayes estimators of powers of θ, of ρ(t) and of 'P' under GELF are considered in Section 4.4. Throughout the above discussion, we assume that the shape parameter is known, while the scale parameter is unknown. Finally, in Section 4.5, we assume that both parameters are unknown and the Bayes estimators for both of them are obtained after calculating the marginal posteriors in each case.

4.2 THE FAMILY OF LIFETIME DISTRIBUTIONS

Let the random variable X follow the family of lifetime distributions considered by Moore and Bilikam (1978), given by

$$f(x; \beta, \theta) = \frac{\beta\,g'(x)\,g^{\beta-1}(x)}{\theta}\exp\!\left(-\frac{g^{\beta}(x)}{\theta}\right); \qquad x, \beta, \theta > 0. \tag{4.2.1}$$

We assume that the shape parameter β is known but the scale parameter θ is unknown. Here g(x) is a real-valued, strictly increasing function of x with $g(0^{+}) = 0$ and $g(\infty) = \infty$. The family (4.2.1) is known as a family of lifetime distributions since it includes various probabilistic distributions useful in reliability analysis as particular cases, as discussed in Chapter I.

For the family (4.2.1), the reliability function at a specified mission time 't' is

$$\rho(t) = P(X > t) = \exp\!\left(-\frac{g^{\beta}(t)}{\theta}\right). \tag{4.2.2}$$

Now, the hazard-rate is given as

$$h(t) = \frac{f(t; \beta, \theta)}{\rho(t)};$$

utilizing (4.2.1) and (4.2.2), we have

$$h(t) = \frac{\beta}{\theta}\,g'(t)\,g^{\beta-1}(t). \tag{4.2.3}$$

The behaviour of the hazard-rate depends upon g'(t). If g'(t) is constant (the exponential distribution, and the Weibull distribution for β=1), h(t) is also constant; h(t) is monotonically decreasing in t if g'(t) is monotonically decreasing in t (the Burr and Pareto distributions, and the Weibull distribution for β<1); and h(t) is monotonically increasing in t if g'(t) is monotonically increasing in t (the Rayleigh distribution, and the Weibull distribution for β>1).
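The behaviour described above is easy to verify numerically for the Weibull sub-family g(x) = x; the sketch below (illustrative values only) evaluates the hazard-rate (4.2.3) for β < 1, β = 1 and β > 1.

```python
import numpy as np

def hazard(t, beta, theta, g=lambda x: x, g_prime=lambda x: 1.0):
    """Hazard rate (4.2.3) of the family (4.2.1): h(t) = (beta/theta) g'(t) g(t)^(beta-1).
    The default g(x) = x gives the Weibull sub-family."""
    return (beta / theta) * g_prime(t) * g(t) ** (beta - 1)

ts = np.array([0.5, 1.0, 2.0, 4.0])
for beta in (0.5, 1.0, 2.0):          # decreasing, constant and increasing failure rate
    print(beta, np.round(hazard(ts, beta, theta=1.0), 3))
```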

4.3 BAYES ESTIMATORS OF POWERS OF θ, ρ(t) AND 'P' UNDER SELF

Let n items be put on a test and let us obtain the random sample $\underline{X} = (X_1, X_2, \ldots, X_n)$. The likelihood of observing $\underline{X}$ is

$$L(\theta \mid \underline{x}) = \frac{\beta^{n}}{\theta^{n}}\prod_{i=1}^{n}g'(x_i)\,g^{\beta-1}(x_i)\,\exp\!\left(-\frac{1}{\theta}\sum_{i=1}^{n}g^{\beta}(x_i)\right),$$

i.e.

$$L(\theta \mid \underline{x}) \propto \theta^{-n}\exp(-S_n/\theta), \tag{4.3.1}$$

where $S_n = \sum_{i=1}^{n}g^{\beta}(x_i)$.

Looking at (4.3.1), we consider the natural conjugate prior (NCP) for θ to be the inverted gamma

$$p(\theta) = \frac{\mu^{\nu}}{\Gamma(\nu)}\,\theta^{-(\nu+1)}\exp(-\mu/\theta); \qquad \theta, \mu > 0,\ \nu \ \text{a positive integer}.$$

Combining the likelihood and the prior via Bayes' theorem, the posterior density of θ is

$$h(\theta \mid \underline{x}) = k\,\theta^{-(n+\nu+1)}\exp\!\left(-(S_n+\mu)/\theta\right), \tag{4.3.2}$$

where k is the constant of proportionality; after calculation,

$$k = \frac{(S_n+\mu)^{n+\nu}}{\Gamma(n+\nu)}.$$

Now, from (4.3.2),

$$h(\theta \mid \underline{x}) = \frac{(S_n+\mu)^{n+\nu}}{\Gamma(n+\nu)}\,\theta^{-(n+\nu+1)}\exp\!\left(-(S_n+\mu)/\theta\right). \tag{4.3.3}$$

In the following theorem we obtain the Bayes estimators of powers (positive as well as negative) of θ.

Theorem 1: For p>0, the Bayes estimators of $\theta^{p}$ and $\theta^{-p}$ under SELF are $\tilde{\theta}^{p}_{BS}$ and $\tilde{\theta}^{-p}_{BS}$, respectively, where

$$\tilde{\theta}^{p}_{BS} = \frac{\Gamma(n+\nu-p)}{\Gamma(n+\nu)}\,(S_n+\mu)^{p}; \qquad p < n+\nu, \tag{4.3.4}$$

and

$$\tilde{\theta}^{-p}_{BS} = \frac{\Gamma(n+\nu+p)}{\Gamma(n+\nu)}\,(S_n+\mu)^{-p}. \tag{4.3.5}$$

Proof: Since, under SELF, the Bayes estimator is the posterior mean, we have

$$\tilde{\theta}^{p}_{BS} = \frac{(S_n+\mu)^{n+\nu}}{\Gamma(n+\nu)}\int_0^{\infty}\theta^{-(n+\nu-p+1)}\exp\!\left(-(S_n+\mu)/\theta\right)d\theta.$$

Solving it, (4.3.4) follows. Similarly, (4.3.5) can be obtained.
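A minimal sketch of (4.3.4) follows (SciPy assumed; the prior values ν = 2, µ = 1 and the data summary S are illustrative).

```python
import numpy as np
from scipy.special import gammaln

def bayes_theta_power_self(S, n, p, nu=2, mu=1.0):
    """Bayes estimator of theta^p under SELF for the family (4.2.1) with an
    inverted-gamma(nu, mu) prior, as in (4.3.4): Gamma(n+nu-p)/Gamma(n+nu) * (S+mu)^p.
    Requires p < n + nu; S = sum of g(x_i)^beta over the sample."""
    if p >= n + nu:
        raise ValueError("p must be smaller than n + nu")
    return np.exp(gammaln(n + nu - p) - gammaln(n + nu) + p * np.log(S + mu))

# For p = 1 this reduces to (S + mu)/(n + nu - 1), the inverse-gamma posterior mean.
S, n = 14.2, 10
print(bayes_theta_power_self(S, n, p=1), (S + 1.0) / (n + 2 - 1))
```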

In the following theorem, we obtain expressions for the risks, posterior risks and Bayes risks of the Bayes estimators $\tilde{\theta}^{p}_{BS}$ and $\tilde{\theta}^{-p}_{BS}$ under SELF. In what follows, we use

$R_S(\tilde{\theta}^{p}_{BS})$ : risk of $\tilde{\theta}^{p}_{BS}$ under SELF,

$PR_S(\tilde{\theta}^{p}_{BS})$ : posterior risk of $\tilde{\theta}^{p}_{BS}$ under SELF, and

$BR_S(\tilde{\theta}^{p}_{BS})$ : Bayes risk of $\tilde{\theta}^{p}_{BS}$ under SELF.

Theorem 2: The risk, posterior risk and Bayes risk of the Bayes estimators of $\theta^{p}$ and $\theta^{-p}$ under SELF are given, respectively, by

$$R_S(\tilde{\theta}^{p}_{BS}) = \left[\frac{\Gamma(n+\nu-p)}{\Gamma(n+\nu)}\right]^{2}\sum_{i=0}^{2p}\binom{2p}{i}\mu^{2p-i}\,\frac{\Gamma(n+i)}{\Gamma(n)}\,\theta^{i}
- 2\theta^{p}\,\frac{\Gamma(n+\nu-p)}{\Gamma(n+\nu)}\sum_{i=0}^{p}\binom{p}{i}\mu^{p-i}\,\frac{\Gamma(n+i)}{\Gamma(n)}\,\theta^{i} + \theta^{2p}; \qquad p<n+\nu, \tag{4.3.6}$$

$$PR_S(\tilde{\theta}^{p}_{BS}) = \left[\frac{\Gamma(n+\nu-2p)}{\Gamma(n+\nu)}-\left(\frac{\Gamma(n+\nu-p)}{\Gamma(n+\nu)}\right)^{2}\right](S_n+\mu)^{2p}; \qquad 2p<n+\nu, \tag{4.3.7}$$

$$BR_S(\tilde{\theta}^{p}_{BS}) = \left[1-\frac{\Gamma^{2}(n+\nu-p)}{\Gamma(n+\nu)\,\Gamma(n+\nu-2p)}\right]\frac{\Gamma(\nu-2p)}{\Gamma(\nu)}\,\mu^{2p}; \qquad p<\nu/2, \tag{4.3.8}$$

$$R_S(\tilde{\theta}^{-p}_{BS}) = \theta^{-2p}\left[\left(\frac{\Gamma(n+\nu+p)}{\Gamma(n+\nu)}\right)^{2}\frac{1}{\Gamma(n)}\int_0^{\infty}\frac{z^{n-1}e^{-z}}{(z+\mu/\theta)^{2p}}\,dz
- \frac{2\,\Gamma(n+\nu+p)}{\Gamma(n+\nu)}\,\frac{1}{\Gamma(n)}\int_0^{\infty}\frac{z^{n-1}e^{-z}}{(z+\mu/\theta)^{p}}\,dz + 1\right], \tag{4.3.9}$$

$$PR_S(\tilde{\theta}^{-p}_{BS}) = \left[\frac{\Gamma(n+\nu+2p)}{\Gamma(n+\nu)}-\left(\frac{\Gamma(n+\nu+p)}{\Gamma(n+\nu)}\right)^{2}\right](S_n+\mu)^{-2p} \tag{4.3.10}$$

and

$$BR_S(\tilde{\theta}^{-p}_{BS}) = \left[1-\frac{\Gamma^{2}(n+\nu+p)}{\Gamma(n+\nu)\,\Gamma(n+\nu+2p)}\right]\frac{\Gamma(\nu+2p)}{\Gamma(\nu)}\,\mu^{-2p}. \tag{4.3.11}$$

Proof: The risk corresponding to $\tilde{\theta}^{p}_{BS}$ is

$$R_S(\tilde{\theta}^{p}_{BS}) = E_{S_n|\theta}\left(\tilde{\theta}^{p}_{BS}-\theta^{p}\right)^{2}
= \left[\frac{\Gamma(n+\nu-p)}{\Gamma(n+\nu)}\right]^{2}E_{S_n|\theta}\left(S_n+\mu\right)^{2p}
- 2\theta^{p}\,\frac{\Gamma(n+\nu-p)}{\Gamma(n+\nu)}\,E_{S_n|\theta}\left(S_n+\mu\right)^{p} + \theta^{2p}. \tag{4.3.12}$$

Let us make the transformation $U = g^{\beta}(X)$. It is easy to see that U follows the exponential distribution with pdf

$$h(u; \theta) = \frac{1}{\theta}\exp(-u/\theta); \qquad u > 0.$$

Since $S_n = \sum_{i=1}^{n}g^{\beta}(x_i)$, from the additive property of the exponential distribution the pdf of $S_n$ is

$$h(S_n; \theta) = \frac{1}{\Gamma(n)\,\theta^{n}}\,S_n^{\,n-1}\exp(-S_n/\theta); \qquad S_n > 0. \tag{4.3.13}$$

Hence, for q>0,

$$E(S_n^{q}) = \int_0^{\infty}\frac{S_n^{\,q+n-1}}{\Gamma(n)\,\theta^{n}}\exp(-S_n/\theta)\,dS_n = \frac{\Gamma(q+n)}{\Gamma(n)}\,\theta^{q}. \tag{4.3.14}$$

On using (4.3.14) in (4.3.12), the result (4.3.6) follows.

Now,

$$PR_S(\tilde{\theta}^{p}_{BS}) = E_{\theta|\underline{x}}\left(\theta^{p}-\tilde{\theta}^{p}_{BS}\right)^{2}
= E_{\theta|\underline{x}}\left(\theta^{2p}\right) - \left[E_{\theta|\underline{x}}\left(\theta^{p}\right)\right]^{2};$$

utilizing (4.3.3) and (4.3.4), the two posterior moments are $\Gamma(n+\nu-2p)(S_n+\mu)^{2p}/\Gamma(n+\nu)$ and $\Gamma(n+\nu-p)(S_n+\mu)^{p}/\Gamma(n+\nu)$, respectively, and the solution of the above expression yields (4.3.7).

By definition,

$$BR_S(\tilde{\theta}^{p}_{BS}) = E_{S_n}\left[\left\{\frac{\Gamma(n+\nu-2p)}{\Gamma(n+\nu)}-\left(\frac{\Gamma(n+\nu-p)}{\Gamma(n+\nu)}\right)^{2}\right\}(S_n+\mu)^{2p}\right]. \tag{4.3.15}$$

Now, first of all, we find the marginal density of $S_n$. From the NCP for θ and (4.3.13), we have

$$f(S_n) = \frac{\mu^{\nu}}{\Gamma(\nu)\,\Gamma(n)}\,S_n^{\,n-1}\int_0^{\infty}\theta^{-(n+\nu+1)}\exp\!\left(-(S_n+\mu)/\theta\right)d\theta
= \frac{\mu^{\nu}\,S_n^{\,n-1}}{B(n,\nu)\,(S_n+\mu)^{n+\nu}}; \qquad S_n > 0. \tag{4.3.16}$$

Now, utilizing (4.3.16), we have from (4.3.15)

$$BR_S(\tilde{\theta}^{p}_{BS}) = \left\{\frac{\Gamma(n+\nu-2p)}{\Gamma(n+\nu)}-\left(\frac{\Gamma(n+\nu-p)}{\Gamma(n+\nu)}\right)^{2}\right\}\frac{\mu^{\nu}}{B(n,\nu)}\int_0^{\infty}S_n^{\,n-1}(S_n+\mu)^{2p-n-\nu}\,dS_n
= \left[1-\frac{\Gamma^{2}(n+\nu-p)}{\Gamma(n+\nu)\,\Gamma(n+\nu-2p)}\right]\frac{\Gamma(\nu-2p)}{\Gamma(\nu)}\,\mu^{2p}; \qquad p<\nu/2,$$

hence the result (4.3.8) follows.

Now we find these risks for the negative powers of θ. Utilizing (4.3.5), we have

$$R_S(\tilde{\theta}^{-p}_{BS}) = E_{S_n|\theta}\left(\tilde{\theta}^{-p}_{BS}-\theta^{-p}\right)^{2}
= \left[\frac{\Gamma(n+\nu+p)}{\Gamma(n+\nu)}\right]^{2}E_{S_n|\theta}\left(S_n+\mu\right)^{-2p}
- 2\theta^{-p}\,\frac{\Gamma(n+\nu+p)}{\Gamma(n+\nu)}\,E_{S_n|\theta}\left(S_n+\mu\right)^{-p} + \theta^{-2p}. \tag{4.3.17}$$

Using (4.3.13), for q > 0,

$$E_{S_n|\theta}\left(S_n+\mu\right)^{-q} = \int_0^{\infty}\frac{(S_n+\mu)^{-q}\,S_n^{\,n-1}}{\Gamma(n)\,\theta^{n}}\,e^{-S_n/\theta}\,dS_n
= \frac{1}{\theta^{q}\,\Gamma(n)}\int_0^{\infty}\frac{z^{n-1}e^{-z}}{(z+\mu/\theta)^{q}}\,dz, \tag{4.3.18}$$

where $z = S_n/\theta$. From (4.3.17) and (4.3.18), the result (4.3.9) follows. Proceeding in a similar manner, the results (4.3.10) and (4.3.11) can be obtained. Hence Theorem 2 follows.

Theorem 3: The Bayes estimator of the pdf (4.2.1) is

$$\tilde{f}_{BS}(x; \beta, \theta) = \beta(n+\nu)\,g'(x)\,g^{\beta-1}(x)\,\frac{(S_n+\mu)^{n+\nu}}{\left[g^{\beta}(x)+S_n+\mu\right]^{n+\nu+1}}.$$

Proof: The pdf (4.2.1) can be written as

$$f(x; \beta, \theta) = \beta\,g'(x)\,g^{\beta-1}(x)\sum_{i=0}^{\infty}\frac{(-1)^{i}}{i!}\,g^{\beta i}(x)\,\theta^{-(i+1)}.$$

Here we utilize the Bayes estimators of the powers of θ. Utilizing Lemma 1 of Chaturvedi and Tomer (2002), we have

$$\tilde{f}_{BS}(x; \beta, \theta) = \beta\,g'(x)\,g^{\beta-1}(x)\sum_{i=0}^{\infty}\frac{(-1)^{i}}{i!}\,g^{\beta i}(x)\,\tilde{\theta}^{-(i+1)}_{BS};$$

using the result (4.3.5),

$$\tilde{f}_{BS}(x; \beta, \theta) = \beta\,g'(x)\,g^{\beta-1}(x)\sum_{i=0}^{\infty}\frac{(-1)^{i}}{i!}\,g^{\beta i}(x)\,\frac{\Gamma(n+\nu+i+1)}{\Gamma(n+\nu)}\,(S_n+\mu)^{-(i+1)}
= \beta(n+\nu)\,g'(x)\,g^{\beta-1}(x)\,\frac{(S_n+\mu)^{n+\nu}}{\left[g^{\beta}(x)+S_n+\mu\right]^{n+\nu+1}},$$

and the theorem follows.

Theorem 4: The Bayes estimator of the reliability function ρ(t) defined at (4.2.2) is given by

$$\tilde{\rho}_{BS}(t) = \left[1+\frac{g^{\beta}(t)}{S_n+\mu}\right]^{-(n+\nu)}.$$

Proof: We have

$$\tilde{\rho}_{BS}(t) = \int_t^{\infty}\tilde{f}_{BS}(x; \beta, \theta)\,dx
= \beta(n+\nu)\,(S_n+\mu)^{n+\nu}\int_t^{\infty}\frac{g'(x)\,g^{\beta-1}(x)}{\left[g^{\beta}(x)+S_n+\mu\right]^{n+\nu+1}}\,dx
= (n+\nu)\int_{1+g^{\beta}(t)/(S_n+\mu)}^{\infty}u^{-(n+\nu+1)}\,du,$$

where $u = 1+g^{\beta}(x)/(S_n+\mu)$. After solving, we get

$$\tilde{\rho}_{BS}(t) = \left[1+\frac{g^{\beta}(t)}{S_n+\mu}\right]^{-(n+\nu)},$$

and the theorem follows.
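The estimator of Theorem 4 is straightforward to compute once $S_n$ is available; the sketch below simulates Weibull-type data (g(x) = x, so that $g^{\beta}(X)$ is exponential) with illustrative parameter values and compares the Bayes estimate with the true reliability.

```python
import numpy as np

def bayes_reliability_self(t, x, beta, nu=2, mu=1.0, g=lambda u: u):
    """Bayes estimator of rho(t) under SELF (Theorem 4) for the family (4.2.1):
    [1 + g(t)^beta / (S + mu)]^{-(n + nu)}, with S = sum g(x_i)^beta."""
    S = np.sum(g(np.asarray(x)) ** beta)
    n = len(x)
    return (1.0 + g(t) ** beta / (S + mu)) ** (-(n + nu))

# Weibull data (g(x) = x); parameter values are illustrative.
rng = np.random.default_rng(2)
beta, theta, t = 2.0, 1.5, 1.0
x = (theta * rng.exponential(size=25)) ** (1 / beta)   # g(X)^beta ~ Exp(mean theta)
print(bayes_reliability_self(t, x, beta), np.exp(-t**beta / theta))
```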

Theorem 5: The risk, posterior risk and Bayes risk of $\tilde{\rho}_{BS}(t)$, as defined in Theorem 4, are, respectively,

$$R_S(\tilde{\rho}_{BS}(t)) = \frac{1}{\Gamma(n)}\int_0^{\infty}\left(\frac{z+\mu/\theta}{z+(\mu+g^{\beta}(t))/\theta}\right)^{2(n+\nu)}z^{n-1}e^{-z}\,dz
- \frac{2e^{-g^{\beta}(t)/\theta}}{\Gamma(n)}\int_0^{\infty}\left(\frac{z+\mu/\theta}{z+(\mu+g^{\beta}(t))/\theta}\right)^{n+\nu}z^{n-1}e^{-z}\,dz + e^{-2g^{\beta}(t)/\theta}, \tag{4.3.19}$$

$$PR_S(\tilde{\rho}_{BS}(t)) = \left[1+\frac{2g^{\beta}(t)}{S_n+\mu}\right]^{-(n+\nu)} - \left[1+\frac{g^{\beta}(t)}{S_n+\mu}\right]^{-2(n+\nu)} \tag{4.3.20}$$

and

$$BR_S(\tilde{\rho}_{BS}(t)) = \left(\frac{\mu}{\mu+2g^{\beta}(t)}\right)^{\nu}
- \frac{1}{B(n,\nu)}\sum_{i=0}^{n+\nu}\binom{n+\nu}{i}\left(\frac{\mu}{\mu+g^{\beta}(t)}\right)^{n+2\nu-i}B(n+i,\ n+2\nu-i). \tag{4.3.21}$$

Proof: By definition, we have

$$R_S(\tilde{\rho}_{BS}(t)) = E_{S_n|\theta}\left(\tilde{\rho}_{BS}(t)-\rho(t)\right)^{2}
= E_{S_n|\theta}\left[\left(1+\frac{g^{\beta}(t)}{S_n+\mu}\right)^{-2(n+\nu)}\right]
- 2e^{-g^{\beta}(t)/\theta}\,E_{S_n|\theta}\left[\left(1+\frac{g^{\beta}(t)}{S_n+\mu}\right)^{-(n+\nu)}\right] + e^{-2g^{\beta}(t)/\theta}.$$

Now, from (4.3.13), for q > 0,

$$E_{S_n|\theta}\left[\left(\frac{S_n+\mu}{S_n+\mu+g^{\beta}(t)}\right)^{q}\right]
= \frac{1}{\Gamma(n)}\int_0^{\infty}\left(\frac{z+\mu/\theta}{z+(\mu+g^{\beta}(t))/\theta}\right)^{q}z^{n-1}e^{-z}\,dz, \qquad z = S_n/\theta. \tag{4.3.22}$$

Utilizing (4.3.22) with q = 2(n+ν) and q = n+ν, the result (4.3.19) follows.

Now,

$$PR_S(\tilde{\rho}_{BS}(t)) = E_{\theta|\underline{x}}\left(\tilde{\rho}_{BS}(t)-\rho(t)\right)^{2}
= E_{\theta|\underline{x}}\left(e^{-2g^{\beta}(t)/\theta}\right) - \left[E_{\theta|\underline{x}}\left(e^{-g^{\beta}(t)/\theta}\right)\right]^{2};$$

utilizing (4.3.3), $E_{\theta|\underline{x}}\left(e^{-q\,g^{\beta}(t)/\theta}\right) = \left[1+q\,g^{\beta}(t)/(S_n+\mu)\right]^{-(n+\nu)}$ for q = 1, 2, and the result (4.3.20) follows after solving the above expression.

Now

$$BR_S(\tilde{\rho}_{BS}(t)) = E_{S_n}\left[\left(1+\frac{2g^{\beta}(t)}{S_n+\mu}\right)^{-(n+\nu)} - \left(1+\frac{g^{\beta}(t)}{S_n+\mu}\right)^{-2(n+\nu)}\right]. \tag{4.3.23}$$

Utilizing (4.3.16),

$$E_{S_n}\left[\left(1+\frac{2g^{\beta}(t)}{S_n+\mu}\right)^{-(n+\nu)}\right] = \left(\frac{\mu}{\mu+2g^{\beta}(t)}\right)^{\nu}$$

and

$$E_{S_n}\left[\left(1+\frac{g^{\beta}(t)}{S_n+\mu}\right)^{-2(n+\nu)}\right]
= \frac{1}{B(n,\nu)}\sum_{i=0}^{n+\nu}\binom{n+\nu}{i}\left(\frac{\mu}{\mu+g^{\beta}(t)}\right)^{n+2\nu-i}B(n+i,\ n+2\nu-i).$$

Using the above two results in (4.3.23), the result (4.3.21) follows. Hence Theorem 5 follows.

In the following theorem we obtain the Bayes estimator of P = P(X>Y), where X and Y have the pdf's $f(x; \beta_1, \theta_1)$ and $f(y; \beta_2, \theta_2)$, respectively. Let n items on X and m items on Y be put on a life test, and let us denote $S_n = \sum_{i=1}^{n}g^{\beta_1}(x_i)$ and $T_m = \sum_{j=1}^{m}g^{\beta_2}(y_j)$. Here we assume that $\beta_1$ and $\beta_2$ are known whereas $\theta_1$ and $\theta_2$ are unknown. We consider the conjugate priors for $\theta_1$ and $\theta_2$ with parameters $(\mu_1, \nu_1)$ and $(\mu_2, \nu_2)$, respectively.

Theorem 6: For $\beta_1 = \beta_2 = \beta$ (say), the Bayes estimator of 'P' under SELF is

$$\tilde{P}_{BS} = \begin{cases}
\dfrac{m+\nu_2}{n+\nu_1+m+\nu_2}\,(1-c)^{m+\nu_2}\,{}_2F_1\!\left(n+\nu_1+m+\nu_2,\ m+\nu_2+1;\ n+\nu_1+m+\nu_2+1;\ c\right), & |c|<1,\\[2ex]
\dfrac{m+\nu_2}{n+\nu_1+m+\nu_2}\,(1-c)^{-(n+\nu_1)}\,{}_2F_1\!\left(n+\nu_1+m+\nu_2,\ n+\nu_1;\ n+\nu_1+m+\nu_2+1;\ \dfrac{c}{c-1}\right), & c\le-1,
\end{cases} \tag{4.3.24}$$

where $c = 1-(T_m+\mu_2)/(S_n+\mu_1)$.

For $\beta_1 \ne \beta_2$,

$$\tilde{P}_{BS} = (m+\nu_2)\int_0^1 z^{\,m+\nu_2-1}\left[1+\frac{(T_m+\mu_2)^{\beta_1/\beta_2}}{S_n+\mu_1}\left(\frac{1-z}{z}\right)^{\beta_1/\beta_2}\right]^{-(n+\nu_1)}dz. \tag{4.3.25}$$

Proof: Using arguments similar to those in Theorem 4, we have

$$\tilde{P}_{BS} = \int_{y=0}^{\infty}\int_{x=y}^{\infty}\tilde{f}_{BS}(x; \beta, \theta_1)\,\tilde{f}_{BS}(y; \beta, \theta_2)\,dx\,dy
= \int_0^{\infty}\tilde{\rho}_{BS}(y; \beta, \theta_1)\,\tilde{f}_{BS}(y; \beta, \theta_2)\,dy.$$

Substituting $u = g^{\beta}(y)$,

$$\tilde{P}_{BS} = (m+\nu_2)\,(T_m+\mu_2)^{m+\nu_2}\int_0^{\infty}\left(1+\frac{u}{S_n+\mu_1}\right)^{-(n+\nu_1)}\left(u+T_m+\mu_2\right)^{-(m+\nu_2+1)}du.$$

After solving, we obtain

$$\tilde{P}_{BS} = (m+\nu_2)\,(1-c)^{m+\nu_2}\int_0^1 z^{\,n+\nu_1+m+\nu_2-1}(1-cz)^{-(m+\nu_2+1)}\,dz,$$

where $c = 1-(T_m+\mu_2)/(S_n+\mu_1)$. The result (4.3.24) follows on using a result of Gradshteyn and Ryzhik (1980, p. 286).

For $\beta_1 \ne \beta_2$,

$$\tilde{P}_{BS} = \int_0^{\infty}\tilde{\rho}_{BS}(y; \beta_1, \theta_1)\,\tilde{f}_{BS}(y; \beta_2, \theta_2)\,dy
= (m+\nu_2)\int_0^{\infty}\left(1+u\right)^{-(m+\nu_2+1)}\left[1+\frac{(T_m+\mu_2)^{\beta_1/\beta_2}\,u^{\beta_1/\beta_2}}{S_n+\mu_1}\right]^{-(n+\nu_1)}du,$$

where $u = g^{\beta_2}(y)/(T_m+\mu_2)$. The result (4.3.25) follows after solving the above expression.

4.4 BAYES ESTIMATORS OF POWERS OF θ, ρ(t) AND 'P' UNDER GELF

The general entropy loss function (GELF), when the parameter θ is estimated by $\hat{\theta}$, is given by

$$L(\hat{\theta}, \theta) = \left(\frac{\hat{\theta}}{\theta}\right)^{a} - a\log\!\left(\frac{\hat{\theta}}{\theta}\right) - 1; \qquad a \ne 0. \tag{4.4.1}$$

The Bayes estimator of θ under GELF is given by

$$\hat{\theta}_{Be} = \left[E_{\theta|\underline{x}}\left(\theta^{-a}\right)\right]^{-1/a}. \tag{4.4.2}$$

In the following theorem we obtain the Bayes estimators of powers of θ.

Theorem 7: For p>0, the Bayes estimators of $\theta^{p}$ and $\theta^{-p}$ under GELF are given by $\tilde{\theta}^{p}_{Be}$ and $\tilde{\theta}^{-p}_{Be}$, respectively, where

$$\tilde{\theta}^{p}_{Be} = \left[\frac{\Gamma(n+\nu+ap)}{\Gamma(n+\nu)}\right]^{-1/a}(S_n+\mu)^{p} \tag{4.4.3}$$

and

$$\tilde{\theta}^{-p}_{Be} = \left[\frac{\Gamma(n+\nu-ap)}{\Gamma(n+\nu)}\right]^{-1/a}(S_n+\mu)^{-p}; \qquad ap < n+\nu. \tag{4.4.4}$$

Proof: We have

$$E_{\theta|\underline{x}}\left(\theta^{-ap}\right) = \frac{(S_n+\mu)^{n+\nu}}{\Gamma(n+\nu)}\int_0^{\infty}\theta^{-(n+\nu+ap+1)}\exp\!\left(-(S_n+\mu)/\theta\right)d\theta
= \frac{\Gamma(n+\nu+ap)}{\Gamma(n+\nu)}\,(S_n+\mu)^{-ap}. \tag{4.4.5}$$

From (4.4.5), on utilizing (4.4.2), we obtain (4.4.3). In a similar manner, we can obtain the result (4.4.4). Hence the theorem follows.

Now we derive expressions for the risks, posterior risks and Bayes risks of the Bayes estimators of powers of θ.

Theorem 8: The risk, posterior risk and Bayes risk of the Bayes estimators of $\theta^{p}$ and $\theta^{-p}$ under GELF are given, respectively, by

$$R_e(\tilde{\theta}^{p}_{Be}) = \frac{\Gamma(n+\nu)}{\Gamma(n+\nu+ap)}\,\frac{1}{\Gamma(n)}\int_0^{\infty}\left(z+\frac{\mu}{\theta}\right)^{ap}z^{n-1}e^{-z}\,dz
+ \log\frac{\Gamma(n+\nu+ap)}{\Gamma(n+\nu)}
- \frac{ap}{\Gamma(n)}\int_0^{\infty}z^{n-1}e^{-z}\log\!\left(z+\frac{\mu}{\theta}\right)dz - 1, \tag{4.4.6}$$

$$PR_e(\tilde{\theta}^{p}_{Be}) = \log\frac{\Gamma(n+\nu+ap)}{\Gamma(n+\nu)} - ap\,\psi(n+\nu), \tag{4.4.7}$$

$$BR_e(\tilde{\theta}^{p}_{Be}) = \log\frac{\Gamma(n+\nu+ap)}{\Gamma(n+\nu)} - ap\,\psi(n+\nu), \tag{4.4.8}$$

$$R_e(\tilde{\theta}^{-p}_{Be}) = \frac{\Gamma(n+\nu)}{\Gamma(n+\nu-ap)}\,\frac{1}{\Gamma(n)}\int_0^{\infty}\frac{z^{n-1}e^{-z}}{(z+\mu/\theta)^{ap}}\,dz
+ \log\frac{\Gamma(n+\nu-ap)}{\Gamma(n+\nu)}
+ \frac{ap}{\Gamma(n)}\int_0^{\infty}z^{n-1}e^{-z}\log\!\left(z+\frac{\mu}{\theta}\right)dz - 1, \tag{4.4.9}$$

$$PR_e(\tilde{\theta}^{-p}_{Be}) = \log\frac{\Gamma(n+\nu-ap)}{\Gamma(n+\nu)} + ap\,\psi(n+\nu) \tag{4.4.10}$$

and

$$BR_e(\tilde{\theta}^{-p}_{Be}) = \log\frac{\Gamma(n+\nu-ap)}{\Gamma(n+\nu)} + ap\,\psi(n+\nu), \tag{4.4.11}$$

where $\psi(x) = \dfrac{d}{dx}\log\Gamma(x)$.

Proof: From (4.4.1), we have

θ−

θ= 1

p

pBe?

logaap

apBe?

/?nSE)pBe?(eR

utilizing (4.4.3), we get

+−

+++−

+

+++

=

1?

µnSaplog

ap)?G(n?)G(nlog

ap

?µnS

ap)?G(n?)G(n

/?nSE)pBe?(eR

on using (4.3.13)

∫∞ −−

+

+++=

0ndS/?nSe1n

nSap

?µnS

G(n)n?1

ap)?G(n?)G(n

1ndS/?nS

e1nnS

0 ?µnS

logG(n)n?ap

ap)?G(n?)G(n

log −−−

∫∞

+−

+++

− .

Result (4.4.6) follows on substituting z = /?ns .

Now,

PR_e(θ̂^p_Be) = E_{θ|x}[ {Γ(n+ν)/Γ(n+ν+ap)} {(S_n+µ)/θ}^{ap} − log{Γ(n+ν)/Γ(n+ν+ap)} − ap log{(S_n+µ)/θ} − 1 ]
             = −log{Γ(n+ν)/Γ(n+ν+ap)} − ap log(S_n+µ) + ap E(log θ | x),   (4.4.12)

since the first term inside the expectation equals 1 by (4.4.5). Also, from (4.3.3),

E(log θ | x) = {(S_n+µ)^{n+ν}/Γ(n+ν)} ∫_0^∞ (log θ) θ^{−(n+ν+1)} exp{−(S_n+µ)/θ} dθ
            = −{(S_n+µ)^{n+ν}/Γ(n+ν)} ∫_0^∞ (log u) u^{n+ν−1} exp{−(S_n+µ)u} du.

From a result of Gradshteyn and Ryzhik (1980, p. 576), we have

∫_0^∞ x^{ν−1} e^{−µx} log x dx = Γ(ν) µ^{−ν} [ψ(ν) − log µ];

utilizing it, we get

E(log θ | x) = log(S_n+µ) − ψ(n+ν).   (4.4.13)

Utilizing (4.4.13) in (4.4.12), we get

PR_e(θ̂^p_Be) = log{Γ(n+ν+ap)/Γ(n+ν)} − ap ψ(n+ν),

and result (4.4.7) follows.

Now, by definition,

BR_e(θ̂^p_Be) = E_{S_n}[ PR_e(θ̂^p_Be) ].

Since the posterior risk does not depend on S_n, we have

BR_e(θ̂^p_Be) = PR_e(θ̂^p_Be),

and (4.4.8) follows. Results (4.4.9), (4.4.10) and (4.4.11) can be obtained in a similar way. Hence the theorem.
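The closed form (4.4.7) can be cross-checked by simulating from the posterior of θ, which is an inverted gamma distribution with shape n+ν and scale S_n+µ. The following Python sketch performs this check; the values of n, ν, µ, S_n, a and p are illustrative.

# Sketch: Monte Carlo check of the posterior risk (4.4.7) of theta^p under GELF.
import numpy as np
from scipy.special import gammaln, digamma
from scipy.stats import invgamma

n, nu, mu = 25, 2, 1.0
S_n = 48.0                      # assumed observed value of S_n
a, p = 0.5, 1.0

# closed form (4.4.7): log Gamma(n+nu+ap) - log Gamma(n+nu) - a p psi(n+nu)
P_e_closed = gammaln(n + nu + a * p) - gammaln(n + nu) - a * p * digamma(n + nu)

# Monte Carlo over the posterior (inverted gamma with shape n+nu, scale S_n+mu)
rng = np.random.default_rng(7)
theta = invgamma.rvs(n + nu, scale=S_n + mu, size=200_000, random_state=rng)
A = np.exp((gammaln(n + nu) - gammaln(n + nu + a * p)) / a)   # constant of (4.4.3)
est = A * (S_n + mu) ** p                                      # Bayes estimate of theta^p
ratio = est / theta ** p
P_e_mc = np.mean(ratio ** a - a * np.log(ratio) - 1.0)        # GELF loss averaged
print(P_e_closed, P_e_mc)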

The following theorem provides the Bayes estimator of the reliability function.

Theorem 9: The Bayes estimator of the reliability function R(t) defined at (4.2.2), under GELF, is given by

R̂_Be(t) = [1 − a g^β(t)/(S_n+µ)]^{(n+ν)/a};  S_n+µ > a g^β(t), a ≠ 0.

Proof: By definition, we have from (4.4.2)

R̂_Be(t) = [ E{exp(a g^β(t)/θ) | x} ]^{−1/a}.

Now, using (4.3.3), we get

E{exp(a g^β(t)/θ) | x} = {(S_n+µ)^{n+ν}/Γ(n+ν)} ∫_0^∞ θ^{−(n+ν+1)} exp{−(S_n+µ − a g^β(t))/θ} dθ
                      = (S_n+µ)^{n+ν} [S_n+µ − a g^β(t)]^{−(n+ν)}.

Hence

R̂_Be(t) = [1 − a g^β(t)/(S_n+µ)]^{(n+ν)/a},  for a ≠ 0,

and the theorem follows.
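A minimal Python sketch of the estimator of Theorem 9 for the Weibull member of the family (g(x) = x, β known) is given below; the data, the prior constants ν, µ, the loss parameter a and the mission time t are illustrative.

# Sketch: GELF Bayes estimator of R(t) from Theorem 9, Weibull member (g(x) = x).
import numpy as np

rng = np.random.default_rng(1)
beta, theta_true = 1.5, 2.0
n = 30
x = (theta_true * rng.exponential(size=n)) ** (1.0 / beta)   # X^beta ~ Exp(theta)

nu, mu, a = 2, 1.0, 0.5
t = 1.0
S_n = np.sum(x ** beta)
g_beta_t = t ** beta

if S_n + mu > a * g_beta_t:          # condition of Theorem 9
    R_Be = (1.0 - a * g_beta_t / (S_n + mu)) ** ((n + nu) / a)
    print("R_Be(t) =", R_Be, " true R(t) =", np.exp(-g_beta_t / theta_true))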

In the following theorem we derive the risk, posterior risk and Bayes risk of R̂_Be(t) under GELF.

Theorem 10: The risk, posterior risk and Bayes risk of R̂_Be(t) under GELF are, respectively,

R_e(R̂_Be(t)) = e^{a g^β(t)/θ} {1/(Γ(n)θ^n)} ∫_L^∞ [1 − a g^β(t)/(S_n+µ)]^{n+ν} S_n^{n−1} e^{−S_n/θ} dS_n
             − {(n+ν)/(Γ(n)θ^n)} ∫_L^∞ log[1 − a g^β(t)/(S_n+µ)] S_n^{n−1} e^{−S_n/θ} dS_n − a g^β(t)/θ − 1,   (4.4.14)

PR_e(R̂_Be(t)) = −(n+ν) [ log{1 − a g^β(t)/(S_n+µ)} + a g^β(t)/(S_n+µ) ]   (4.4.15)

and

BR_e(R̂_Be(t)) = −{(n+ν) µ^ν / Β(n, ν)} [ ∫_L^∞ log{1 − a g^β(t)/(S_n+µ)} S_n^{n−1} (S_n+µ)^{−(n+ν)} dS_n
             + a g^β(t) ∫_L^∞ S_n^{n−1} (S_n+µ)^{−(n+ν+1)} dS_n ],   (4.4.16)

where

L = a g^β(t) − µ,  a > 0;   L = 0,  a < 0.   (4.4.17)

<>= (4.4.17)

Proof: By definition, we have

R_e(R̂_Be(t)) = E_{S_n/θ}[ {R̂_Be(t)/R(t)}^a − a log{R̂_Be(t)/R(t)} − 1 ]
             = E_{S_n/θ}[ {1 − a g^β(t)/(S_n+µ)}^{n+ν} e^{a g^β(t)/θ} − (n+ν) log{1 − a g^β(t)/(S_n+µ)} − a g^β(t)/θ − 1 ].

Utilizing (4.3.13), we get

R_e(R̂_Be(t)) = e^{a g^β(t)/θ} {1/(Γ(n)θ^n)} ∫_L^∞ [1 − a g^β(t)/(S_n+µ)]^{n+ν} S_n^{n−1} e^{−S_n/θ} dS_n
             − {(n+ν)/(Γ(n)θ^n)} ∫_L^∞ log[1 − a g^β(t)/(S_n+µ)] S_n^{n−1} e^{−S_n/θ} dS_n − a g^β(t)/θ − 1,

and result (4.4.14) follows, where L has been defined in (4.4.17).

Now,

PR_e(R̂_Be(t)) = E_{θ|x}[ {R̂_Be(t)/R(t)}^a − a log{R̂_Be(t)/R(t)} − 1 ]
              = E_{θ|x}[ {1 − a g^β(t)/(S_n+µ)}^{n+ν} e^{a g^β(t)/θ} − (n+ν) log{1 − a g^β(t)/(S_n+µ)} − a g^β(t)/θ − 1 ];

after solving the above expression, result (4.4.15) follows.

Finally,

BR_e(R̂_Be(t)) = E_{S_n}[ −(n+ν) { log(1 − a g^β(t)/(S_n+µ)) + a g^β(t)/(S_n+µ) } ];

utilizing (4.3.16), the result (4.4.16) follows.

In the following theorem we obtain the Bayes estimator of ‘P’ under GELF.


Theorem 11: For c = 1 − (T_m+µ_2)/(S_n+µ_1) and β_1 = β_2, the Bayes estimator of 'P' under GELF is given by

P̂_Be = (1−c)^{−(m+ν_2)/a} [ Β(n+ν_1, m+ν_2) / Β(m+ν_2−a, n+ν_1) ]^{1/a}
       × [ ₂F₁(n+ν_1+m+ν_2, m+ν_2−a; n+ν_1+m+ν_2−a; c) ]^{−1/a}.

Proof: For β_1 = β_2 = β (say),

P = {β²/(θ_1 θ_2)} ∫_0^∞ ∫_{x=y}^∞ g′(x) g^{β−1}(x) exp{−g^β(x)/θ_1} g′(y) g^{β−1}(y) exp{−g^β(y)/θ_2} dx dy
  = (β/θ_2) ∫_0^∞ g′(y) g^{β−1}(y) e^{−g^β(y)/θ_1} exp{−g^β(y)/θ_2} dy
  = θ_1/(θ_1+θ_2).

The joint posterior density of θ_1 and θ_2 can be written with the help of (4.3.3) as

h(θ_1, θ_2) = [(S_n+µ_1)^{n+ν_1} (T_m+µ_2)^{m+ν_2} / {Γ(n+ν_1) Γ(m+ν_2)}] θ_1^{−(n+ν_1+1)} θ_2^{−(m+ν_2+1)}
              × exp{−(S_n+µ_1)/θ_1} exp{−(T_m+µ_2)/θ_2}.

Let us consider the transformations P = θ_1/(θ_1+θ_2) and u = θ_1+θ_2, so that θ_1 = uP and θ_2 = u(1−P). The Jacobian of the transformation is u. Hence the joint posterior density of P and u is

h(P, u) = [(S_n+µ_1)^{n+ν_1} (T_m+µ_2)^{m+ν_2} / {Γ(n+ν_1) Γ(m+ν_2)}] u^{−(n+ν_1+m+ν_2+1)} P^{−(n+ν_1+1)} (1−P)^{−(m+ν_2+1)}
          × exp[ −(1/u) { (S_n+µ_1)/P + (T_m+µ_2)/(1−P) } ].

Hence the marginal posterior density of P is

g(P) = [(T_m+µ_2)^{m+ν_2} (S_n+µ_1)^{−(m+ν_2)} / Β(n+ν_1, m+ν_2)] P^{m+ν_2−1} (1−P)^{n+ν_1−1} (1−cP)^{−(n+ν_1+m+ν_2)}.

Now,

P̂_Be = [E(P^{−a} | x)]^{−1/a}.

After solving it and utilizing the result of Gradshteyn and Ryzhik (1980), Theorem 11 follows.
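Because the hypergeometric form above is easy to mis-transcribe, a convenient cross-check is to compute E(P^{−a} | x) directly by numerical quadrature over the marginal posterior g(P) derived in the proof. The following Python sketch does this; the sample sizes, prior constants, the observed S_n and T_m, and the loss parameter a are illustrative assumptions.

# Sketch: numerical GELF Bayes estimator of P = theta1/(theta1+theta2), beta1 = beta2.
import numpy as np
from scipy.integrate import quad
from scipy.special import betaln

n, m = 20, 15                  # numbers of strength (X) and stress (Y) observations
nu1, nu2, mu1, mu2 = 2, 2, 1.0, 1.0
S_n, T_m = 42.0, 18.0          # assumed values of sum g^beta(x_i) and sum g^beta(y_j)
a = 0.5

c = 1.0 - (T_m + mu2) / (S_n + mu1)
log_norm = (m + nu2) * np.log((T_m + mu2) / (S_n + mu1)) - betaln(n + nu1, m + nu2)

def g_post(P):
    # marginal posterior density of P derived above
    return np.exp(log_norm + (m + nu2 - 1) * np.log(P)
                  + (n + nu1 - 1) * np.log1p(-P)
                  - (n + nu1 + m + nu2) * np.log1p(-c * P))

E_P_neg_a, _ = quad(lambda P: P ** (-a) * g_post(P), 0.0, 1.0)
P_Be = E_P_neg_a ** (-1.0 / a)
print("P_Be =", P_Be)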


4.5 BAYES ESTIMATORS OF THE PARAMETERS UNDER SELF AND GELF WHEN BOTH THE PARAMETERS ARE UNKNOWN

Now we consider the case when the shape parameter β is also unknown. We assume the joint prior distribution of (β, θ) to be f(β, θ) = p(θ) g(β), where p(θ) is given by

p(θ) = {µ^ν / (Γ(ν) θ^{ν+1})} exp(−µ/θ);  θ, µ > 0 and ν a positive integer,

and

g(β) ∝ 1/a,  0 < β < a,

i.e.

f(β, θ) ∝ {µ^ν / (a Γ(ν) θ^{ν+1})} exp(−µ/θ).   (4.5.1)

Now the likelihood of observing β and θ from (4.2.1), with a sample X = (X_1, X_2, ..., X_n), is

L(θ, β | x) = (β^n/θ^n) ∏_{i=1}^n {g′(x_i) g^{β−1}(x_i)} exp{−(1/θ) Σ_{i=1}^n g^β(x_i)}
           ∝ (β^n/θ^n) λ^{β−1} exp(−S_n/θ),   (4.5.2)

where λ = ∏_{i=1}^n g(x_i) and S_n = Σ_{i=1}^n g^β(x_i).

With the prior (4.5.1) and the likelihood (4.5.2), the posterior density of (θ, β) is

h(θ, β | x) = K β^n λ^{β−1} θ^{−(n+ν+1)} exp{−(S_n+µ)/θ}.   (4.5.3)

After calculating the normalizing constant, we have

h(θ, β | x) = β^n λ^{β−1} θ^{−(n+ν+1)} e^{−(S_n+µ)/θ} / [ Γ(n+ν) ∫_0^a β^n λ^{β−1} (S_n+µ)^{−(n+ν)} dβ ].   (4.5.4)

Integrating out θ in (4.5.4), we obtain the marginal posterior of β as

g(β | x) = ∫_0^∞ h(θ, β | x) dθ
        = β^n λ^{β−1} (S_n+µ)^{−(n+ν)} / ∫_0^a β^n λ^{β−1} (S_n+µ)^{−(n+ν)} dβ;  0 < β < a.   (4.5.5)

Similarly, we obtain the marginal posterior of θ as

p(θ | x) = ∫_0^a h(θ, β | x) dβ
        = {θ^{−(n+ν+1)}/Γ(n+ν)} ∫_0^a β^n λ^{β−1} e^{−(S_n+µ)/θ} dβ / ∫_0^a β^n λ^{β−1} (S_n+µ)^{−(n+ν)} dβ;  0 < θ < ∞.   (4.5.6)

From (4.5.5), the Bayes estimator of β under SELF is

β̂*_BS = ∫_0^a β^{n+1} λ^{β−1} (S_n+µ)^{−(n+ν)} dβ / ∫_0^a β^n λ^{β−1} (S_n+µ)^{−(n+ν)} dβ.

Similarly, from (4.5.6), the Bayes estimator of θ under SELF is

θ̂*_BS = {1/(n+ν−1)} ∫_0^a β^n λ^{β−1} (S_n+µ)^{−(n+ν−1)} dβ / ∫_0^a β^n λ^{β−1} (S_n+µ)^{−(n+ν)} dβ.

The Bayes estimators of β and θ under GELF can be calculated as

β̂*_Be = [E(β^{−a} | x)]^{−1/a}
       = [ ∫_0^a β^{n−a} λ^{β−1} (S_n+µ)^{−(n+ν)} dβ / ∫_0^a β^n λ^{β−1} (S_n+µ)^{−(n+ν)} dβ ]^{−1/a}

and

θ̂*_Be = [E(θ^{−a} | x)]^{−1/a}
       = [ {Γ(n+ν+a)/Γ(n+ν)} ∫_0^a β^n λ^{β−1} (S_n+µ)^{−(n+ν+a)} dβ / ∫_0^a β^n λ^{β−1} (S_n+µ)^{−(n+ν)} dβ ]^{−1/a}.

Hence the Bayes estimators of the parameters, even when both the parameters are unknown, can be obtained with the help of the above expressions.
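Since the one-dimensional integrals over β above rarely have closed forms, they can be evaluated numerically. The Python sketch below does so for the member g(x) = x under SELF; the data, the prior constants ν, µ and the uniform prior support (0, a) are illustrative.

# Sketch: numerical SELF Bayes estimators of beta and theta (both unknown), g(x) = x.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(2)
x = rng.weibull(1.5, size=25) * 2.0      # some positive lifetime data (assumed)
nu, mu, a = 2, 1.0, 5.0                  # prior constants; beta ~ Uniform(0, a)
n = len(x)
lam = np.prod(x)                         # lambda = prod g(x_i) with g(x) = x

def S_n(beta):
    return np.sum(x ** beta)

def weight(beta, shift=0):
    # integrand beta^n lambda^(beta-1) (S_n + mu)^{-(n+nu-shift)}
    return np.exp(n * np.log(beta) + (beta - 1) * np.log(lam)
                  - (n + nu - shift) * np.log(S_n(beta) + mu))

den, _ = quad(weight, 0, a)
num_beta, _ = quad(lambda b: b * weight(b), 0, a)
num_theta, _ = quad(lambda b: weight(b, shift=1), 0, a)

beta_BS = num_beta / den
theta_BS = num_theta / ((n + nu - 1) * den)
print("beta_BS =", beta_BS, " theta_BS =", theta_BS)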


CHAPTER-V

TWO STAGE POINT ESTIMATION PROCEDURE FOR THE MEAN OF

A NORMAL POPULATION WITH KNOWN COEFFICIENT OF

VARIATION

5.1 INTRODUCTION

The mathematician Abraham de Moivre first derived the normal distribution
in 1733 as a limiting case of the binomial distribution. In 1809, Gauss used the

normal distribution for the distribution of errors in Astronomy. Laplace also

contributed to this distribution. The normal distribution is of special significance

in inferential Statistics since it describes probabilistically the link between a

statistic and a parameter. It has wide applicability in statistical analysis since most

of the distributions occurring in practice can be approximated by the normal

distribution and many of the distributions of sample statistics tend to the normal

distribution for large samples. The normal distribution has also importance in

reliability analysis. Davis (1952) has shown that the normal distribution gives

quite a good fit for the failure time data, in the context of life testing and reliability

analysis. The support to the normal distribution is (-8, 8) by taking the mean µ to

Page 116: CLASSICAL AND BAYESIAN INFERENTIAL ...shodhganga.inflibnet.ac.in/bitstream/10603/20269/2/thesis...CERTIFICATE This is to certify that the thesis entitled “CLASSICAL AND BAYESIAN

be sufficiently large positive valued and standard deviation s to be sufficiently

small relative to µ.

The pdf of the normal distribution with location parameter µ (mean) and scale parameter σ (standard deviation) is given by

f(x; µ, σ) = {1/(σ√(2π))} exp{−(x−µ)²/(2σ²)};  −∞ < x < ∞,  σ > 0.

The reliability function and hazard-rate for this distribution are

R(t) = 1 − Φ((t−µ)/σ)

and

h(t) = f((t−µ)/σ) / [ σ {1 − Φ((t−µ)/σ)} ],

where f(·) is the pdf of the standard normal variate (SNV) and Φ(z) is the cumulative distribution function (cdf) of the SNV, given by

Φ(z) = {1/√(2π)} ∫_{−∞}^{z} exp(−u²/2) du.

Although the hazard-rate cannot be obtained in closed form, it can be shown that the hazard-rate for this distribution is IFR (increasing failure rate).

There arise many situations in which the mean and variance of a population


are unknown but the population coefficient of variation (CV) is known. For a brief

review, one may refer to Snedecor (1946), Hald (1952), Davies and Goldsmith

(1976) and Chaturvedi and Tomer (2003, b). An estimator for the mean of a

normal population when CV is known is proposed by Searls (1964). He showed

that it is more efficient than the sample mean in terms of having a smaller mean
squared error (MSE).

Wald (1947) developed sequential probability ratio test (SPRT) for testing

simple versus simple hypotheses for normal distribution. Stein (1945) developed a

two-stage point estimation procedure to construct fixed-width confidence interval

for the mean of a normal population. A lot of work has been dome in the literature

for two-stage point estimation procedure for estimating parameters under different

models. For details, one may refer to Chatterjee (1959, 1960), Ruben (1961),

Mukhopadhyay (1980; 1982, a & b), Mukhopadhyay and Abid (1986, a & b),

Costanza et al. (1986) and Kumar and Chaturvedi (1993). Several authors have

further generalized the concept of two-stage point estimation procedure by

introducing three-stage procedure, sequential procedure, and accelerated

sequential procedures. For some citations, one may refer to Robbins (1959), Starr

(1966), Hall (1981), Hamdy and Son (1991), Chaturvedi and Tomer (2003, b).

Chaturvedi, Tiwari and Pandey (1993) further analyzed the problem of

constructing a confidence interval of pre-assigned width and coverage probability

considered by Costanza, Hamdy and Son (1986). They utilized several multi-

stage (purely sequential, accelerated sequential, three-stage and two-stage)


estimation procedures to deal with the same estimation problem.

Many authors have considered the testing and estimation procedure for the

mean of a population when the population CV is known. Joshi and Shah (1990)

proposed SPRT for testing simple versus simple hypotheses for the mean of an

inverse Gaussian distribution with known CV. Singh (1998) considered the

problem of minimum risk point estimation of the mean of a normal population

under SELF when CV was known. Using Searls’ estimator, he proposed a

sequential procedure and proved it to be ‘asymptotically risk-efficient’ in the sense

of Starr (1966). Chaturvedi and Tomer (2003, b) considered three-stage and

accelerated sequential procedures for the mean of a normal population with known

CV.

In the present Chapter, we develop a two-stage point estimation procedure

for the mean of a normal population when the population CV is known. Both the

minimum risk and the bounded risk estimation problems are considered. Second

order approximations are also considered for the proposed two-stage point

estimation procedure. In Section 5.2, we discuss the minimum risk estimation for

the parameters of normal distribution. In Section 5.3, we obtain second order

approximations for the expected sample size E(N), the risk corresponding to the two-stage
point estimation procedure [R_N(c)] and the regret of the procedure [R_g(c)]. In
Section 5.4, the case of bounded risk point estimation is considered. In Section 5.5,
we obtain second order approximations for the expected sample size E(N), E(N²) and
R_N(A).

5.2 MINIMUM RISK POINT ESTIMATION

Let us consider that a random variable (rv) X follows the normal distribution having pdf

f(x; µ, σ) = {1/(σ√(2π))} exp{−(x−µ)²/(2σ²)};  −∞ < x < ∞,  σ > 0,   (5.2.1)

where µ ∈ (−∞, ∞) and σ ∈ (0, ∞) are the unknown mean and unknown standard deviation, respectively. The population CV, i.e. σ/µ = k (say), is assumed to be known.

Given a random sample X_1, X_2, ..., X_n of size n (≥ 2) from (5.2.1), let us define

X̄_n = (1/n) Σ_{i=1}^n X_i  and  s_n² = {1/(n−1)} Σ_{i=1}^n (X_i − X̄_n)².

For fixed n, Searls (1964) proposed the estimator

µ̃_n = (1 + k²/n)^{−1} X̄_n   (5.2.2)

for estimating µ and showed that, under SELF, it has smaller risk than the usual estimator X̄_n.


Let the loss incurred in estimating µ by µ̃_n under the squared-error loss function (SELF) be

L(µ, µ̃_n) = A (µ̃_n − µ)² + cn,   (5.2.3)

where A and c are known positive constants.

The risk corresponding to the loss function (5.2.3) is

R_n(c) = A E(µ̃_n − µ)² + cn
       = A(1 + k²/n)^{−2} [ E(X̄_n − µ)² + µ² {1 − (1 + k²/n)}² ] + cn
       = Aσ²/(n + k²) + cn,  where k = σ/µ.   (5.2.4)

The value n* of n minimizing the risk (5.2.4) can be obtained by solving

∂R_n(c)/∂n |_{n = n*} = 0.

After solving the above expression, we get

Aσ²/(n* + k²)² = c,

which yields

n* = (A/c)^{1/2} σ − k².   (5.2.5)

If k = 0, then n_o = (A/c)^{1/2} σ.

Now the associated minimum risk is

R_{n*}(c) = Aσ²/(n* + k²) + cn*
          = c(n* + k²) + cn*
          = 2cn* + ck².

It is obvious from the above expression that the optimal fixed sample size n* depends upon the unknown parameter σ. In the absence of any knowledge about σ, no fixed sample size procedure meets the goal. In what follows, we propose a two-stage point estimation procedure.

At the first stage, we start with a sample X_1, X_2, ..., X_m of size m (≥ 2), chosen in such a manner that m = o(c^{−1/2}) as c → 0 and lim sup_{c→0} m/n* < 1. Now compute

s_m² = {1/(m−1)} Σ_{i=1}^m (X_i − X̄_m)².

The second-stage sample size is given by

N = max{ m, [(A/c)^{1/2} s_m − k²] + 1 },   (5.2.6)

where [y] denotes the largest positive integer less than y.

By definition,

(A/c)^{1/2} s_m − k² ≤ N ≤ (A/c)^{1/2} s_m − k² + m.

After stopping, we estimate µ by µ̃_N. The risk corresponding to the estimator µ̃_N is

R_N(c) = A E(µ̃_N − µ)² + c E(N).

Following Starr and Woodroofe (1969), we define the regret of the procedure (5.2.6) as

R_g(c) = R_N(c) − R_{n*}(c).   (5.2.7)

5.3 SECOND ORDER APPROXIMATIONS TO E(N), R_N(c) AND R_g(c)

In the following theorem, we derive the second-order approximations for the expected sample size, the risk associated with µ̃_N, i.e. R_N(c), and the regret of the two-stage procedure.


Theorem: For the two-stage point estimation procedure, as c → 0,

E(N) = n* + 1/2 + o(1),   (5.3.1)

R_N(c) = R_{n*}(c) + c/(3 n_o) + o(c^{1/2})   (5.3.2)

and

R_g(c) = c/(3 n_o) + o(c^{1/2}).   (5.3.3)

Proof: Let us consider

T_m = [(A/c)^{1/2} s_m − k²] + 1 − {(A/c)^{1/2} s_m − k²}.

Following Hall (1981), we have

T_m →^L U(0, 1)  as m → ∞.

Now we can write

E(N) = (A/c)^{1/2} E(s_m) − k² + E(T_m)
     = (A/c)^{1/2} E(s_m) − k² + 1/2,   (5.3.4)

since T_m follows U(0, 1) asymptotically.


First we calculate E(s_m). We have

(m−1) s_m²/σ² = (1/σ²) Σ_{i=1}^m (X_i − X̄_m)² = Σ_{j=1}^{m−1} Z_j,

where the Z_j are independent Chi-square variables with one degree of freedom. Hence

s_m² = σ² χ²(m−1)/(m−1),  or  s_m = σ (m−1)^{−1/2} {χ²(m−1)}^{1/2}.

Now

E(s_m) = σ (m−1)^{−1/2} E(y^{1/2}),  where y = χ²(m−1),

so that

E(s_m) = σ (m−1)^{−1/2} ∫_0^∞ y^{1/2} [ {2^{(m−1)/2} Γ((m−1)/2)}^{−1} y^{(m−1)/2 − 1} e^{−y/2} ] dy
       = σ (m−1)^{−1/2} 2^{1/2} {Γ((m−1)/2)}^{−1} ∫_0^∞ u^{(m−1)/2 + 1/2 − 1} e^{−u} du,  where u = y/2.

Solution of the above expression yields

E(s_m) = σ {2/(m−1)}^{1/2} Γ(m/2) / Γ((m−1)/2).

Using a well-known result of Neill and Rohatgi (1973), namely

Γ(a+b)/Γ(a) = a^b { 1 + O(a^{−1}) }  as a → ∞,

we get

E(s_m) = σ { 1 + O((m−1)^{−1}) }  as m → ∞
       = σ { 1 + O(c^{1/2}) }  as c → 0.

Utilizing the above result in (5.3.4),

E(N) = (A/c)^{1/2} σ { 1 + O(c^{1/2}) } − k² + 1/2;

utilizing (5.2.5), we get

E(N) = n* + 1/2 + o(1).

Hence result (5.3.1) follows.

Furthermore,

E(N²) = (A/c) E(s_m²) + k⁴ + E(T_m²) − 2k² (A/c)^{1/2} E(s_m) − 2k² E(T_m) + 2 (A/c)^{1/2} E(s_m T_m).

Using the Cauchy–Schwarz inequality,

cov²(s_m, T_m) ≤ V(s_m) V(T_m)
             = (1/12) { E(s_m²) − E²(s_m) },  since T_m → U(0, 1),
             = o(1)  as c → 0.

This implies that s_m and T_m are asymptotically uncorrelated. Now, utilizing this result, we obtain

E(N²) = (A/c) σ² + k⁴ + 1/3 − 2k² (A/c)^{1/2} σ {1 + O(c^{1/2})} − k² + (A/c)^{1/2} σ {1 + O(c^{1/2})}
      = n*² + n* + 1/3 + o(c^{−1/2}).   (5.3.5)

Now, we can write

R_N(c) = c n_o E[ f(N′/n_o) ] + c E(N),   (5.3.6)

where f(x) = x^{−1}, N′ = N + k² and n_o = n* + k².

Expanding f(x) around x = 1 by a second-order Taylor series, we have, for |U − 1| ≤ |x − 1|,

f(x) = f(1) + (x − 1) f′(1) + {(x − 1)²/2!} f″(U).

Here

f′(x) = −1/x²  and  f″(x) = 2/x³.

Hence, for |U − 1| ≤ |N′/n_o − 1|, we get the Taylor series expansion

f(N′/n_o) = f(1) + (N′/n_o − 1) f′(1) + (1/2)(N′/n_o − 1)² f″(U)
          = 1 − (N′/n_o − 1) + (N′/n_o − 1)² U^{−3}.   (5.3.7)

Utilizing (5.3.7) in (5.3.6), we get

R_N(c) = c n_o E[ 1 − (N′/n_o − 1) + (N′/n_o − 1)² U^{−3} ] + c E(N).   (5.3.8)

Now, utilizing (5.3.1) and (5.3.5), we have

R_N(c) = c n_o [ 2 − E(N′)/n_o + {E(N′²) − 2 n_o E(N′) + n_o²}/n_o² ] + c E(N) + o(c^{1/2})
       = 2c n_o − c k² + c/(3 n_o) + o(c^{1/2})
       = 2c n* + c k² + c/(3 n_o) + o(c^{1/2})
       = R_{n*}(c) + c/(3 n_o) + o(c^{1/2}).   (5.3.9)

Hence (5.3.2) follows.

Now, utilizing (5.2.7) and (5.3.9), we get

R_g(c) = c/(3 n_o) + o(c^{1/2}).

n3c(c)gR +=

Hence the theorem follows.

5.4 BOUNDED RISK POINT ESTIMATION

Now we consider the bounded risk point estimation of µ. Let the loss incurred in estimating µ by µ̃_n be

L(µ, µ̃_n) = A (µ̃_n − µ)²,   (5.4.1)

where A is a known positive constant.

The risk corresponding to the loss function (5.4.1) is

R_n(A) = A E(µ̃_n − µ)² = Aσ²/(n + k²),  where k = σ/µ.   (5.4.2)

In order to achieve the condition that the risk (5.4.2) should not exceed W, i.e. R_n(A) ≤ W for a pre-specified W > 0, the required sample size is the smallest positive integer n ≥ n**, where

n** = Aσ²/W − k².   (5.4.3)

If k = 0, n_o = Aσ²/W, i.e. n** + k² = n_o.

The associated minimum risk is

R_{n**}(A) = Aσ²/(n** + k²) = W.

We start with a sample X_1, X_2, ..., X_m of size m (≥ 2), chosen in such a manner that m = o(A) as A → ∞ and lim sup_{A→∞} m/n** < 1. At the second stage, we collect N − m more observations, where

N = max{ m, [A s_m²/W − k²] + 1 }.

After stopping, we estimate µ by µ̃_N. The risk corresponding to the estimator µ̃_N is

R_N(A) = A E(µ̃_N − µ)².
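A minimal Python sketch of the bounded-risk second-stage sample size of this section is given below; A, W, µ, σ and the pilot size m are illustrative.

# Sketch: bounded-risk two-stage sample size of Section 5.4, risk bound W.
import numpy as np

rng = np.random.default_rng(4)
A, W = 1.0, 0.01
mu, sigma = 10.0, 2.0
k = sigma / mu
m = 10                                    # pilot sample size (assumed)

pilot = rng.normal(mu, sigma, size=m)
s2_m = pilot.var(ddof=1)
N = max(m, int(A * s2_m / W - k**2) + 1)  # second-stage sample size
print("n** =", A * sigma**2 / W - k**2, " N =", N)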


5.5 SECOND ORDER APPROXIMATIONS TO E(N), E(N²) AND R_N(A)

In the following theorem, we derive the second-order approximations to E(N), E(N²) and R_N(A).

Theorem: For the two-stage point estimation procedure, as A → ∞,

E(N) = n** + 1/2 + o(1),   (5.5.1)

E(N²) = n**² + n** + 1/3 + o(A)   (5.5.2)

and

R_N(A) = W { 1 − 1/(2 n_o) + 1/(3 n_o²) } + o(A).   (5.5.3)

Proof: Let us consider

T_m = [A s_m²/W − k²] + 1 − (A s_m²/W − k²).

Following Hall (1981), we have

T_m →^L U(0, 1)  as m → ∞.

Hence

E(N) = (A/W) E(s_m²) − k² + E(T_m)
     = (A/W) E(s_m²) − k² + 1/2,  since T_m follows U(0, 1) asymptotically,
     = n** + 1/2 + o(1),

and hence (5.5.1) follows. Result (5.5.2) can be derived similarly.

Now,

R_N(A) = A E(µ̃_N − µ)² = A σ² E[ 1/(N + k²) ] = W E[ f(N′/n_o) ],   (5.5.4)

where f(x) = x^{−1}, N′ = N + k² and n** + k² = n_o.

Expanding f(x) around x = 1 by a second-order Taylor series, we have, for |U − 1| ≤ |x − 1|,

f(x) = f(1) + (x − 1) f′(1) + {(x − 1)²/2!} f″(U).

Also

f′(x) = −1/x²  and  f″(x) = 2/x³.

Hence, for |U − 1| ≤ |N′/n_o − 1|, applying the Taylor series expansion, we get from (5.5.4)

R_N(A) = W E[ 1 − (N′/n_o − 1) + (N′/n_o − 1)² U^{−3} ]
       = W [ 1 − {E(N′)/n_o − 1} + {E(N′²) − 2 n_o E(N′) + n_o²}/n_o² ]
       = W [ 1 − 1/(2 n_o) + 1/(3 n_o²) ] + o(A),

and (5.5.3) follows. Hence the theorem.


CHAPTER-VI

SHRINKAGE-TYPE BAYES ESTIMATOR OF THE PARAMETER OF A

FAMILY OF LIFETIME DISTRIBUTIONS

6.1 INTRODUCTION

In the estimation of unknown parameter there often exists some form of

prior knowledge about the parameter which one would like to utilize in order to

get a better estimate. The Bayesian approach is a well known example. There

exists another kind of procedure for estimating an unknown parameter with the help

of prior estimate, which is known as shrinkage estimation. In this estimation

procedure prior knowledge about the parameter is assumed to be available in the

form of a prior point estimate or in the form of an interval which contains the parameter in it.

According to Thompson (1968, a), there is sometimes a natural origin θ_o such that one would like to take the minimum variance unbiased linear estimator (MVULE) θ̂ of θ and move it closer to θ_o, thus obtaining an estimator for θ which is better than θ̂ near θ_o, though possibly worse farther away, measured in terms of MSE. Such a procedure of modifying an estimator, i.e., shrinkage of the

MVULE towards a natural origin is called shrinkage estimation. Thompson

suggested a shrinkage estimator of a parameter by giving suitable weights to the

usual estimator and the prior point estimate. The shrinkage estimator θ̂_s (say) for a parameter θ is

θ̂_s = k θ̂ + (1−k) θ_o,

where k is any scalar between 0 and 1, chosen so as to minimize the MSE of θ̂_s.

Similarly, the MVULE can be shrunken towards an interval. For detailed

discussion one may refer to Thompson (1968, b).

Singh and Bhatkulikar (1977) considered the shrinkage estimation in Weibull

distribution. They proposed some preliminary test shrinkage estimators of the

shape parameter of the Weibull distribution under censored sampling and proved

that these estimators are more efficient than unbiased estimator. The problem of

shrinking the MLE of mean of various populations toward a natural origin is

studied by Mehta and Srinivasan (1971). Lemmer (1981) discussed a variety of

shrinkage methods for the estimation of some unknown parameter by considering

estimators based on an a priori guess value of the parameter or an interval containing
the parameter. He proposed a simple new estimator and compared a variety of
shrinkage estimators for the parameter of the binomial distribution. Pandey (1983)

considered the shrinkage estimation of the exponential scale parameter. He

derived some shrinkage estimators for the scale parameter of an exponential


distribution and compared them with the minimum squared error (MSE) estimator.

The shrunken estimators have smaller MSE than the minimum MSE estimator when the prior
estimate is good. Pandey and Singh (1984) considered the estimation of the shape

parameter of the Weibull distribution by shrinking toward an interval. Pandey and

Upadhyay (1985) considered whether it would be more realistic to postulate a
prior distribution for the two parameters of the Weibull distribution around the

prior values and use ordinary Bayes estimator instead of prior value in the

shrinkage estimator. They obtained Bayes shrinkage estimators and proved that

these are better than the unbiased estimator. Jani (1991) suggested a class of

shrinkage estimator for the scale parameter of the exponential distribution. Singh

and Singh (1997) discussed a class of shrinkage estimators for the variance of a

normal distribution. Singh and Shukla (2002) considered the problem of

estimating the square of population mean in normal distribution when a prior

estimate or guessed value of the population variance is available. They have

suggested a family of shrinkage estimators for square of population mean with its

mean squared error formula.

In the present chapter, we derive the shrinkage-type Bayes estimator of the

parameter of a family of lifetime distributions. In Section 6.2, the set-up of the

estimation problem is described and the desired shrinkage-type Bayes estimators

are obtained. The optimality in the sense of efficiency of the shrinkage-type Bayes

estimator over the UMVUE and the minimum mean squared error estimator is

established in the Section 6.3 and Section 6.4, respectively. Finally, in Section 6.5,


the Lindley approximation of the reliability function of the family of lifetime

distributions is considered.

6.2 THE SET-UP OF THE ESTIMATION PROBLEM

Let the random variable X follow the family of lifetime distributions considered in Chapter IV, i.e.

f(x; β, θ) = (β/θ) g′(x) g^{β−1}(x) exp{−g^β(x)/θ};  x, β, θ > 0.   (6.2.1)

We assume that the shape parameter β is known but the scale parameter θ is unknown. We obtain a random sample X = (X_1, X_2, ..., X_n) from (6.2.1) and let S_n = Σ_{i=1}^n g^β(x_i). Denoting by L(θ | x) the likelihood of observing X, we have

L(θ | x) = (β^n/θ^n) ∏_{i=1}^n {g′(x_i) g^{β−1}(x_i)} exp(−S_n/θ)   (6.2.2)

        ∝ θ^{−n} exp(−S_n/θ).   (6.2.3)

We consider an inverted gamma distribution as the prior distribution of θ, given by

p(θ) = {(a−1)^a θ_o^a / Γ(a)} θ^{−(a+1)} exp{−(a−1) θ_o/θ};  a > 2,   (6.2.4)

so that the prior mean of θ is the guessed value θ_o.


Combining (6.2.3) and (6.2.4) via Bayes theorem, the posterior density is

h(θ | x) = {[S_n + (a−1)θ_o]^{n+a} / Γ(n+a)} θ^{−(n+a+1)} exp{−[S_n + (a−1)θ_o]/θ}.

In the following theorem we derive the Bayes estimator of θ under SELF.

Theorem 1: The Bayes estimator of θ under SELF is θ̂_BS, where

θ̂_BS = {S_n + (a−1) θ_o} / (n + a − 1).   (6.2.5)

Proof: Since, under SELF, the Bayes estimator is the posterior mean, we have

θ̂_BS = {[S_n + (a−1)θ_o]^{n+a} / Γ(n+a)} ∫_0^∞ θ^{−(n+a)} exp{−[S_n + (a−1)θ_o]/θ} dθ
     = {Γ(n+a−1)/Γ(n+a)} [S_n + (a−1)θ_o];

solving it, (6.2.5) follows.

Now, let Y_i = g^β(x_i); this gives Ȳ = S_n/n. Hence we can express (6.2.5) as

θ̂_BS = {(a−1)/(n+a−1)} θ_o + {n/(n+a−1)} Ȳ,

or

θ̂_BS = (1−k) θ_o + k Ȳ,   (6.2.6)

where

k = n/(n+a−1).   (6.2.7)

The estimator given by (6.2.6) is a shrinkage estimator with shrinkage factor k, lying between 0 and 1, given by (6.2.7). Thus, as k → 1, θ̂_BS → Ȳ, and as k → 0, θ̂_BS → θ_o.
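The following Python sketch computes the shrinkage-type Bayes estimator (6.2.6)–(6.2.7) for the exponential member of the family (g(x) = x); the prior guess θ_o and the prior constant a are illustrative assumptions.

# Sketch: shrinkage-type Bayes estimator (6.2.6)-(6.2.7), exponential member g(x) = x.
import numpy as np

rng = np.random.default_rng(5)
theta_true = 2.0
x = rng.exponential(theta_true, size=20)   # Y_i = g(x_i) = x_i here

theta_o, a = 1.8, 4.0                      # prior guess and inverted-gamma constant (a > 2)
n = len(x)
Y_bar = x.mean()                           # UMVUE of theta

k = n / (n + a - 1.0)                      # shrinkage factor (6.2.7)
theta_BS = (1.0 - k) * theta_o + k * Y_bar # shrinkage-type Bayes estimator (6.2.6)
print("k =", k, " theta_BS =", theta_BS, " Y_bar =", Y_bar)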

6.3 SHRINKAGE ESTIMATOR VERSUS THE UMVUE

In this section we find the condition under which the shrinkage estimator is more efficient than the UMVUE in terms of having lower mean square error. An expression for the efficiency of the shrinkage estimator is also derived.

First we calculate the following values:

E(Y_i) = θ,  E(Y_i²) = 2θ²,  E(Ȳ²) = {(n+1)/n} θ²  and  V(Ȳ) = θ²/n.

It is easy to see that Ȳ is the UMVUE of θ.

In the following theorem the shrinkage estimator is compared with the UMVUE.

Theorem 2: The shrinkage estimator is more efficient than the UMVUE if

t² < 1/n + 2/(a−1),  where t = θ_o/θ − 1.

Proof: The mean square error of the shrinkage estimator is

mse(θ̂_BS) = E(θ̂_BS − θ)²
          = E[ (1−k) θ_o + k Ȳ − θ ]²
          = { (a−1)² (θ_o − θ)² + n θ² } / (n+a−1)²
          = { (a−1)² t² + n } θ² / (n+a−1)²,  where t = θ_o/θ − 1.

The shrinkage estimator is more efficient than the UMVUE if

mse(θ̂_BS) < mse(Ȳ).

Hence

{ (a−1)² t² + n } θ² / (n+a−1)² < θ²/n,

n (a−1)² t² < (n+a−1)² − n²,

t² < 1/n + 2/(a−1).

Hence the theorem.

Also, as n → ∞, mse(θ̂_BS) < mse(Ȳ) if t² < 2/(a−1).

Theorem 3: For t = 0, the efficiency of the shrinkage estimator with respect to the UMVUE, say e_su, is given by

e_su = { 1 + (a−1)/n }²;  a > 2.   (6.3.1)

Proof: If θ_o is the true value of θ, i.e. t = 0, we get

mse(θ̂_BS) = n θ² / (n+a−1)².   (6.3.2)

In such a situation, the efficiency of the shrinkage estimator with respect to the UMVUE is

e_su = mse(Ȳ)/mse(θ̂_BS) = { 1 + (a−1)/n }²,

and (6.3.1) follows.

It is obvious from (6.3.1) that the shrinkage estimator is always more efficient than the UMVUE, since e_su is always greater than 1 for a > 2.


6.4 SHRINKAGE ESTIMATOR VERSUS MINIMUM MEAN

SQUARED ERROR ESTIMATOR

In this section we find the minimum mean squared error estimator (MMSE) of the scale parameter of the family of lifetime distributions, i.e. the multiple of the UMVUE for which the mean square error is minimum. We show that the shrinkage estimator is more efficient than the MMSE in terms of having lower mean square error.

Firstly, we write

MSE = E(k Ȳ − θ)²,   (6.4.1)

where k takes the value which minimizes (6.4.1), which can be calculated as

k = n/(n+1).

Hence the minimum mean squared error estimator (MMSE), say Ȳ_ms, is

Ȳ_ms = {n/(n+1)} Ȳ.

In the following theorem the shrinkage estimator is compared with the MMSE.

Theorem 4: The shrinkage estimator is more efficient than the minimum mean squared error estimator if

t² < { (a−1)² + (2a−3) n } / { (a−1)² (n+1) },  where t = θ_o/θ − 1.


Proof: The mean square error of the MMSE is

mse(Ȳ_ms) = E(Ȳ_ms − θ)²
          = {n/(n+1)}² E(Ȳ²) − 2θ {n/(n+1)} E(Ȳ) + θ²
          = θ²/(n+1).   (6.4.2)

The shrinkage estimator is more efficient than the minimum mean squared error estimator if

mse(θ̂_BS) < mse(Ȳ_ms).

Hence

{ (a−1)² t² + n } θ² / (n+a−1)² < θ²/(n+1),

(n+1)(a−1)² t² < (a−1)² + (2a−3) n,

t² < { (a−1)² + (2a−3) n } / { (a−1)² (n+1) }.

Also, as n → ∞, mse(θ̂_BS) < mse(Ȳ_ms) if t² < (2a−3)/(a−1)².

Theorem 5: For t = 0, the efficiency of the shrinkage estimator with respect to the minimum mean squared error estimator, say e_sm, is given by

e_sm = { 1 + (a−1)/n } { 1 + (a−2)/(n+1) };  a > 2.   (6.4.3)

Proof: If θ_o is the true value of θ, the efficiency of the shrinkage estimator with respect to the MMSE can be obtained with the help of (6.3.2) and (6.4.2) as

e_sm = mse(Ȳ_ms)/mse(θ̂_BS) = { 1 + (a−1)/n } { 1 + (a−2)/(n+1) }.

Hence the theorem.

It is obvious from (6.4.3) that the shrinkage estimator is always more efficient than the minimum mean squared error estimator, since e_sm is always greater than 1 for a > 2.

Hence we have shown the dominance of the shrinkage estimator over the UMVUE and the minimum mean squared error estimator.

In the following section, we find the Lindley approximation for the reliability

function of the family of lifetime distributions (6.2.1).

6.5 LINDLEY APPROXIMATION

In many situations, Bayes estimators are obtained as a ratio of two integral

expressions and cannot be expressed in a closed form. However, these estimators

can be numerically approximated using complex computer programming.


Lindley (1980) suggested an asymptotic approximation to the ratio of two

integrals.

I = ∫_Ω w(θ) exp{L(θ)} dθ / ∫_Ω v(θ) exp{L(θ)} dθ,   (6.5.1)

where θ = (θ_1, θ_2, ..., θ_m); L(θ) is the logarithm of the likelihood function; w(θ) and v(θ) are arbitrary functions of θ.

The basic idea behind it is to obtain a Taylor series expansion of the functions involved in (6.5.1) about the maximum likelihood estimator.

The reliability function for the family (6.2.1) at a specified mission time t is

R(t) = P(X > t) = exp{−g^β(t)/θ},   (6.5.2)

and the hazard-rate is given as

h(t) = (β/θ) g′(t) g^{β−1}(t).

Now we consider Jeffreys' prior for (β, θ), namely

p(β, θ) ∝ 1/(βθ).   (6.5.3)

The likelihood of observing a random sample X = (X_1, X_2, ..., X_n) from (6.2.1) when β is also unknown is

L(β, θ | x) = (β^n/θ^n) ∏_{i=1}^n {g′(x_i) g^{β−1}(x_i)} exp(−S_n/θ).   (6.5.4)

Combining the prior distribution (6.5.3) and the likelihood function (6.5.4) via Bayes theorem, the posterior density is

h(β, θ | x) = k L(β, θ | x) p(β, θ),

where k is the normalizing constant.

Under SELF, the Bayes estimator of R(t), say R̂_BS(t), is

R̂_BS(t) = E(R(t) | x)
        = ∫_0^∞ ∫_0^∞ R(t) L(β, θ | x) p(β, θ) dβ dθ / ∫_0^∞ ∫_0^∞ L(β, θ | x) p(β, θ) dβ dθ.

Now, on utilizing (6.5.2), (6.5.3) and (6.5.4), we get

R̂_BS(t) = ∫_0^∞ ∫_0^∞ β^{n−1} ∏_{i=1}^n {g′(x_i) g^{β−1}(x_i)} θ^{−(n+1)} exp{−(S_n + g^β(t))/θ} dθ dβ
           / ∫_0^∞ ∫_0^∞ β^{n−1} ∏_{i=1}^n {g′(x_i) g^{β−1}(x_i)} θ^{−(n+1)} exp{−S_n/θ} dθ dβ

         = ∫_0^∞ β^{n−1} ∏_{i=1}^n {g′(x_i) g^{β−1}(x_i)} [S_n + g^β(t)]^{−n} dβ
           / ∫_0^∞ β^{n−1} ∏_{i=1}^n {g′(x_i) g^{β−1}(x_i)} S_n^{−n} dβ.   (6.5.5)


It is obvious from (6.5.5) that we cannot express R̂_BS(t) in closed form. Hence we look for the Bayes estimator of the reliability function using the Lindley approximation, say R̂_LS(t), as

R̂_LS(t) = E[ u(β, θ) | x ]
        = u + (1/2)(u_11 σ_11 + u_22 σ_22) + u_1 ρ_1 σ_11 + u_2 ρ_2 σ_22
          + (1/2)(L_30 u_1 σ_11² + L_03 u_2 σ_22²),   (6.5.6)

where

u = R(t),  u_1 = ∂u/∂θ,  u_2 = ∂u/∂β,  u_11 = ∂²u/∂θ²,  u_22 = ∂²u/∂β²,
ρ = log p(β, θ),  ρ_1 = ∂ρ/∂θ,  ρ_2 = ∂ρ/∂β,  L = log L(β, θ | x),
L_20 = ∂²L/∂θ²,  σ_11 = −1/L_20,  L_02 = ∂²L/∂β²,  σ_22 = −1/L_02,
L_30 = ∂³L/∂θ³  and  L_03 = ∂³L/∂β³.

On deriving these expressions, we have

u = exp{−g^β(t)/θ},  u_1 = u g^β(t)/θ²,  u_2 = −u g^β(t) log g(t)/θ,

u_11 = u {g^β(t)/θ³} {g^β(t)/θ − 2},  u_22 = u {g^β(t) (log g(t))²/θ} {g^β(t)/θ − 1},

ρ = −log β − log θ,  ρ_1 = −1/θ,  ρ_2 = −1/β,

L = n log β − n log θ + Σ_{i=1}^n log{g′(x_i) g^{β−1}(x_i)} − S_n/θ,

L_20 = −n/θ²,  σ_11 = θ²/n,

L_02 = −n/β² − (1/θ) Σ_{i=1}^n g^β(x_i) {log g(x_i)}²,
σ_22 = [ n/β² + (1/θ) Σ_{i=1}^n g^β(x_i) {log g(x_i)}² ]^{−1},

L_30 = 4n/θ³,  L_03 = 2n/β³ − (1/θ) Σ_{i=1}^n g^β(x_i) {log g(x_i)}³;

all are evaluated at θ = θ̂, where θ̂ = S_n/n is the MLE of θ.

Substituting these values in the expression (6.5.6), we get the Lindley approximation to the reliability function.
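As an illustration, the Python sketch below evaluates the Lindley approximation (6.5.6) for the member g(x) = x (the Weibull form), using the derivative expressions listed above evaluated at the maximum likelihood estimates of β and θ; the simulated data and the mission time t are illustrative assumptions.

# Sketch: Lindley approximation (6.5.6) to the Bayes estimate of R(t), g(x) = x.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(6)
x = (2.0 * rng.exponential(size=40)) ** (1.0 / 1.5)   # Weibull-type sample (assumed)
t = 1.3
n = len(x)
logg = np.log(x)                                      # log g(x_i) with g(x) = x

# MLE of beta solves the usual Weibull profile likelihood equation; theta_hat = S_n/n.
def score_beta(b):
    xb = x ** b
    return n / b + logg.sum() - n * (xb * logg).sum() / xb.sum()

beta_hat = brentq(score_beta, 0.05, 20.0)
S_n = np.sum(x ** beta_hat)
theta_hat = S_n / n

gbt = t ** beta_hat
u = np.exp(-gbt / theta_hat)
u1 = u * gbt / theta_hat**2
u2 = -u * gbt * np.log(t) / theta_hat
u11 = u * (gbt / theta_hat**3) * (gbt / theta_hat - 2)
u22 = u * gbt * np.log(t) ** 2 * (gbt / theta_hat - 1) / theta_hat

rho1 = -1 / theta_hat
rho2 = -1 / beta_hat
sig11 = theta_hat**2 / n
sig22 = 1.0 / (n / beta_hat**2 + np.sum(x**beta_hat * logg**2) / theta_hat)
L30 = 4 * n / theta_hat**3
L03 = 2 * n / beta_hat**3 - np.sum(x**beta_hat * logg**3) / theta_hat

R_LS = (u + 0.5 * (u11 * sig11 + u22 * sig22)
        + u1 * rho1 * sig11 + u2 * rho2 * sig22
        + 0.5 * (L30 * u1 * sig11**2 + L03 * u2 * sig22**2))
print("R_LS(t) =", R_LS, " MLE plug-in R(t) =", u)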


CHAPTER-VII

SUMMARY

The word ‘reliable’ means able to be trusted or to do what is expected. It is

used in various contexts in our daily life such as reliable friend, reliable news,

reliable service center etc. The concept of reliability is as old as man himself. The

growth and development of reliability theory has strong links with quality control

and its development. The science of reliability is new and still growing.

The classical inferential procedures have been introduced in the field of

reliability analysis for deriving maximum likelihood estimators (MLE’s) and

uniformly minimum variance unbiased estimators (UMVUE’s) of the reliability

and other parametric functions. Davis (1952) examined that the exponential

distribution appears to fit most of the data related to reliability analysis. Epstein

(1958) remarks that the exponential distribution plays an important role in life

testing experiments. In case of censoring from right for one-parameter exponential

distribution, Epstein and Sobel (1953) derived the MLE of scale parameter.

Another measure of reliability under stress-strength set-up is the probability

P=P(X>Y), which represents the reliability of performance of an item of strength

X subject to stress Y. Owen, Craswell and Hanson (1964), Church and Harris


(1970) and Downton (1973) discussed the estimation of P when X and Y are

normally distributed.

The pioneering work in the field of sequential analysis was due to Wald

(1947), who developed sequential probability ratio test (SPRT) for testing a simple

hypothesis against a simple alternative hypothesis. He also obtained expressions

for the operating characteristic (OC) and average sample number (ASN) for the

proposed sequential test. Epstein and Sobel (1955) considered sequential life test

in exponential case to test the simple null hypothesis against a simple alternative

hypothesis. They derived approximate formulae for OC and ASN functions.

Dantzig (1940) proved the non-existence of a test of Student's hypothesis
having power function independent of the variance for a normal population.

Consequently, one cannot construct a confidence interval of pre assigned width

and coverage probability for the mean of a normal population when variance is

unknown. To deal with this problem, Stein (1945) proposed a two-stage procedure

determining the sample size as a random variable. Woodroofe (1977) introduced

the concept of ‘second order approximations’ in the area of sequential estimation.

In this theory, one may be able to study the behavior of the remainder terms after

the optimum position achieved by the fixed sample size procedure

The Bayesian ideas in reliability analysis were introduced for the first time

by Bhattacharya (1967), who considered the Bayesian estimation of reliability

function for one-parameter exponential distribution under uniform and beta priors.

Bhattacharya and Kumar (1986) and Bhattacharya and Tyagi (1988) obtained


Bayes estimators of the reliability function with other priors. The pioneering work

on the Bayesian estimation of ‘P’ has been done by Enis and Geisser (1971), who

derived Bayes estimator of ‘P’ when X and Y follow normal distributions.

Thompson (1968, a & b) introduced the concept of shrinkage estimator.

Shrinkage estimation procedure is one of the interesting procedures in which prior

knowledge about the parameter is assumed to be available in the form of prior

point estimate or in the form of interval which contains parameter in it.

In many situations, Bayes estimators are obtained as a ratio of two integral

expressions and cannot be expressed in a closed form. However, these estimators

can be numerically approximated using complex computer programming. Lindley

(1980) suggested an asymptotic approximation to the ratio of two integrals.

In Chapter 1 of the thesis, the brief review of the literature including brief

historical development, basic definitions, concepts and different distributions

useful in the reliability analysis is given.

In Chapter 2 of the thesis, the binomial and Poisson distributions are

introduced as lifetime models. We obtain the UMVUEs and Bayes estimators of
the powers of the parameter, the reliability function and P(X_1 + X_2 + ... + X_K ≤ Y),
where the X's and Y are assumed to follow binomial and Poisson distributions. In

order to obtain the estimators of these parametric functions, the basic role is

played by the estimators of the factorial moments of the two distributions.


In Chapter 3 of the thesis, sequential point estimation procedure for the

generalized life distributions, which covers several distributions useful in

reliability analysis, including Weibull and gamma distributions as particular cases,

is considered. The failure of the fixed sample size procedure is established and

minimum risk point estimation for the parameters associated with the generalized

life distributions under SELF is considered. The second order approximations are

made and the ‘regret’ of the sequential procedure is obtained.

In Chapter 4 of the thesis, Bayesian estimation procedures for powers of

parameter, reliability function and P(X>Y) for a family of lifetime distributions

under squared-error loss function (SELF) and general entropy loss function

(GELF) is considered. Bayes estimators of these parameters are obtained by using

the technique of Chaturvedi et al. (2002 & 2003, a). The Bayes estimators of the

parameters for the family of lifetime distributions when both the parameters are

unknown are also obtained after calculating the marginal posteriors.

In Chapter 5 of the thesis, we develop a two-stage point estimation

procedure for the mean of a normal population when the population CV is known.

Both the minimum risk and the bounded risk estimation problems are considered.

Second order approximations are also considered for the proposed two-stage point

estimation procedure. The second order approximations for expected sample size,

risk corresponding to two-stage point estimation procedure and the regret of the

procedure have also been obtained.


In Chapter 6 of the thesis, we derive the shrinkage-type Bayes estimator of

the parameter of a family of lifetime distributions. The optimality in the sense of

efficiency of the shrinkage-type Bayes estimator over the UMVUE and the

minimum mean squared error estimator is established. The Lindley approximation

of the reliability function of the family of lifetime distribution is also considered


REFERENCES:

Anscombe, F. J. (1949): Large sample theory of sequential estimation. Biometrika, 36,

457-458.

Aroian, L. A. (1976): Applications of the direct method in sequential analysis.

Technometrics, 18, 301-306.

Barton, D. E. (1961): Unbiased estimation of a set of probabilities. Biometrika, 48, 227-229.

Basu, A. P. (1964): Estimates of reliability for some distributions useful in life testing.

Technometrics, 6, 215-219.

Basu, A. P. and Ebrahimi, N. (1991): Bayesian approach to life testing and reliability

estimation using asymmetric loss function. Jour. Statist. Planning and Infer., 29,

21-31.

Basu, A. P. and Tarmast, G. (1987): Reliability of a complex system from Bayesian view

point. Probability and Bayesian Statistics (Ed. R. Viertl), 31-38, Plenum.

Bhattacharya, S. K. (1967): Bayesian approach to life testing and reliability estimation.

Jour. Amer. Statist. Assoc., 62, 48-62.

Bhattacharya, S. K. and Kumar, S. (1986): Bayesian life estimation with an inverse

Gaussian prior. S. Afr. Statist. Jour., 20, 37-43.

Bhattacharya, S. K. and Tyagi, R. K. (1988): Bayesian reliability estimation with a

truncated normal prior. Cal. Statist. Assoc. Bull., 37, 227-231.


Blyth, C. R. (1980): Expected absolute error of the usual estimator of the binomial

parameter. American Statistician, 34, 155-157.

Bryant, C. M. and Schmee, J. (1979): Confidence limits of MTBF for sequential test plans

of MIL-STD 781. Technometrics, 21, 33-42.

Cacoullos, T. and Charalambides, Ch. (1975): On minimum variance unbiased estimation for

truncated binomial and negative binomial distributions. Ann. Inst. Statist. Math.,

27, 235-244.

Calabria, R. and Pulcini, G. (1994): An engineering approach to Bayes estimation for

Weibull distribution. Microelectron. Reliab., 34, 789-802.

Calabria, R. and Pulcini, G. (1996): Point estimation under asymmetric loss functions for

left- truncated exponential samples. Commun. Statist.-Theor. Meth., 25(3), 585-

600.

Canfield, R. V. (1970): A Bayesian approach to reliability estimation using a loss function.

IEEE trans. Reliab., R-19, 13-16.

Canvos, G. C. and Tsokos, C. P. (1971): A study of an ordinary and empirical

Bayes approach to reliability estimation in the gamma life testing model. Proc.

1971. Annual Reliability and Maintainability Symposium, 343-349.

Canvos, G. C. and Tsokos, C. P. (1973): Bayesian estimation of life parameters in the

Weibull distribution. Operations Research, 21, 755-763.

Chatterjee, S. K. (1959): On an extension of Stein’s two-sample procedure for multinormal

problem. Calcutta Statist. Assoc. Bull. 8, 121-148.


Chatterjee, S. K. (1960): Some further results on the multinormal extension of Stein’s two-

stage procedure. Calcutta Statist. Assoc. Bull.9, 20-28.

Chaturvedi, A. (1986, a): Further remarks on sequential point estimation of the mean of a

multinomial population, Sequential Analysis, 5(3), 263-274.

Chaturvedi, A. (1986, b): Sequential estimation of the difference of the two multinormal

means, Sankhya, A48, 331-338.

Chaturvedi, A. (1987): Sequential point estimation of regression parameters in a linear

model, Ann. Inst. Statist. Math. A39, 55-67.

Chaturvedi, A. (1988): On Sequential procedures for the point estimation of the mean of a

normal population. Ann. Inst. Statist. Math., A40, 769-783.

Chaturvedi, A., Kumar, A. and Chauhan, P. (1998): The robustness of a sequential test for

the mean of an inverted gamma distribution. Jour. Indian Statist. Assoc., 36, 13-

25.

Chaturvedi, A. Kumar, A. and Kumar, S. (1998): Robustness of the sequential procedures

for a family of life-testing models. Metron, 56, 117-137.

Chaturvedi, A., Kumar, A. and Kumar, S. (2000): Sequential testing procedures for a class

of distributions representing various life testing models. Statistical papers, 41, 65-

84.

Chaturvedi, A. and Rani, U. (1997): Estimation procedures for a family of density functions

representing various life-testing models. Metrika, 46, 213-219.


Chaturvedi, A. and Rani, U. (1998): Classical and Bayesian reliability estimation of the

generalized Maxwell failure distribution. Jour. Statist. Res., 32, 113-120.

Chaturvedi, A. and Rani, U. (1999): On the classes of two-stage procedures to construct

fixed-width confidence regions. Statistics, 32, 341-352.

Chaturvedi, A. and Sharma, V. (2007): Bayesian estimation procedures for the zero

truncated negative binomial distribution. Jour. Applied Statistical Science, vol.15,

No.1, pp 67-75.

Chaturvedi, A. and Shukla, P. S. (1990): Sequential point estimation of location parameter

of a negative exponential distribution, Jour. Indian Statist. Assoc., 28, 41-50.

Chaturvedi, A. and Surinder, K. (1999): Further remarks on estimating the reliability

function of exponential distribution under type I and type II censorings.

REBRAPE, 13, 29-39.

Chaturvedi, A., Tiwari, N. and Kumar S. (2007): Some remarks on classical and Bayesian

reliability estimation of binomial and Poisson distributions. Statistical Papers, 48,

683-693.

Chaturvedi, A., Tiwari, N. and Tomer, S. K. (2002): Robustness of the sequential testing

procedures for the generalized life distributions. Brazilian. Jour of Prob. and

Statist., 16, 7-24.

Chaturvedi, A. and Tiwari, N. (2002): Some classes of Three-stage estimation procedures.

Jour. of Combinatorics, Information and system science. Vol. 27, No.1-4, 41-55.


Chaturvedi, A., Tiwari, N. and Pandey, S. K. (1992): Second-order approximations to a

class of sequential point estimation procedures. Metron, vol. L-n. 1-2, 30-VI.

Chaturvedi, A., Tiwari, N. and Pandey, S. K. (1993): Multi-stage estimation of the common

location parameter of several exponential distributions. Commun. Statist.-Theory

Meth., 22(5), 1413-1423.

Chaturvedi, A. and Tomer, S. K. (2002): Classical and Bayesian reliability estimation of the

negative binomial distribution. Jour. Applied Statist. Sci., 11(1), 33-43.

Chaturvedi, A. and Tomer, S. K. (2003, a): UMVU estimation of the reliability function of

the generalized life distributions, Statistical Papers, 44, 301-313.

Chaturvedi, A. and Tomer, S. K. (2003, b): Three-stage and accelerated sequential

procedures for the mean of a normal population with known coefficient of

variation, Statistics, vol. 37(1), pp.51-64.

Chew, V. (1971): Point estimation of the parameter of the binomial distribution. American

Statistician, 25, 47-50.

Chhikara, R. S. and Folks, J. L. (1974, a): The inverse Gaussian distribution as a lifetime

model. Technometrics, 19, 461-468.

Chhikara, R. S. and Folks, J. L. (1974, b): Estimation of the inverse Gaussian distribution

function. Jour. Amer. Statist. Assoc., 69, 250-254.

Chow, Y. S. and Robbins, H. (1965): On the asymptotic theory of fixed-width sequential

confidence intervals for the mean. Ann. Math. Statist., 36, 457-462.


Chow, Y. S., Robbins, H. and Teicher, H. (1965): Moments of randomly stopped sums.

Ann. Math. Statist. 36, 789-799.

Church, J. D. and Harris, B. (1970): The estimation of reliability from stress-strength

relationships. Technometrics, 12, 49-54.

Constantine, K., Karson, M. and Tse, S. K. (1986): Estimation of P(Y<X) in the gamma

case. Commun. Statist. Simul., 15(2), 365-388.

Costanza, M. C., Hamdy, H. I., and Son, M. S. (1986): Two-stage fixed width confidence

intervals for the common location parameter of several exponential distributions.

Commun. Statist.-Theory Meth. 15(8), 2305-2322.

Dantzig, G. B. (1940): On the non-existence of tests of 'Student's' hypothesis having power
functions independent of σ. Ann. Math. Statist., 11, 186-192.

Davies, O. L. and Goldsmith, P. L. (1976): Statistical Methods in Research and Production,

Longman Group LTD. London.

Davis, D. J. (1952): The analysis of some failure data. Jour. Amer. Statist. Assoc., 47, 113-

150.

Dodge, H. F. and Roming, H. G. (1929): A method of sampling inspection. Bell. Syst. Tech.

Jour., 8, 613-631.

Downton, F. (1973): The estimation of Pr (Y<X) in the normal case. Technometrics, 15,

551-558.


Enis, P. and Geisser, S. (1971): Estimation of the probability that Y<X. Jour. Amer. Statist. Assoc., 66, 162-168.

Epstein, B. (1958): Exponential distribution and its role in life testing. Industrial Quality Control, 15, 4-6.

Epstein, B. (1960): Testing the validity of the assumption that the underlying distribution of life is exponential. Technometrics, 2, 83-101.

Epstein, B. and Sobel, M. (1953): Life testing. Jour. Amer. Statist. Assoc., 48, 486-502.

Epstein, B. and Sobel, M. (1954): Some theorems relevant to life testing from an exponential distribution. Ann. Math. Statist., 25, 373-381.

Epstein, B. and Sobel, M. (1955): Sequential life tests in the exponential case. Ann. Math. Statist., 26, 82-93.

Feldman, D. and Fox, M. (1968): Estimation of the parameter in the binomial distribution. Jour. Amer. Statist. Assoc., 63, 150-154.

Feller, W. (1960): An Introduction to Probability Theory and Its Applications. John Wiley and Sons, New York.

Ghosh, B. K. (1970): Sequential Tests of Statistical Hypotheses. Addison-Wesley, Reading, MA.

Glasser, G. J. (1962): Minimum variance unbiased estimators for Poisson probabilities. Technometrics, 4, 409-418.

Goldthwaite, L. (1961): Failure rate study for the lognormal lifetime model. Proc. Seventh Nat. Symp. Reliab. Qual. Control, 208-213.

Gradshteyn, I. S. and Ryzhik, I. M. (1980): Tables of Integrals, Series and Products. Academic Press, London.

Guttman, I. (1958): A note on a series solution of a problem in estimation. Biometrika, 45, 565-567.

Hald, A. (1952): Statistical Theory with Engineering Applications. John Wiley and Sons, New York.

Hall, P. (1981): Asymptotic theory of triple sampling for sequential estimation of a mean. Ann. Statist., 9, 1229-1238.

Hall, P. (1983): Sequential estimation saving sampling operations. Jour. Roy. Statist. Soc., B45, 219-223.

Halmos, P. R. (1946): The theory of unbiased estimation. Ann. Math. Statist., 17, 34-45.

Hamdy, H. I. and Son, M. S. (1991): On accelerating sequential procedures for the point estimation: The normal case. Statistica, 51, 437-446.

Harris, C. M. and Singpurwalla, N. D. (1968): Life distributions derived from stochastic hazard function. IEEE Trans. Reliab., R-17, 70-79.

Harter, H. L. (1969): Order Statistics and Their Use in Testing and Estimation. U.S. Government Printing Office, Washington, D.C.

Harter, H. L. and Moore, A. H. (1976): An evaluation of exponential and Weibull test plans. IEEE Trans. Reliab., 25, 100-104.

Irony, T. Z. (1992): Bayesian estimation for discrete distributions. Jour. Applied Statist., 19, 38-47.

Isogai, E. and Uno, C. (1993): A note on minimum risk point estimation of the variance of a normal population. Commun. Statist.-Theor. Meth., 22(8), 2309-2319.

Isogai, E. and Uno, C. (1994): Sequential estimation of a parameter of an exponential distribution. Ann. Inst. Statist. Math., 46, 77-82.

Isogai, E. and Uno, C. (1995): On the sequential point estimation of the mean of a gamma distribution. Statist. Prob. Letters, 22, 287-293.

Jani, P. N. (1991): A class of shrinkage estimators for the scale parameter of the exponential distribution. IEEE Trans. Reliab., 40, 68-70.

Johnson, N. L. and Kotz, S. (1969): Discrete Distributions. John Wiley and Sons, New York.

Johnson, N. L. and Kotz, S. (1970): Continuous Univariate Distributions-1. John Wiley and Sons, New York.

Joshi, S. and Shah, M. (1990): Sequential analysis applied to testing the mean of an inverse Gaussian distribution with known coefficient of variation. Commun. Statist.-Theor. Meth., 19(4), 1457-1466.

Kelly, G. D., Kelly, J. A. and Schucany, W. R. (1976): Efficient estimation of P(Y<X) in the exponential case. Technometrics, 18, 359-360.

Kendall, M. G. and Stuart, A. (1958): The Advanced Theory of Statistics, Vol. 1. Hafner Publishing Company, New York.

Kolmogorov, A. (1950): Unbiased estimators. Izv. Akad. Nauk SSSR, Ser. Math., 14, 303-326.

Kumar, S. and Bhattacharya, S. K. (1989): Reliability estimation for negative binomial distribution. Assam Statist. Rev., 3, 104-107.

Kumar, S. and Chaturvedi, A. (1993): A class of two-stage point estimation procedures. Statistics and Decisions, 3, 103-114.

Kyriakoussis, A. and Papadopoulos, A. S. (1993): On Bayes estimators of the probability of “Success” and reliability function of the zero truncated binomial and negative binomial distributions. Sankhya, B55, 171-185.

Laurent, A. G. (1963): Conditional distribution of order statistics and distribution of the reduced i-th order statistic of the exponential model. Ann. Math. Statist., 34, 652-657.

Lemmer, H. H. (1981): Notes on shrinkage estimators for the binomial distribution. Commun. Statist.-Theor. Meth., A10(10), 1017-1027.

Lian, M. G. (1975): Bayes and empirical Bayes estimation of reliability for the Weibull model. Ph.D. Dissertation, Texas Tech University, Lubbock.

Lindley, D. V. (1980): Approximate Bayesian methods. Trabajos de Estadistica, 31, 223-237.

Maiti, S. S. (1995): Estimation of P(X ≤ Y) in the geometric case. Jour. Indian Statist. Assoc., 33, 87-91.

Martz, H. F. and Lian, M. G. (1977): Bayes and empirical Bayes point and interval estimation of reliability for the Weibull model. The Theory and Applications of Reliability, Vol. II, Academic Press, New York, 203-233.

Martz, H. F. and Waller, R. A. (1982): Bayesian Reliability Analysis. John Wiley and Sons, Inc., New York.

Mehta, J. S. and Srinivasan, S. R. (1971): Estimation of the mean by shrinkage to a point. Jour. Amer. Statist. Assoc., 66(333), 86-90.

Montange, E. R. and Singpurwalla, N. D. (1985): Robustness of the sequential exponential life-testing procedures. Jour. Amer. Statist. Assoc., 80, 715-719.

Moore, A. H. and Bilikam, J. E. (1978): Bayesian estimation of parameters of life distributions and reliability from type II censored samples. IEEE Trans. Reliab., R-27, 64-67.

Mukhopadhyay, N. (1980): A consistent and asymptotically efficient two-stage procedure to construct fixed-width confidence intervals for the mean. Metrika, 27, 281-284.

Mukhopadhyay, N. (1982, a): Stein’s two-stage procedure and exact consistency. Scandinavian Actuarial Jour., 110-122.

Mukhopadhyay, N. (1982, b): On the asymptotic regret while estimating the location of an exponential distribution. Calcutta Statist. Assoc. Bull., 31, 207-213.

Mukhopadhyay, N. (1994): Improved sequential estimation of means of exponential distributions. Ann. Inst. Statist. Math., 46, 509-519.

Mukhopadhyay, N. and Abid (1986, a): On fixed size confidence regions for the regression parameters. Metron, XLIV, 297-306.

Mukhopadhyay, N. and Abid (1986, b): Fixed size confidence regions for the difference of the means of two multinormal populations. Sequential Analysis, 5(2), 169-191.

Nagao, H. and Takada, M. (1980): On sequential point estimation of the mean of a normal distribution. Ann. Inst. Statist. Math., A32, 201-210.

O’Neill, R. T. and Rohatgi, V. K. (1973): A two-stage procedure for estimating the difference between the mean vectors of two multivariate normal distributions. Trabajos de Estadistica e Investigacion Operativa, 24, 123-130.

Owen, D. B., Craswell, K. J. and Hanson, D. L. (1964): Non-parametric upper confidence bounds for Pr(Y<X) and confidence limits for Pr(Y<X) when X and Y are normal. Jour. Amer. Statist. Assoc., 59, 906-924.

Padgett, W. J. and Tsokos, C. P. (1977): Bayes estimation of reliability for the lognormal failure model. The Theory and Applications of Reliability, Vol. II, Academic Press, New York, 133-161.

Pandey, B. N. (1983): Shrinkage estimation of the exponential scale parameter. IEEE Trans. Reliab., 32, 203-205.

Pandey, B. N. and Singh, K. N. (1984): Estimating the shape parameter of the Weibull distribution by shrinkage towards an interval. South African Statistical Journal, 18, 1-11.

Pandey, M. and Upadhyay, S. K. (1985): Bayes shrinkage estimators of Weibull parameters. IEEE Trans. Reliab., R-34(5).

Patel, S. R. (1978): Minimum variance unbiased estimation of multivariate modified power series distribution. Metrika, 25, 155-161.

Patel, S. R. and Jani, P. N. (1977): On minimum variance unbiased estimation of generalized Poisson distribution and decapitated generalized Poisson distribution. Jour. Indian Statist. Assoc., 15, 151-159.

Patel, J. K., Kapadia, C. H. and Owen, D. B. (1976): Handbook of Statistical Distributions. Marcel Dekker, New York.

Patil, G. P. (1963): Minimum variance unbiased estimation and certain problems of additive number theory. Ann. Math. Statist., 34, 1050-1056.

Patil, G. P. and Bildikar, S. (1966): On minimum variance unbiased estimation for the logarithmic series distribution. Sankhya, A28, 239-250.

Patil, G. P. and Wani, J. K. (1966): Minimum variance unbiased estimation of the distribution functions admitting a sufficient statistic. Ann. Inst. Statist. Math., 18, 39-47.

Phatarfod, R. M. (1971): A sequential test for gamma distribution. Jour. Amer. Statist. Assoc., 66, 876-878.

Pugh, E. L. (1963): The best estimate of reliability in the exponential case. Operations Research, 11, 57-61.

Pulskamp, R. (1990): A note on the estimation of binomial probabilities. American Statistician, 44, 293-295.

Robbins, H. (1959): Sequential estimation of the mean of a normal population. Probability and Statistics (H. Cramér Volume), Almqvist and Wiksell, Uppsala, 235-245.

Roy, L. K. and Wasan, M. T. (1968): The first passage time distribution of Brownian motion with positive drift. Mathematical Biosciences, 3, 191-204.

Roy, J. and Mitra, S. K. (1957): Unbiased minimum variance estimation in a class of discrete distributions. Sankhya, 18, 371-378.

Ruben, H. (1961): Studentization of two-stage sample means from normal populations with unknown common variance. Sankhya, A23, 231-250.

Sathe, Y. S. and Varde, S. D. (1969): Minimum variance unbiased estimation of reliability for the truncated exponential distribution. Technometrics, 11, 609-612.

Searls, D. T. (1964): The utilization of a known coefficient of variation in the estimation procedure. Jour. Amer. Statist. Assoc., 59, 1225-1226.

Shewhart, W. A. (1931): Economic Control of Quality of Manufactured Product. Van Nostrand, New York.

Simonoff, J. S., Hochberg, Y. and Reiser, B. (1985): Alternative estimation procedure for P(X<Y) in discretized data. Proceedings of the 1985 Joint Statistical Meetings.

Singh, R. K. (1998): Sequential estimation of the mean of a normal population with known coefficient of variation. Metron, 56, 73-90.

Singh, J. and Bhatkulikar, S. G. (1977): Shrunken estimation in Weibull distribution. Sankhya, B39, 382-393.

Singh, H. P. and Singh, R. (1997): A class of shrinkage estimators for the variance of a normal population. Microelectron. Reliab., 37(5), 863-867.

Singh, H. P. and Shukla, S. K. (2002): A family of shrinkage estimators for the square of mean in normal distribution. Statistical Papers, 433-442.

Sinha, S. K. (1972): Reliability estimation in life testing in the presence of an outlier observation. Op. Res., 20, 888-894.

Sinha, S. K. (1986): Reliability and Life Testing. Wiley Eastern Ltd., New Delhi.

Snedecor, G. W. (1946): Statistical Methods. The Iowa State College Press, Ames.

Soland, R. M. (1969): Bayesian analysis of the Weibull process with unknown scale and shape parameters. IEEE Trans. Reliab., R-18, 181-184.

Stacy, E. W. (1962): A generalization of the gamma distribution. Ann. Math. Statist., 33, 1187-1192.

Starr, N. (1966): On the asymptotic efficiency of a sequential procedure for estimating the mean. Ann. Math. Statist., 37, 1173-1185.

Starr, N. and Woodroofe, M. (1969): Remarks on sequential point estimation. Proc. Nat. Acad. Sci. USA, 63, 285-288.

Starr, N. and Woodroofe, M. (1972): Further remarks on sequential point estimation: The exponential case. Ann. Math. Statist., 43, 1147-1154.

Stein, C. (1945): A two-sample test for a linear hypothesis whose power is independent of the variance. Ann. Math. Statist., 16, 243-258.

Tate (1959): Unbiased estimation: Functions of location and scale parameters. Ann. Math. Statist., 30, 341-366.

Thompson, J. R. (1968, a): Some shrinkage techniques for estimating the mean. Jour. Amer. Statist. Assoc., 63, 113-122.

Thompson, J. R. (1968, b): Accuracy borrowing in the estimation of the mean by shrinkage to an interval. Jour. Amer. Statist. Assoc., 63, 953-963.

Tong, H. (1974): A note on the estimation of P(X<Y) in the exponential case. Technometrics, 16, 625.

Tong, H. (1975): Letter to the editor. Technometrics, 17, 393.

Tsokos, C. P. (1972, a): A Bayesian approach to reliability theory and simulation. Proc. 1972 Annual Reliability and Maintainability Symposium, 78-87.

Tsokos, C. P. (1972, b): A Bayesian approach to reliability using the Weibull distribution with unknown parameters and its computer simulation. Report of Applied Statistical Research, Japanese Union of Scientists and Engineers, 19, 123-134.

Tyagi, R. K. and Bhattacharya, S. K. (1989, a): A note on the MVU estimation of reliability for the Maxwell failure distribution. Estadistica, 41, 73-79.

Tyagi, R. K. and Bhattacharya, S. K. (1989, b): Bayes estimator of the Maxwell’s velocity distribution function. Statistica, XLIX, 563-567.

Varian, H. R. (1975): A Bayesian approach to real estate assessment. Studies in Bayesian Econometrics and Statistics in Honour of Leonard J. Savage (Fienberg and Zellner, eds.), North-Holland, Amsterdam.

Voinov, V. and Nikulin, M. (1993): Unbiased Estimators and Their Applications, Vol. 1, Univariate Case. Kluwer Academic Publishers, Dordrecht.

Voinov, V. and Nikulin, M. (1996): Unbiased Estimators and Their Applications, Vol. 2, Multivariate Case. Kluwer Academic Publishers, Dordrecht.

Wald, A. (1947): Sequential Analysis. John Wiley and Sons, New York.

Wang, Y. H. (1973): Sequential estimation of the scale parameter of the Pareto distribution. Commun. Statist.-Theor. Meth., A2, 145-154.

Wang, Y. H. (1980): Sequential estimation of the mean of a multinormal population. Jour. Amer. Statist. Assoc., 75, 977-983.

Weibull, W. (1951): A statistical distribution function of wide applicability. Jour. Appl. Mech., 18, 293-297.

Woodall, R. C. and Kurkjian, B. M. (1962): Exact operating characteristic for truncated sequential life test in the exponential case. Ann. Math. Statist., 33, 1403-1412.

Woodroofe, M. (1977): Second order approximations for sequential point and interval estimation. Ann. Statist., 5, 984-995.

Zellner, A. (1986): Bayesian estimation and prediction using asymmetric loss functions. Jour. Amer. Statist. Assoc., 81, 446-451.