Proceeding of
The 6th Seminar on
Reliability Theory and its Applications
Department of Statistics
University of Mazandaran
Babolsar, Iran
August 11-12, 2020
This book contains the proceedings of the 6th Seminar on Reliability Theory and its Applications. Authors are responsible for the contents and accuracy. Opinions expressed may not necessarily reflect the position of the scientific and organizing committees.
Title: Proceeding of The 6th Seminar on Reliability Theory and its Applications
Formulator: Ali Saadati Nik
Editors: Akbar Asgharzadeh, S.M.T.K MirMostafaee
Cover Designer: Javaneh Rakizadeh
Release Date: August 2020
Preface
Following the series of workshops on "Reliability Theory and its Applications" at Ferdowsi University of Mashhad and five seminars at the University of Isfahan (2015), University of Tehran (2016), Ferdowsi University of Mashhad (2017), Shiraz University (2018) and University of Yazd (2019), we are pleased to organize the 6th Seminar on "Reliability Theory and its Applications" during 11-12 August 2020 at the Department of Statistics, University of Mazandaran. On behalf of the organizing and scientific committees, we would like to extend a very warm welcome to all participants, and we hope that this seminar provides an environment of useful discussions and an exchange of scientific ideas and opinions. We wish to express our gratitude to the numerous individuals who have contributed to the success of this seminar, in which around 50 colleagues, researchers, and postgraduate students from universities and organizations have participated.
Finally, we would like to extend our sincere gratitude to the Research Council of the University of Mazandaran, the administration of the College of Mathematical Sciences, the Ordered Data, Reliability and Dependency Center of Excellence, the Islamic World Science Citation Center, the Iranian Statistical Society, the Scientific Committee, the Organizing Committee, the referees, and the students and staff of the Department of Statistics at the University of Mazandaran for their kind cooperation.
Akbar Asgharzadeh (Chairman)
August, 2020
Topics
The aim of the seminar is to provide a forum for the presentation and discussion of scientific works covering theories and methods in the field of reliability and its application in a wide range of areas:
• Accelerated life testing
• Bayesian methods in reliability
• Case studies in reliability analysis
• Computational algorithms in reliability
• Data mining in reliability
• Degradation models
• Lifetime data analysis
• Lifetime distributions theory
• Maintenance modeling and analysis
• Networks reliability
• Optimization methods in reliability
• Reliability of coherent systems
• Safety and risk assessment
• Software reliability
• Stochastic aging
• Stochastic dependence in reliability
• Stochastic orderings in reliability
• Stochastic processes in reliability
• Stress-strength modeling
• Survival analysis
Organizing Committee
1. Ahmadi, J., Ferdowsi University of Mashhad
2. Akbari Lakeh, M., University of Mazandaran
3. Asgharzadeh, A., University of Mazandaran (Chair)
4. Fayyaz Movaghar, A., University of Mazandaran
5. Jabbari Nooghabi, H., Ferdowsi University of Mashhad
6. Mirashrafi, S.B., University of Mazandaran
7. MirMostafaee, S.M.T.K., University of Mazandaran
8. Mohammadpour, M., University of Mazandaran
9. Naghizadeh Qomi, M., University of Mazandaran
10. Nasseri, S.H., University of Mazandaran
11. Pourdarvish, A., University of Mazandaran
Scientific Committee
1. Ahmadi, J., Ferdowsi University of Mashhad
2. Amini Seresht, E., Hamedan University
3. Asadi, M., Isfahan University
4. Asgharzadeh, A., University of Mazandaran
5. Doustparast, M., Ferdowsi University of Mashhad
6. Haghighi, F., University of Tehran
7. Izadi, M., Razi University
8. Jahani, E., University of Mazandaran
9. Kelkinnama, M., Isfahan University
10. Khaledi, B., Razi University
11. Khanjari, M., Birjand University
12. Kuş, C., Selcuk University, Konya, Turkey
13. Mahmoudi, E., Yazd University
14. MirMostafaee, S.M.T.K., University of Mazandaran
15. Naghizadeh Qomi, M., University of Mazandaran
16. Pourdarvish, A., University of Mazandaran
17. Raqab, M. Z., University of Jordan, Amman, Jordan
18. Razmkhah, M., Ferdowsi University of Mashhad
19. Tavangar, M., Isfahan University
20. Tony Ng, H. K., Southern Methodist University, Dallas, USA
21. Zarezadeh, S., Shiraz University
Table of Contents
Inference on the Parameters of the Generalized Logistic Distribution Based on Left Censored Data
Salman Babayi and Gholamhossein Gholami - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 1
Ordering Results of Extreme Order Statistics from Independent and Dependent Heterogeneous Exponentiated Gamma Random Variables
Esmaeil Bashkar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 12
Bayesian Prediction for Progressively Type-II Censored Order Statistics with Uniform Removals
Elham Basiri - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -21
Estimation for the Poisson-Exponential Distribution Based on Progressively Type-II Censored Data with Uniform and Binomial Removals
Firozeh Bastan and S.M.T.K. MirMostafaee - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -31
Reliability Analysis of Phased Mission Systems with Ternary Components
Hamidreza Bidarmaghz and Somayeh Zarezadeh - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 43
An Optimization Design of the X Control Chart Under the Truncated Life Test for the Weibull Distribution
Azam Sadat Eizi and Bahram Sadeghpour Gide - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 54
Survival Function of a New Mixed δ-Shock Model
Marjan Entezari and Rasoul Roozegar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 65
Influence of a Cold Standby Component on the Performance of a k-out-of-n:F System in the Dynamic Stress-Strength Model Based on Weibull Process
Sara Ghanbari, Abdolhamid Rezaei Roknabadi and Mahdi Salehi - - - - - - - - - - - - - - - - - - - - 73
Optimum Type-II Progressive Censoring Scheme with Random Removal Based on Cost Model
Fatemeh Hassantabar Darzi, Hasan Misaii, Samaneh Eftekhari Mahabadi and Firoozeh Haghighi -84
A Polya Process-Based Optimal Preventive Maintenance for Complex Systems
Marzieh Hashemi and Majid Asadi - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -97
Optimal Design of Accelerated Life Tests Under Periodic Inspection and Type-I Censoring for Burr Type-X Distribution
Nooshin Hakamipour - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -110
Optimal Warranty Length for a Repairable System with Frailty Random Variable
Fatemeh Hooti and Jafar Ahmadi - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -122
Relationships Between Redundancy, Optimal Allocation and Components Importance in Coherent Systems
Mohammad Khanjari Sadegh - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 131
On Component Redundancy Versus System Redundancy for a System Composed of Different Types of Components
Maryam Kelkinnama - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 141
On the Maximum Likelihood Prediction of a Future Record Based on Records and Inter-Record Times: A Corrigendum
Zahra Khoshkhoo Amiri and S.M.T.K. MirMostafaee - - - - - - - - - - - - - - - - - - - - - - - - - - - -152
E-Bayesian and Hierarchical Bayesian Estimation in a Family of Distributions
Azadeh Kiapour and Mehran Naghizadeh Qomi - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 161
Statistical Bayesian Inference on the Reliability Parameter Under Adaptive Type-II Hybrid Progressive Censoring Samples for Burr Type XII Distribution
Akram Kohansal and Shirin Shoaee - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -173
Residual Varentropy of Lifetime Distributions
Saeid Maadani, Gholamreza Mohtashami Borzadaran and Abdoulhamid Rezaei Roknabadi - 185
Semiparametric Inference for a Class of Mean Residual Life Regression Models with Right-Censored Length-Biased Data
Zahra Mansourvar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 195
An Optimal Preventive Policy for Networks Consisting of Heterogenous Components
Maryam Memari, Somayeh Zarezadeh and Majid Asadi - - - - - - - - - - - - - - - - - - - - - - - - - - 203
Reliability Analysis of Weighted-k-out-of-n Systems Consisting of Multiple Types of Components
Rahmat Sadat Meshkat and Eisa Mahmoudi - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -215
Analysis of Masked Competing Risks Data Using Machine Learning Imputation Methods
Hasan Misaii, Samaneh Eftekhari Mahabadi, Negin Jafari and Firoozeh Haghighi - - - - - - -224
A Two-Parameter Distribution by Mixing Weibull and Lindley Models
Ali Saadati Nik, Akbar Asgharzadeh and Hassan Bakouch - - - - - - - - - - - - - - - - - - - - - - - 234
Inference on Multicomponent Stress-Strength Parameter in Lomax Distribution
Naqib Sadeqi and Akram Kohansal - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -246
Optimal Progressive Type-II Censoring Random Schemes Based on Expected Total Test Time
Maryam Sharafi - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -257
Bayesian Analysis for the Parameters of Mortality Rate in the Models of Dependent Lives
Shirin Shoaee and Akram Kohansal - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -268
A Note on the Cumulative Residual Entropy of Reliability Systems
Abdolsaeed Toomaj - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - – - - - - - - - 281
Efficient Estimation of Parameters of the Generalized Exponentiated Distribution Under Randomly Right Censored Data
Parisa Torkaman - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -292
The 6th Seminar on Reliability Theory and its Applications
Inference on the Parameters of the Generalized Logistic Distribution Based on Left Censored Data
Babayi, S.1, and Gholami, G.H.1
1 Department of Mathematics, Faculty of Science, Urmia University, Iran
Abstract: The generalized Logistic distribution is an important lifetime distribution in survival analysis. This paper investigates the estimation of the parameters of the generalized Logistic distribution based on left-censored data. Maximum likelihood estimation (MLE) of the parameters is considered, and the Fisher information matrix of the unknown parameters is used to construct asymptotic confidence intervals. Bayes estimators of the parameters and the corresponding credible intervals are obtained by using the Gibbs sampling technique. A real data set is analyzed to illustrate the two proposed methods.
Keywords: Generalized Logistic Distribution, Maximum Likelihood Estima-tor, Bayesian Estimation, Left Censoring.
1 Introduction
The Logistic distribution is one of the most important statistical distributions because of its simplicity and also its historical importance as a growth curve (Erkelens, [7]). Some applications of the Logistic distribution have been mentioned by Johnson and Kotz [8]. Babayi et al. [1] used the generalized Logistic (GL) distribution for analysing the stress-strength problem. The random variable
1Babayi, S.: [email protected]
X has the GL distribution if it has the following cumulative distribution function (cdf)

F(x; µ, σ, α) = (1 + e^{−(x−µ)/σ})^{−α}, −∞ < x < +∞, (1)

where µ ∈ R and σ, α ∈ (0, +∞). The probability density function (pdf) corresponding to the cdf (1) is

f(x; µ, σ, α) = α e^{−(x−µ)/σ} / [σ (1 + e^{−(x−µ)/σ})^{α+1}], −∞ < x < +∞.

Here µ, σ and α are the location, scale and shape parameters, respectively. The GL distribution with the shape parameter α and the scale parameter σ will be denoted by GL(α, σ). In the particular case α = 1, F corresponds to the usual Logistic distribution. Zelterman [12] showed that the maximum likelihood estimates do not exist for (µ, σ, α). Therefore, for convenience and without loss of generality, it is supposed that µ = 0.
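As a quick sketch, the cdf (1) and its pdf can be coded directly (Python with NumPy is used here purely for illustration; the function names are our own):

```python
import numpy as np

def gl_cdf(x, alpha, sigma, mu=0.0):
    """cdf of the generalized Logistic distribution, Eq. (1)."""
    return (1.0 + np.exp(-(np.asarray(x, dtype=float) - mu) / sigma)) ** (-alpha)

def gl_pdf(x, alpha, sigma, mu=0.0):
    """pdf corresponding to the cdf (1)."""
    z = np.exp(-(np.asarray(x, dtype=float) - mu) / sigma)
    return alpha * z / (sigma * (1.0 + z) ** (alpha + 1.0))
```

With α = 1 this reduces to the usual Logistic cdf, which gives a convenient sanity check.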
There are many applications of left censoring and left-censored data in survival analysis and reliability theory. For example, a medical study investigating patterns of health insurance coverage among rural and urban children [5] faced this problem owing to a higher proportion of children living in small villages whose spells were left censored in the sample (i.e., those children who entered the sample uninsured and remained so throughout the sample). Mitra and Kundu [11] discussed maximum likelihood estimation of the parameters of the generalized exponential distribution in the presence of left censoring.
The rest of this paper is organized as follows. In Section 2, we derive the MLEs of the unknown parameters of the GL distribution for left-censored data. In this case, the MLEs cannot be obtained in explicit form; the MLE of the scale parameter is obtained by solving a non-linear equation via an iterative procedure, and once it is available, the MLE of the shape parameter follows in explicit form. We also obtain an explicit expression for the Fisher information matrix and use it to construct asymptotic confidence intervals for the unknown parameters. In Section 3, we propose the Bayes estimators of the parameters and the corresponding credible intervals, applying the Gibbs sampling technique. In Section 4, a real data set is analyzed for illustrative purposes.
2 Maximum likelihood estimation
In this section, the MLEs of the parameters are extracted in the presence of leftcensored observations.
Let X_{(r+1)}, . . . , X_{(n)} be the last n − r order statistics from a random sample of size n from the GL(α, σ) distribution. Therefore, the joint probability density function of X_{(r+1)}, . . . , X_{(n)} becomes

f(x_{(r+1)}, . . . , x_{(n)}; α, σ) = (n!/r!) [F(x_{(r+1)})]^r f(x_{(r+1)}) · · · f(x_{(n)})
= (n!/r!) (1 + e^{−x_{(r+1)}/σ})^{−rα} ∏_{i=r+1}^{n} [α e^{−x_{(i)}/σ} / (σ (1 + e^{−x_{(i)}/σ})^{α+1})].
Then, the log-likelihood function is

L(α, σ) = ln n! − ln r! − rα ln(1 + e^{−x_{(r+1)}/σ}) + (n − r) ln α − (n − r) ln σ
− (1/σ) ∑_{i=r+1}^{n} x_{(i)} − (α + 1) ∑_{i=r+1}^{n} ln(1 + e^{−x_{(i)}/σ}).
Hence, the likelihood equations are

∂L/∂α = −r ln(1 + e^{−x_{(r+1)}/σ}) + (n − r)/α − ∑_{i=r+1}^{n} ln(1 + e^{−x_{(i)}/σ}) = 0, (2)

∂L/∂σ = − α r x_{(r+1)} e^{−x_{(r+1)}/σ} / [σ² (1 + e^{−x_{(r+1)}/σ})] − (n − r)/σ + (1/σ²) ∑_{i=r+1}^{n} x_{(i)}
− ((α + 1)/σ²) ∑_{i=r+1}^{n} x_{(i)} e^{−x_{(i)}/σ} / (1 + e^{−x_{(i)}/σ}) = 0. (3)
From (2) and (3), we get

α̂(σ) = (n − r) / [ ∑_{i=r+1}^{n} ln(1 + e^{−X_{(i)}/σ}) + r ln(1 + e^{−X_{(r+1)}/σ}) ], (4)

and σ̂ can be given as the solution of the following non-linear equation

h(σ) = σ, (5)

where

h(σ) = (1/(n − r)) [ − α̂(σ) r x_{(r+1)} e^{−x_{(r+1)}/σ} / (1 + e^{−x_{(r+1)}/σ}) + ∑_{i=r+1}^{n} x_{(i)} − (α̂(σ) + 1) ∑_{i=r+1}^{n} x_{(i)} e^{−x_{(i)}/σ} / (1 + e^{−x_{(i)}/σ}) ]. (6)

Since σ̂ is a fixed-point solution of the non-linear equation (5), it can be obtained by the following iterative scheme:

σ^{(j+1)} = h(σ^{(j)}), (7)

where σ^{(j)} is the jth iterate of σ. The iteration should be stopped when |σ^{(j)} − σ^{(j+1)}| is sufficiently small. Once σ̂ is obtained, α̂ follows from (4).
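A minimal sketch of this estimation procedure in Python follows. The data are assumed to be centred so that µ = 0, the function name is illustrative, and the damping of the fixed-point update is our own addition for numerical stability (the undamped scheme (7) is recovered by replacing the averaged update with `new = h(sigma)`):

```python
import numpy as np

def gl_mle_left_censored(x_obs, r, sigma0=1.0, tol=1e-8, max_iter=1000):
    """MLEs of (alpha, sigma) from the last n - r order statistics
    x_obs = (x_(r+1), ..., x_(n)), via Eqs. (4)-(7)."""
    x = np.sort(np.asarray(x_obs, dtype=float))
    n = r + len(x)
    x1 = x[0]                                    # x_(r+1)

    def alpha_hat(sigma):                        # Eq. (4)
        s = np.log1p(np.exp(-x / sigma)).sum() + r * np.log1p(np.exp(-x1 / sigma))
        return (n - r) / s

    def h(sigma):                                # Eq. (6)
        a = alpha_hat(sigma)
        e1 = np.exp(-x1 / sigma)
        e = np.exp(-x / sigma)
        num = (-a * r * x1 * e1 / (1.0 + e1) + x.sum()
               - (a + 1.0) * (x * e / (1.0 + e)).sum())
        return num / (n - r)

    sigma = sigma0
    for _ in range(max_iter):
        new = 0.5 * (sigma + h(sigma))           # damped version of (7)
        if abs(new - sigma) < tol:
            sigma = new
            break
        sigma = new
    return alpha_hat(sigma), sigma
```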
2.1 Fisher information matrix
In this section, we first obtain the Fisher information matrix of the unknown parameters of the GL distribution when the data are left censored; it can then be used to construct asymptotic confidence intervals. We denote the Fisher information matrix of θ = (α, σ) by I(θ) = [I_{ij}(θ)], i, j = 1, 2. Therefore,

I(θ) = − [ E(∂²L/∂α²)   E(∂²L/∂α∂σ)
           E(∂²L/∂σ∂α)  E(∂²L/∂σ²) ] = [ I_{11}  I_{12}
                                          I_{21}  I_{22} ],

where

E(∂²L/∂α²) = − (n − r)/α²,
E(∂²L/∂α∂σ) = E(∂²L/∂σ∂α) = − (r/σ) E[ Z_{(r+1)} e^{−Z_{(r+1)}} / (1 + e^{−Z_{(r+1)}}) ] − (1/σ) ∑_{i=r+1}^{n} E[ Z_{(i)} e^{−Z_{(i)}} / (1 + e^{−Z_{(i)}}) ],

E(∂²L/∂σ²) = (2rα/σ²) E[ Z_{(r+1)} e^{−Z_{(r+1)}} / (1 + e^{−Z_{(r+1)}}) ] − (rα/σ²) E[ Z²_{(r+1)} e^{−Z_{(r+1)}} / (1 + e^{−Z_{(r+1)}})² ] + (n − r)/σ²
− (2/σ²) ∑_{i=r+1}^{n} E[Z_{(i)}] + (2(α + 1)/σ²) ∑_{i=r+1}^{n} E[ Z_{(i)} e^{−Z_{(i)}} / (1 + e^{−Z_{(i)}}) ]
− ((α + 1)/σ²) ∑_{i=r+1}^{n} E[ Z²_{(i)} e^{−Z_{(i)}} / (1 + e^{−Z_{(i)}})² ],

where Z_{(i)} = X_{(i)}/σ.
Using the results of Balakrishnan and Leung [3], the pdf of Z_{(i)} (r + 1 ≤ i ≤ n) is obtained as

f_{Z_{(i)}}(z) = n! / [(i − 1)!(n − i)!] ∑_{j=0}^{n−i} (−1)^j \binom{n−i}{j} α e^{−z} / (1 + e^{−z})^{α(i+j)+1}, z ∈ R. (8)
From (8) and some algebraic operations we get

E[Z_{(i)}] = n! / [(i − 1)!(n − i)!] ∑_{j=0}^{n−i} (−1)^j \binom{n−i}{j} [ψ(α b_{ij}) − ψ(1)] / b_{ij},

E[ Z_{(i)} e^{−Z_{(i)}} / (1 + e^{−Z_{(i)}}) ] = n! / [(i − 1)!(n − i)!] ∑_{j=0}^{n−i} (−1)^j \binom{n−i}{j} [ψ(α b_{ij}) − ψ(2)] / [b_{ij}(α b_{ij} + 1)],

E[ Z²_{(i)} e^{−Z_{(i)}} / (1 + e^{−Z_{(i)}})² ] = n! / [(i − 1)!(n − i)!] ∑_{j=0}^{n−i} (−1)^j \binom{n−i}{j} C(α, b_{ij}),

where b_{ij} = i + j,

C(α, b_{ij}) = α { ψ′(α b_{ij} + 1) + ψ′(2) + [ψ(α b_{ij} + 1) − ψ(2)]² } / [(α b_{ij} + 1)(α b_{ij} + 2)],

ψ(x) = d ln Γ(x)/dx, and Γ(t) = ∫₀^∞ x^{t−1} e^{−x} dx.
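As a sanity check on these expressions, E[Z_{(i)}] can be evaluated both from the digamma-based formula above and by numerically integrating the density (8); a sketch in Python (SciPy's digamma is used, and the function names are our own):

```python
import math
import numpy as np
from scipy.special import digamma

def ez_i(i, n, alpha):
    """E[Z_(i)] from the digamma-based formula derived from (8)."""
    c = math.factorial(n) / (math.factorial(i - 1) * math.factorial(n - i))
    total = 0.0
    for j in range(n - i + 1):
        b = i + j
        total += ((-1) ** j * math.comb(n - i, j)
                  * (digamma(alpha * b) - digamma(1.0)) / b)
    return c * total

def ez_i_numeric(i, n, alpha):
    """The same expectation by integrating the density (8) numerically."""
    c = math.factorial(n) / (math.factorial(i - 1) * math.factorial(n - i))
    z = np.linspace(-30.0, 30.0, 200001)
    pdf = np.zeros_like(z)
    for j in range(n - i + 1):
        b = i + j
        pdf += ((-1) ** j * math.comb(n - i, j)
                * alpha * np.exp(-z) / (1.0 + np.exp(-z)) ** (alpha * b + 1.0))
    dz = z[1] - z[0]
    return float(np.sum(z * c * pdf) * dz)
```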
3 Bayes estimation
In this section, we attempt to find the Bayes estimators of the parameters under the assumption that the shape parameter α and the scale parameter σ are random variables. It is assumed that α and σ have independent gamma priors, α ∼ Gamma(a₁, b₁) and σ ∼ Gamma(a₂, b₂). Therefore,

π(α) = (b₁^{a₁}/Γ(a₁)) α^{a₁−1} e^{−b₁α}, α > 0, (9)

and

π(σ) = (b₂^{a₂}/Γ(a₂)) σ^{a₂−1} e^{−b₂σ}, σ > 0. (10)
Here a₁, b₁, a₂, b₂ > 0. Based on the above assumptions, the likelihood function of the observed data is

L(data | α, σ) = (n!/r!) (1 + e^{−x_{(r+1)}/σ})^{−rα} ∏_{i=r+1}^{n} [α e^{−x_{(i)}/σ} / (σ (1 + e^{−x_{(i)}/σ})^{α+1})].
The joint density of the data, α and σ is

L(data, α, σ) = L(data | α, σ) × π(α) × π(σ).

Therefore, the joint posterior density of α and σ given the data is

L(α, σ | data) = L(data, α, σ) / ∫₀^∞ ∫₀^∞ L(data, α, σ) dα dσ. (11)

Since (11) cannot be obtained analytically, we adopt the Gibbs sampling technique to compute the Bayes estimates of α and σ and the corresponding credible intervals. The full conditional posteriors of α and σ are as follows:

α | σ, data ∼ Gamma( n − r + a₁, b₁ + r ln(1 + e^{−x_{(r+1)}/σ}) + ∑_{i=r+1}^{n} ln(1 + e^{−x_{(i)}/σ}) )

and

π(σ | α, data) ∝ σ^{a₂−(n−r)−1} exp{ −b₂σ − (1/σ) ∑_{i=r+1}^{n} x_{(i)} } × exp{ −αr ln(1 + e^{−x_{(r+1)}/σ}) − (α + 1) ∑_{i=r+1}^{n} ln(1 + e^{−x_{(i)}/σ}) }.
The posterior pdf of σ is not of a known form, but its plot, as shown in Figure 1, is similar to that of a normal distribution. Hence, to generate random numbers from this distribution, we apply the Metropolis method with a normal proposal distribution. The Gibbs sampling algorithm is therefore as follows.
Figure 1: Proposal and posterior density functions of scale parameter
Step 1: Start with an initial guess (α^{(0)}, σ^{(0)}).

Step 2: Set t = 1.

Step 3: Generate α^{(t)} from Gamma( n − r + a₁, b₁ + r ln(1 + e^{−x_{(r+1)}/σ^{(t−1)}}) + ∑_{i=r+1}^{n} ln(1 + e^{−x_{(i)}/σ^{(t−1)}}) ).

Step 4: Using the Metropolis-Hastings algorithm, generate σ^{(t)} from π(σ | α^{(t)}, data) with N(σ^{(t−1)}, 1) as the proposal distribution.

Step 5: Set t = t + 1.

Step 6: Repeat Steps 3-5, T times.
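A compact sketch of Steps 1-6 in Python (function and variable names are our own; the normal proposal standard deviation is left as a tunable argument rather than fixed at 1):

```python
import numpy as np

def gibbs_gl(x_obs, r, T=2000, a1=1e-4, b1=1e-4, a2=1e-4, b2=1e-4,
             alpha0=1.0, sigma0=1.0, prop_sd=0.5, seed=0):
    """Gibbs sampler for (alpha, sigma): a direct gamma draw for alpha
    and a Metropolis step with a normal proposal for sigma."""
    rng = np.random.default_rng(seed)
    x = np.sort(np.asarray(x_obs, dtype=float))
    n = r + len(x)
    x1 = x[0]

    def log_post_sigma(sigma, alpha):
        if sigma <= 0.0:
            return -np.inf
        return ((a2 - (n - r) - 1.0) * np.log(sigma) - b2 * sigma
                - x.sum() / sigma
                - alpha * r * np.log1p(np.exp(-x1 / sigma))
                - (alpha + 1.0) * np.log1p(np.exp(-x / sigma)).sum())

    alpha, sigma = alpha0, sigma0
    draws = np.empty((T, 2))
    for t in range(T):
        # Step 3: alpha | sigma, data is a gamma distribution
        rate = (b1 + r * np.log1p(np.exp(-x1 / sigma))
                + np.log1p(np.exp(-x / sigma)).sum())
        alpha = rng.gamma(n - r + a1, 1.0 / rate)
        # Step 4: Metropolis update for sigma with a normal proposal
        prop = rng.normal(sigma, prop_sd)
        if np.log(rng.random()) < log_post_sigma(prop, alpha) - log_post_sigma(sigma, alpha):
            sigma = prop
        draws[t] = alpha, sigma
    return draws
```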
The corresponding trace for a sequence of 1000 draws from the posterior density of the scale parameter is displayed in Figure 2. Now the approximate posterior mean and posterior variance of α become

Ê(α | data) = (1/(T − K)) ∑_{t=K+1}^{T} α^{(t)}

and

V̂(α | data) = (1/(T − K)) ∑_{t=K+1}^{T} (α^{(t)} − Ê(α | data))²,
Figure 2: Sequence of 1000 draws from the posterior density of the scale parameter
where K is the burn-in period. The burn-in is used here to remove the effect of the starting values on the generated chain.
Based on the T retained values of α, and using the method proposed by Chen and Shao [4], an approximate highest posterior density (HPD) credible interval of α can be easily constructed. Let α_{(1)} < α_{(2)} < . . . < α_{(T−K)} be the ordered values of the retained α^{(t)}, and suppose we would like to construct a 100(1 − γ)% approximate HPD credible interval of α. Then consider the intervals

{ (α_{(1)}, α_{((1−γ)(T−K))}), . . . , (α_{(γ(T−K))}, α_{(T−K)}) }

and choose the interval which has the shortest length. The Bayes estimate and credible interval for σ are obtained in exactly the same way.
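The shortest-interval search can be sketched as follows (a hypothetical helper, with indices following the Chen and Shao [4] construction):

```python
import numpy as np

def hpd_interval(draws, gamma=0.05):
    """Approximate 100(1-gamma)% HPD interval: among all intervals
    covering a fraction (1-gamma) of the ordered draws, return the shortest."""
    s = np.sort(np.asarray(draws, dtype=float))
    m = int(np.floor((1.0 - gamma) * len(s)))    # points per candidate interval
    widths = s[m - 1:] - s[:len(s) - m + 1]      # width of each candidate
    j = int(np.argmin(widths))
    return float(s[j]), float(s[j + m - 1])
```

For a symmetric, unimodal posterior the result is close to the equal-tail interval; for skewed posteriors it is shorter.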
4 Data Analysis
Here, an analysis of the strength data originally reported by Badar and Priest [2] is presented. Kundu and Gupta [10] observed that the Weibull distribution works quite well for these strength data, which are presented in Table 1. The GL distribution model is fitted to the data set. The scale and shape parameters are estimated assuming the location parameter to be known and equal to the sample median of the data set. We also obtained the Kolmogorov-Smirnov (K-S) distance between the empirical distribution function and the fitted distribution function, together with the corresponding p value. All the results are reported in Table 2. For comparison purposes, we also compute the observed and expected frequencies and the corresponding chi-square value based on the fitted model in Table 3.
Table 1: The real data set.
Data Set
1.312 1.314 1.479 1.552 1.700 1.803 1.861 1.865 1.944 1.958 1.966 1.997 2.006 2.021
2.027 2.055 2.063 2.098 2.140 2.179 2.224 2.240 2.253 2.270 2.272 2.274 2.301 2.301
2.359 2.382 2.382 2.426 2.434 2.435 2.478 2.490 2.511 2.514 2.535 2.554 2.566 2.570
2.586 2.629 2.633 2.642 2.648 2.684 2.697 2.726 2.770 2.773 2.800 2.809 2.818 2.821
2.848 2.880 2.954 3.012 3.067 3.084 3.090 3.096 3.128 3.233 3.433 3.585 3.585
It is clear that the GL distribution fits the data set quite well. The empirical cdf plot together with the fitted cdf of the GL distribution model is shown in Figure 3. As can be seen, the GL distribution model provides a satisfactory fit.
Table 2: Sample Median, Scale Parameter, Shape Parameter, K-S and p value of the fitted GL distribution to data set.
Sample Median Scale Parameter Shape Parameter K-S p value
2.478 0.2745 0.9489 0.0492 0.9933
Table 3: Observed Frequencies, and Expected Frequencies for modified data set when fitting the GL distribution.
Intervals Observed Frequencies Expected Frequencies Chi-Square
< 1.76 5 2.3900 0.6452
1.76-2.22 15 15.2904
2.22-2.68 27 26.9100
2.68-3.14 18 16.0011
> 3.14 4 5.4027
For illustrative purposes, for left censoring, we have left out about 20% of the data set (r = 14). From (4) and (5), the MLEs of α and σ become 0.9162 and 0.2826, respectively. Also, using the method of Section 3, the Bayes estimates of α and σ become 0.9148 and 0.2845, respectively. To compute the Bayes estimates, as mentioned above, we have adopted the suggestion of Congdon ([6], p. 20) and Kundu and Gupta [9], that is, a₁ = a₂ = b₁ = b₂ = 0.0001. The 95% confidence intervals corresponding to the MLEs of α and σ become (0.6702, 1.1623) and (0.1646, 0.4006), respectively. Also, the 95% credible intervals of α and σ become (0.6840, 1.1537) and (0.2273, 0.3631), respectively. Comparing the lengths of the two kinds of interval, we observe that the credible intervals are shorter than the confidence intervals. We also observe that the results are not significantly different from the corresponding results obtained from the complete data.
Figure 3: Empirical cdf plot of GL distribution for data set.
References
[1] Babayi, S., Khorram, E. and Tondro, F. (2014), Inference of R = P(X <Y )
for generalized Logistic distribution. Statistics, 48(4), 862–871.
[2] Badar, M.G. and Priest, A.M. (1982), Statistical aspects of fiber and bundle strength in hybrid composites. In: Hayashi T, Kawata K, Umekava S, editors. Progress in Science and Engineering of Composites. Tokyo: Japanese Society for Composite Materials, 1129-1136.
[3] Balakrishnan, N. and Leung, M.Y. (1988), Order statistics from the type Igeneralized Logistic distribution. Communication in Statistics: Simulation
and Computation, 17(1), 25–50.
[4] Chen, M.H. and Shao, Q.M. (1999), Monte Carlo estimation of Bayesiancredible and HPD intervals. Journal of Computational and Graphical
Statistics, 8, 69–92.
[5] Coburn, A.F., Kilbreth, E.H., Long, S.H., and Marquis, M.S. (1998), Urban-rural differences in employer-based health insurance coverage of workers. Medical Care Research and Review, 55(4), 484-496.
[6] Congdon, P. (2001), Bayesian Statistical Modeling, Wiley, New York.
[7] Erkelens, J. (1968), A method of calculation for the Logistic curve. Statistica Neerlandica, 22, 213-217.
[8] Johnson, N.L. and Kotz, S. (1970), Distributions in Statistics: Continuous Univariate Distributions, 2, John Wiley, New York.
[9] Kundu, D. and Gupta, R.D. (2005), Estimation of P[Y < X ] for generalizedexponential distribution. Metrika, 61, 291–308.
[10] Kundu, D. and Gupta, R.D. (2006), Estimation of P[Y < X ] for Weibulldistributions. IEEE Transactions on Reliability, 55(2), 270-280.
[11] Mitra, S. and Kundu, D. (2008), Analysis of left censored data from thegeneralized exponential distribution. Journal of Statistical Computation
and Simulation, 78(7), 669–679.
[12] Zelterman, D. (1987), Parameter estimation in the generalized Logisticdistribution. Computational Statistics & Data Analysis, 5(3), 177–184.
Ordering Results of Extreme Order Statistics from Independent andDependent Heterogeneous Exponentiated Gamma Random Variables
Bashkar, E.1
1 Department of Statistics, Velayat University, Iranshahr, Iran
Abstract: In this paper, we derive new results on stochastic comparisons of series systems with dependent heterogeneous exponentiated gamma components with Archimedean survival copulas. For heterogeneous exponentiated gamma samples with a common scale parameter and different shape parameters, we study the likelihood ratio order between the maxima of independent samples. The results established here strengthen and generalize some of the results of Fang and Xu [6].
Keywords: Majorization, Archimedean Copula, Series Systems, StochasticOrders, Exponentiated Gamma Distribution.
1 Introduction
Suppose the order statistics arising from random variables X₁, . . . , Xₙ are denoted by X_{1:n} ≤ . . . ≤ X_{n:n}. It is well known that the (n − k + 1)th order statistic of a sample of size n characterizes the lifetime of a k-out-of-n system. Thus, the study of lifetimes of k-out-of-n systems is equivalent to the study of the stochastic properties of order statistics. In particular, a 1-out-of-n system corresponds to a parallel system and an n-out-of-n system corresponds to a series system. Reliability and stochastic properties of series and parallel systems have been considered by various researchers under different scenarios. For example, stochastic comparisons of the lifetimes of series and parallel systems, in the case of heterogeneous component lifetimes with Weibull distributions, are considered in [10], [7], [19], [13], [20] and [4]. Attempts have also been made by [8] and [11] in the case of heterogeneous components with exponentiated Weibull (EW) distributions, and by [1] in the case of heterogeneous components with generalized exponential (GE) distributions. For a recent review of the topic one can refer to [2]. Recently, some efforts have been made to investigate stochastic comparisons of order statistics of random variables with Archimedean copulas; see, for example, [3], [13], [12] and [5].

1 Bashkar, E.: [email protected]
The aim of the present note is to compare the lifetimes of series and parallel systems with heterogeneous components, where the component lifetimes are distributed as a two-parameter exponentiated gamma (EG) distribution with cumulative distribution function (cdf)

F(x) = (1 − (λx + 1)e^{−λx})^α, x > 0, (1)

where λ > 0 is a scale parameter and α > 0 is a shape parameter. If a random variable X has the EG distribution in (1), then we write X ∼ EG(α, λ). Gupta et al. [9] first introduced the EG distribution. The EG distribution is one of the most commonly used lifetime distributions in reliability and survival analysis. Fang and Xu [6] studied the stochastic comparison of the smallest and largest order statistics from EG random variables with different scale and shape parameters. In this paper, we derive the usual stochastic order for the smallest order statistics of samples having EG distributions and Archimedean survival copulas. The results obtained here strengthen and generalize those in [6]. Throughout this paper, we use the notations R = (−∞, +∞) and R₊₊ = (0, +∞).
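For concreteness, the cdf (1) and the corresponding pdf can be sketched as follows (the pdf is obtained by differentiating (1); the function names are our own):

```python
import numpy as np

def eg_cdf(x, alpha, lam):
    """cdf (1) of EG(alpha, lam), for x > 0."""
    x = np.asarray(x, dtype=float)
    base = np.clip(1.0 - (lam * x + 1.0) * np.exp(-lam * x), 0.0, 1.0)
    return base ** alpha

def eg_pdf(x, alpha, lam):
    """pdf of EG(alpha, lam): alpha * lam^2 * x * exp(-lam*x) * base^(alpha-1)."""
    x = np.asarray(x, dtype=float)
    base = np.clip(1.0 - (lam * x + 1.0) * np.exp(-lam * x), 0.0, 1.0)
    return alpha * lam ** 2 * x * np.exp(-lam * x) * base ** (alpha - 1.0)
```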
Let X and Y be two univariate random variables with distribution functions F and G, density functions f and g, survival functions F̄ = 1 − F and Ḡ = 1 − G, and reversed hazard rate functions r_F = f/F and r_G = g/G, respectively. The following definition contains stochastic orders used to compare the magnitudes of two random variables. For a comprehensive discussion on various stochastic orders, see [18] and [14].
Definition 1.1. Let X and Y be two nonnegative random variables on R++.The random variable X is said to be smaller than Y in the
(i) likelihood ratio order, denoted by X ≤lr Y , if g(x)/ f (x) is increasing inx ∈ R++,
(ii) reversed hazard rate order, denoted by X ≤rh Y , if rF(x)≤ rG(x) for all x,
(iii) usual stochastic order, denoted by X ≤st Y , if F(x)≤ G(x) for all x.
A real function φ is n-monotone on (a, b) ⊆ R if (−1)^{n−2} φ^{(n−2)} is decreasing and convex in (a, b) and (−1)^k φ^{(k)}(x) ≥ 0 for all x ∈ (a, b), k = 0, 1, . . . , n − 2, where φ^{(i)}(·) is the ith derivative of φ(·). For an n-monotone (n ≥ 2) function φ : [0, +∞) → [0, 1] with φ(0) = 1 and lim_{x→+∞} φ(x) = 0, let ψ = φ^{−1} be the right-continuous inverse of φ. Then

C_φ(u₁, . . . , uₙ) = φ(ψ(u₁) + . . . + ψ(uₙ)), for all uᵢ ∈ [0, 1], i = 1, . . . , n,

is called an Archimedean copula with generator φ. Archimedean copulas cover a wide range of dependence structures, including the independence copula with generator φ(t) = e^{−t}. For more on Archimedean copulas, readers may refer to [17] and [16].
It is well known that the notion of majorization is extremely useful and powerful in establishing various inequalities. For preliminary notation and terminology on majorization theory, we refer readers to [15]. Let x = (x₁, . . . , xₙ) and y = (y₁, . . . , yₙ) be two real vectors, and let x_{(1)} ≤ . . . ≤ x_{(n)} denote the increasing arrangement of the components of the vector x.
Definition 1.2. The vector x is said to be

(i) weakly submajorized by the vector y (denoted by x ≺_w y) if ∑_{i=j}^{n} x_{(i)} ≤ ∑_{i=j}^{n} y_{(i)} for all j = 1, . . . , n,

(ii) weakly supermajorized by the vector y (denoted by x ≺^w y) if ∑_{i=1}^{j} x_{(i)} ≥ ∑_{i=1}^{j} y_{(i)} for all j = 1, . . . , n,

(iii) majorized by the vector y (denoted by x ≺^m y) if ∑_{i=1}^{n} xᵢ = ∑_{i=1}^{n} yᵢ and ∑_{i=1}^{j} x_{(i)} ≥ ∑_{i=1}^{j} y_{(i)} for all j = 1, . . . , n − 1.

Clearly, x ≺^m y implies x ≺^w (≺_w) y.

Definition 1.3. A real valued function ϕ defined on a set A ⊆ Rⁿ is said to be Schur-convex (Schur-concave) on A if x ≺^m y on A implies ϕ(x) ≤ (≥) ϕ(y).

Lemma 1.4 ([15], Theorem 3.A.8). For a function l on A ⊆ Rⁿ, x ≺_w (≺^w) y implies l(x) ≤ l(y) if and only if l is increasing (decreasing) and Schur-convex on A.
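These partial-sum conditions are easy to check mechanically; a small sketch follows (the helper names are our own, and the comparisons implement Definition 1.2 using sorted arrangements, with a small tolerance for floating-point sums):

```python
import numpy as np

def weakly_submajorized(x, y):
    """x weakly submajorized by y, Def. 1.2(i): sums of the j largest
    components of x never exceed the corresponding sums for y."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + 1e-12))

def weakly_supermajorized(x, y):
    """x weakly supermajorized by y, Def. 1.2(ii): sums of the j smallest
    components of x are at least the corresponding sums for y."""
    xs, ys = np.sort(x), np.sort(y)
    return bool(np.all(np.cumsum(xs) >= np.cumsum(ys) - 1e-12))

def majorized(x, y):
    """x majorized by y, Def. 1.2(iii): equal totals plus the
    partial-sum condition (which then coincides with (ii))."""
    return (abs(np.sum(x) - np.sum(y)) < 1e-12) and weakly_supermajorized(x, y)
```

For example, (2, 2) is majorized by (1, 3), and consequently satisfies both weak orders, as the lemma above suggests.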
2 Main results
Before going into the details, let us recall one important general family of distributions. We say that the random variable X belongs to the ES family of distributions if X ∼ H(x) = [G(λx)]^α, where α, λ > 0 and G is the baseline distribution function, which we assume to be absolutely continuous. In the sequel, we denote this family by ES(α, λ). The following result, proved by Bashkar et al. [3], gives a comparison of the lifetimes of parallel systems in terms of the likelihood ratio order with respect to the shape parameters.
Theorem 2.1. For i = 1, . . . , n, let Xᵢ and X*ᵢ be two sets of mutually independent random variables with Xᵢ ∼ ES(αᵢ, λ) and X*ᵢ ∼ ES(α*ᵢ, λ). If ∑_{i=1}^{n} αᵢ ≥ ∑_{i=1}^{n} α*ᵢ, then for any λ > 0, we have X_{n:n} ≥_lr X*_{n:n}.
The following result follows immediately from Theorem 2.1.
Theorem 2.2. For i = 1, . . . , n, let Xᵢ and X*ᵢ be two sets of mutually independent random variables with Xᵢ ∼ EG(αᵢ, λ) and X*ᵢ ∼ EG(α*ᵢ, λ). If ∑_{i=1}^{n} αᵢ ≥ ∑_{i=1}^{n} α*ᵢ, then for any λ > 0, we have X_{n:n} ≥_lr X*_{n:n}.
Bashkar, E. 16
Proof. For x > 0, the ratio of the density functions of Xn:n and X∗n:n is

fn(x)/gn(x) = (∑_{i=1}^n αi / ∑_{i=1}^n α∗i) (F(λx))^β,

where β = ∑_{i=1}^n αi − ∑_{i=1}^n α∗i and F(λx) = 1 − (λx + 1)e^{−λx}. Because β ≥ 0 and F is increasing, fn(x)/gn(x) is nondecreasing in x. This completes the proof of the required result.
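The monotonicity argument in this proof is easy to check numerically. The sketch below (our own illustration, not taken from the paper; the vectors α = (0.5, 2, 3), α∗ = (1, 1.5, 2) and λ = 1.3 are arbitrary choices satisfying ∑αi ≥ ∑α∗i) rebuilds the density of the sample maximum from the EG cdf F(t) = 1 − (1 + t)e^{−t} and verifies that the density ratio is nondecreasing:

```python
import math

# Numerical check of the proof of Theorem 2.2 (a sketch with arbitrary choices).
# For independent Xi ~ EG(alpha_i, lam), the cdf of X_{n:n} is F(lam*x)^A with
# A = sum(alpha), so the density ratio of the two maxima reduces to
# (A / A_star) * F(lam*x)**(A - A_star), nondecreasing whenever A >= A_star.

def F(t):
    return 1.0 - (1.0 + t) * math.exp(-t)

def density_max(x, alphas, lam):
    A = sum(alphas)
    t = lam * x
    # d/dx F(lam*x)^A = A * F(lam*x)^(A-1) * lam^2 * x * exp(-lam*x)
    return A * F(t) ** (A - 1.0) * lam ** 2 * x * math.exp(-t)

alphas = [0.5, 2.0, 3.0]        # sum = 5.5
alphas_star = [1.0, 1.5, 2.0]   # sum = 4.5 <= 5.5, as Theorem 2.2 requires
lam = 1.3

grid = [0.05 * k for k in range(1, 200)]
ratios = [density_max(x, alphas, lam) / density_max(x, alphas_star, lam) for x in grid]
assert all(r2 >= r1 - 1e-12 for r1, r2 in zip(ratios, ratios[1:]))
print("density ratio is nondecreasing on the grid")
```

Since the ratio depends on the α-vectors only through their sums, any other choices with ∑αi ≥ ∑α∗i give the same picture.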
Remark 2.3. It is worthwhile to note that (α1, …, αn) ⪰_w (α∗1, …, α∗n) implies ∑_{i=1}^n αi ≥ ∑_{i=1}^n α∗i. So, the condition ∑_{i=1}^n αi ≥ ∑_{i=1}^n α∗i in Theorem 2.2 is weaker than the weak submajorization order, and the result of Theorem 2.2 therefore remains true under weak submajorization of the shape parameters. In other words, we have the following result:

(α1, …, αn) ⪰_w (α∗1, …, α∗n) ⟹ Xn:n ≥lr X∗n:n; (2)

also, it is easy to show that (α1, …, αn) ⪰^w (α∗1, …, α∗n) implies ∑_{i=1}^n αi ≤ ∑_{i=1}^n α∗i. Then, according to Theorem 2.2, we have the following result:

(α1, …, αn) ⪰^w (α∗1, …, α∗n) ⟹ Xn:n ≤lr X∗n:n. (3)

Here x ⪰_w y (x ⪰^w y) means y ≺_w x (y ≺^w x).
The following corollary provides some sufficient conditions for comparing the largest order statistics from two heterogeneous EG samples in terms of the reversed hazard rate order.
Corollary 2.4. Let (X1, X2, …, Xn) be a vector of independent random variables with Xi ∼ EG(αi, λ) for i = 1, …, n. Let (X∗1, X∗2, …, X∗n) be another vector of independent random variables with X∗i ∼ EG(α∗i, λ) for i = 1, …, n. Then,

(α1, …, αn) ⪰^w (α∗1, …, α∗n) ⟹ Xn:n ≤rh X∗n:n. (4)
Now, we consider samples of EG random variables with a common Archimedean survival copula. Specifically, by X ∼ ES(α, λ, φ) we mean that X = (X1, …, Xn) has the Archimedean copula with generator φ and, for i = 1, …, n, Xi ∼ ES(αi, λi), where α = (α1, …, αn) and λ = (λ1, …, λn). Bashkar et al. [3] proved the following general result.
Theorem 2.5. Suppose that, for i = 1, …, n, Xi ∼ ES(αi, λ) and X∗i ∼ ES(α∗i, λ) share a common Archimedean survival copula with generator φ. Then, X1:n ≤st X∗1:n if (α1, …, αn) ⪰^w (α∗1, …, α∗n).
The following result follows immediately from Theorem 2.5.
Theorem 2.6. Suppose that, for i = 1, …, n, Xi ∼ EG(αi, λ) and X∗i ∼ EG(α∗i, λ) share a common Archimedean survival copula with generator φ. Then, X1:n ≤st X∗1:n if (α1, …, αn) ⪰^w (α∗1, …, α∗n).
Theorem 2.6 extends the result of Theorem 6 of [6] for series systems from independent components to dependent components.

The following theorem generalizes Theorem 2.6 to EG samples whose dependence structures are not necessarily the same.
Theorem 2.7. For X ∼ EG(α, λ, φ1) and X∗ ∼ EG(α∗, λ, φ2), if ψ2 ∘ φ1 is super-additive, where ψj = φj^{−1} for j = 1, 2, then α ⪰^w α∗ implies X1:n ≤st X∗1:n.
Proof. The smallest order statistic X1:n of the sample X ∼ EG(α, λ, φ1) has the survival function

Ḡ_{X1:n}(x) = φ1(∑_{i=1}^n ψ1(1 − (F(λx))^{αi})) ≡ J(α, λ, x, φ1), (5)

where F(λx) = 1 − (λx + 1)e^{−λx}. First we show that J(α, λ, x, φ1) is an increasing and Schur-concave function of the αi, i = 1, …, n. Since φ1 is decreasing, we have

∂J(α, λ, x, φ1)/∂αi = −F^{αi}(λx) log(F(λx)) φ′1(∑_{j=1}^n ψ1(1 − F^{αj}(λx))) / φ′1(ψ1(1 − F^{αi}(λx))) ≥ 0,

for all x > 0.
That is, J(α,λ ,x,φ1) is increasing in αi for i = 1, . . . ,n.
To prove its Schur-concavity, by Theorem 3.A.4 of [15], we need to show that, for i ≠ j,

(αi − αj)(∂J(α, λ, x, φ1)/∂αi − ∂J(α, λ, x, φ1)/∂αj) ≤ 0,
that is, for i ≠ j,

−log(F(λx)) φ′1(∑_{k=1}^n ψ1(1 − F^{αk}(λx))) (αi − αj) × [F^{αi}(λx)/φ′1(ψ1(1 − F^{αi}(λx))) − F^{αj}(λx)/φ′1(ψ1(1 − F^{αj}(λx)))] ≤ 0. (6)
Now, let us consider the function g(α) = F^α(λx)/φ′(ψ(1 − F^α(λx))). Taking the derivative of g(α) with respect to α, we get

g′(α) sgn= F^α(λx) log(F(λx)) φ′(ψ(1 − F^α(λx))) + F^{2α}(λx) log(F(λx)) φ″(ψ(1 − F^α(λx)))/φ′(ψ(1 − F^α(λx))) ≥ 0.
Thus, g(α) is increasing with respect to α , from which it follows that (6) holds.
According to Lemma 1.4, α ⪰^w α∗ implies J(α, λ, x, φ1) ≤ J(α∗, λ, x, φ1). On the other hand, since ψ2 ∘ φ1 is super-additive, by Lemma A.1 of [12] we have J(α∗, λ, x, φ1) ≤ J(α∗, λ, x, φ2). So, it holds that

J(α, λ, x, φ1) ≤ J(α∗, λ, x, φ1) ≤ J(α∗, λ, x, φ2).
That is, X1:n ≤st X∗1:n.
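The two inequalities that close this proof can be illustrated numerically. In the sketch below (all concrete choices are ours, not the paper's), φ1(t) = e^{−t} is the independence generator and φ2 a Clayton-type generator with parameter 2, for which ψ2(φ1(t)) = e^{2t} − 1 is super-additive, since e^{2(s+t)} − 1 − (e^{2s} − 1) − (e^{2t} − 1) = (e^{2s} − 1)(e^{2t} − 1) ≥ 0; moreover α = (1, 3) ⪰^w α∗ = (2, 3):

```python
import math

# Numerical illustration of the chain J(alpha,phi1) <= J(alpha*,phi1) <= J(alpha*,phi2)
# in Theorem 2.7 (a sketch with arbitrary but valid choices).

def F(t):                      # baseline EG cdf evaluated at lam*x
    return 1.0 - (1.0 + t) * math.exp(-t)

phi1 = lambda t: math.exp(-t)          # independence generator
psi1 = lambda u: -math.log(u)
phi2 = lambda t: (1.0 + t) ** -0.5     # Clayton-type generator, parameter 2
psi2 = lambda u: u ** -2.0 - 1.0

def J(alphas, lam, x, phi, psi):
    # survival function (5) of X_{1:n} under an Archimedean survival copula
    return phi(sum(psi(1.0 - F(lam * x) ** a) for a in alphas))

alpha, alpha_star, lam = (1.0, 3.0), (2.0, 3.0), 0.8
for x in [0.1 * k for k in range(1, 60)]:
    j1 = J(alpha, lam, x, phi1, psi1)
    j2 = J(alpha_star, lam, x, phi1, psi1)
    j3 = J(alpha_star, lam, x, phi2, psi2)
    assert j1 <= j2 + 1e-12 and j2 <= j3 + 1e-12
print("J(alpha,phi1) <= J(alpha*,phi1) <= J(alpha*,phi2) holds on the grid")
```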
3 Conclusions
In this paper, we studied extreme order statistics from random variables following the exponentiated gamma distribution. For heterogeneous exponentiated gamma samples with a common scale parameter and different shape parameters, we obtained the likelihood ratio order between the maxima of independent samples. In the presence of an Archimedean copula for the random variables, we obtained new results on the usual stochastic ordering of the smallest order statistics.
References
[1] Balakrishnan, N., Haidari, A. and Masoumifard, K. (2015), Stochastic comparisons of series and parallel systems with generalized exponential components, IEEE Transactions on Reliability, 64, 333-348.

[2] Balakrishnan, N. and Zhao, P. (2013), Ordering properties of order statistics from heterogeneous populations: a review with an emphasis on some recent developments, Probability in the Engineering and Informational Sciences, 27, 403-443.

[3] Bashkar, E., Torabi, H. and Roozegar, R. (2017), Stochastic comparisons of extreme order statistics in the heterogeneous exponentiated scale model, Journal of Statistical Theory and Applications, 16(2), 219-238.

[4] Fang, L. and Balakrishnan, N. (2016), Likelihood ratio order of parallel systems with heterogeneous Weibull components, Metrika, 79, 693-703.

[5] Fang, R., Li, C. and Li, X. (2015), Stochastic comparisons on sample extremes of dependent and heterogeneous observations, Statistics, 1-26.

[6] Fang, L. and Xu, T. (2019), Ordering results of the smallest and largest order statistics from independent heterogeneous exponentiated gamma random variables, Statistica Neerlandica, 73(2), 197-210.

[7] Fang, L. and Zhang, X. (2013), Stochastic comparisons of series systems with heterogeneous Weibull components, Statistics and Probability Letters, 83, 1649-1653.

[8] Fang, L. and Zhang, X. (2015), Stochastic comparisons of parallel systems with exponentiated Weibull components, Statistics and Probability Letters, 97, 25-31.

[9] Gupta, R.C., Gupta, R.D. and Gupta, P.L. (1998), Modeling failure time data by Lehman alternatives, Communications in Statistics - Theory and Methods, 27, 887-904.

[10] Khaledi, B.E. and Kochar, S.C. (2006), Weibull distribution: some stochastic comparisons results, Journal of Statistical Planning and Inference, 136, 3121-3129.

[11] Kundu, A. and Chowdhury, S. (2016), Ordering properties of order statistics from heterogeneous exponentiated Weibull models, Statistics and Probability Letters, 114, 119-127.

[12] Li, X. and Fang, R. (2015), Ordering properties of order statistics from random variables of Archimedean copulas with applications, Journal of Multivariate Analysis, 133, 304-320.

[13] Li, C. and Li, X. (2015), Likelihood ratio order of sample minimum from heterogeneous Weibull random variables, Statistics and Probability Letters, 97, 46-53.

[14] Li, H. and Li, X. (2013), Stochastic Orders in Reliability and Risk, Springer, New York.

[15] Marshall, A.W., Olkin, I. and Arnold, B.C. (2011), Inequalities: Theory of Majorization and its Applications, Springer, New York.

[16] McNeil, A.J. and Neslehova, J. (2009), Multivariate Archimedean copulas, d-monotone functions and ℓ1-norm symmetric distributions, The Annals of Statistics, 3059-3097.

[17] Nelsen, R.B. (2006), An Introduction to Copulas, Springer, New York.

[18] Shaked, M. and Shanthikumar, J.G. (2007), Stochastic Orders, Springer, New York.

[19] Torrado, N. (2015), Comparisons of smallest order statistics from Weibull distributions with different scale and shape parameters, Journal of the Korean Statistical Society, 44(1), 68-76.

[20] Torrado, N. and Kochar, S.C. (2015), Stochastic order relations among parallel systems from Weibull distributions, Journal of Applied Probability, 52(1), 102-116.
Bayesian Prediction for Progressively Type-II Censored Order Statistics with Uniform Removals
Basiri, E.1
1 Department of Statistics, Kosar University of Bojnord, Bojnord, Iran
Abstract: This paper deals with the Bayesian prediction problem in the two-sample case: predicting future progressively Type-II censored order statistics based on observed progressively Type-II censored order statistics when the censoring scheme follows a discrete uniform distribution. The Burr Type XII distribution is considered for the lifetimes. The highest posterior density and two-sided equi-tailed prediction intervals are obtained. Numerical computations are given to illustrate the approach by means of a simulation study. Finally, a real data set is analyzed to illustrate the results.
Keywords: Random Censoring Scheme, Burr Type-XII Distribution, Bayesian Prediction.
1 Introduction
The scheme of progressive Type-II censoring is an important method of obtaining data in lifetime studies. Suppose n units are placed on a lifetime test. At the ith failure time, ri surviving items are randomly withdrawn from the test, i = 1, …, m, where 0 ≤ ri ≤ n − m − ∑_{j=0}^{i−1} rj and r0 = 0. Then the failure times X_{1:m:n}, …, X_{m:m:n} are called progressively Type-II censored order statistics (PCOs) based on the censoring scheme r = (r1, …, rm), where n = m + ∑_{j=1}^m rj. For a detailed discussion of progressive censoring, we refer the reader to [2], [1], [3] and the references contained therein. However, in some reliability experiments, the number of items dropped out of the experiment cannot be prefixed; rather, the removals are random. In such situations, a censoring scheme with random removals is best suited. So far, several researchers have studied the problem of estimation and prediction in progressive censoring with random removals; see, for example, [12], [13], [11], [6], [10], [8], [7] and [5].

1 Basiri, E.: elham−[email protected]
This article studies the Bayesian prediction problem in the two-sample case, predicting future progressively Type-II censored order statistics based on observed progressively Type-II censored order statistics, when the removals are discrete uniform random variables.
2 Main results
Let x = (x_{1:m1:n1}, …, x_{m1:m1:n1}) be an observed progressively Type-II right censored sample of size m1 from a life test on a sample of size n1 of independent and identically distributed (iid) continuous random variables, with censoring scheme R = (R1, R2, …, R_{m1}), where the Ri, i = 1, …, m1, are random variables independent of the X-sample that follow a discrete uniform distribution such that

P(R1 = r1) = 1/(n1 − m1 + 1), r1 = 0, …, n1 − m1, (1)

and

P(Ri = ri | R1 = r1, …, R_{i−1} = r_{i−1}) = 1/(n1 − m1 − ∑_{j=1}^{i−1} rj + 1), (2)

for ri = 0, …, n1 − m1 − ∑_{k=1}^{i−1} rk, i = 2, …, m1 − 1; all the remaining items, if there are any, are removed from the test at the m1-th failure with probability one. Relations (1) and (2) provide

P(R = r) = B(R, m1, n1, m1), (3)
where

B(R, m, n, j) = ∏_{i=1}^{j−1} 1/(n − m − ∑_{k=1}^{i−1} rk + 1). (4)
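As a sanity check, the product form (4) can be compared with a direct Monte Carlo simulation of the sequential uniform removals (1)-(2). The sketch below is our own illustration; n1 = 7, m1 = 3 and the target scheme are arbitrary small values chosen so that the exact probability, 1/15, can also be read off by hand:

```python
import random

# Monte Carlo sanity check of relations (1)-(4) (a sketch with arbitrary sizes).

def B(r, m, n, j):
    # B(R, m, n, j) = prod_{i=1}^{j-1} 1 / (n - m - sum_{k<i} r_k + 1)
    prob = 1.0
    for i in range(1, j):
        prob /= n - m - sum(r[: i - 1]) + 1
    return prob

def draw_removals(n, m, rng):
    # sequential discrete-uniform removals; the last removal is forced
    r = []
    for _ in range(m - 1):
        r.append(rng.randint(0, n - m - sum(r)))
    r.append(n - m - sum(r))
    return r

rng = random.Random(1)
n1, m1, target = 7, 3, [2, 1, 1]          # valid scheme: 2 + 1 + 1 = n1 - m1
reps = 100_000
hits = sum(draw_removals(n1, m1, rng) == target for _ in range(reps))
exact = B(target, m1, n1, m1)             # = (1/5)*(1/3) = 1/15
assert abs(hits / reps - exact) < 0.01
print(f"empirical {hits / reps:.4f} vs exact {exact:.4f}")
```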
Moreover, the lifetimes follow the one-parameter Lomax distribution, with probability density function (pdf) and cumulative distribution function (cdf) given by

fθ(x) = θ/(1 + x)^{θ+1}  and  Fθ(x) = 1 − 1/(1 + x)^θ,  x > 0, θ > 0,
respectively, where θ is the shape parameter.

With a pre-determined number of removed units, say R1 = r1, R2 = r2, …, R_{m1} = r_{m1}, the conditional likelihood function takes the form (see, for example, [2])

L(θ, x | R = r) = C ∏_{i=1}^{m1} fθ(xi)(F̄θ(xi))^{ri} = C {∏_{i=1}^{m1} 1/(1 + xi)} θ^{m1} exp(−θT), (5)

where F̄θ(x) = 1 − Fθ(x) is the reliability function of the X-sample, T = ∑_{i=1}^{m1}(1 + ri) ln(1 + xi) and C = ∏_{j=1}^{m1}(n1 − j + 1 − ∑_{i=1}^{j−1} ri).
Now, using (3) and (5), we can write the full likelihood function as

L(θ; x, r) = L(θ, x | R = r) P(R = r) = B′ L1(θ),

where B′ = C (∏_{i=1}^{m1} 1/(1 + xi)) B(R, m1, n1, m1) does not depend on the parameter θ and L1(θ) = θ^{m1} exp(−θT).

The conjugate prior distribution for θ is taken to be (see, for example, [9])

π(θ) = (b^{a+1}/Γ(a + 1)) θ^a e^{−bθ},  θ > 0, a, b > 0,
where Γ(·) is the complete gamma function. Therefore, the posterior distribution of θ is obtained as

π(θ | x, r) = θ^{a+m1} e^{−θ(b+T)} / ∫_0^∞ θ^{a+m1} e^{−θ(b+T)} dθ = (b + T)^{a+m1+1} θ^{a+m1} e^{−θ(b+T)} / Γ(a + m1 + 1). (6)
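Equation (6) says the posterior is a gamma density with shape a + m1 + 1 and rate b + T. A quick numeric check (a, b, m1 and T below are arbitrary illustrative values) confirms that it integrates to one with posterior mean (a + m1 + 1)/(b + T):

```python
import math

# Numeric check that the posterior (6) is a proper Gamma(a + m1 + 1, rate b + T)
# density (simple Riemann quadrature; parameter values are arbitrary).

a, b, m1, T = 2.0, 1.0, 10, 27.5
shape, rate = a + m1 + 1, b + T

def post(th):
    return rate ** shape * th ** (shape - 1) * math.exp(-rate * th) / math.gamma(shape)

h, upper = 1e-4, 5.0
grid = [k * h for k in range(1, int(upper / h))]
mass = sum(post(t) for t in grid) * h
mean = sum(t * post(t) for t in grid) * h
assert abs(mass - 1.0) < 1e-3
assert abs(mean - shape / rate) < 1e-3
print(f"mass ~ {mass:.4f}, mean ~ {mean:.4f} (exact {shape / rate:.4f})")
```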
Independently, let Y_{s:m2:n2} be the sth future progressively Type-II right censored order statistic from a life test of size m2 on a sample of size n2 from the same distribution, with censoring scheme R′ = (R′1, R′2, …, R′_{m2}), where the R′i, i = 1, …, m2, are random variables independent of Y such that ∑_{i=1}^{m2} R′i = n2 − m2. Then, given R′1 = r′1, …, R′_{s−1} = r′_{s−1}, the marginal pdf of Y_{s:m2:n2}, 1 ≤ s ≤ m2, from the one-parameter Lomax distribution is given by (see, for example, [2])

f_{Y_{s:m2:n2} | R′1=r′1,…,R′_{s−1}=r′_{s−1}}(y) = c′_{s−1} ∑_{i=1}^s a′_{i,s} (F̄θ(y))^{γ′i − 1} fθ(y) = θ c′_{s−1} ∑_{i=1}^s a′_{i,s} 1/(1 + y)^{θγ′i + 1},  y > 0, (7)

where γ′i = n2 − i + 1 − ∑_{j=1}^{i−1} r′j, c′_{s−1} = ∏_{j=1}^s γ′j and a′_{i,s} = ∏_{j=1, j≠i}^s 1/(γ′j − γ′i), 1 ≤ i ≤ s ≤ m2. So, the marginal pdf of Y_{s:m2:n2}, 1 ≤ s ≤ m2, can be evaluated by taking the expectation of both sides of (7) with respect to R′ = (R′1, R′2, …, R′_{s−1}).
That is,

f_{Y_{s:m2:n2}}(y) = ∑_{r′1=0}^{g(r′1)} ∑_{r′2=0}^{g(r′2)} ⋯ ∑_{r′_{s−1}=0}^{g(r′_{s−1})} f_{Y_{s:m2:n2} | R′1=r′1,…,R′_{s−1}=r′_{s−1}}(y) × P(R′1 = r′1, …, R′_{s−1} = r′_{s−1})

= ∑_{r′1=0}^{g(r′1)} ∑_{r′2=0}^{g(r′2)} ⋯ ∑_{r′_{s−1}=0}^{g(r′_{s−1})} ∑_{i=1}^s B(R′, m2, n2, s) θ c′_{s−1} a′_{i,s} / (1 + y)^{θγ′i + 1}, (8)

where g(r′i) = n2 − m2 − ∑_{j=0}^{i−1} r′j, i = 1, …, s − 1, and B(·, ·, ·, ·) is defined in (4). Then, from (6) and (8), the predictive density function of Y_{s:m2:n2} can be obtained as
f*_{Y_{s:m2:n2}}(y | x, r)

= ∑_{r′1=0}^{g(r′1)} ⋯ ∑_{r′_{s−1}=0}^{g(r′_{s−1})} ∑_{i=1}^s [B(R′, m2, n2, s) c′_{s−1} a′_{i,s} (b + T)^{a+m1+1} / (Γ(a + m1 + 1)(1 + y))] × ∫_0^∞ θ^{a+m1+1} e^{−θ(b+T)} / (1 + y)^{θγ′i} dθ

= ∑_{r′1=0}^{g(r′1)} ⋯ ∑_{r′_{s−1}=0}^{g(r′_{s−1})} ∑_{i=1}^s [B(R′, m2, n2, s) c′_{s−1} a′_{i,s} (b + T)^{a+m1+1} / (Γ(a + m1 + 1)(1 + y))] × ∫_0^∞ θ^{a+m1+1} e^{−θ(b + T + γ′i ln(1+y))} dθ

= ∑_{r′1=0}^{g(r′1)} ⋯ ∑_{r′_{s−1}=0}^{g(r′_{s−1})} [B(R′, m2, n2, s) c′_{s−1} (a + m1 + 1)(b + T)^{a+m1+1} / (1 + y)] × ∑_{i=1}^s a′_{i,s} / (b + T + γ′i ln(1 + y))^{a+m1+2}.
Finally, the survival function of Y_{s:m2:n2} can be written as

F̄*_{Y_{s:m2:n2}}(y | x, r) = ∑_{r′1=0}^{g(r′1)} ⋯ ∑_{r′_{s−1}=0}^{g(r′_{s−1})} B(R′, m2, n2, s) c′_{s−1} (b + T)^{a+m1+1} × ∑_{i=1}^s a′_{i,s} / [γ′i (b + T + γ′i ln(1 + y))^{a+m1+1}].
The Bayesian predictive bounds of a two-sided equi-tailed 100(1−α)% interval for Y_{s:m2:n2}, 1 ≤ s ≤ m2, can be obtained by solving the following equations:

F̄*_{Y_{s:m2:n2}}(L | x, r) = 1 − α/2  and  F̄*_{Y_{s:m2:n2}}(U | x, r) = α/2, (9)

where L and U are the lower and upper bounds, respectively. Suppose that ζ_{Y_{s:m2:n2}, α}(x) is the upper quantile of the predictive density, i.e.

F̄*_{Y_{s:m2:n2}}(ζ_{Y_{s:m2:n2}, α}(x) | x, r) = α;

then clearly we have

L = ζ_{Y_{s:m2:n2}, 1−α/2}(x)  and  U = ζ_{Y_{s:m2:n2}, α/2}(x).
In general, we do not have a closed-form expression for the quantiles, but they can be calculated numerically using statistical packages such as R 3.3.2. For the special case s = 1, the minimum of the future sample, we have

ζ_{Y_{1:m2:n2}, α}(x) = exp{ ((b + T)/n2) ((1/α)^{1/(a+m1+1)} − 1) } − 1.
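For s = 1 the survival function above collapses to F̄*(y) = ((b + T)/(b + T + n2 ln(1 + y)))^{a+m1+1}, and inverting F̄*(ζ) = α gives the displayed formula. A short check with arbitrary illustrative values:

```python
import math

# Check of the closed-form upper quantile for s = 1 (arbitrary parameter values).

a, b, m1, T, n2, alpha = 0.0, 0.0, 10, 27.5, 5, 0.05

def survival(y):
    # predictive survival function of Y_{1:m2:n2}
    return ((b + T) / (b + T + n2 * math.log1p(y))) ** (a + m1 + 1)

zeta = math.exp((b + T) / n2 * ((1.0 / alpha) ** (1.0 / (a + m1 + 1)) - 1.0)) - 1.0
assert abs(survival(zeta) - alpha) < 1e-10
print(f"zeta = {zeta:.4f}, survival(zeta) = {survival(zeta):.6f}")
```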
The 100(1−α)% highest posterior density prediction interval (HPD PI) of the form (w1, w2), when s ≥ 2, can be derived by solving the following equations simultaneously:

F̄*_{Y_{s:m2:n2}}(w1 | x, r) − F̄*_{Y_{s:m2:n2}}(w2 | x, r) = 1 − α,

f*_{Y_{s:m2:n2}}(w1 | x, r) = f*_{Y_{s:m2:n2}}(w2 | x, r). (10)
The predictive density of Y_{1:m2:n2}, the minimum of the future sample, is decreasing with respect to y; consequently, the 100(1−α)% shortest width Bayesian prediction interval (SWB PI) for Y_{1:m2:n2} takes the following form:

[0, exp{ ((b + T)/n2) ((1/α)^{1/(a+m1+1)} − 1) } − 1].
3 Simulation Study
In this section, a simulation study is carried out in order to assess the performance of the results of the paper. We have used the following algorithm, based on the algorithm proposed by [4]. In all cases we have taken a = b ≈ 0.
Algorithm 3.1. Take θ = 1 and suppose (1−α), s, m1, m2, n1 and n2 are all given. Then:
1. Generate values of ri, i = 1, …, m1, and r′i, i = 1, …, m2, from

ri ∼ DU{0, …, n1 − m1 − ∑_{j=1}^{i−1} rj}, i = 1, 2, …, m1 − 1,  r_{m1} = n1 − m1 − ∑_{j=1}^{m1−1} rj,

r′i ∼ DU{0, …, n2 − m2 − ∑_{j=1}^{i−1} r′j}, i = 1, 2, …, m2 − 1,  r′_{m2} = n2 − m2 − ∑_{j=1}^{m2−1} r′j.
2. Generate m1 and m2 independent Uniform(0,1) random variables W1, …, W_{m1} and W′1, …, W′_{m2}.
3. Set Vi = Wi^{1/(i + ∑_{j=m1−i+1}^{m1} rj)} for i = 1, …, m1 and V′i = W′i^{1/(i + ∑_{j=m2−i+1}^{m2} r′j)} for i = 1, …, m2.
4. Take Ui = 1 − ∏_{j=m1−i+1}^{m1} Vj for i = 1, …, m1 and U′i = 1 − ∏_{j=m2−i+1}^{m2} V′j for i = 1, …, m2.
5. Set X_{i:m1:n1} = F^{−1}(Ui) for i = 1, …, m1 and Y_{i:m2:n2} = F^{−1}(U′i) for i = 1, …, m2, where F^{−1}(·) is the inverse cumulative distribution function of the Lomax distribution.
6. Obtain the 100(1−α)% equi-tailed PI and the 100(1−α)% HPD PI by using (9) and (10), respectively.
7. Repeat Steps 1-6 K = 10000 times and let L(i) and U(i) be the lower and upper bounds, respectively, of the PIs obtained in the ith iteration, i = 1, …, K. Also, let Y_{s:m2:n2}(i) be the sth progressive order statistic of a sample of size m2 and X_{1:m1:n1}(i), …, X_{m1:m1:n1}(i) be the sample of size m1 generated in the ith iteration. Then, calculate the average widths (AWs) and the coverage probabilities (CPs) of the PIs by using the relations

AW = (1/K) ∑_{i=1}^K (U(i) − L(i))  and  CP = (1/K) ∑_{i=1}^K I_{(L(i), U(i))}(Y_{s:m2:n2}(i)),

respectively, where I_A(·) is the indicator function, i.e. I_A(x) = 1 if x ∈ A and I_A(x) = 0 otherwise.
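Steps 1-5 above can be sketched as follows (our own illustration of the Balakrishnan-Sandhu construction for the one-parameter Lomax baseline; the sizes, θ = 1 and the seed are arbitrary):

```python
import random

# Sketch of Steps 1-5 of Algorithm 3.1 for the one-parameter Lomax distribution.

def progressive_lomax(n, m, theta, rng):
    # Step 1: discrete-uniform removals, last removal forced
    r = []
    for _ in range(m - 1):
        r.append(rng.randint(0, n - m - sum(r)))
    r.append(n - m - sum(r))
    # Step 2: iid Uniform(0,1) draws
    w = [rng.random() for _ in range(m)]
    # Step 3: V_i = W_i ** (1 / (i + r_{m-i+1} + ... + r_m)), 1-based i
    v = [w[i - 1] ** (1.0 / (i + sum(r[m - i:]))) for i in range(1, m + 1)]
    # Step 4: U_i = 1 - V_{m-i+1} * ... * V_m
    u = []
    for i in range(1, m + 1):
        p = 1.0
        for j in range(m - i, m):
            p *= v[j]
        u.append(1.0 - p)
    # Step 5: invert the Lomax cdf F(x) = 1 - (1 + x)**(-theta)
    return r, [(1.0 - ui) ** (-1.0 / theta) - 1.0 for ui in u]

rng = random.Random(7)
r, x = progressive_lomax(n=10, m=5, theta=1.0, rng=rng)
assert sum(r) == 10 - 5 and len(x) == 5
assert all(a < b for a, b in zip(x, x[1:]))   # the PCOs come out increasing
print("scheme:", r, "sample:", [round(t, 3) for t in x])
```

Step 6 would then feed the generated samples into (9) and (10), and Step 7 repeats the loop K times.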
Based on Algorithm 3.1, we have computed the values of AWs and CPs for different values of s when m1 = m2 = 5 and n1 = n2 = 10. The results are tabulated in Table 1. From the results of Table 1, we find the following:

• The values of AWs are increasing with respect to s, when the other components are held fixed, for both HPD and equi-tailed prediction intervals.

• As we would expect, the AWs of the HPD PIs are smaller than the corresponding AWs of the equi-tailed PIs.

• Also, the coverage probabilities (CPs) are near the nominal prediction level 0.95 in all cases.
Table 1: Values of AWs and CPs of 95% equi-tailed (ET) and HPD PIs for different values of s when m1 = m2 = 5 and n1 = n2 = 10.

                          HPD PIs             ET PIs
  n1  m1  n2  m2  s     AW      CP         AW      CP
  10   5  10   5  1   0.6271  0.9450     0.6739  0.9510
                  3   1.1034  0.9530     1.1671  0.9440
                  5   1.9096  0.9480     2.0451  0.9460
4 Real example
In this section, the theoretical results of the paper are illustrated with an example. We consider data representing the times to breakdown of an insulating fluid in an accelerated life test conducted at a voltage of 34 kV. [13] used these data to generate progressively type II censored order statistics with uniform removals from the Lomax distribution. The failure times and removed numbers are reported in Table 2. Here, we consider these data as the observed sample. Moreover, for the future sample we assume n2 = 5, m2 = 3 and R′ = (1, 0, 1), which was generated from the discrete uniform distribution. Finally, the 90% equi-tailed (ET) and HPD prediction intervals, together with their lengths, have been computed for different values of s and are presented in Table 3. From Table 3 we can see that, in all cases, the HPD prediction intervals are shorter than their corresponding two-sided equi-tailed prediction intervals.
Table 2: The progressively type II censored order statistics and removed numbers obtained by [13].
i 1 2 3 4 5 6 7 8 9 10
xi:m1:n1 0.19 0.78 0.96 2.78 4.67 6.50 7.35 8.27 12.06 32.52
ri 1 0 3 2 1 2 0 0 0 0
Table 3: The 90% equi-tailed (ET) and HPD PIs with their lengths for different values of s, when n2 = 5, m2 = 3 and R′ = (1, 0, 1).

         HPD PIs                   ET PIs
  s    PI            Length      PI               Length
  1   (0, 4.470)      4.470     (0.026, 4.590)      4.564
  3   (0.1, 90.95)   90.85      (0.509, 152)      151.491
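The s = 1 equi-tailed entry of Table 3 can be reproduced directly from the Table 2 data with the closed-form quantile. In the sketch below we take a = b = 0 (the simulation section uses a = b ≈ 0), so small rounding differences from the published values are expected:

```python
import math

# Reproducing the s = 1 equi-tailed bounds of Table 3 from the Table 2 data
# (a sketch; exact agreement is not expected because of data rounding).

x = [0.19, 0.78, 0.96, 2.78, 4.67, 6.50, 7.35, 8.27, 12.06, 32.52]
r = [1, 0, 3, 2, 1, 2, 0, 0, 0, 0]
m1, n2, a, b = len(x), 5, 0.0, 0.0
T = sum((1 + ri) * math.log1p(xi) for xi, ri in zip(x, r))   # T of equation (5)

def zeta(alpha):
    return math.exp((b + T) / n2 * ((1.0 / alpha) ** (1.0 / (a + m1 + 1)) - 1.0)) - 1.0

L, U = zeta(1 - 0.10 / 2), zeta(0.10 / 2)   # 90% equi-tailed bounds for s = 1
assert abs(L - 0.026) < 0.005 and abs(U - 4.59) < 0.1
print(f"90% ET PI for s = 1: ({L:.3f}, {U:.3f})")
```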
References
[1] Balakrishnan, N. (2007), Progressive censoring methodology: an appraisal, Test, 16, 211-259.

[2] Balakrishnan, N. and Aggarwala, R. (2000), Progressive Censoring: Theory, Methods, and Applications, Birkhauser, Boston.

[3] Balakrishnan, N. and Cramer, E. (2014), The Art of Progressive Censoring, Birkhauser, New York.

[4] Balakrishnan, N. and Sandhu, R.A. (1995), A simple simulational algorithm for generating progressive Type-II censored samples, The American Statistician, 49(2), 229-230.

[5] Basiri, E. and Beigi, S. (2020), The optimal scheme in type II progressive censoring with random removals for the Rayleigh distribution based on two-sample Bayesian prediction and cost function, Journal of Advanced Mathematical Modeling, 10(1), 135-157.

[6] Dey, S. and Dey, T. (2014), Statistical inference for the Rayleigh distribution under progressively Type-II censoring with binomial removal, Applied Mathematical Modelling, 38(3), 974-982.

[7] Gunasekera, S. (2018), Inference for the Burr XII reliability under progressive censoring with random removals, Mathematics and Computers in Simulation, 144, 182-195.

[8] Meshkat, R. and Dehqani, N. (2018), Point prediction for the proportional hazards family based on progressive Type-II censoring with binomial removals, Journal of Statistical Modelling: Theory and Applications (JSMTA), 1(1), 19-35.

[9] Nadar, M. and Kizilaslan, F. (2015), Estimation and prediction of the Burr type XII distribution based on record values and inter-record times, Journal of Statistical Computation and Simulation, 85(16), 3297-3321.

[10] Prakash, G. (2017), Progressive censored Burr Type-XII distribution under random removal scheme: some inferences, Afrika Statistika, 12(2), 1273-1284.

[11] Soliman, A.A., Ellah, A.H.A., Abou-Elheggag, N.A. and El-Sagheer, R.M. (2013), Bayesian and frequentist prediction using progressive Type-II censored with binomial removals, Intelligent Information Management, 5(5), 162.

[12] Wu, S.J. (2003), Estimation for the two-parameter Pareto distribution under progressive censoring with uniform removals, Journal of Statistical Computation and Simulation, 73(2), 125-134.

[13] Wu, S.J., Chen, Y.J. and Chang, C.T. (2007), Statistical inference based on progressively censored samples with random removals from the Burr type XII distribution, Journal of Statistical Computation and Simulation, 77(1), 19-27.
Estimation for the Poisson-Exponential Distribution Based on Progressively Type-II Censored Data with Uniform and Binomial Removals
Bastan, F.1, and MirMostafaee, S.M.T.K.1
1 Department of Statistics, University of Mazandaran, Babolsar, Iran
Abstract: This paper discusses the estimation of the unknown parameters of the Poisson-exponential distribution based on progressively type II censored data with random removals. We assume that the removals are binomially distributed random variables in one case and discrete uniformly distributed random variables in the other. The problem of maximum likelihood and Bayesian estimation of the parameters is discussed. As the integrals related to the Bayes point estimates do not seem to have closed forms, the importance sampling technique is employed to approximate them. A simulation study is presented to compare the numerical results of the censoring schemes with different removal patterns.
Keywords: Importance Sampling, Information Matrix, Maximum Likelihood Estimation, Removal Pattern.
1 Introduction
The Poisson-exponential (PE) distribution is an extension of the exponential distribution, introduced by [2]. It possesses a bounded increasing failure rate function, which makes it suitable for modeling lifetimes whose number of failures increases with time but eventually stabilizes. Many researchers have worked on statistical inference for the PE model; see, for example, [5] and [8]. Let Y
1MirMostafaee, S.M.T.K.: [email protected]
follow a Poisson-exponential (PE) distribution with positive parameters θ and λ. Then the probability density function (pdf) of Y is given by

f(y) = θλ e^{−λy − θe^{−λy}} / (1 − e^{−θ}),  y > 0. (1)

The corresponding cumulative distribution function (cdf) is

F(y) = 1 − (1 − e^{−θe^{−λy}}) / (1 − e^{−θ}),  y > 0, θ > 0, λ > 0. (2)
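A quick numeric consistency check of (1) and (2) — F should be a distribution function whose derivative is f (the values of θ and λ below are arbitrary):

```python
import math

# Consistency check of the PE pdf (1) and cdf (2): F(0) = 0, F(inf) = 1, F' = f.

theta, lam = 2.0, 1.5

def f(y):
    return theta * lam * math.exp(-lam * y - theta * math.exp(-lam * y)) / (1.0 - math.exp(-theta))

def F(y):
    return 1.0 - (1.0 - math.exp(-theta * math.exp(-lam * y))) / (1.0 - math.exp(-theta))

assert abs(F(0.0)) < 1e-12 and F(50.0) > 1.0 - 1e-9
h = 1e-6
for y in [0.3, 1.0, 2.5]:
    deriv = (F(y + h) - F(y - h)) / (2 * h)   # central difference
    assert abs(deriv - f(y)) < 1e-5
print("F(0) = 0, F(inf) = 1, and F' matches f")
```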
In many lifetime experiments, censoring schemes are employed to save time and cost. The type I and type II censoring schemes are widely used by researchers. In the type II censoring plan, we stop the experiment when a fixed number of failure times has been observed. The progressive type II censoring plan, a generalization of the ordinary type II censoring plan, permits the removal of some of the unobserved units each time a failure occurs. Suppose that a life experiment includes n items and it is decided to observe m failure times. At the time of the first failure, R1 units are removed from the n − 1 operating ones; at the time of the second failure, R2 units are removed from the n − 2 − R1 remaining ones; and this process goes on until the m-th failure time is observed, when all the remaining Rm = n − m − R1 − ⋯ − R_{m−1} units are removed from the experiment; see [1] for more details on the progressive type II censoring scheme. Though it is preferable to determine the removals (the Ri's) before the experiment starts, in many practical situations the removals cannot be pre-fixed, and therefore we may consider them as discrete random variables; see [10] and [12]. In such situations, the censoring plan is called the progressive type II censoring scheme with random removals. We note that the last removal, conditioned on the previous ones, is not a random variable. Yuen and Tse [10] assumed that each removal follows a uniform distribution, while Tse et al. [12] considered binomially distributed random removals. The estimation problems for the PE model based on progressively type II censored data with binomial removals were investigated by [5] and [8]. Recently, Sharafi [11] focused on some topics regarding the inferential problems for the two-parameter
Lindley distribution based on progressively type II censored data with binomial and uniform removals. In this paper, we assume that the removals are binomially distributed in one case and discrete uniformly distributed in the other, when the underlying model is the PE distribution. The novelty of this paper is the numerical comparison of the different removal patterns for the PE model. The main results are given in Section 2, where we derive the maximum likelihood (ML) estimates and asymptotic confidence intervals (CIs) for the parameters; the problem of Bayesian estimation of the parameters is discussed as well. Finally, a simulation study is presented in Section 3 for the purpose of comparison.
2 Main results
Let Y = {Y_{1:m:n}, …, Y_{m:m:n}} be a set of progressively type II censored order statistics from the PE distribution with parameters θ and λ, and let y = {y1, …, ym} be the corresponding observed set of Y. Then, from (1) and (2), the conditional likelihood function of θ and λ given R = r = {r1, …, rm} is given by (see [1])

L(θ, λ, y | R = r) = C ∏_{i=1}^m f(yi)[1 − F(yi)]^{ri} = C ∏_{i=1}^m [θλ e^{−λyi − θe^{−λyi}} / (1 − e^{−θ})] [(1 − e^{−θe^{−λyi}}) / (1 − e^{−θ})]^{ri}, (3)

where C = n(n − 1 − r1)(n − 2 − r1 − r2) ⋯ (n − m + 1 − r1 − ⋯ − r_{m−1}).

Now we discuss two removal patterns: one considers binomially distributed removals with the same probability parameter p, and the other considers discrete uniformly distributed removals.
2.1 Model with binomially distributed removals
Suppose that the number of items removed at each step of the censoring scheme follows a binomial distribution with the same probability parameter p. Consequently, the probability mass function (pmf) of the first removal R1 is given by

P(R1 = r1; p) = C(n − m, r1) p^{r1} (1 − p)^{n−m−r1},  0 ≤ r1 ≤ n − m,

and for i = 2, …, m − 1, the conditional pmf of Ri given R1, …, R_{i−1} is given by

P(Ri = ri | R_{i−1} = r_{i−1}, …, R1 = r1) = C(n − m − ∑_{l=1}^{i−1} rl, ri) p^{ri} (1 − p)^{n−m−∑_{l=1}^{i} rl},

where 0 ≤ ri ≤ n − m − ∑_{l=1}^{i−1} rl and C(·, ·) denotes the binomial coefficient. Therefore, the joint pmf of R = (R1, …, Rm) is obtained as

P(R = r; p) = P(R1 = r1) P(R2 = r2 | R1 = r1) ⋯ P(R_{m−1} = r_{m−1} | R_{m−2} = r_{m−2}, …, R1 = r1)

= (n − m)! p^{∑_{i=1}^{m−1} ri} (1 − p)^{(m−1)(n−m) − ∑_{i=1}^{m−1}(m−i)ri} / [(n − m − ∑_{i=1}^{m−1} ri)! ∏_{i=1}^{m−1} ri!], (4)

where 0 ≤ r1 ≤ n − m and 0 ≤ ri ≤ n − m − ∑_{l=1}^{i−1} rl for i ≥ 2.
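Since (4) is just the product of the conditional binomial pmfs, it must sum to one over all feasible removal vectors; this can be checked exhaustively for small designs (n, m and p below are arbitrary illustrative values):

```python
from itertools import product
from math import factorial, prod

# Exhaustive check that the joint pmf (4) sums to one over all feasible
# removal vectors (r_1, ..., r_{m-1}).

n, m, p = 8, 4, 0.3

def joint_pmf(r):
    s = sum(r)
    weight = sum((m - i) * ri for i, ri in enumerate(r, start=1))
    return (factorial(n - m) * p ** s
            * (1 - p) ** ((m - 1) * (n - m) - weight)
            / (factorial(n - m - s) * prod(factorial(ri) for ri in r)))

total = 0.0
for r in product(range(n - m + 1), repeat=m - 1):
    # feasibility: r_i <= n - m - (r_1 + ... + r_{i-1})
    if all(r[i] <= n - m - sum(r[:i]) for i in range(m - 1)):
        total += joint_pmf(r)
assert abs(total - 1.0) < 1e-12
print(f"sum of (4) over feasible removal vectors = {total:.12f}")
```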
Thus, from (3) and (4), the full likelihood function is given by

L(θ, λ, p; y, r) = L(θ, λ, y | R = r) P(R = r; p) = A(r) L1(θ, λ; y, r) L2(p; r),

where

A(r) = C (n − m)! / [(n − m − ∑_{i=1}^{m−1} ri)! ∏_{i=1}^{m−1} ri!],

L1(θ, λ; y, r) = [θ^m λ^m exp(−λ ∑_{i=1}^m yi − θ ∑_{i=1}^m e^{−λyi}) / (1 − e^{−θ})^n] ∏_{i=1}^m (1 − e^{−θe^{−λyi}})^{ri}, (5)

and

L2(p; r) = p^{∑_{i=1}^{m−1} ri} (1 − p)^{(m−1)(n−m) − ∑_{i=1}^{m−1}(m−i)ri}.
2.2 Model with discrete uniformly distributed removals
Suppose that the number of items removed at each step of the censoring scheme follows a discrete uniform distribution such that

P(R1 = r1) = 1/(n − m + 1),  0 ≤ r1 ≤ n − m,

and for i = 2, …, m − 1, the conditional pmf of Ri given R1, …, R_{i−1} is

P(Ri = ri | R_{i−1} = r_{i−1}, …, R1 = r1) = 1/(n − m + 1 − ∑_{l=1}^{i−1} rl),  0 ≤ ri ≤ n − m − ∑_{l=1}^{i−1} rl.

The joint pmf of R = (R1, …, Rm) is then given by

P(R = r) = [1/(n − m + 1)] ∏_{i=2}^{m−1} 1/(n − m + 1 − ∑_{l=1}^{i−1} rl), (6)

where 0 ≤ r1 ≤ n − m and 0 ≤ ri ≤ n − m − ∑_{l=1}^{i−1} rl for i ≥ 2.
Thus, from (3) and (6), the full likelihood function is given by

L(θ, λ; y, r) = L(θ, λ, y | R = r) P(R = r) = A*(r) L1(θ, λ; y, r),

where

A*(r) = [C/(n − m + 1)] ∏_{i=2}^{m−1} 1/(n − m + 1 − ∑_{l=1}^{i−1} rl),

and L1(θ, λ; y, r) is given in (5).
2.3 ML estimation of θ and λ
As P(R = r) is free of the parameters θ and λ, the ML estimates of these parameters can be derived by maximizing L1(θ, λ; y, r) with respect to (w.r.t.) θ and λ directly. Differentiating ln(L1(θ, λ; y, r)) w.r.t. the parameters and equating the derivatives to zero, we have

∂ln(L1(θ, λ; y, r))/∂θ = m/θ − ∑_{i=1}^m e^{−λyi} − n/(e^θ − 1) + ∑_{i=1}^m ri e^{−λyi − θe^{−λyi}} / (1 − e^{−θe^{−λyi}}) = 0,

∂ln(L1(θ, λ; y, r))/∂λ = m/λ − ∑_{i=1}^m yi + θ ∑_{i=1}^m yi e^{−λyi} − θ ∑_{i=1}^m ri yi e^{−λyi − θe^{−λyi}} / (1 − e^{−θe^{−λyi}}) = 0.
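The two likelihood equations can be checked against numerical derivatives of ln L1 (a sketch with a small artificial data set; y, r and the evaluation point are arbitrary):

```python
import math

# Finite-difference check of the score functions derived from (5).

y = [0.2, 0.7, 1.1, 1.9]
r = [1, 0, 2, 1]
n, m = len(y) + sum(r), len(y)            # n = m + sum(r_i)

def loglik(theta, lam):
    s = m * math.log(theta) + m * math.log(lam) - lam * sum(y)
    s -= theta * sum(math.exp(-lam * yi) for yi in y)
    s -= n * math.log(1.0 - math.exp(-theta))
    s += sum(ri * math.log(1.0 - math.exp(-theta * math.exp(-lam * yi)))
             for yi, ri in zip(y, r))
    return s

def score_theta(theta, lam):
    return (m / theta - sum(math.exp(-lam * yi) for yi in y) - n / (math.exp(theta) - 1)
            + sum(ri * math.exp(-lam * yi - theta * math.exp(-lam * yi))
                  / (1.0 - math.exp(-theta * math.exp(-lam * yi))) for yi, ri in zip(y, r)))

def score_lam(theta, lam):
    return (m / lam - sum(y) + theta * sum(yi * math.exp(-lam * yi) for yi in y)
            - theta * sum(ri * yi * math.exp(-lam * yi - theta * math.exp(-lam * yi))
                          / (1.0 - math.exp(-theta * math.exp(-lam * yi))) for yi, ri in zip(y, r)))

theta0, lam0, h = 1.4, 0.9, 1e-6
num_t = (loglik(theta0 + h, lam0) - loglik(theta0 - h, lam0)) / (2 * h)
num_l = (loglik(theta0, lam0 + h) - loglik(theta0, lam0 - h)) / (2 * h)
assert abs(num_t - score_theta(theta0, lam0)) < 1e-4
assert abs(num_l - score_lam(theta0, lam0)) < 1e-4
print("analytic scores match numerical derivatives")
```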
Let θ̂ and λ̂ denote the ML estimators of θ and λ, respectively. Then, under some regularity conditions, which are fulfilled for parameters in the interior of the parameter space (see [4]), the asymptotic joint distribution of θ̂ and λ̂ is bivariate normal; namely, we have

√m (θ̂ − θ, λ̂ − λ) →D N2(0_2, I^{−1}(θ, λ)),

where →D denotes convergence in distribution, 0_2 is the 2 × 1 vector with both elements equal to zero, and I^{−1}(θ, λ) is the inverse of the Fisher information matrix I(θ, λ), defined as

I(θ, λ) = − [ E(∂²ln(L1(θ,λ;Y,R))/∂θ²)    E(∂²ln(L1(θ,λ;Y,R))/∂θ∂λ)
              E(∂²ln(L1(θ,λ;Y,R))/∂λ∂θ)   E(∂²ln(L1(θ,λ;Y,R))/∂λ²) ].

In practical situations, we may use the observed information matrix Î(θ̂, λ̂), given by

Î(θ̂, λ̂) = − [ ∂²ln(L1(θ,λ;Y,R))/∂θ²    ∂²ln(L1(θ,λ;Y,R))/∂θ∂λ
               ∂²ln(L1(θ,λ;Y,R))/∂λ∂θ   ∂²ln(L1(θ,λ;Y,R))/∂λ² ] |_{(θ,λ)=(θ̂,λ̂)},

whose inverse supplies the estimated variances and covariances:

Î^{−1}(θ̂, λ̂) = [ V̂ar(θ̂)      Ĉov(θ̂, λ̂)
                  Ĉov(λ̂, θ̂)   V̂ar(λ̂) ].
We do not report the elements of Î(θ̂, λ̂) for the sake of brevity. Now, an asymptotic 100(1−γ)% equi-tailed two-sided CI for θ is given by

θ̂ ± z_{γ/2} √V̂ar(θ̂), (7)

where z_{γ/2} is the upper γ/2 quantile of the standard normal distribution.

However, the lower bound of the confidence interval (7) is not guaranteed to be non-negative, so we propose a modified asymptotic 100(1−γ)% CI for θ as follows:

(max{0, θ̂ − z_{γ/2} √V̂ar(θ̂)}, θ̂ + z_{γ/2} √V̂ar(θ̂)).

Similarly, a modified asymptotic 100(1−γ)% CI for λ is

(max{0, λ̂ − z_{γ/2} √V̂ar(λ̂)}, λ̂ + z_{γ/2} √V̂ar(λ̂)).
2.4 Bayesian estimation of θ and λ
In the Bayesian approach, it is assumed that the parameters have a joint prior density, whose form and/or hyperparameters (the parameters of the joint prior density) can be determined with the help of prior information about the parameters. A gamma prior is popular for a positive parameter, as its mean and variance have closed forms. Therefore, we consider independent gamma priors for $\theta$ and $\lambda$ as follows:
$$\pi(\theta) = \frac{b_1^{a_1}\theta^{a_1-1}e^{-b_1\theta}}{\Gamma(a_1)} \quad\text{and}\quad \pi(\lambda) = \frac{b_2^{a_2}\lambda^{a_2-1}e^{-b_2\lambda}}{\Gamma(a_2)},$$
where $a_1, b_1, a_2$ and $b_2$ are positive hyperparameters. Therefore, from (5), given $y$ and $r$, the joint posterior pdf of $\theta$ and $\lambda$ is given by
$$\pi(\theta,\lambda\,|\,y,r) = \frac{\theta^{m+a_1-1}\lambda^{m+a_2-1}\exp\big(-\lambda\big[b_2+\sum_{i=1}^m y_i\big]-\theta\big[b_1+\sum_{i=1}^m e^{-\lambda y_i}\big]\big)}{K_0(1-e^{-\theta})^n}\times\prod_{i=1}^m\big(1-e^{-\theta e^{-\lambda y_i}}\big)^{r_i}, \qquad (8)$$
where
$$K_0 = \int_0^\infty\!\!\int_0^\infty \frac{\theta^{m+a_1-1}\lambda^{m+a_2-1}\exp\big(-\lambda\big[b_2+\sum_{i=1}^m y_i\big]-\theta\big[b_1+\sum_{i=1}^m e^{-\lambda y_i}\big]\big)}{(1-e^{-\theta})^n}\prod_{i=1}^m\big(1-e^{-\theta e^{-\lambda y_i}}\big)^{r_i}\,d\theta\,d\lambda.$$
Under the squared error loss (SEL) function, the Bayes point estimates of $\theta$ and $\lambda$ are the means of $\theta$ and $\lambda$ w.r.t. the joint posterior pdf (8), respectively, and they are given by
$$\tilde\theta = \int_0^\infty\!\!\int_0^\infty \frac{\theta^{m+a_1}\lambda^{m+a_2-1}\exp\big(-\lambda\big[b_2+\sum_{i=1}^m y_i\big]-\theta\big[b_1+\sum_{i=1}^m e^{-\lambda y_i}\big]\big)}{K_0(1-e^{-\theta})^n}\prod_{i=1}^m\big(1-e^{-\theta e^{-\lambda y_i}}\big)^{r_i}\,d\theta\,d\lambda, \qquad (9)$$
and
$$\tilde\lambda = \int_0^\infty\!\!\int_0^\infty \frac{\theta^{m+a_1-1}\lambda^{m+a_2}\exp\big(-\lambda\big[b_2+\sum_{i=1}^m y_i\big]-\theta\big[b_1+\sum_{i=1}^m e^{-\lambda y_i}\big]\big)}{K_0(1-e^{-\theta})^n}\prod_{i=1}^m\big(1-e^{-\theta e^{-\lambda y_i}}\big)^{r_i}\,d\theta\,d\lambda, \qquad (10)$$
respectively.
The integrals (9) and (10) do not seem to have explicit forms; therefore, we propose the importance sampling technique to approximate them. The joint posterior density (8) can be re-expressed as
$$\pi(\theta,\lambda\,|\,y,r) = C_0\, g_1(\lambda\,|\,y)\, g_2(\theta\,|\,\lambda,y)\, h(\theta,\lambda;y,r),$$
where $g_1(\lambda\,|\,y)$ is the density of the gamma distribution with shape parameter $m+a_2$ and rate parameter $b_2+\sum_{i=1}^m y_i$, $g_2(\theta\,|\,\lambda,y)$ is the density of the gamma distribution with shape parameter $m+a_1$ and rate parameter $b_1+\sum_{i=1}^m e^{-\lambda y_i}$,
$$h(\theta,\lambda;y,r) = \frac{\prod_{i=1}^m\big(1-e^{-\theta e^{-\lambda y_i}}\big)^{r_i}}{\big(b_1+\sum_{i=1}^m e^{-\lambda y_i}\big)^{m+a_1}(1-e^{-\theta})^n},$$
and
$$C_0 = \frac{\prod_{i=1}^2 \Gamma(m+a_i)}{K_0\big(b_2+\sum_{i=1}^m y_i\big)^{m+a_2}}.$$
Now, we consider the following algorithm to calculate the approximated Bayes (AB) estimates of $\theta$ and $\lambda$.

Algorithm 1:

• Step 1: Generate $\lambda_1$ from $g_1(\lambda\,|\,y)$ and, given $\lambda_1$, generate $\theta_1$ from $g_2(\theta\,|\,\lambda_1,y)$.

• Step 2: Repeat Step 1 $N$ times to obtain $(\theta_1,\lambda_1),\dots,(\theta_N,\lambda_N)$, where $N$ is a large number.

• Step 3: The AB estimates of $\theta$ and $\lambda$ are given by $\theta^* = \sum_{i=1}^N \theta_i w_i$ and $\lambda^* = \sum_{i=1}^N \lambda_i w_i$, respectively, where $w_i = \dfrac{h(\theta_i,\lambda_i;y,r)}{\sum_{j=1}^N h(\theta_j,\lambda_j;y,r)}$.
Following [3], a $100(1-\gamma)\%$ Chen-Shao shortest credible interval (CSS CI) for $\theta$ can be found by applying the following algorithm.
Algorithm 2:
• Step 1: Sort the sample $\{\theta_j;\ j = 1,\dots,N\}$ as $\theta_{(1)} \le \theta_{(2)} \le \dots \le \theta_{(N)}$.

• Step 2: Obtain the $100(1-\gamma)\%$ credible intervals for $\theta$ as follows:
$$C_j(N) = \Big(\hat\theta^{(j/N)},\ \hat\theta^{(\{j+[(1-\gamma)N]\}/N)}\Big), \qquad j = 1,\dots,N-[(1-\gamma)N],$$
where $[x]$ is the integer part of $x$ and $\hat\theta^{(\gamma)} = \theta_{(i)}$ if $\sum_{j=1}^{i-1} w_j < \gamma \le \sum_{j=1}^{i} w_j$.
• Step 3: The $100(1-\gamma)\%$ CSS CI for $\theta$ is given by the shortest interval among all the $C_j(N)$'s.
Similarly, we can derive a 100(1− γ)% CSS CI for λ .
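A minimal sketch of the Chen-Shao search in Algorithm 2, using an equally weighted illustrative sample for simplicity (with importance sampling, the $w_i$ from Algorithm 1 would be used instead):

```python
import random

random.seed(2)

# Illustrative weighted posterior sample; equal weights for simplicity.
theta_s = [random.gammavariate(3.0, 1.0) for _ in range(2000)]
w = [1.0 / len(theta_s)] * len(theta_s)

def css_ci(sample, weights, gamma=0.05):
    """Chen-Shao shortest 100(1-gamma)% credible interval."""
    pairs = sorted(zip(sample, weights))
    vals = [v for v, _ in pairs]
    cum, c = [], 0.0
    for _, wi in pairs:
        c += wi
        cum.append(c)

    def wq(q):
        # theta^(q): first order statistic whose cumulative weight reaches q
        for v, cw in zip(vals, cum):
            if cw >= q:
                return v
        return vals[-1]

    N = len(vals)
    k = int((1.0 - gamma) * N)
    best = None
    for j in range(1, N - k + 1):            # scan all candidate intervals C_j(N)
        lo_j, hi_j = wq(j / N), wq((j + k) / N)
        if best is None or hi_j - lo_j < best[1] - best[0]:
            best = (lo_j, hi_j)              # keep the shortest one
    return best

lo, hi = css_ci(theta_s, w)
coverage = sum(wi for v, wi in zip(theta_s, w) if lo <= v <= hi)
```

The linear scan in `wq` is the direct transcription of the weighted-quantile rule in Step 2; for large $N$ one would precompute the cumulative weights once and use binary search.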
3 Simulation Study
In this section, we provide a simulation study to see how the joint distribution of the removals affects the numerical results. We take $\theta = 2$, $\lambda = 1$ and $(m,n) = (15,20),(20,30)$ in this simulation. In addition, we consider two cases for the joint prior distribution of $\theta$ and $\lambda$, namely Case I: $a_1 = b_1 = a_2 = b_2 = 0.01$ and Case II: $a_1 = 1$, $b_1 = 0.5$, $a_2 = b_2 = 0.25$. Case I is rather non-informative with $Var(\theta) = Var(\lambda) = 100$, while Case II is informative with $E(\theta) = 2$, $E(\lambda) = 1$ and $Var(\theta) = Var(\lambda) = 4$. We also take the number of iterations of the simulation to be $M = 3000$. In each iteration, we derive the ML and Bayes point estimates as well as the 95% asymptotic (Asymp. for short) and CSS CIs for $\theta$ and $\lambda$ based on 4 removal patterns, i.e. removals that follow the discrete uniform distribution and removals that follow the binomial (Bin for short) distribution with $p = 0.2$, $0.5$ and $0.8$. Let $\theta^*$ be an estimator of $\theta$ and $\theta^*_i$ be the corresponding estimate calculated in the $i$-th iteration of the simulation. Then the estimated mean squared error (EMSE) and estimated bias (bias for short) of $\theta^*$ are given by
$$EMSE(\theta^*) = \frac{1}{M}\sum_{i=1}^{M}\big(\theta^*_i-\theta\big)^2 \quad\text{and}\quad bias(\theta^*) = \frac{1}{M}\sum_{i=1}^{M}\big(\theta^*_i-\theta\big),$$
respectively. Similarly, we can define the EMSE and bias of an estimator of $\lambda$. We computed the EMSEs and biases of the point estimators of $\theta$ and $\lambda$, and the results are given in Table 1. In addition, we calculated the average widths (AWs) and coverage probabilities (CPs) of the 95% interval estimators of $\theta$ and $\lambda$, and the results are summarized in Table 2.

From Table 1, we see that Pattern “Bin ($p = 0.5$)” possesses the smallest EMSEs in most cases (67%), while Pattern “Bin ($p = 0.2$)” has the largest EMSEs in almost all cases (92%). In addition, the largest biases belong to Pattern “Bin ($p = 0.2$)” in half of the cases. Moreover, from Table 2, we observe that Pattern “Bin ($p = 0.8$)” possesses the smallest AWs in most cases (67%), while Pattern “Bin ($p = 0.2$)” has the largest AWs in most cases (75%).

Pattern “Uniform” worked better than Pattern “Bin ($p = 0.2$)” in the sense of EMSE in all cases and in the sense of AW in 75% of the cases. However, in most cases, it did not work better than Patterns “Bin ($p = 0.5$)” and “Bin ($p = 0.8$)” in the sense of EMSE and AW. All the computations of this paper were done with the help of the statistical software R [6].
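The EMSE and bias formulas above can be illustrated with synthetic estimates; the Gaussian estimator below, with a built-in bias of 0.3, is a hypothetical stand-in for the paper's simulated estimators:

```python
import random

random.seed(3)

# Hypothetical estimator: true theta = 2, bias 0.3, standard deviation 0.8.
theta = 2.0
M = 3000
estimates = [theta + random.gauss(0.3, 0.8) for _ in range(M)]

emse = sum((t - theta) ** 2 for t in estimates) / M   # estimated MSE
bias = sum(t - theta for t in estimates) / M          # estimated bias
```

With $M = 3000$ replications, the EMSE settles near $\mathrm{bias}^2 + \mathrm{sd}^2 = 0.3^2 + 0.8^2 = 0.73$, which illustrates the usual bias-variance split of the MSE.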
Table 1: The EMSEs (the first lines) and the biases (the second lines) of the point estimators of θ and λ .
θ λ
Bayes Bayes
n m Pattern ML Case I Case II ML Case I Case II
20 15 Uniform 2.8628 1.0790 0.6086 0.1307 0.0722 0.0639
0.5371 0.1864 0.1211 0.1166 0.0265 0.0310
Bin (p = 0.2) 3.1771 1.1577 0.6926 0.1485 0.0815 0.0688
0.5619 0.1816 0.1044 0.1271 0.0403 0.0365
Bin (p = 0.5) 2.8353 0.9036 0.5917 0.1284 0.0679 0.0622
0.5348 0.1541 0.1113 0.1174 0.0239 0.0306
Bin (p = 0.8) 2.9240 1.0077 0.5889 0.1304 0.0720 0.0638
0.5436 0.1937 0.1317 0.1183 0.0263 0.0314
30 20 Uniform 1.7757 0.6374 0.4824 0.0891 0.0515 0.0476
0.3573 0.1353 0.1189 0.0849 0.0141 0.0182
Bin (p = 0.2) 1.8521 0.6958 0.5381 0.0941 0.0543 0.0491
0.3631 0.1083 0.0924 0.0856 0.0177 0.0214
Bin (p = 0.5) 1.8854 0.6924 0.4487 0.0868 0.0503 0.0436
0.3727 0.1487 0.1118 0.0872 0.0171 0.0162
Bin (p = 0.8) 1.8346 0.6649 0.4855 0.0864 0.0514 0.0460
0.3641 0.1650 0.1264 0.0829 0.0123 0.0146
Table 2: The AWs (the first lines) and CPs (the second lines) of interval estimators of θ and λ .
θ λ
CSS CI CSS CI
n m Pattern Asymp. Case I Case II Asymp. Case I Case II
20 15 Uniform 5.7014 3.4543 3.2201 1.1170 0.8685 0.8680
0.9990 0.9653 0.9817 0.9643 0.9133 0.9327
Bin (p = 0.2) 5.9453 3.5209 3.2554 1.0377 0.9379 0.9279
0.9993 0.9570 0.9793 0.9573 0.9330 0.9470
Bin (p = 0.5) 5.6248 3.4416 3.1956 0.8334 0.8764 0.8705
0.9997 0.9687 0.9820 0.9630 0.9250 0.9320
Bin (p = 0.8) 4.9897 3.4833 3.2088 0.6907 0.8558 0.8499
0.9997 0.9670 0.9810 0.9607 0.9097 0.9300
30 20 Uniform 6.8310 2.8719 2.7340 2.4599 0.7117 0.7103
0.9853 0.9620 0.9777 0.9640 0.9030 0.9150
Bin (p = 0.2) 5.1506 2.9222 2.7914 2.0703 0.7877 0.7819
0.9800 0.9550 0.9723 0.9633 0.9173 0.9360
Bin (p = 0.5) 5.4461 2.8628 2.7301 2.2393 0.7130 0.7077
0.9813 0.9567 0.9777 0.9670 0.9123 0.9203
Bin (p = 0.8) 4.6205 2.8653 2.7331 1.9539 0.6932 0.6889
0.9840 0.9657 0.9767 0.9683 0.8883 0.9063
Acknowledgement
We would like to thank Mr. Ali Saadati Nik for his help.
References
[1] Balakrishnan, N. and Aggarwala, R. (2000), Progressive Censoring: The-
ory, Methods, and Applications, Springer Science+Business Media, LCC.
[2] Cancho, V.G., Louzada-Neto, F. and Barriga, G.D.C. (2011), The Poisson-exponential lifetime distribution, Computational Statistics and Data Anal-
ysis, 55(1), 677-686.
[3] Chen, M.H. and Shao, Q.M. (1999), Monte Carlo estimation of Bayesiancredible and HPD intervals, Journal of Computational and Graphical
Statistics, 8(1), 69-92.
[4] Cox, D.R. and Hinkley, D.V. (1974), Theoretical Statistics, Chapman andHall/CRC, Boca Raton, Florida.
[5] Kumar, M., Singh, S.K. and Singh, U. (2016), Reliability estimation forPoisson-exponential model under progressive type-II censoring data withbinomial removal data, Statistica, 76(1), 3-26.
[6] R Core Team (2019), R: A language and environment for statistical com-
puting. R Foundation for Statistical Computing, Vienna, Austria.
[7] Sharafi, M. (2019), Inference of the two-parameter Lindley distri-bution based on progressive type II censored data with random re-movals, Communications in Statistics-Simulation and Computation, DOI:10.1080/03610918.2019.1691226.
[8] Singh, S.K., Singh, U. and Kumar, M. (2016), Bayesian estimation forPoisson-exponential model under progressive type-II censoring data withbinomial removal and its application to ovarian cancer data, Communica-
tions in Statistics-Simulation and Computation, 45(9), 3457-3475.
[9] Tse, S.K., Yang, C. and Yuen, H.K. (2000), Statistical analysis of Weibulldistributed lifetime data under Type II progressive censoring with binomialremovals, Journal of Applied Statistics, 27(8), 1033-1043.
[10] Yuen, H.K. and Tse, S.K. (1996), Parameters estimation for Weibulldistributed lifetimes under progressive censoring with random removals,Journal of Statistical Computation and Simulation, 55(1-2), 57-71.
Reliability Analysis of Phased Mission Systems with Ternary Components
Bidarmaghz, H.R.1, and Zarezadeh, S.1
1 Department of Statistics, Shiraz University, Shiraz 71454, Iran
Abstract: In this paper, we consider phased mission systems consisting of independent and identically distributed three-state components. A model is suggested to obtain the reliability of such a system at any time of the mission. To this end, a new variant of the survival signature is introduced, which is free of the random failure mechanism of the components. An example is also given to illustrate the results.
Keywords: Reliability, Survival Signature, Phased Mission System, TernaryComponent.
1 Introduction
A phased mission system (PMS) is designed to complete a mission by performing several consecutive tasks. The time elapsed in each task is considered as a phase. A PMS accomplishes its mission when the system operates in each phase. In other words, the failure of the system in any phase causes the failure of the mission. PMSs appear in many practical applications such as aerospace, nuclear power, and airborne weapon systems. A typical example of a PMS is the monitoring system in a satellite-launching mission with three phases: launch, separation, and orbiting. Another example is the flight of an aircraft, which includes takeoff, cruise, and landing phases. It should be noted that a PMS has different structures in different phases because the
1Bidarmaghz, H.R.: [email protected]
Bidarmaghz, H.R., and Zarezadeh, S. 44
system performs a specific task in each phase and the components are under certain pressure and conditions in each phase [11]. Many efforts have been made to evaluate the reliability of PMSs. There are two classes of approaches to the reliability analysis of PMSs: combinatorial methods and state space-oriented approaches. Further, in some cases, the two classes of approaches are combined to have the advantages of both; see, e.g., [5], [7]. The state space-oriented models usually use Markov chains or Petri nets to represent system behavior. Although these methods are useful, particularly for modeling complex dependencies among system components, the cardinality of the state space becomes exponentially large when the number of components increases; see [3], [1]. The combinatorial methods reduce the computational complexity by using Boolean algebra and various forms of decision diagrams. Recently, the Binary Decision Diagram (BDD) class of combinatorial methods has been extensively used in the reliability analysis of PMSs. The BDD method was first presented for the reliability assessment of PMSs by Zang et al. [13]. Tang et al. [9] gave a new BDD-based algorithm for the reliability analysis of PMSs with multiple failure mode components. The efficiency of Tang's method was improved using a heuristic selection strategy and by reducing the BDD size by Mo [6] and Reed et al. [8], respectively. For other research works related to the reliability evaluation of PMSs based on BDD, we refer to [12], [4] and [10]. The BDD method is a very efficient combinatorial method, but without it, analyzing large systems would be computationally difficult. Huang et al. [2] proposed a combinatorial analytical approach providing a new survival signature methodology for reliability analysis of PMSs. The method presented by Huang et al. [2] has computational complexity similar to that of BDD methods; see [2].
In reliability theory, it is usually assumed that all components of a system can only be in one of two possible states: either working or failed. But in reality, engineering systems may contain components that are in the up state while working only partially. Then, it is reasonable to consider three
states for such components: the up state (full functionality), the mid state (partial performance), and the down state (complete failure). Systems with ternary components are useful to model various real-life situations. Natural objects with ternary components are, for example, communication networks with three-level performance of their edges or nodes. Motivated by this, in this paper, we propose a reliability model for PMSs with three-state components. To this end, a new variant of the survival signature is introduced. An example is also given to illustrate the results.
2 Main results
Consider a PMS consisting of $M \ge 2$ phases with $n$ independent components. Let phase $i$ begin at time $\tau_i$ and end at time $\tau_{i+1}$, where $i = 1,2,\dots,M$, $\tau_1 = 0$ and $\tau_{M+1}$ is the time of accomplishing the mission. We assume that the lifetimes of components in the same phase are statistically independent (or, even, exchangeable). Assume that each component can be in three states: up, mid, and down. In other words, the component is in the up, mid, or down state if it has perfect functioning, partial performance, or complete failure, respectively. It is also assumed that the components are non-repairable during the mission. This means that if a component is in a specified state at a given time and phase, it remains in the same state or becomes worse until the end of the mission. The state of component $j$ in phase $i$, $i = 1,\dots,M$, $j = 1,\dots,n$, is denoted by a ternary variable $X_{ij}$, where $X_{ij} = 2$ if component $j$ is in the up state for all of phase $i$, $X_{ij} = 1$ if component $j$ is in the mid state from before the end of phase $i$ to the end of phase $i$, and $X_{ij} = 0$ if component $j$ is in the down state before the end of phase $i$. So, we can suppose that $X_i = (X_{i1},\dots,X_{in})$ is the state vector of the components in phase $i$ and $X^{(m)} = (X_1,\dots,X_m)$, $m = 1,2,\dots,M$, is the state vector of the components in the first $m$ phases. Further, the state of the system in phase $i$ is indicated by the structure function of the system in that phase: $\phi_i(X_i) = 1(0)$ if the system is in the up (down) state until the end of phase $i$. We also denote $\phi^{(m)}(X^{(m)}) = 1(0)$ if the system is in the up (down) state in the first
$m$ phases. As described, if the system has worked correctly in each phase, then it is said that the system has accomplished the full mission. Hence the probability that a PMS operates successfully in all of its phases is written as
$$P\big[\phi^{(M)}(X^{(M)}) = 1\big] = P\Big[\bigcap_{i=1}^{M}\big(\phi_i(X_i)=1\big)\Big] = \prod_{i=1}^{M} P\Big[\phi_i(X_i)=1 \,\Big|\, \bigcap_{i_0=1}^{i-1}\big(\phi_{i_0}(X_{i_0})=1\big)\Big]. \qquad (1)$$
Let $A_i$ and $B_i$ be the sets of components that are in the up state and the mid state at the beginning of phase $i$, $i = 1,\dots,M$, respectively. That is,
$$A_i = \{j;\ X_{ij}(\tau_i) = 2,\ j = 1,\dots,n\}, \qquad B_i = \{j;\ X_{ij}(\tau_i) = 1,\ j = 1,\dots,n\},$$
where $X_{ij}(\tau_i)$ is the state of component $j$ at the start time $\tau_i$ of phase $i$. Define $\Phi[(\ell_1,r^*_1,r_1),\dots,(\ell_M,r^*_M,r_M)]$ as the probability that the PMS completes the mission provided that, in phase $i$, exactly $\ell_i$ components of $A_i$ stay in the up state, $r^*_i$ components of $A_i$ enter the mid state and $r_i$ components of $B_i$ stay in the mid state. So $r_1 = 0$, because all components are in the up state at time $\tau_1 = 0$; in other words, $B_1$ is an empty set and $|A_1| = n$, where $|\cdot|$ denotes the cardinality of a set.
With the above assumptions, phase $i$ starts with $\ell_{i-1}$ components in the up state, $r^*_{i-1}+r_{i-1}$ components in the mid state and the remaining components in the down state ($\ell_0 = n$, $r^*_0 = r_0 = 0$). In other words,
$$n_i := |A_i| = \ell_{i-1}, \qquad |B_i| = r^*_{i-1}+r_{i-1}.$$
So there are $\binom{\ell_{i-1}}{\ell_i,\,r^*_i}\binom{r^*_{i-1}+r_{i-1}}{r_i}$ state vectors in which exactly $\ell_i$ components of $A_i$ are in the up state, $r^*_i$ components of $A_i$ are in the mid state, $r_i$ components of $B_i$ are in the mid state and the remaining components are in the down state, where $\binom{n}{x,y} = n!/(x!\,y!\,(n-x-y)!)$ and $\binom{n}{x} = n!/(x!\,(n-x)!)$. As seen, phase $i$ depends on phase $i-1$ via $\ell_{i-1}$ and $r_{i-1}+r^*_{i-1}$. Since the lifetimes of components in the same phase are independent and exchangeable, the survival
signature is achieved as
$$\Phi[(\ell_1,r^*_1,r_1),\dots,(\ell_M,r^*_M,r_M)] = \Bigg[\prod_{i=1}^{M}\binom{n_i}{\ell_i,\,r^*_i}\binom{r^*_{i-1}+r_{i-1}}{r_i}\Bigg]^{-1} \times \sum_{x^{(M)}\in S^{(M)}} \phi^{(M)}(x^{(M)}), \qquad (2)$$
where $S^{(M)}$ denotes the set of all possible state vectors for the system up to the completion of the mission in which, in phase $i$, $i = 1,\dots,M$, $\ell_i$ components of $A_i$ are in the up state, $r^*_i$ components of $A_i$ are in the mid state and $r_i$ components of $B_i$ are in the mid state.
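The product of counting coefficients that normalizes equation (2) can be sketched as follows for a small three-component, three-phase configuration; the test values below correspond to one row of the worked example later in the paper:

```python
from math import comb, factorial

def multinom(n, x, y):
    """n!/(x! y! (n-x-y)!): pick x up-stayers and y mid-enterers out of n."""
    return factorial(n) // (factorial(x) * factorial(y) * factorial(n - x - y))

def n_state_vectors(n, ell, rstar, r):
    """Product of the counting coefficients in equation (2),
    starting from ell_0 = n, r*_0 = r_0 = 0."""
    total = 1
    ell_prev, rs_prev, r_prev = n, 0, 0
    for li, rsi, ri in zip(ell, rstar, r):
        total *= multinom(ell_prev, li, rsi) * comb(rs_prev + r_prev, ri)
        ell_prev, rs_prev, r_prev = li, rsi, ri
    return total
```

Dividing the number of state vectors for which $\phi^{(M)} = 1$ by this total gives the corresponding survival signature entry.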
In some cases, we are interested in the reliability function of the system at any time point of the full mission. For this, we need to define the survival signature for the PMS up to and including a specific phase, say $m$ ($m = 1,2,\dots,M$), which contains the given time point. To this end, the survival signature $\Phi_m[(\ell_1,r^*_1,r_1),\dots,(\ell_m,r^*_m,r_m)]$ can be defined as the probability that the system completes the first $m$ missions successfully provided that, in phase $i$ ($i = 1,\dots,m$), $\ell_i$ components of $A_i$ are in the up state, $r^*_i$ components of $A_i$ are in the mid state and $r_i$ components of $B_i$ are in the mid state. This survival signature can be obtained by substituting $m$ for $M$ in equation (2).
Suppose that $\gamma : [0,\infty)\to\{1,\dots,M\}$ is a mapping which gives the phase that the system is in at time $t$. Then we have
$$\gamma(t) = \begin{cases} 1, & \tau_1 \le t < \tau_2, \\ 2, & \tau_2 \le t < \tau_3, \\ \ \vdots & \\ M, & \tau_M \le t \le \tau_{M+1}. \end{cases}$$
The following theorem gives the reliability function of the PMS at time $t$.

Theorem 2.1. Consider a PMS with the same three-state components in each phase. Let the components have exchangeable lifetimes and let $T_1$ and $T_2$ be,
respectively, the entrance times of the component into mid and down states.
Assume that $(T_1,T_2)$ has joint DF $H$ with marginal DFs $G_1$ and $G_2$, respectively. Then the reliability of the system at time $t$ is given by
$$R(t) = \sum_{\ell_1=0}^{n_1}\sum_{r^*_1=0}^{n_1-\ell_1}\sum_{r_1=0}^{r^*_0+r_0}\cdots\sum_{\ell_{\gamma(t)}=0}^{n_{\gamma(t)}}\sum_{r^*_{\gamma(t)}=0}^{n_{\gamma(t)}-\ell_{\gamma(t)}}\sum_{r_{\gamma(t)}=0}^{r^*_{\gamma(t)-1}+r_{\gamma(t)-1}} \Phi_{\gamma(t)}\big[(\ell_1,r^*_1,r_1),\dots,(\ell_{\gamma(t)},r^*_{\gamma(t)},r_{\gamma(t)})\big]$$
$$\times \prod_{i=1}^{\gamma(t)}\binom{n_i}{\ell_i,\,r^*_i}\big(P^{[2]}_i(t)\big)^{\ell_i}\big(P^{[1]}_i(t)\big)^{r^*_i}\big(P^{[0]}_i(t)\big)^{n_i-\ell_i-r^*_i}\binom{r^*_{i-1}+r_{i-1}}{r_i}\big(Q^{[1]}_i(t)\big)^{r_i}\big(Q^{[0]}_i(t)\big)^{r^*_{i-1}+r_{i-1}-r_i},$$
where
$$P^{[2]}_i(t) = \frac{1-G_1(t_{\min})}{1-G_1(\tau_i)}, \qquad P^{[1]}_i(t) = \frac{G_1(t_{\min})-H(t_{\min},t_{\min})-G_1(\tau_i)+H(\tau_i,t_{\min})}{1-G_1(\tau_i)},$$
$$P^{[0]}_i(t) = \frac{H(t_{\min},t_{\min})-H(\tau_i,t_{\min})}{1-G_1(\tau_i)},$$
$$Q^{[1]}_i(t) = \frac{G_1(\tau_i)-H(\tau_i,t_{\min})}{G_1(\tau_i)-H(\tau_i,\tau_i)}, \qquad Q^{[0]}_i(t) = \frac{H(\tau_i,t_{\min})-H(\tau_i,\tau_i)}{G_1(\tau_i)-H(\tau_i,\tau_i)},$$
in which $t_{\min} = \min\{t,\tau_{i+1}\}$.
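The probabilities in Theorem 2.1 can be evaluated numerically once $G_1$ and $H$ are specified. The sketch below uses a toy joint DF, an assumption for illustration only (here $T_2 = 2T_1$ with $T_1$ exponential, so $H(s,t) = G_1(\min\{s, t/2\})$), and checks that the up-state probabilities $P^{[2]}_i, P^{[1]}_i, P^{[0]}_i$ and the mid-state probabilities $Q^{[1]}_i, Q^{[0]}_i$ each sum to one:

```python
import math

lam = 0.001  # illustrative rate: mean time to leave the up state is 1000 h

def G1(s):
    """Marginal DF of T1 (entrance time into the mid state)."""
    return 1.0 - math.exp(-lam * s) if s > 0 else 0.0

def H(s, t):
    """Joint DF of (T1, T2) under the toy model T2 = 2*T1."""
    return G1(min(s, t / 2.0))

def phase_probs(t, tau_i, tau_next):
    """P_i^[2], P_i^[1], P_i^[0], Q_i^[1], Q_i^[0] from Theorem 2.1."""
    tm = min(t, tau_next)
    d_up = 1.0 - G1(tau_i)
    P2 = (1.0 - G1(tm)) / d_up
    P1 = (G1(tm) - H(tm, tm) - G1(tau_i) + H(tau_i, tm)) / d_up
    P0 = (H(tm, tm) - H(tau_i, tm)) / d_up
    d_mid = G1(tau_i) - H(tau_i, tau_i)
    Q1 = (G1(tau_i) - H(tau_i, tm)) / d_mid
    Q0 = (H(tau_i, tm) - H(tau_i, tau_i)) / d_mid
    return P2, P1, P0, Q1, Q0

P2, P1, P0, Q1, Q0 = phase_probs(t=15.0, tau_i=10.0, tau_next=20.0)
```

Both sums equaling one is exactly the consistency one expects: a component that entered phase $i$ in the up (respectively mid) state must end up in one of its admissible states by $t_{\min}$.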
Example 2.2. Consider a PMS with the structure shown in Figure 1. Suppose that the duration of each phase is 10 hours and the components are i.i.d. in each phase. Let us consider that each component can be in three states: weight 2 (up), weight 1 (mid), weight 0 (down). The system in phase $i$ is defined to be in the up state if the sum of the weights of the working components that contribute to the system connection is at least $w_i$, $i = 1,2,3$. Let $w_1 = 4$, $w_2 = 4$, and $w_3 = 3$.
Figure 1: A PMS with the same components in each phase.
Table 1 gives the non-zero elements of the survival signature of the described PMS. To emphasize the dependency between the phases, the table is grouped into a consecutive sequence of phases. Note that, in all rows of Table 1, the required weights are established for the phases, but in some cases, the connection does not hold. For example, in the last row of Table 1, we have
$$\prod_{i=1}^{3}\binom{n_i}{\ell_i,\,r^*_i}\binom{r^*_{i-1}+r_{i-1}}{r_i} = \binom{3}{1,2}\binom{0}{0}\binom{1}{1,0}\binom{2}{2}\binom{1}{1,0}\binom{2}{1} = 6.$$
These six cases are represented in Table 2, where $X_{i1}$, $X_{i2}$, and $X_{i3}$ denote the states of components $E$, $F$, and $G$ in the $i$-th phase, $i = 1,2,3$, respectively. Note that the system is up in all cases except for cases 4 and 6, in which component $E$ is in the down state.
Table 1: Survival signature of the PMS shown in Figure 1
Phase 1 Phase 1+2 All Phases
ℓ1 r∗1 r1 Φ   ℓ1 r∗1 r1 ℓ2 r∗2 r2 Φ   ℓ1 r∗1 r1 ℓ2 r∗2 r2 ℓ3 r∗3 r3 Φ
3 0 0 1   3 0 0 3 0 0 1   3 0 0 3 0 0 3 0 0 1
3 0 0 3 0 0 2 1 0 1
3 0 0 3 0 0 2 0 0 2/3
3 0 0 3 0 0 1 2 0 1
3 0 0 3 0 0 1 1 0 2/3
3 0 0 3 0 0 0 3 0 1
3 0 0 2 1 0 1   3 0 0 2 1 0 2 0 1 1
3 0 0 2 1 0 2 0 0 2/3
3 0 0 2 1 0 1 1 1 1
3 0 0 2 1 0 1 1 0 2/3
3 0 0 2 1 0 1 0 1 2/3
3 0 0 2 1 0 0 2 1 1
3 0 0 2 0 0 1   3 0 0 2 0 0 2 0 0 2/3
3 0 0 2 0 0 1 1 0 2/3
3 0 0 1 2 0 1   3 0 0 1 2 0 1 0 2 1
3 0 0 1 2 0 1 0 1 2/3
3 0 0 1 2 0 0 1 2 1
2 1 0 1   2 1 0 2 0 1 1   2 1 0 2 0 1 2 0 1 1
2 1 0 2 0 1 2 0 0 2/3
2 1 0 2 0 1 1 1 1 1
2 1 0 2 0 1 1 1 0 2/3
2 1 0 2 0 1 1 0 1 2/3
2 1 0 2 0 1 0 2 1 1
2 1 0 2 0 0 1   2 1 0 2 0 0 2 0 0 2/3
2 1 0 2 0 0 1 1 0 2/3
2 1 0 1 1 1 1   2 1 0 1 1 1 1 0 2 1
2 1 0 1 1 1 1 0 1 2/3
2 1 0 1 1 1 0 1 2 1
1 2 0 1   1 2 0 1 0 2 1   1 2 0 1 0 2 1 0 2 1
1 2 0 1 0 2 0 1 2 1
1 2 0 1 0 2 1 0 1 2/3
Table 2: All cases of the component states for the last row of Table 1
Phase 1 Phase 2 Phase 3
Case X11 X12 X13 X21 X22 X23 X31 X32 X33
1 2 1 1 2 1 1 2 1 0
2 2 1 1 2 1 1 2 0 1
3 1 2 1 1 2 1 1 2 0
4 1 2 1 1 2 1 0 2 1
5 1 1 2 1 1 2 1 0 2
6 1 1 2 1 1 2 0 1 2
Now, we obtain the reliability function of the system at any time during the mission. It is assumed that the times that the components stay in the up state ($T_1$) and in the mid state ($T_2 - T_1$) are identically distributed as exponential with mean 1000 hours. Figure 2 shows the reliability function of the system based on Theorem 2.1. Note that the failure of component $E$ during phase 2 does not necessarily cause the failure of the system at $t = 20$; however, the PMS fails instantaneously upon starting phase 3 at $t = 20^+$. Hence, a jump discontinuity occurs in the reliability function at $t = 20$, as shown in Figure 2.
Figure 3 depicts the reliability function of the system for two cases where the times the components stay in the up and mid states in each phase are exponential with respective parameters $\lambda_1$ and $\lambda_2$. It is seen that decreasing the failure rate of the components in the up state has a greater impact on improving the system's performance.
Figure 2: Reliability of the PMS in Example 2.2 for $\lambda_1 = \lambda_2 = \lambda = 0.001$.
Figure 3: Reliability of the PMS in Example 2.2 for $\lambda_2 = 2\lambda_1 = 0.002$ and $\lambda_1 = 2\lambda_2 = 0.002$, from top to bottom.
References
[1] Chew, SP. and Dunnett, SJ. and Andrews, JD. (2008), Phased mission mod-elling of systems with maintenance-free operating periods using simulatedpetri nets. Rel. Eng. Syst. Saf., 93(7), 980-994.
[2] Huang, X. and Aslett, L. J. M. and Coolen, F. P. A. (2019), Reliability analysis of general phased mission systems with a new survival signature. Rel. Eng. Syst. Saf., 189, 416-422.
[3] Kim, K. and Park, KS. (1994), Phased-mission system reliability under Markov environment. IEEE. Trans. Reliab., 43(2), 301-309.
[4] Levitin, G. and Xing, L. and Amari, S. V. and Dai, Y. S. (2013), Reliabil-ity of nonrepairable phased-mission systems with propagated failures. Rel.
Eng. Syst. Saf., 119, 218-228.
[5] Meshkat, L. and Xing, L. and Donohue, S. and Ou, Y. (2003), An overviewof the phase-modular fault tree approach to phased-mission system analy-sis. In:Proceedings of the international conference on space mission chal-
lenges for information technology, 393-398.
[6] Mo, Y. (2009), Variable ordering to improve BDD analysis of phased-mission systems with multimode failures. IEEE. Trans. Reliab., 58(1), 53-57.
[7] Ou, Y. and Dugan, JB. (2004), Modular solution of dynamic multi-phasesystems. IEEE. Trans. Reliab., 53(4), 499-508.
[8] Reed, S. and Andrews, JD. and Dunnett, SJ. (2011), Improved efficiencyin the analysis of phased mission systems with multiple failure mode com-ponents. IEEE. Trans. Reliab., 60(1), 70-79.
[9] Tang, Z. and Dugan, JB. (2006), BDD-based reliability analysis of phased-mission systems with multimode failures. IEEE. Trans. Reliab., 55(2), 350-360.
[10] Wang, C. and Xing, L. and Levitin, G. (2015), Probabilistic commoncause failures in phased-mission systems. Rel. Eng. Syst. Saf., 144, 530-60.
[11] Xing, L. and Amari, S. V. (2008), Reliability of phased-mission systems,In Handbook of performability engineering. Springer, London, 349-368.
[12] Xing, L. and Levitin, G. (2013), BDD-based reliability evaluation of phased-mission systems with internal/external common-cause failures. Rel. Eng. Syst. Saf., 112, 145-5
[13] Zang, X. and Sun, N. and Trivedi, KS. (1999), A BDD-based algorithm for reliability analysis of phased-mission systems. IEEE. Trans. Reliab., 48(1), 50-60.
An Optimization Design of the $\bar X$ Control Chart Under the Truncated Life Test for the Weibull Distribution
Eizi, A.1, and Sadeghpour Gideh, B.1
1 Department of Statistics, Ferdowsi University of Mashhad, Mashhad, Iran
Abstract: In this article, we present an algorithm for the optimization design of the $\bar X$ control chart under the time truncated life test for the Weibull distribution and obtain the optimal values of the design parameters such that the expected total cost per hour is minimized. Optimal values of these parameters are determined using the standard genetic algorithm from the MATLAB Apps tab. A simulation study is given to demonstrate the performance of the proposed control chart.
Keywords: $\bar X$, Weibull Distribution, Control Chart, Quality Control, In-Control Time.
1 Introduction
Control charts are widely employed to monitor and control manufacturing processes. The major function of a control chart is to identify assignable causes so that the necessary corrective action can be taken before a large quantity of nonconforming products is manufactured. Among various control charts, the $\bar X$ control chart plays a dominant role, as it has the capability of controlling the process mean, which has a vital bearing on productivity. Prajapati and Mahapatra [1] considered the design of $\bar X$ and $R$ charts to monitor the process mean and standard deviation. Montgomery [2] gave a review of the literature on economic designs of various control charts. Usually, it is assumed that the underlying distribution of the quality characteristic is normal. In many situations, we may have reason to doubt the validity of this assumption. Lio and Park [3] designed a control chart for inverse Gaussian percentiles. Chen [4] presented an economic model of the chart operating with generalized control limits for non-normal process data using the Burr distribution. Pascual and Li [5] proposed control charts to monitor the Weibull shape parameter under type II (failure) censoring. Chen [6] presented a Shewhart control scheme to simultaneously monitor the shape parameter and the scale parameter of Weibull data without subgrouping. Derya and Canan [7] designed control charts for the Weibull, gamma, and log-normal distributions. Khan et al. [8] presented a variable control chart under the time truncated life test for the Weibull distribution. They assumed that the distribution of the mean approximately follows a normal distribution according to the central limit theorem. Aslam et al. [9] proposed an attribute control chart under the time truncated life test by assuming that the failure time of a product has the Weibull distribution. Khan et al. [10] studied the design of a new mixed attribute control chart adapted to a truncated life test for the Weibull distribution. Ayyagari et al. [11] developed a statistical-economic design of a control chart and analyzed it under the assumption that the in-control times of the process are random variables that follow a left truncated Weibull distribution.

In this paper, an optimization model of the $\bar X$ control chart is proposed for the Weibull distribution using failure data from a time truncated life test. A simulation study is provided to demonstrate the performance of the proposed control chart. In Section 2, the $\bar X$ control chart is proposed for the Weibull distribution.

1Eizi, A.: [email protected]
In Section 3, the average run length for the proposed control chart at the in-control and out-of-control stages is calculated. In Section 4, an optimization design model is introduced. In Section 5, a numerical example is solved. Also, a sensitivity analysis is performed to study the effects of the shape parameter of the Weibull distribution on the optimal design, and a discussion of
Eizi, A., and Sadeghpour Gideh, B. 56
the results is provided. Finally, in Section 6, brief conclusions are given.
2 Design of the proposed control chart
It is assumed that the lifetime of a product (denoted by the random variable $X$) follows a Weibull distribution with shape parameter $\beta$ and scale parameter $\gamma$. The cumulative distribution function of this distribution is
$$F(x;\gamma,\beta) = \begin{cases} 1-\exp\big(-(x/\gamma)^\beta\big), & x \ge 0, \\ 0, & x < 0. \end{cases} \qquad (1)$$
The average lifetime, $\mu$, based on the Weibull distribution is given by
$$\mu = \frac{\gamma}{\beta}\,\Gamma\Big(\frac{1}{\beta}\Big), \qquad (2)$$
where $\Gamma(\cdot)$ is the gamma function.
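Equation (2) is straightforward to evaluate with the standard gamma function; note that $(\gamma/\beta)\,\Gamma(1/\beta)$ equals the familiar Weibull mean $\gamma\,\Gamma(1+1/\beta)$, since $\Gamma(1+x) = x\,\Gamma(x)$:

```python
import math

def weibull_mean(scale, shape):
    """mu = (gamma/beta) * Gamma(1/beta) from equation (2)."""
    return (scale / shape) * math.gamma(1.0 / shape)
```

For `shape = 1` this reduces to the exponential mean `scale`, a quick sanity check on the formula.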
Let $\mu_0$ and $\gamma_0$ be the average lifetime and the scale parameter, respectively, when the process is in control. We propose a control chart using the failure data from a truncated life test as follows:

Step 1: Take a random sample of size $n$ from the production process and test the items until the specified time $t_0$.

Step 2: Obtain the time to failure of item $i$ (denoted by $X_i$). Set $X_i = t_0$ if item $i$ has not failed by time $t_0$.

Step 3: Calculate the statistics $Y_i = X_i^\beta$ and obtain $\bar Y = \frac{\sum_{i=1}^n Y_i}{n}$. Declare the process in-control if $\bar Y > L$ and out-of-control if $\bar Y < L$, where $L$ is the control limit.
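Steps 1-3 can be sketched as follows; the Weibull parameters, truncation fraction $a$, and the control limit are hypothetical values for illustration only, not a calibrated design:

```python
import math
import random

random.seed(4)

# Illustrative in-control settings: shape beta = 2, scale gamma0 = 1,
# sample size n = 5, truncation time t0 = a * mu0 with a = 0.5.
beta, gamma0, n = 2.0, 1.0, 5
mu0 = (gamma0 / beta) * math.gamma(1.0 / beta)
t0 = 0.5 * mu0

def ybar(sample):
    """Censor each lifetime at t0 (Step 2), transform Y_i = X_i^beta,
    and average (Step 3)."""
    return sum(min(x, t0) ** beta for x in sample) / len(sample)

sample = [random.weibullvariate(gamma0, beta) for _ in range(n)]  # Step 1
stat = ybar(sample)
L_limit = 0.05            # hypothetical control limit
in_control = stat > L_limit
```

Since every censored observation satisfies $\min(X_i, t_0)^\beta \le t_0^\beta$, the statistic $\bar Y$ is bounded above by $t_0^\beta$, which is why the signal rule watches for $\bar Y$ falling *below* $L$.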
3 Average Run Length
The production process is assumed to start in an in-control state, and the specified test time $t_0 = a\mu_0$ is selected as a fraction of the in-control mean $\mu_0$,
where $a$ is a constant. The average run length (ARL) is used as a measure of control chart performance. The ARL is the expected number of observations or samples needed for a control chart to signal. If the process is in control, a signal is called a false alarm. If the process is out of control, a signal by the control chart prompts the quality engineer to adjust the process and bring it back into control. Let $P(\delta)$ be the probability of an out-of-control signal; then the ARL can be computed by
$$ARL = \frac{1}{P(\delta)}. \qquad (3)$$
P(δ ) =
1−∫ nL
0 f0(q)dq ; ϑ = 0, 0 < nL < θ
1−{∫
θ
0 f0(q)dq+∫ nL
θ( f0(q)+ f1(q))dq} ; ϑ = 1, θ < nL < 2θ
1−{∫
θ
0 f0(q)dq+∫ 2θ
θ( f0(q)+ f1(q))dq+
∫ nL2θ
( f0(q)+ f1(q)+ f2(q))dq} ;
ϑ = 2, 2θ < nL < 3θ
.
.
.
(4)
where $c = \gamma^\beta$, $\theta = t_0^\beta$, $t_0 = a\mu_0$ and $k = (1-e^{-c\theta})^{-1}$. Let $P_0 = P(\delta = 1)$ be the probability of an out-of-control signal when the process is actually in control at $\gamma_0$. The average run length for the in-control process, $ARL_0$, can be written as follows:
$$ARL_0 = \frac{1}{P_0}. \qquad (5)$$
Let $P_1 = P(\delta > 1)$ be the probability of an out-of-control signal when the process has shifted due to a new scale parameter $\gamma_1 = \delta\gamma_0$. The average run length $ARL_1$ for the out-of-control process is given as follows:
$$ARL_1 = \frac{1}{P_1}. \qquad (6)$$
4 An optimization design model
Duncan [12] assumed that the process is characterized by an in-control state $\mu_0$ and that a single assignable cause of magnitude $\delta$, which occurs at random, results in a shift in the mean from $\mu_0$ to $\mu_1$. Samples are taken at intervals of $h$ hours and the process is shut down during the search. Each cycle begins with the production process in the in-control state and continues until process monitoring via the control chart results in an out-of-control signal. Following an adjustment in which the process is returned to the in-control state, a new cycle begins.
Figure 1: Expected Cycle Time.
The cycle time is depicted in Figure 1 and includes four periods: 1) the in-control period, 2) the out-of-control period, 3) the time to take a sample, and 4) the time to find the assignable cause. The expected time in the in-control period is 1/λ. The out-of-control period is h × ARL1 − τ. Let g be the time required to sample one item and interpret the results; then gn is the sampling period, and the time to find the assignable cause is D. Therefore the expected cycle time is

E(T) = 1/λ + h × ARL1 − τ + gn + D. (7)
The net income per hour of operation in the in-control state is V0, and the net income per hour of operation in the out-of-control state is V1. Let a1 and a2 be the fixed and variable costs of sampling, respectively; then a1 + a2n is the cost of taking a sample of size n. The cost of finding an assignable cause is a3, and the cost of investigating a false alarm is a′3. The expected number of false alarms is given by α times the expected number of samples taken before the shift, i.e.

α ∑_{j=0}^{∞} ∫_{jh}^{(j+1)h} j λ e^{−λt} dt = α e^{−λh} / (1 − e^{−λh}), (8)
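The closed form on the right-hand side of (8) follows from a geometric series and can be checked numerically. A minimal sketch, with illustrative values assumed for α, λ, and h:

```python
import math

# Numerical check of the closed form in (8): the expected number of
# false alarms per cycle.  alpha, lam, h are illustrative values.
alpha, lam, h = 0.0027, 0.05, 1.0

# Left-hand side: truncate the infinite series; each integral of the
# exponential density over [jh, (j+1)h] has a simple closed form.
series = 0.0
for j in range(2000):
    integral = math.exp(-lam * j * h) - math.exp(-lam * (j + 1) * h)
    series += j * integral
lhs = alpha * series

# Right-hand side: the geometric-series closed form.
rhs = alpha * math.exp(-lam * h) / (1.0 - math.exp(-lam * h))

assert abs(lhs - rhs) < 1e-10
```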
Therefore, the expected cost per cycle is

E(C) = V0 (1/λ) + V1 (h × ARL1 − τ + gn + D) − a3 − a′3 αe^{−λh}/(1 − e^{−λh}) − (a1 + a2n) E(T)/h, (9)
and the expected cost per unit time is

E(A) = E(C)/E(T)
     = [V0 (1/λ) + V1 (h × ARL1 − τ + gn + D) − a3 − a′3 αe^{−λh}/(1 − e^{−λh})] / [1/λ + h × ARL1 − τ + gn + D] − (a1 + a2n)/h. (10)
Let a4 = V0 − V1; then (10) may be rewritten as

E(A) = V0 − E(L), (11)

where

E(L) = (a1 + a2n)/h + [a4 (h × ARL1 − τ + gn + D) + a3 + a′3 αe^{−λh}/(1 − e^{−λh})] / [1/λ + h × ARL1 − τ + gn + D]. (12)
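The loss function (12) translates directly into a small function. The sketch below evaluates it with the cost values of the example in Section 5; the value of τ and the chart quantities ARL1 and α are assumed here for illustration, not taken from the paper's optimal design.

```python
import math

def expected_loss(n, h, arl1, lam, tau, g, D, a1, a2, a3, a3p, a4, alpha):
    """Expected loss per hour E(L) as in equation (12).
    arl1 and alpha would come from the chart design via (4)-(6);
    a3p denotes the false-alarm cost a'_3."""
    false_alarms = alpha * math.exp(-lam * h) / (1.0 - math.exp(-lam * h))
    out_of_control = h * arl1 - tau + g * n + D
    sampling = (a1 + a2 * n) / h
    return sampling + (a4 * out_of_control + a3 + a3p * false_alarms) / (
        1.0 / lam + out_of_control)

# Illustrative evaluation with the cost values of Section 5; tau is
# assumed to be roughly h/2, and arl1, alpha are assumed chart
# quantities, so the result is only indicative.
loss = expected_loss(n=11, h=0.78, arl1=1.14, lam=0.05, tau=0.39,
                     g=0.0167, D=2, a1=1, a2=0.1, a3=25, a3p=50,
                     a4=150, alpha=0.0027)   # roughly 22 per hour
```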
The expression E(L) represents the expected loss per hour incurred by the process. The cost function E(L) can be minimized to find the optimal parameters of the chart subject to a constraint on the ARL:

min E(L)
s.t. ARL0 ≥ ARL0*, n > 0, L > 0, h > 0, n ∈ N. (13)
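The paper solves (13) with a genetic algorithm; as a rough stand-in, a grid search over (n, L, h) illustrates the same constrained minimization. The functions arl0_of, arl1_of and the cost surface below are hypothetical placeholders, not the chart's actual ARL expressions from (4)-(6).

```python
import itertools

# A crude grid-search stand-in for the genetic algorithm used to
# solve (13).  All three functions are *hypothetical* smooth models
# of how the ARLs and the loss respond to the design (n, L, h).

def arl0_of(n, L, h):
    # hypothetical: wider limits and larger samples lengthen ARL0
    return 200.0 * (1.0 + 10.0 * L) * (1.0 + 0.1 * n)

def arl1_of(n, L, h):
    # hypothetical: larger samples detect shifts faster
    return 1.0 + 50.0 * L / n

def loss_of(n, L, h):
    # hypothetical cost surface standing in for E(L) in (12)
    return (1.0 + 0.1 * n) / h + 2.0 * h * arl1_of(n, L, h)

best = None
for n, L, h in itertools.product(range(1, 21),
                                 [0.1 * i for i in range(1, 11)],
                                 [0.1 * i for i in range(1, 21)]):
    if arl0_of(n, L, h) < 370.0:      # constraint ARL0 >= ARL0*
        continue
    cost = loss_of(n, L, h)
    if best is None or cost < best[0]:
        best = (cost, n, L, h)

cost_star, n_star, L_star, h_star = best
```

A real implementation would replace the three placeholder functions with evaluations of (4)-(6) and (12), exactly as the genetic algorithm does.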
5 Example
Here we illustrate our approach using an example taken from Montgomery [13]. In this example we have:
λ = 0.05, a1 = 1, a2 = 0.1, a′3 = 50, a4 = 150, D = 2, a3 = 25, g = 0.0167, ARL0* = 370, δ = 0.1, a = 0.2, β = 0.5, µ0 = 50.
We put the above-mentioned values into (5) to construct the model. Then we applied the genetic algorithm to find the optimal solution. A sensitivity analysis is performed to investigate the effects of the model parameters and of the shape parameter of the Weibull distribution on the optimal values of the design parameters. In Tables 1-4, the optimal parameters n, L, h and the expected cost per hour E(L) are presented for the proposed control chart. From Tables 1-4 and Figure 1, the following conclusions can be drawn:
1. With the other parameters fixed, as a increases from 0.2 to 0.4, E(L) increases but ARL1 decreases. This seems reasonable because, as a increases, the number of failures observed increases.
2. With the other parameters fixed, when µ0 increases from 50 to 100, E(L) increases but ARL1 decreases.
3. With the other parameters fixed, as β increases from 0.5 to 1, E(L) decreases but ARL1 increases.
Table 1: The optimal design of the proposed control chart when a = 0.2,β = 0.5 and µ0 = 50
δ n L h ARL1 E(L)
0.1 11 0.41 0.78 1.14 21.60
0.2 15 0.37 0.72 1.39 23.69
0.3 20 0.34 0.66 2.50 26.41
0.4 20 0.35 0.59 3.22 34.41
0.5 20 0.34 0.46 6.66 42.44
0.6 9 0.43 0.15 33.33 56.88
0.7 8 0.37 0.13 40.00 60.93
0.8 7 0.47 0.10 76.92 80.14
0.9 6 0.50 0.10 250.00 98.73
1 3 0.67 0.10 370.37 116.06
Table 2: The optimal design of the proposed control chart when a = 0.2,β = 0.5 and µ0 = 100
δ n L h ARL1 E(L)
0.1 13 0.27 1.06 1.07 23.16
0.2 15 0.26 0.91 1.38 25.51
0.3 16 0.26 0.65 2.18 29.30
0.4 20 0.24 0.60 3.22 34.39
0.5 20 0.24 0.42 6.48 42.43
0.6 18 0.25 0.26 15.38 57.53
0.7 20 0.24 0.22 30.86 70.64
0.8 2 0.57 0.10 75.86 89.02
0.9 6 0.35 0.10 217.39 104.80
1 2 0.57 0.10 370.37 114.02
Table 3: The optimal design of the proposed control chart when a = 0.2, β = 1 and µ0 = 50
δ n L h ARL1 E(L)
0.1 2 0.10 0.53 1.40 20.40
0.2 2 0.10 0.35 2.44 23.12
0.3 2 0.10 0.24 5.00 27.24
0.4 1 0.11 0.16 10.64 32.36
0.5 1 0.12 0.12 19.23 38.32
0.6 1 0.12 0.10 34.72 46.13
0.7 1 0.12 0.10 62.89 57.62
0.8 1 0.12 0.10 111.11 73.48
0.9 1 0.12 0.10 204.08 92.56
1 1 0.12 0.10 370.37 112.00
Table 4: The optimal design of the proposed control chart when a = 0.4,β = 0.5 and µ0 = 50
δ n L h ARL1 E(L)
0.1 14 0.38 1.11 1.05 23.33
0.2 15 0.37 0.71 1.38 23.69
0.3 19 0.35 0.81 1.86 29.07
0.4 20 0.35 0.61 3.22 38.29
0.5 20 0.35 0.42 6.49 42.43
0.6 7 0.47 0.13 32.26 57.67
0.7 19 0.35 0.21 33.33 70.69
0.8 17 0.36 0.15 71.43 91.88
0.9 3 0.67 0.10 200.00 102.47
1 10 0.42 0.10 370.37 130.13
6 Conclusions
In this paper, an optimization model of the X control chart is proposed under the truncated life test for a Weibull distribution. By minimizing the expected cost per unit time, the optimal design parameters, namely the sample size n, the interval h between successive samples, and the control limit L, are derived. The genetic algorithm from the MATLAB 2014a Apps tab is used to solve the optimization problem of the proposed control chart. A numerical example is provided to demonstrate the performance of the proposed control chart. Sensitivity analysis showed that the optimal values of n, L, h, E(L) and the average run length ARL are sensitive to the cost and shape parameters. The proposed chart was also sensitive in detecting shifts in the process. The proposed control chart can be used in real industries where the lifetime of the product follows the Weibull distribution.
References
[1] Prajapati, D. R., & Mahapatra, P. B. (2007), An effective joint X-bar and R chart to monitor the process mean and variance, International Journal of Productivity and Quality Management, 2(4), 459-474.
[2] Montgomery, D. C. (1980), The economic design of control charts: a review and literature survey, Journal of Quality Technology, 12(2), 75-87.
[3] Lio, Y. L., & Park, C. (2010), A bootstrap control chart for inverse Gaussian percentiles, Journal of Statistical Computation and Simulation, 80(3), 287-299.
[4] Chen, H., & Cheng, Y. (2007), Non-normality effects on the economic-statistical design of X charts with Weibull in-control time, European Journal of Operational Research, 176(2), 986-998.
[5] Pascual, F., & Li, S. (2012), Monitoring the Weibull shape parameter bycontrol charts for the sample range of type II censored data. Quality and
Reliability Engineering International, 28(2), 233-246.
[6] Chen, J. T. (2014), A Shewhart-type control scheme to monitor Weibull data without subgrouping, Quality and Reliability Engineering International, 30(8), 1197-1214.
[7] Derya, K., & Canan, H. (2012), Control charts for skewed distributions: Weibull, gamma, and lognormal, Metodoloski zvezki, 9(2), 95-106.
[8] Khan, N., Aslam, M., Khan, M., & Jun, C. H. (2018), A variable control chart under the truncated life test for a Weibull distribution, Technologies, 6(2), doi:10.3390/technologies6020055.
[9] Aslam, M., Arif, O. H., & Jun, C. H. (2017), An attribute control chart for a Weibull distribution under accelerated hybrid censoring, PloS one, 12(3), doi: 10.1080/08982112.2015.1017649.
[10] Khan, N., Aslam, M., Kim, K. J., & Jun, C. H. (2017), A mixed controlchart adapted to the truncated life test based on the Weibull distribution,Operations Research and Decisions, 27, 43-55.
[11] Ayyagari, A., Kraleti, S. R., & Jayanti, L. (2016), Determination of optimal design parameters for control chart with truncated Weibull in-control times, Journal of Production Technology and Management (IJPTM), 7(1), 1-17.
[12] Duncan, A. J. (1956), The economic design of X charts used to maintain current control of a process, Journal of the American Statistical Association, 51(274), 228-242.
[13] Montgomery, D. C. (2007), Introduction to statistical quality control.John Wiley & Sons.
Survival Function of a New Mixed δ -Shock Model
Entezari, M.1, and Roozegar, R.1
1 Department of Statistics, Yazd University, 89175-741, Yazd, Iran
Abstract: Shock models and multi-state systems have both been studied in the reliability literature. Shock models have attracted a great deal of attention because of their important role in engineering systems. In a δ-shock model, the system fails if the interval time between two consecutive shocks is less than a pre-defined threshold δ. In this paper, we define a mixed shock model that combines the δ-shock and extreme shock models for a multi-state system, which suffers shocks that occur randomly and whose occurrence causes a change in the system performance. The system fails when: first, k of the interarrival times between two successive shocks with magnitude bigger than the critical threshold γ fall in [δ1, δ2]; or second, when an interarrival time between two successive shocks is less than δ1. We obtain the survival function of the proposed system and also the survival function of the time spent by the system in a perfect functioning state.
Keywords: δ-Shock Model, Interarrival Times, Survival Function, Multi-State System.
1 Introduction
The shock model is mainly used to describe the failure process of a system in a random working environment. In the literature, various shock models have been defined and studied, among which are four basic shock models. The cumulative shock model was introduced by Gut [6] and Shanthikumar and Sumita [15]. In the extreme shock model, studied by Gut and Husler [8], the system fails when the magnitude of a shock is bigger than a critical threshold γ. The run shock model was proposed by Mallor and Omey [13]. The δ-shock model is a special type of shock model in which the system fails if the interarrival time between two shocks is shorter than a prespecified threshold δ; it was proposed and studied by Li et al. [9, 10], Wang and Zhang [16], Xu and Li [17], Li and Kong [11], Bai and Xiao [1] and Eryilmaz [2, 3]. Eryilmaz and Bayramoglu [5] discussed δ-shock models when the external shock arrival process follows a renewal process with uniformly distributed times between renewals, and Parvardeh and Balakrishnan [14] extended their work on δ-shock models. They supposed that the system fails when the interarrival time between two successive shocks is less than a critical threshold δ, or the magnitude of the shock is larger than another critical threshold γ. Gut [7] obtained a mixed shock model by combining two different shock models. Eryilmaz [4] extended the classical extreme shock model to a model in which, if the magnitude of a shock varies between two critical levels, the system shifts into a lower, partially functioning state but still works with reduced performance. Recently, Poursaeed et al. [12] studied the life distribution properties of a new δ-shock model in which the system fails in three ways. They obtained the survival function of the system's lifetime and of the time spent by the system in a perfectly functioning state.

1Entezari, M.: [email protected]
The rest of the paper is organized as follows. In Section 2, some notation is provided. The survival function of the system is obtained in Section 3. Finally, concluding remarks are given in Section 4.
2 Notations
In this section, some notation is provided for the mixed δ-shock model. The following notation is needed.

N: number of interarrival times between two successive shocks until complete failure of the system
Z: the magnitude of a shock
Xi: interval time between the (i−1)th and ith shocks, i = 1, 2, ...
δj: critical time thresholds for the interarrival times, j = 1, 2
γ: the critical threshold for the shock magnitude
M: number of interarrival times between two successive shocks until reaching the first interarrival time less than δ2 or the first shock with magnitude bigger than γ
T: lifetime of the δ-shock model
S^{(i)}: sum of i.i.d. random variables with CDF F^{(i)}, i = 1, ..., 4
F^{(i)}: common cumulative distribution function (CDF) of the interval times, i = 1, ..., 4
p1 = P(δ1 < X1 < δ2, Z1 > γ)
p2 = P(δ1 < X1 < δ2, Z1 < γ)
p3 = P(δ1 < Xn < δ2, Zn > γ)
p4 = P(Xn < δ1)
p5 = P(X1 > δ2, Z1 < γ)
p6 = P(X1 > δ2, Z1 > γ)
3 Survival function of the model
Suppose a system is influenced by shocks with random magnitudes and random times between two consecutive shocks. The system fails when: first, k of the interarrival times between two consecutive shocks whose magnitudes exceed the critical threshold γ fall in [δ1, δ2]; or second, an interarrival time between two consecutive shocks is less than δ1, in which case the system fails immediately. In order to establish the survival function of the proposed mixed δ-shock model, we characterize the number of interarrival times between two successive shocks until complete failure of the system, N, as
(N = n) ⇐⇒ [k−1 of n (Xi, Zi) are (δ1 < Xi < δ2, Zi > γ) and n−k of n (Xi, Zi) are (δ1 < Xi < δ2, Zi < γ) and δ1 < Xn < δ2, Zn > γ]
∪ [k−1 of n (Xi, Zi) are (δ1 < Xi < δ2, Zi > γ) and n−k of n (Xi, Zi) are (δ1 < Xi < δ2, Zi < γ) and Xn < δ1]
∪ [k−1 of n (Xi, Zi) are (δ1 < Xi < δ2, Zi > γ) and n−k of n (Xi, Zi) are (Xi > δ2, Zi < γ) and Xn < δ1]
∪ [k−1 of n (Xi, Zi) are (δ1 < Xi < δ2, Zi > γ) and n−k of n (Xi, Zi) are (Xi > δ2, Zi < γ) and δ1 < Xn < δ2, Zn > γ]
∪ [k−1 of n (Xi, Zi) are (δ1 < Xi < δ2, Zi > γ) and n−k of n (Xi, Zi) are (Xi > δ2, Zi > γ) and Xn < δ1]
∪ [k−1 of n (Xi, Zi) are (δ1 < Xi < δ2, Zi > γ) and n−k of n (Xi, Zi) are (Xi > δ2, Zi > γ) and δ1 < Xn < δ2, Zn > γ]
∪ [j of n (Xi, Zi) are (δ1 < Xi < δ2, Zi > γ) and n−j−1 of n (Xi, Zi) are (Xi > δ2, Zi > γ) and Xn < δ1]
∪ [j of n (Xi, Zi) are (δ1 < Xi < δ2, Zi < γ) and n−j−1 of n (Xi, Zi) are (Xi > δ2, Zi < γ) and Xn < δ1]
∪ [j of n (Xi, Zi) are (δ1 < Xi < δ2, Zi < γ) and n−j−1 of n (Xi, Zi) are (Xi > δ2, Zi > γ) and Xn < δ1]
∪ [j of n (Xi, Zi) are (δ1 < Xi < δ2, Zi > γ) and n−j−1 of n (Xi, Zi) are (Xi > δ2, Zi < γ) and Xn < δ1],
hence,
P(N = n) = (n−1 choose k−1) p1^{k−1} (p3 + p4) (p2^{n−k} + p5^{n−k} + p6^{n−k}) + ∑_{j=0}^{k−2} (n−1 choose j) p4 (p5^{n−j−1} + p6^{n−j−1}) (p1^{j} + p2^{j}). (1)
So S = ∑_{i=1}^{M} Xi is the time spent by the system in complete functioning. The number M of interarrival times between two successive shocks until reaching the first interarrival time less than δ2 satisfies

(M = m) = (X1 > δ2, Z1 < γ, ..., Xm−1 > δ2, Zm−1 < γ, δ1 < Xm < δ2, Zm > γ).

Therefore,

P(M = m) = p5^{m−1} (p3 + p4). (2)
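The pmfs (1) and (2) can be evaluated directly once p1, ..., p6 are known. A sketch with illustrative probability values (in the model these would come from the joint law of the (Xi, Zi)); the geometric pmf (2) is checked against its closed-form total (p3 + p4)/(1 − p5):

```python
from math import comb

# Direct evaluation of the pmfs (1) and (2).  The values of p1..p6
# used below are illustrative, not derived from a specific
# interarrival-time / shock-magnitude distribution.

def pmf_N(n, k, p1, p2, p3, p4, p5, p6):
    """P(N = n) from equation (1), for n >= k."""
    first = comb(n - 1, k - 1) * p1 ** (k - 1) * (p3 + p4) * (
        p2 ** (n - k) + p5 ** (n - k) + p6 ** (n - k))
    second = sum(comb(n - 1, j) * p4 *
                 (p5 ** (n - j - 1) + p6 ** (n - j - 1)) *
                 (p1 ** j + p2 ** j)
                 for j in range(k - 1))
    return first + second

def pmf_M(m, p3, p4, p5):
    """P(M = m) from equation (2): geometric with 'success' p3 + p4."""
    return p5 ** (m - 1) * (p3 + p4)

# Summing (2) over m reproduces the geometric total (p3+p4)/(1-p5).
p3, p4, p5 = 0.1, 0.05, 0.6
total = sum(pmf_M(m, p3, p4, p5) for m in range(1, 500))
```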
Theorem 3.1. The survival function of the mixed δ-shock model is given as

P(T > t) = [P(X1 < δ1) − P(X1 < t)] I_{(0,δ1]}(t)
+ P(max{δ1, t} < X1 < δ2, Z1 > γ) I_{[δ1,δ2]}(t)
+ ∑_{n=k}^{∞} (n−1 choose k−1) p2^{n−k} p1^{k−1} [ ∫_{δ1}^{δ2} P(S^{(2)}_{n−k} + S^{(1)}_{k} > t) dF(x) + ∫_{0}^{δ1} P(S^{(2)}_{n−k} + S^{(1)}_{k−1} > t − x) dF(x) ]
+ ∑_{n=k}^{∞} (n−1 choose k−1) p5^{n−k} p1^{k−1} [ ∫_{0}^{δ1} P(S^{(3)}_{n−k} + S^{(1)}_{k−1} > t − x) dF(x) + ∫_{δ1}^{δ2} P(S^{(3)}_{n−k} + S^{(1)}_{k} > t) dF(x) ]
+ ∑_{n=k}^{∞} (n−1 choose k−1) p6^{n−k} p1^{k−1} [ ∫_{0}^{δ1} P(S^{(4)}_{n−k} + S^{(1)}_{k−1} > t − x) dF(x) + ∫_{δ1}^{δ2} P(S^{(4)}_{n−k} + S^{(1)}_{k} > t) dF(x) ]
+ ∑_{n=2}^{∞} ∑_{j=0}^{k−2} (n−1 choose j) p6^{n−j−1} p1^{j} ∫_{0}^{δ1} P(S^{(4)}_{n−j−1} + S^{(1)}_{j} > t − x) dF(x)
+ ∑_{n=2}^{∞} ∑_{j=0}^{k−2} (n−1 choose j) p5^{n−j−1} p2^{j} ∫_{0}^{δ1} P(S^{(3)}_{n−j−1} + S^{(2)}_{j} > t − x) dF(x)
+ ∑_{n=2}^{∞} ∑_{j=0}^{k−2} (n−1 choose j) p6^{n−j−1} p2^{j} ∫_{0}^{δ1} P(S^{(4)}_{n−j−1} + S^{(2)}_{j} > t − x) dF(x)
+ ∑_{n=2}^{∞} ∑_{j=0}^{k−2} (n−1 choose j) p5^{n−j−1} p1^{j} ∫_{0}^{δ1} P(S^{(3)}_{n−j−1} + S^{(1)}_{j} > t − x) dF(x), (3)
where S^{(i)}_0 = 0, i = 1, ..., 4, and S^{(i)}_n, i = 1, ..., 4, are sums of n i.i.d. random variables having the following distribution functions:

F^{(1)}(x) = P(X ≤ x | δ1 < X < δ2, Z > γ) = P(δ1 < X < min{x, δ2}, Z > γ) / P(δ1 < X < δ2, Z > γ),
F^{(2)}(x) = P(X ≤ x | δ1 < X < δ2, Z < γ) = P(δ1 < X < min{x, δ2}, Z < γ) / P(δ1 < X < δ2, Z < γ),
F^{(3)}(x) = P(X ≤ x | X > δ2, Z < γ) = P(δ2 < X < x, Z < γ) / P(X > δ2, Z < γ),
F^{(4)}(x) = P(X ≤ x | X > δ2, Z > γ) = P(δ2 < X < x, Z > γ) / P(X > δ2, Z > γ).
Theorem 3.2. The survival function of S = ∑_{i=1}^{M} Xi is

P(S > t) = P(∑_{i=1}^{M} Xi > t) = ∑_{m=1}^{∞} P(∑_{i=1}^{m} Xi > t, M = m)
= [P(X1 < δ2) − P(X1 < t)] I_{[0,δ2)}(t) + P(max{δ2, t} < X1, Z1 < γ)
+ ∑_{m=2}^{∞} p5^{m−1} ∫_{δ1}^{δ2} P(S^{(4)}_{m−1} > t) dF(x) − ∑_{m=2}^{∞} p5^{m} ∫_{δ1}^{δ2} P(S^{(4)}_{m} > t) dF(x).
4 Conclusion
In this paper, we have discussed a mixed δ-shock model with random shocks, under which the system fails when: first, k of the interarrival times between consecutive shocks whose magnitudes exceed the critical threshold γ fall in [δ1, δ2]; or second, the time between two consecutive shocks is less than δ1. Assuming that the shocks occur independently and at random, and that the Xi are independent and identically distributed random variables, we have derived explicit expressions for the survival function of the system and for the time spent by the system in a perfectly functioning state.
References
[1] Bai, J.M. and Xiao, H.M. (2008). A class of new cumulative shock models and its application in insurance risk. Journal of Lanzhou University.
Natural Sciences, 44, 13-26.
[2] Eryilmaz, S. (2012). Generalized δ -shock model via runs, Statistics and
Probability Letters, 82, 326-331.
[3] Eryilmaz, S. (2013). On the lifetime behavior of a discrete time shock model. Computational and Applied Mathematics, 237(1), 38-48.
[4] Eryilmaz, S. (2015). Assessment of a multi-state system under a shockmodel. Applied Mathematics and Computation, 269, 1-8.
[5] Eryilmaz, S. and Bayramoglu, K. (2014). Life behavior of δ-shock models for uniformly distributed interarrival times. Statistical Papers, 55(3), 841-852.
[6] Gut, A. (1990). Cumulative shock models. Advances in Applied Probabil-
ity, 22(2), 504-507.
[7] Gut, A. (2001). Mixed shock models. Bernoulli, 7(3), 541-555.
[8] Gut, A., and Husler, J. (1999). Extreme shock models. Extremes, 2, 293-305.
[9] Li, Z.H., Chan, L.Y. and Yuan, Z.X. (1999). Failure time distribution under a δ-shock model and its application to economic design of system, International Journal of Reliability, Quality and Safety Engineering, 3, 237-247.
[10] Li, Z., Huang, B.S. and Wang, G.J. (1999). Life distribution and its properties of shock models under random shocks, Journal of Lanzhou University, 35, 1-7.
[11] Li, Z.H., Kong, X.B. (2007). Life behavior of δ -shock model. Statistics
and Probability Letters, 77(6), 577-587.
[12] Lorvand, H., Nematollahi, A. and Poursaeed, M.H. (2019). Life distribution properties of a new δ-shock model, Communications in Statistics-
Theory and Methods, doi: 10.1080/03610926.2019.1584316, 1-16.
[13] Mallor, F., Omey, E. (2001). Shocks, runs and random sums. Journal of
Applied Probability, 38(2), 438-448.
[14] Parvardeh, A., and Balakrishnan, N. (2015). On mixed δ -shock models.Statistics Probability Letters, 102, 51-60.
[15] Shanthikumar, J.G. and Sumita, U. (1983). General shock models associated with correlated renewal sequences. Journal of Applied Probability, 20(3), 600-614.
[16] Wang, G.J. and Zhang, Y.L. (2001). δ -shock model and its optimal re-placement policy. Journal of Southeast University, 31, 121-4.
[17] Xu, Z.Y., and Li, Z.H. (2004). Statistical inference on δ -shock modelwith censored data. Chinese Journal of Applied Probability Statistics,
20,147-53.
Influence of a Cold Standby Component on the Performance of a k-out-of-n:F System in the Dynamic Stress-Strength Model Based on
Weibull Process
Ghanbari, S.1, Rezaei Roknabadi, A.H.1, and Salehi, M.2
1 Department of Statistics, Ferdowsi University of Mashhad, Mashhad, Iran
2 Department of Mathematics and Statistics, University of Neyshabur, Neyshabur, Iran
Abstract: In this article, we inspect the effect of adding a cold standby component on the reliability of a k-out-of-n:F system by examining the dynamic stress-strength model. For this purpose, we calculate the reliability of the k-out-of-n:F system in two cases, with and without the cold standby component. In the following, it is assumed that the stress and strength variables follow the Weibull distribution and the Weibull process, respectively. Also, we use the mean time to failure and the average cost rate, two of the most important characteristics that have been widely applied in dynamic reliability analysis.
Keywords: Cold Standby Component, Mean Time to Failure, Reliability,Stress-Strength Model, Weibull Process.
1 Introduction
The reliability of a system is the probability that, under defined conditions, the system will perform its intended purpose adequately. In the stress-strength model, both the strength Y of the system and the stress X imposed on the system are studied as random variables. In this model, if the stress is greater than the strength, the system fails. So the system reliability R = P(Y > X) is the probability that the system strength is greater than the stress imposed on it. The concepts of the stress-strength model were introduced by Birnbaum and McCarty [2]. Also, Kotz et al. [9] provided an excellent review of extensions of the stress-strength model up to 2003. Stress-strength models have many applications in engineering and the medical sciences; examples of applications of this model are presented by Johnson [8]. In the following, we describe some uses of this model.
- Consider the design of a rocket engine. If X represents the highest pressure on the piston of the engine produced by the ignition of a solid propellant, and Y represents the strength of the rocket motor piston against the imposed pressure, then R is the probability that the rocket engine turns on successfully.
- In designing a dam, what matters is the strength of the dam to withstand any water flow it encounters. Suppose the random variable Y is the strength of the dam against the flow of water and the random variable X is the water pressure from a flash flood. Then R is the probability of success of this design.
There are several works on inference methods for R based on complete and incomplete data from X and Y samples. Many researchers have estimated R under the assumption that X and Y follow a parametric model: Kundu and Raqab [10] used a three-parameter Weibull distribution, and Eryilmaz [4] considered an exponential distribution for a general coherent system. For the multicomponent case, we can refer to Liu et al. [10], who considered the stress-strength reliability of a multi-state system based on the generalized survival signature.
In industry, the stress and strength of a system or component may vary over time, so it is better to use dynamic modeling instead of static modeling. Clearly, in this case, the reliability of the system is time-dependent. Let X(t) and Y(t) denote the stress imposed on each component and its strength at time t, respectively. Then the lifetime of the ith component can be regarded as the following random variable

1Ghanbari, S.: [email protected]
Ti = inf{τ ≥ 0 : Xi(τ)> Yi(τ)}. (1)
On the other hand, the reliability function of the ith component at time s, denoted by Ri(s), is the probability that it is still active at time s, i.e. P{Ti > s}. Hence, from (1) we have

Ri(s) = P(Ti > s) = P( inf_{0<t<s} {Yi(t) − Xi(t)} > 0 ). (2)
T1, T2, ..., Tn are independent and identically distributed (i.i.d.) random variables with the survival function (2). Eryilmaz [2] studied the reliability of the stress-strength model in the case where the strength Y is time-dependent and the stress X is constant. In this paper, we assume:
i) Yi(t) is decreasing in time, that is, Yi(t2) < Yi(t1) for all t1 < t2 and i = 1, 2, ..., n (our reason for this assumption is that, in reality, the strength Yi(t) decreases over time);
ii) Xi(t) = X, that is, the stress imposed on the components is fixed (static) over time for i = 1, 2, ..., n.
Also, we consider a k-out-of-n:F system. As stated in the reliability literature, a k-out-of-n:F system consists of n components such that the system fails if and only if at least k of its components fail. This system is widely used in engineering. Among related work on systems and the multicomponent case in stress-strength models, Bhattacharyya and Johnson [1] studied the reliability of a k-out-of-n:G system by assuming that the component stress and strength have exponential distributions with unknown scale parameters, and Rao et al. [13] investigated the stress-strength reliability in the multicomponent case by assuming generalized Rayleigh distributions for the strength and stress variables. On the other hand, an important technique for increasing the reliability of a system is redundancy. There are several ways to create redundancy; one of them is to equip the system with cold standby components, so that the system does not fail as long as cold standby components are waiting. This method increases the reliability of the system. Recently, Liu et al. [10] investigated the reliability of a multi-component system with N subsystems each containing M components, with only one subsystem working under stress and the remaining subsystems on standby. Also, Eryilmaz [5] considered the mean residual life of a k-out-of-n:G system with a single cold standby component.
In this article, we examine the effect of adding a cold standby component to a k-out-of-n:F system. In Section 2, we obtain the reliability of a k-out-of-n:F system in the dynamic stress-strength model before and after adding the cold standby component. In Section 3, we calculate the values of Rk:n(t) and R∗(t) by assuming that Y(t) and X follow the Weibull process and Weibull distribution, respectively; the cost indices are then calculated in the two cases, and the results of the numerical calculations are given in tables.
2 System Reliability
In a k-out-of-n:F system, let the random variable X be the common stress imposed on the components, with cumulative distribution function (CDF) FX(x), and let Yi(t) be the strength of the ith component at time t, with CDF Gt(y). Then the lifetime of such a system without a cold standby component equals the kth order statistic, Tk:n, that is,
Tk:n = inf{s > 0, X > Yj1(s),X > Yj2(s), · · · ,X > Yjk(s)}, (3)
where {j1, ..., jk} is a subset of {1, 2, ..., n}. Therefore, from (2) and under the assumption that the components work independently and follow an identical distribution, the reliability function of the system at time t is obtained as
Rk:n(t) = P(Tk:n > t) = ∑_{i=0}^{k−1} (n choose i) ∫_0^{+∞} (Gt(x))^{i} (Ḡt(x))^{n−i} dFX(x), (4)
where Ḡt(x) = 1 − Gt(x). When the system is equipped with a cold standby component, the lifetime of the system is defined as

T∗ = inf{s > Tk:n : X > Yi′(s) or X > Z(s − Tk:n)}, (5)
where i′ ranges over the indices of {1, ..., n} not in {j1, ..., jk}, the random variable X and the process Z(t) are the stress and the strength of the cold standby component, respectively, and Tk:n is the failure time of the system without a standby component, defined in (3). It should be noted that the cold standby component is assumed to be subjected to the same common stress X. The random process Z(t) has CDF Ht(z). According to Corollary 1 of Eryilmaz [5], with some changes, the reliability function of such a k-out-of-n:F system with a cold standby component can be obtained as
R∗(t) = P(T∗ ≥ t) = Rk:n(t) + n (n−1 choose k−1) ∫_0^{+∞} ∫_0^{x} Ḡt^{n−k}(x) H̄t(x − y) Gt^{k−1}(x) dGt(y) dFX(x), (6)
where H̄t(z) = 1 − Ht(z).
Remark 2.1. When k = 1, the reliability function of a series system with a cold standby component can be simplified as follows:

R∗(t) = R1:n(t) + n ∫_0^{+∞} ∫_0^{x} Ḡt^{n−1}(x) H̄t(x − y) dGt(y) dFX(x). (7)
Remark 2.2. When k = n, the reliability function of a parallel system with a cold standby component can be simplified as follows:

R∗(t) = Rn:n(t) + n ∫_0^{+∞} ∫_0^{x} Gt^{n−1}(x) H̄t(x − y) dGt(y) dFX(x). (8)
Remark 2.3. From (6), it is clear that the reliability of a k-out-of-n:F systemincreases with the use of a cold standby component.
3 Weibull process in a k-out-of-n:F system based on the dynamic stress-strength model
In this section, we use the Weibull process for the components' strength Yi(t) and the Weibull distribution for the stress X imposed on the components. For a system consisting of n components, assume that the component strengths Y1(t), Y2(t), ..., Yn(t) at time t come from a Weibull process whose one-dimensional distribution is

Gt(y) = P(Y(t) < y) = 1 − exp{−(y/α(t))^{β2}}, y > 0, (9)

where the shape parameter β2 is assumed to be independent of time, and the intensity function α(t) = 1/t is decreasing in time with α(0) = ∞. Also, the components are subjected to a common random stress X having a Weibull distribution with CDF

FX(x) = 1 − exp{−(x/θ)^{β1}}, x > 0, θ, β1 > 0. (10)
We denote the strength of the cold standby component at time t by Z(t), which comes from a Weibull process whose one-dimensional distribution is

Ht(z) = P(Z(t) < z) = 1 − exp{−(z/α(t))^{λ}}, z > 0, (11)

where the shape parameter λ is assumed to be independent of time and, as before, α(t) = 1/t with α(0) = ∞. Also, the stress imposed on the cold standby component is the same as the stress imposed on the other components, with CDF (10). The reliability of a k-out-of-n:F system without and with a cold standby component is then easily calculated by using (4) and (6), respectively, together with the above assumptions.
Rk:n(t) = ∑_{i=0}^{k−1} (n choose i) (β1/θ^{β1}) ∫_0^{+∞} x^{β1−1} (1 − e^{−(tx)^{β2}})^{i} e^{−{(x/θ)^{β1} + (n−i)(tx)^{β2}}} dx, (12)

and
R∗(t) = n (n−1 choose k−1) (β1β2/θ^{β1}) ∫_0^{+∞} ∫_0^{x} t^{β2} x^{β1−1} y^{β2−1} e^{−{(n−k)(tx)^{β2} + (t(x−y))^{λ} + (ty)^{β2} + (x/θ)^{β1}}} (1 − e^{−(tx)^{β2}})^{k−1} dy dx + Rk:n(t). (13)
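One way to evaluate Rk:n(t) in (12) without symbolic integration is to condition on the common stress: given X = x, each component has failed by time t independently with probability Gt(x) = 1 − exp(−(tx)^{β2}), and the k-out-of-n:F system survives while at most k − 1 components have failed. A Monte Carlo sketch with the parameter values of Figure 1 (this is an independent numerical route, not the paper's R code):

```python
import math, random

# Monte Carlo evaluation of R_{k:n}(t): draw the common stress
# X ~ Weibull(theta, beta1) by inversion, then average the binomial
# probability that at most k-1 of n components have failed.

def binom_cdf(k_minus_1, n, p):
    """P(#failures <= k-1) for Binomial(n, p)."""
    total, c = 0.0, 1.0           # c tracks the binomial coefficient
    for i in range(k_minus_1 + 1):
        total += c * p ** i * (1 - p) ** (n - i)
        c = c * (n - i) / (i + 1)
    return total

def reliability(t, n, k, theta, beta1, beta2, trials=50_000, seed=1):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        x = theta * (-math.log(1.0 - rng.random())) ** (1.0 / beta1)
        p_fail = 1.0 - math.exp(-((t * x) ** beta2))
        acc += binom_cdf(k - 1, n, p_fail)
    return acc / trials

# Parameters of Figure 1: theta = 0.5, beta1 = 0.5, beta2 = 0.8.
r_small = reliability(0.01, n=5, k=3, theta=0.5, beta1=0.5, beta2=0.8)
r_large = reliability(5.0,  n=5, k=3, theta=0.5, beta1=0.5, beta2=0.8)
# r_small is near 1 and r_large is well below it, as expected for a
# reliability function decreasing in t.
```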
Integrals (12) and (13) cannot be solved algebraically because of their complexity. For this purpose, we have used R software to draw the system reliability curves for different parameter values. Figures 1 and 2 show the system reliability for the parameter values β1 = 0.5, 1.5, β2 = 0.8, θ = 0.5, 1.5, and λ = 0.1, 0.2, in the two cases before and after adding the standby component. The following results can be obtained from Figures 1 and 2:
Figure 1: System reliability behavior in two cases with and without cold standby component for θ = 0.5, β1 = 0.5,
and β2 = 0.8.
• For all parameter values, the curve of R∗(t) lies above the curve of Rk:n(t) over time. Of course, this result is quite evident from (6).
• Based on Figure 2, fixing the values of β1, β2, and θ and increasing the value of λ, the value of R∗(t) first increases and then decreases.
Figure 2: System reliability behavior in the cases with cold standby component for θ = 1.5, β1 = 0.5, and β2 = 0.8.
4 The utility of a cold standby component based on the average cost rate of the system
Industries want to use redundancy methods that, in addition to increasing system reliability, reduce the average cost rate of the system. For this purpose, the average cost rates without and with a standby component are given as follows, respectively:
C1(n, k) = nc / ∫_0^{+∞} Rk:n(t) dt (14)

and

C2(n, k) = (n + 1)c / ∫_0^{+∞} R∗(t) dt, (15)

where c is the ownership cost of one component (see Eryilmaz [6] for more details). Table 1 shows the values of C1(n, k) and C2(n, k) in the k-out-of-n:F system for different values of k, n, θ, β1, β2, λ, and c = 1.
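Formulas (14) and (15) compare ownership cost per unit of expected lifetime. In the sketch below, the lifetime integrals are assumed numbers standing in for numerical integration of Rk:n(t) and R∗(t); the point is only the comparison C2 < C1 seen throughout Table 1, which holds whenever the standby component extends the mean lifetime by more than the proportional cost of owning it.

```python
# Cost rates (14) and (15).  The two mean-lifetime values are
# *assumed* for illustration; in practice they come from integrating
# the reliability functions (12) and (13) numerically.

def cost_rate(num_components, c, mean_lifetime):
    """C = (number of owned components) * c / E[system lifetime]."""
    return num_components * c / mean_lifetime

n, c = 5, 1.0
mttf_without = 40.0   # assumed value of the integral of R_{k:n}(t) dt
mttf_with = 60.0      # assumed value of the integral of R*(t) dt

c1 = cost_rate(n, c, mttf_without)     # (14): 5 / 40 = 0.125
c2 = cost_rate(n + 1, c, mttf_with)    # (15): 6 / 60 = 0.1
# c2 < c1: the standby design is cheaper per hour, matching the
# pattern C2(n,k) < C1(n,k) observed in Table 1.
```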
Table 1: The values of the C(n,k) for a k-out-of-n:F system with and without a standby component
n k θ β1 β2 λ C1(n,k) C2(n,k)
5 3 0.5 0.5 0.8 0.5 0.0728 0.0533
5 4 0.0469 0.0301
7 3 0.1762 0.1304
7 4 0.1296 0.0913
7 5 0.0948 0.0619
5 3 0.5 0.8 0.5 0.5 0.2412 0.1291
5 4 0.1001 0.0565
7 3 0.8263 0.3980
7 4 0.4241 0.2067
7 5 0.2210 0.1126
5 3 1.5 0.5 0.8 0.5 0.1218 0.0906
5 4 0.0781 0.0512
7 3 0.2950 0.2199
7 4 0.2160 0.1528
7 5 0.1573 0.1038
5 3 0.5 0.5 0.8 1.5 0.0728 0.0515
5 4 0.0469 0.0300
7 3 0.1762 0.1176
7 4 0.1296 0.0851
7 5 0.0948 0.0612
The following results can be obtained from Table 1.
• The value of C2(n,k) is less than the value of C1(n,k) for all values of n, k, and the parameters β1, β2, λ, and θ.
• Fixing the parameters β1 (β2), θ, λ, n, k and increasing β2 (β1), the values of C1(n,k) and C2(n,k) increase.
• Fixing the parameters β1, β2, θ, λ, k and increasing n, the values of C1(n,k) and C2(n,k) increase, but under the same conditions, fixing n and increasing k, the values of C1(n,k) and C2(n,k) decrease significantly.
• With increasing θ, while the other parameters β1, β2, λ, n, k are held constant, the values of C1(n,k) and C2(n,k) increase.
• With increasing λ, while the other parameters β1, β2, θ, n, k are held constant, the value of C1(n,k) remains constant and the value of C2(n,k) decreases.
5 Conclusions
In this article, the effect of the cold standby component on a k-out-of-n:F system in the dynamic stress-strength model was investigated. Then, using the Weibull distribution and a Weibull process for the stress (X) and for the strength of the components at time t (Y(t)), respectively, we examined the effect of the cold standby component in more detail and concluded that the cold standby component increases system reliability. The effect of the λ parameter on the reliability of the redundant system was examined. Also, using the average cost rate index C(n,k), the reliability sensitivity of the redundant system was evaluated.
References
[1] Bhattacharyya, G. and Johnson, R. (1974), Estimation of reliability in a multicomponent stress-strength model, Journal of the American Statistical Association, 69, 966-970.

[2] Birnbaum, Z. and McCarty, B. (1958), A distribution-free upper confidence bound for P{Y < X} based on independent samples of X and Y, Annals of Mathematical Statistics, 29, 558-562.

[3] Eryilmaz, S. (2013), A study on reliability of coherent systems equipped with a cold standby component, Metrika, 77, 349-359.

[4] Eryilmaz, S. (2010), On system reliability in stress-strength setup, Statistics & Probability Letters, 80, 834-839.

[5] Eryilmaz, S. (2012), On the mean residual life of a k-out-of-n:G system with a single cold standby component, European Journal of Operational Research, 222, 273-277.

[6] Eryilmaz, S. (2013), On stress-strength reliability with a time-dependent strength, Journal of Quality and Reliability Engineering, 2013, Article ID 417818, 6 pages.
[7] Eryilmaz, S. and Tutuncu, G. (2015), Stress-strength reliability in the presence of fuzziness, Computational and Applied Mathematics, 282, 262-267.

[8] Johnson, R. (1988), Stress-strength models for reliability, In Handbook of Statistics, Quality Control and Reliability, 7, 27-54, Elsevier, New York.

[9] Kotz, S., Lumelskii, Y. and Pensky, M. (2003), The Stress-Strength Model and its Generalizations: Theory and Applications, Singapore: World Scientific.

[10] Kundu, D. and Raqab, M. (2009), Estimation of P(Y < X) for three-parameter Weibull distribution, Statistics & Probability Letters, 79, 1839-1846.

[11] Liu, Y., Shi, Y., Bai, X. and Zhan, P. (2018), Reliability estimation of a N-M-cold-standby redundancy system in a multicomponent stress-strength model with generalized half-logistic distribution, Physica A: Statistical Mechanics and its Applications, 490, 231-249.

[12] Liu, Y., Shi, Y., Bai, X. and Liu, B. (2018), Stress-strength reliability analysis of multi-state system based on generalized survival signature, Computational and Applied Mathematics, 342, 274-291.

[13] Rao, G. (2014), Estimation of reliability in multicomponent stress-strength based on generalized Rayleigh distribution, Journal of Modern Applied Statistical Methods, 13, 367-379.
Optimum Type-II Progressive Censoring Scheme with Random Removal Based on Cost Model
Hassantabar Darzi, F.1, Misaii, H.1, Eftekhari Mahabadi, S.1, and Haghighi, F.1
1 School of Mathematics, Statistics and Computer Science, College ofScience, University of Tehran, Tehran, Iran
Abstract: In this paper, the analysis of a Type-II progressively censored sample with random removals, where the number of dropouts at each failure time follows a binomial distribution, is explored. Maximum likelihood estimators of the parameters and their asymptotic variances are derived for Inverse Lomax distributed lifetime data. For the proposed procedure, the behaviour of the expected experiment time is also investigated through the Monte Carlo integration method. The design of the Type-II progressive censoring scheme with random removal is carried out via a cost model. Finally, the optimal Type-II progressively censored scheme with random removal is provided based on the criterion of smallest experiment cost.
Keywords: Type-II Progressive Censoring, Expected Experiment Time, MonteCarlo Integration, Cost Function.
1 Introduction
Censored observations of different types frequently occur in many applications, depending on the setting of the data collection process. In fact, censoring can be either controlled or uncontrolled by the researcher who plans the experiment. For example, an experimenter may terminate the life test study when a predetermined number of failed products has been observed, in order to save time or cost, which is referred to as Type-II censoring. Furthermore, some surviving test units may have to be removed from the study at different failure times for various reasons, which results in Type-II progressive censoring. This procedure works as follows. The life test experiment starts with n units, and after observing the mth failure the test is terminated. After observing the first failure, r1 units are randomly selected from the n − 1 surviving units and removed. At the time of the second failure, which is the smallest lifetime among the n − r1 − 1 remaining units, r2 units are randomly chosen from the n − r1 − 2 remaining units and withdrawn from the experiment. This process continues until the mth failure is observed, at which point all n − r1 − ··· − rm−1 − m remaining units are removed from the experiment. Note that if r1 = r2 = ··· = rm = 0, then n = m, which corresponds to the complete sample. Also, if r1 = r2 = ··· = rm−1 = 0, then rm = n − m, which corresponds to the conventional Type-II right censoring plan.

1 Hassantabar Darzi, F.: [email protected]
which corresponds to the conventional Type-II right censoring plan.
Inferential issues for progressively Type-II censored samples have been addressed by several authors. Ng et al. [9] considered three optimality criteria for finding optimal progressive censoring plans. They computed the expected Fisher information and the asymptotic variance-covariance matrix of the maximum likelihood estimates based on a progressively Type-II censored sample from a Weibull distribution. Many optimality criteria have been proposed by different researchers; one can refer to Wu and Huang [14], Cramer and Ensenbach [6] and the references cited therein for further study. For a thorough description of the different progressive censoring schemes and related issues, the reader can refer to the books by Balakrishnan and Aggarwala [1] and Balakrishnan and Cramer [2].
All these works assumed that the number of units removed from the test is fixed in advance. However, in many practical situations, these numbers may occur at random due to safety, time or cost considerations. This type of censoring is defined as Type-II progressive censoring with random removals, denoted
as Type-II PCR. Yuen and Tse [17] and Tse et al. [12] considered Type-II PCR where the number of units removed at each stage follows a discrete uniform or a binomial distribution with a certain probability p. The expected experiment time under the Type-II PCR model was studied by Tse and Yuen [13]. The Type-II PCR model, with a binomial or discrete uniform probability mass function on the removal vectors, has been implemented for numerous lifetime distributions. Readers can find more details in Wu et al. [15], Yan et al. [16] and Dey and Dey [7], which respectively consider the Gompertz, generalized exponential and Rayleigh lifetime distributions. Recently, several papers have been published on various lifetime distributions, such as Dey et al. [8], Gunasekera [10] and Sharafi [11].
In the implementation of the progressively Type-II censoring scheme, one of the major challenges is to determine the removal vector. Many authors, some of whom were mentioned above, have attempted to select appropriate schemes by two different strategies. The first strategy considers pre-specified, fixed censoring numbers, and the second chooses the censoring numbers according to a probability distribution. But something that has been overlooked in these studies is the cost of the experiment, which may be affected by the removal pattern. Therefore, cost is an important decision-making factor for determining the optimal removal pattern in Type-II progressive censoring. Cost-based models under progressive censoring can be found in the works of Budhiraja and Pradhan [5] and Bhattacharya et al. [4].
In this article, we consider the two-parameter Inverse Lomax distribution for lifetime data and develop inference based on progressively Type-II censored samples with removal schemes drawn from binomial distributions. We formulate the cost function based on the cost of censoring and the cost of time duration. The asymptotically optimal censoring scheme is obtained by minimizing the cost function. In Section 2, we discuss estimation of the parameters based on the maximum likelihood method. Section 3 presents the expected experiment time and compares the expected test time of the PCR scheme with that of the complete sample. The proposed cost function for finding the optimal removal vector is given in Section 4. In Section 5, a Monte Carlo simulation study is used to compute the expected experiment time and the cost of the experiment. Finally, discussions and comments are provided based on these simulation results.
2 Model
Let the random variable X have an Inverse Lomax distribution with scale parameter θ and shape parameter α. The probability density function of X is given by

$$f(x) = \frac{\alpha\theta}{x^{2}}\Big(1+\frac{\theta}{x}\Big)^{-\alpha-1}, \qquad x>0,\ \alpha,\theta>0,$$

and the corresponding cumulative distribution function is $F(x) = \big(1+\frac{\theta}{x}\big)^{-\alpha}$.
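The density and distribution function above can be checked against each other numerically; this is only a sanity-check sketch, and the helper names below are ours.

```python
import math

def inv_lomax_pdf(x, theta, alpha):
    # f(x) = (alpha * theta / x^2) * (1 + theta/x)^(-alpha - 1)
    return alpha * theta / x**2 * (1.0 + theta / x) ** (-alpha - 1.0)

def inv_lomax_cdf(x, theta, alpha):
    # F(x) = (1 + theta/x)^(-alpha)
    return (1.0 + theta / x) ** (-alpha)
```

A central difference of `inv_lomax_cdf` recovers `inv_lomax_pdf` to high accuracy, confirming that the density is the derivative of the distribution function.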
Let T1 < ··· < Tm denote a Type-II progressively censored sample. With a predetermined number of removals R = (R1 = r1, ..., Rm = rm), the likelihood function can be written as

$$L_1(\theta,\alpha;T\mid R) = C_R \prod_{i=1}^{m} f(t_i)\,[1-F(t_i)]^{r_i} \qquad (1)$$
$$= C_R \prod_{i=1}^{m} \frac{\alpha\theta}{t_i^{2}}\Big(1+\frac{\theta}{t_i}\Big)^{-\alpha-1}\Big[1-\Big(1+\frac{\theta}{t_i}\Big)^{-\alpha}\Big]^{r_i},$$

where $C_R = n\prod_{i=2}^{m}\big(n-\sum_{j=1}^{i-1} r_j - i + 1\big)$. Presume that an individual unit being removed from the life test is independent of the others but has the same probability p. The number of units removed at each failure time then follows a binomial distribution such that
$$R_1 \sim b(n-m,\,p),$$
$$R_i \mid R_1=r_1,\ldots,R_{i-1}=r_{i-1} \sim b\Big(n-m-\sum_{j=1}^{i-1} r_j,\; p\Big), \qquad i=2,\ldots,m-1,$$
$$R_m = n-m-\sum_{i=1}^{m-1} r_i,$$
where the joint probability of $R_1, R_2, \ldots, R_m$ is given by

$$P(R;p) = P(R_m=r_m \mid R_{m-1}=r_{m-1},\ldots,R_1=r_1)\cdots P(R_1=r_1). \qquad (2)$$

Now, we further suppose that the $R_i$'s are independent of the $T_i$'s for all i. Then the full likelihood function takes the following form:

$$L(\theta,\alpha,p;R,T) = L_1(\theta,\alpha;T\mid R)\,P(R;p) \qquad (3)$$
$$= C_R \prod_{i=1}^{m}\bigg[\frac{\alpha\theta}{t_i^{2}}\Big(1+\frac{\theta}{t_i}\Big)^{-\alpha-1}\Big(1-\Big(1+\frac{\theta}{t_i}\Big)^{-\alpha}\Big)^{r_i}\bigg] \times \frac{(n-m)!}{\prod_{i=1}^{m-1} r_i!\;\big(n-m-\sum_{j=1}^{m-1} r_j\big)!} \times p^{\sum_{i=1}^{m-1} r_i}\,(1-p)^{(n-m)(m-1)-\sum_{j=1}^{m-1}(m-j)\, r_j}.$$
Since P(R; p) does not depend on the parameters θ and α, the maximum likelihood estimators (MLEs) for the Inverse Lomax distribution can be derived by maximizing (1) directly. Similarly, since L1(θ, α; T | R) does not involve the binomial parameter p, the MLE of p can be found by maximizing (2) directly. In particular, the MLEs of the parameters can be found by solving the following score equations:

$$\frac{\partial \ell}{\partial \theta} = \frac{m}{\theta} - (1+\alpha)\sum_{i=1}^{m}\frac{1}{\theta+t_i} + \alpha\sum_{i=1}^{m}\frac{r_i\,(1+\theta/t_i)^{-\alpha-1}}{t_i\left[1-(1+\theta/t_i)^{-\alpha}\right]} = 0,$$

$$\frac{\partial \ell}{\partial \alpha} = \frac{m}{\alpha} - \sum_{i=1}^{m}\ln\Big(1+\frac{\theta}{t_i}\Big) + \sum_{i=1}^{m}\frac{r_i\,(1+\theta/t_i)^{-\alpha}\ln(1+\theta/t_i)}{1-(1+\theta/t_i)^{-\alpha}} = 0,$$

$$\frac{\partial \ell}{\partial p} = \frac{1}{p}\sum_{j=1}^{m-1} r_j - \frac{1}{1-p}\Big[(m-1)(n-m) - \sum_{j=1}^{m-1}(m-j)\, r_j\Big] = 0.$$
Hence, the MLEs of θ, α and p can be obtained numerically, for instance by the Newton-Raphson method. The approximate sample information matrix for a Type-II PCR sample of Inverse Lomax distributed lifetimes is block diagonal:

$$I(\xi, p) = \begin{bmatrix} I_1(\xi) & 0 \\ 0 & I_2(p) \end{bmatrix},$$

where $I_1(\xi)$, with $\xi = (\theta, \alpha)$, and $I_2(p)$ are the Fisher information matrices of the corresponding parameters. For large n, the inverse of this matrix is a reasonable approximation to the asymptotic variance-covariance matrix of the MLEs. Define $V = \lim_{n\to\infty} I^{-1}(\theta,\alpha)$. The joint distribution of the MLEs of θ and α is approximately bivariate normal.
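A minimal sketch of the likelihood computations (function names are ours, not the authors' code). The first function evaluates the log of (1) up to an additive constant; the second implements the closed-form root of the score equation for the binomial parameter, obtained by setting ∂ℓ/∂p = 0.

```python
import math

def loglik(theta, alpha, times, removals):
    # log of L1 in (1), up to the additive constant log(C_R),
    # for an Inverse Lomax progressively Type-II censored sample
    m = len(times)
    ll = m * (math.log(alpha) + math.log(theta))
    for t, r in zip(times, removals):
        q = 1.0 + theta / t
        ll += -2.0 * math.log(t) - (alpha + 1.0) * math.log(q)
        if r > 0:
            ll += r * math.log(1.0 - q ** (-alpha))
    return ll

def mle_p(n, m, removals):
    # solving (1/p) sum r_j = (1/(1-p)) [(m-1)(n-m) - sum (m-j) r_j] gives
    # p_hat = s / (s + (m-1)(n-m) - w), with s = sum r_j, w = sum (m-j) r_j
    s = sum(removals[: m - 1])
    w = sum((m - j) * removals[j - 1] for j in range(1, m))
    return s / (s + (m - 1) * (n - m) - w)
```

The MLEs of θ and α can then be obtained by maximizing `loglik` with any numerical optimizer, in line with the Newton-Raphson approach mentioned above.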
3 Expected experiment time
In practical applications, it is important to have an idea about the duration of a life test. The experiment termination time is directly associated with the cost of the experiment. In the Type-II progressive censoring plan, the termination time is given by the expectation of the mth order statistic in a sample of size n. The conditional expectation of Tm for a fixed set of R = (R1 = r1, ..., Rm = rm) can be obtained using the following expression:

$$E(T_m \mid R = r) = C_R \sum_{l_1=0}^{r_1}\cdots\sum_{l_m=0}^{r_m} (-1)^{A}\, \frac{\binom{r_1}{l_1}\cdots\binom{r_m}{l_m}}{\prod_{i=1}^{m-1} h(l_i)} \int_0^{\infty} x\, f(x)\, F^{\,h(l_m)-1}(x)\, dx, \qquad (4)$$
where A = l1 + l2 + ··· + lm and h(li) = l1 + l2 + ··· + li + i. If Ri is zero for all i, this gives the expected time to complete a Type-II censoring test. The expected termination time for Type-II progressive censoring with random removals is evaluated by taking the expectation of both sides of (4) with respect to R. It is given by

$$E(T_m) = E_R\big[E(T_m \mid R)\big] = \sum_{r_1=0}^{g(r_1)} \sum_{r_2=0}^{g(r_2)} \cdots \sum_{r_{m-1}=0}^{g(r_{m-1})} E(T_m \mid R = r)\, P(R), \qquad (5)$$

where $g(r_i) = n - m - \sum_{j=1}^{i-1} r_j$. Thus, this gives an expression to compute the expected time for given values of m and n. A natural way to approximate this complicated expression is the Monte Carlo method of integration, which takes advantage of the special nature of (5), namely the fact that f(T_m, R_{m-1}) is a probability density. If it is possible to generate K samples of (T_m, R_{m-1}), then the average

$$\hat{E}(T_m) = \frac{1}{K} \sum_{j=1}^{K} t_m^{(j)},$$

where $t_m^{(j)}$ denotes the jth sample of the mth order statistic, converges (almost surely) to (5) as K goes to ∞, by the Law of Large Numbers. Based on this approximation, one can simply obtain values of the expected experiment time for different parameter values without evaluating such a complicated and time-consuming integration and summation. The ratio of the expected experiment time under the Type-II PCR, E(T_m), to the expected experiment time for the complete sample, E(T*_m), given by REET = E(T_m)/E(T*_m), does not depend on the scale parameter. When REET is close to 1, the termination point is close to that of the complete sample. Suppose that an experimenter wants to observe at least m complete failures when the test is conducted under Type-II PCR. Then the REET provides important information in determining whether the experiment time can be shortened significantly if a much larger sample of n test units is used and the test is stopped once m failures are observed.
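Instead of evaluating the nested sums in (5), E(T_m) can be estimated by directly simulating the censoring experiment and averaging, as the Monte Carlo argument above suggests. The following is an illustrative sketch (function names and default parameter values are ours); removal counts are drawn as Binomial(n − m − removed, p), matching the model of Section 2.

```python
import random

def inv_lomax_draw(theta, alpha):
    # inverse-CDF draw from F(x) = (1 + theta/x)^(-alpha)
    u = random.random()
    while u == 0.0:                       # guard the measure-zero endpoint
        u = random.random()
    return theta / (u ** (-1.0 / alpha) - 1.0)

def simulate_tm(n, m, p, theta=1.0, alpha=3.0):
    # one Type-II PCR run: observe the smallest remaining lifetime, then
    # withdraw a Binomial(n - m - removed, p) number of surviving units
    alive = sorted(inv_lomax_draw(theta, alpha) for _ in range(n))
    removed, t_m = 0, 0.0
    for i in range(1, m + 1):
        t_m = alive.pop(0)                # i-th observed failure time
        if i < m:
            cap = n - m - removed         # cumulative removals stay <= n - m
            r = sum(random.random() < p for _ in range(cap))
            removed += r
            for _ in range(r):
                alive.pop(random.randrange(len(alive)))
    return t_m

def expected_tm(n, m, p, reps=3000, seed=1, theta=1.0, alpha=3.0):
    # Monte Carlo average (1/K) * sum_j t_m^(j), estimating E(T_m) in (5)
    random.seed(seed)
    return sum(simulate_tm(n, m, p, theta, alpha) for _ in range(reps)) / reps
```

Consistent with Table 1, the estimate grows with the removal probability p when n and m are held fixed.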
4 Proposed cost function
In this section, the cost function is introduced under the Type-II PCR scheme. There are some important questions about how to design an appropriate progressive censoring plan that results in an optimal model, including how to determine the cost of the test units, the cost of censoring at each stage of failure, and the cost of the length of the experiment time. In this paper, suppose that four costs are incurred in conducting the test: a fixed cost C_f; a cost C_s per unit placed on the experiment; a cost C_r per censored unit; and a cost C_t per unit time of running the test. Considering all the associated costs, we propose the cost function

$$C_T = C_f + C_s\, n + C_r\, E(R \mid p) + C_t\, E(T \mid p), \qquad (6)$$

where E(R | p) and E(T | p) are sensitive to the removal pattern.
5 Simulation Study
In this section, several experimental results are presented to illustrate the behavior of the proposed model. The simulation algorithm to generate Type-II PCR samples from the Inverse Lomax distribution was proposed by Balakrishnan and Sandhu [3]. To investigate the properties of the proposed model, we have considered different values of the parameters. For each combination, we have taken sample sizes n = 6, 12, 18, and m is chosen such that the observed sample contains 100%, 90%, ..., 50% of the available sample units. The complete sampling plan is included when m = n.
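The generation step can be sketched as follows, combining the Balakrishnan-Sandhu [3] algorithm (as it is commonly stated) with the Inverse Lomax quantile function F^{-1}(u) = θ/(u^{-1/α} − 1). Function names are ours, and this is an illustrative implementation, not the authors' code.

```python
import random

def progressive_type2_uniform(n, removals):
    # Balakrishnan-Sandhu: W_i ~ U(0,1); V_i = W_i^(1/(i + r_m + ... + r_{m-i+1}));
    # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1} are ordered progressively
    # Type-II censored uniforms with removal vector (r_1, ..., r_m)
    m = len(removals)
    assert n == m + sum(removals)
    w = [random.random() for _ in range(m)]
    v = [w[i - 1] ** (1.0 / (i + sum(removals[m - i:])))
         for i in range(1, m + 1)]
    u = []
    for i in range(1, m + 1):
        prod = 1.0
        for j in range(m - i, m):         # multiplies V_{m-i+1}, ..., V_m
            prod *= v[j]
        u.append(1.0 - prod)
    return u

def inverse_lomax_quantile(u, theta, alpha):
    # quantile of F(x) = (1 + theta/x)^(-alpha)
    return theta / (u ** (-1.0 / alpha) - 1.0)

def sample_type2_pcr(n, removals, theta=1.0, alpha=2.0):
    # progressively Type-II censored Inverse Lomax sample via the inverse CDF
    return [inverse_lomax_quantile(x, theta, alpha)
            for x in progressive_type2_uniform(n, removals)]
```

Applying the monotone quantile transform to the ordered uniforms yields an ordered Inverse Lomax sample with the same censoring structure.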
Table 1 presents the approximate values of E(Tm) (using 5000 iterations) assuming α = 0.5 and 2, along with different values of the removal probability. The numerical results show that the expected experiment time of a Type-II PCR is highly influenced by p. Smaller values of the removal probability mean that more items stay in the study and are removed during the experiment, which results in a reduction of the experiment time. The shortest experiment time occurs when all units are removed at the end of the experiment (Type-II censoring). On the other hand, if p is large, most units are removed at the early stages even when the number of test units n is large, and the experiment resembles a complete sampling test. This leads to the collection of observations much closer to the tail of the lifetime distribution and increases the experiment time.
Table 1: Expected experiment time, E[Tm] assuming θ = 1, α = .5,2.
α = .5
n m pi
0.1 0.3 0.5 0.7 0.9
6 6 41.6242 41.6242 41.6242 41.6242 41.6242
5 6.2216 11.8581 14.8003 16.0643 16.4238
4 1.5746 4.7310 9.8763 14.0154 15.0556
3 0.4931 1.4386 3.0326 6.6955 8.0784
12 12 73.2086 73.2086 73.2086 73.2086 73.2086
10 13.1342 52.2332 58.5313 59.6542 60.2781
8 3.1739 19.4375 28.9465 31.5999 32.4748
6 0.6284 5.8601 18.6302 22.7027 24.8988
18 18 157.9205 157.9205 157.9205 157.9205 157.9205
15 27.1144 54.5171 58.0679 58.7084 58.9424
12 4.5456 37.0904 45.6714 47.0702 47.9061
9 1.0908 17.7256 34.9953 40.3062 42.4092
α = 2
6 6 84.3963 84.3963 84.3963 84.3963 84.3963
5 30.2428 56.2164 63.9273 69.5773 71.7039
4 7.0945 24.7346 39.4491 45.8407 53.0686
3 2.9014 9.7345 13.1851 25.6671 32.8632
12 12 197.7140 197.7140 197.7140 197.7140 197.7140
10 59.0738 136.1519 145.9956 149.1448 150.0087
8 15.4358 102.6668 146.5885 156.3763 162.6711
6 3.5834 24.4810 53.4511 75.7503 82.9496
18 18 364.5633 364.5633 364.5633 364.5633 364.5633
15 96.5144 273.4024 286.8621 288.9711 290.6887
12 21.8871 218.3347 248.0570 252.6570 257.8575
9 5.4873 118.5139 182.5417 195.2267 206.5704
Figure 1 shows the REET versus m for various combinations of p and n. For large values of p and m, the REET approaches 1 quite sharply. Notably, the experiment time depends closely on the removal probability. The expected experiment time for a Type-II progressively censored sample gets close to that of the complete sample as m increases. For fixed n and m, the values of the REET and of the expected experiment time under progressive censoring with binomial removals increase as p increases. Therefore, on the basis of expected experiment time alone, a smaller removal probability gives the optimum removal vector. However, choosing a small p causes experimental units to stay in the study and be removed at the end of the experiment, which may impose a higher cost on testing.
The cost of the experiment, computed according to (6), is reported in Table 2. For
Figure 1: REET versus m for n = 6, 12, 16, 20, with curves for p = 0.1, 0.3, 0.5, 0.7, 0.9, assuming θ = 1 and α = 3.
the computation of the optimum scheme, we use 1000 Type-II PCR iterations from the Inverse Lomax distribution. The cost coefficients are taken as C_f = 10, C_s = 1 and C_t = 0.1, while the censoring cost varies. The cost of censoring depends on when the failure times occur and how many units are removed. Therefore, the cost of censoring at each stage of failure is determined by the experimenter and differs at each stage in our study. Table 2 shows that the scheme with the minimum expected experiment time does not have the minimum experiment cost. According to the test conditions and the importance of removing or maintaining units in the experiment, the removal vector may change.
Table 2: Expected experiment time and cost of experiment assuming θ = 1, α = 2.
pi
0.1 0.3 0.5 0.7 0.9
n m E(Tm) E(CT ) E(Tm) E(CT ) E(Tm) E(CT ) E(Tm) E(CT ) E(Tm) E(CT )
6 6 21.6832 18.1683 21.6832 18.1683 21.6832 18.1683 21.6832 18.1683 21.6832 18.1683
5 7.4780 17.1607 14.9189 17.7701 19.2149 18.1126 20.7047 18.2135 21.5929 18.2707
4 1.5425 16.8422 5.6958 17.0783 9.2844 17.3002 11.3949 17.4239 12.5843 17.4802
3 1.5344 16.8381 7.8369 17.2932 10.8301 17.4573 17.7339 18.0562 18.9649 18.1191
12 12 48.6799 26.8680 48.6799 26.8680 48.6799 26.8680 48.6799 26.8680 48.6799 26.8680
10 15.9660 24.8988 30.9826 25.7518 37.8309 26.1834 38.6853 26.1518 39.2073 26.1447
8 3.4651 24.6215 18.7509 25.1316 24.0091 25.2070 27.5707 25.3294 28.2334 25.2663
6 0.8572 24.8963 5.9589 24.3629 16.4395 24.8243 25.1354 25.3691 27.0705 25.3714
18 18 79.9612 35.9961 79.9612 35.9961 79.9612 35.9961 79.9612 35.9961 79.9612 35.9961
15 21.3745 32.5201 69.6985 35.9654 73.6218 35.9611 74.5361 35.8790 74.7397 35.8046
12 4.8339 32.80607 31.7102 33.1298 42.7781 33.4775 44.2750 33.2858 44.8274 33.1498
9 0.8972 33.6250 14.5186 32.3397 29.5471 32.7533 35.4728 32.8331 36.9623 32.6944
6 Conclusion
In this paper, we studied the Inverse Lomax distribution when the data are Type-II progressively censored under a binomial removal scheme. The optimum removal vector of the Type-II PCR is obtained based on the expected experiment time and the proposed cost function. The expected experiment time and the cost of the experiment are computed by Monte Carlo methods. A numerical study confirms that the role of the removal probability is quite significant with respect to the length of the experiment time, and that taking the cost function into account may change the result. Therefore, it is important to consider the cost of the experiment.
References
[1] Balakrishnan, N. and Aggarwala, R. (2000), Progressive Censoring: Theory, Methods, and Applications. Springer Science & Business Media.

[2] Balakrishnan, N. and Cramer, E. (2014), The Art of Progressive Censoring. Statistics for Industry and Technology.

[3] Balakrishnan, N. and Sandhu, R.A. (1995), A simple simulational algorithm for generating progressive Type-II censored samples. The American Statistician, 49(2), 229-230.
[4] Bhattacharya, R., Pradhan, B. and Dewanji, A. (2014), Optimum life testing plans in presence of hybrid censoring: A cost function approach. Applied Stochastic Models in Business and Industry, 30(5), 519-528.

[5] Budhiraja, S. and Pradhan, B. (2019), Optimum reliability acceptance sampling plans under progressive type-I interval censoring with random removal using a cost model. Journal of Applied Statistics, 46(8), 1492-1517.

[6] Cramer, E. and Ensenbach, M. (2011), Asymptotically optimal progressive censoring plans based on Fisher information. Journal of Statistical Planning and Inference, 141(5), 1968-1980.

[7] Dey, S. and Dey, T. (2014), Statistical inference for the Rayleigh distribution under progressively Type-II censoring with binomial removal. Applied Mathematical Modelling, 38(3), 974-982.

[8] Dey, S., Kayal, T. and Tripathi, Y.M. (2018), Statistical inference for the weighted exponential distribution under progressive Type-II censoring with binomial removal. American Journal of Mathematical and Management Sciences, 37(2), 188-208.

[9] Ng, H.K.T., Chan, P.S. and Balakrishnan, N. (2004), Optimal progressive censoring plans for the Weibull distribution. Technometrics, 46(4), 470-481.

[10] Gunasekera, S. (2018), Inference for the Burr XII reliability under progressive censoring with random removals. Mathematics and Computers in Simulation, 144, 182-195.

[11] Sharafi, M. (2019), Inference of the two-parameter Lindley distribution based on progressive type II censored data with random removals. Communications in Statistics - Simulation and Computation, 1-15.
[12] Tse, S.K., Yang, C. and Yuen, H.K. (2000), Statistical analysis of Weibull distributed lifetime data under Type II progressive censoring with binomial removals. Journal of Applied Statistics, 27(8), 1033-1043.

[13] Tse, S.K. and Yuen, H.K. (1998), Expected experiment times for the Weibull distribution under progressive censoring with random removals. Journal of Applied Statistics, 25(1), 75-83.

[14] Wu, S.J. and Huang, S.R. (2010), Optimal warranty length for a Rayleigh distributed product with progressive censoring. IEEE Transactions on Reliability, 59(4), 661-666.

[15] Wu, C.C., Wu, S.F. and Chan, H.Y. (2006), MLE and the estimated expected test time for the two-parameter Gompertz distribution under progressive censoring with binomial removals. Applied Mathematics and Computation, 181(2), 1657-1670.

[16] Yan, W., Shi, Y., Song, B. and Mao, Z. (2011), Statistical analysis of generalized exponential distribution under progressive censoring with binomial removals. Journal of Systems Engineering and Electronics, 22(4), 707-714.

[17] Yuen, H.K. and Tse, S.K. (1996), Parameter estimation for Weibull distributed lifetimes under progressive censoring with random removals. Journal of Statistical Computation and Simulation, 55(1-2), 57-71.
A Polya Process-Based Optimal Preventive Maintenance for ComplexSystems
Hashemi, M.1, and Asadi, M.1,2
1 Department of Statistics, Faculty of Mathematics and Statistics, Universityof Isfahan, Isfahan 81746-73441, Iran
2 School of Mathematics, Institute of Research in Fundamental Sciences(IPM), P.O Box 19395-5746, Tehran, Iran
Abstract: We propose an optimal preventive maintenance strategy for n-component coherent systems. It is assumed that in the early period of system operation all failed components are repaired, such that the state of a failed component returns to a working state worse than that prior to the failure. To model this repair action, we utilize a counting process on the interval (0,τ], known as the generalized Polya process (which subsumes the non-homogeneous Poisson process as a special case). A generalized Polya process-based repair strategy is proposed. The criterion to be optimized is the cost function formulated based on the costs of the repairs of failed components/system, to obtain the optimal time of preventive maintenance of the system. To illustrate the theoretical results, a coherent system is studied for which the optimal preventive maintenance times are explored under different conditions.
Keywords: Preventive Maintenance, Corrective Maintenance, Minimal Re-pair, Generalized Polya Process, Signature.
1Hashemi, M.: [email protected]
Hashemi, M., and Asadi, M. 98
1 Introduction
In recent years, there has been an increasing interest in assessing optimal maintenance models for multi-component systems. Maintenance actions can be categorized into two types: preventive maintenance (PM) and corrective maintenance (CM). The PM action is carried out on an operating system to restore it to a better working condition, whereas the CM action is performed on a failed system and restores the system to an operating condition. The CM and PM actions can themselves be categorized according to restoration degrees. They may range from minimal maintenance to perfect maintenance. The intermediate situations are known as imperfect maintenance. Minimal maintenance is a popular repair action that restores the system to a working state similar to that prior to the failure. However, in practice, the repair may bring the system back to a working state worse than that prior to the failure. This eventually results in system degradation and hence a decrease in the reliability performance of the system. For an example of how this may arise in practical situations, see Cha and Finkelstein (2018).
In maintenance policies, any policy of repairing over a period of time can be modeled by a counting process. The perfect repair, under which the system returns to an as-good-as-new state, is described by the renewal process. The minimal repair, which restores the system state after the repair to the as-bad-as-old condition, corresponds to the non-homogeneous Poisson process (NHPP). It was shown recently by Cha (2014) that the process under which the repair brings back the system to a working state worse than that prior to failure can be described by the so-called Generalized Polya Process (GPP). For some recent maintenance models based on GPP, one can refer to Lee and Cha (2016) and Badía et al. (2018).
Let {N(t), t ≥ 0} be an orderly counting process on the interval (0, t]. It is assumed that H_{t−} ≡ {N(ν), 0 ≤ ν < t} is the history of the process in [0, t). The notion of the stochastic intensity λ_t, which can be utilized to describe mathematical properties of the process, is defined as (Aven and Jensen (1999))

$$\lambda_t \equiv \lim_{\Delta t\to 0}\frac{P\big(N(t,t+\Delta t)=1\mid H_{t-}\big)}{\Delta t} = \lim_{\Delta t\to 0}\frac{E\big(N(t,t+\Delta t)\mid H_{t-}\big)}{\Delta t},$$

where N(s,t), s < t, denotes the number of events in [s,t). The following definition gives the concept of the GPP.
Definition 1.1. A counting process {N(t), t ≥ 0} is called the GPP with the setof parameters (λ (t),α,β ), α ≥ 0, β > 0, if
(i) N(0) = 0;

(ii) λt = (αN(t−) + β)λ(t).
In the special case α = 0 and β = 1, the GPP reduces to the NHPP with intensity function λ(t). The following results, due to Cha (2014), are useful in our derivations.

• For the GPP {N(t), t ≥ 0} with parameter set (λ(t), α, β), α ≥ 0, β > 0, the count N(t) follows a negative binomial distribution with parameters (exp{−αΛ(t)}, β/α), where Λ(t) = ∫_0^t λ(u) du.

• Assuming that u > 0 is a fixed time point, the process {N_u(t), t ≥ 0} with N_u(t) = N(u + t) − N(u) is called the future process from u. The future process of the GPP is also a GPP with parameter set (ψ(t,u), α, β), where

$$\psi(t,u) = \frac{\lambda(u+t)\exp\{\alpha\Lambda(u+t)\}}{1+\exp\{\alpha\Lambda(u+t)\}-\exp\{\alpha\Lambda(u)\}}. \qquad (1)$$

The corresponding stochastic intensity $\lambda_t^u$ is given by

$$\lambda_t^u = \big(\alpha\,[N((u+t)^-) - N(u^-)] + \beta\big)\,\psi(t,u).$$

• For a GPP with parameter set (λ(t), α, β), the survival function of the time until the first failure is given by

$$\bar{F}(t) = \exp\Big\{-\int_0^t \beta\,\lambda(u)\,du\Big\}, \qquad t \ge 0. \qquad (2)$$
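Assuming the baseline intensity λ(t) is bounded on the simulation horizon, a GPP path with stochastic intensity λt = (αN(t−) + β)λ(t) can be simulated by thinning; the following is an illustrative sketch (names are ours), not part of the cited works.

```python
import random

def simulate_gpp(rate, rate_bound, alpha, beta, horizon):
    # thinning: between accepted events the stochastic intensity
    # lambda_t = (alpha * N(t-) + beta) * rate(t) is bounded above by
    # (alpha * N(t-) + beta) * rate_bound, which drives the candidate clock
    t, events = 0.0, []
    while True:
        lam_max = (alpha * len(events) + beta) * rate_bound
        t += random.expovariate(lam_max)
        if t > horizon:
            return events
        if random.random() * lam_max < (alpha * len(events) + beta) * rate(t):
            events.append(t)              # candidate accepted: N jumps by one
```

With α = 0 and β = 1 this reduces to an ordinary NHPP with intensity λ(t), mirroring the special case noted above; with α > 0 each accepted event raises the intensity, producing the contagion effect of the GPP repair model.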
Hashemi, M., and Asadi, M. 100
Throughout the paper we assume, without loss of generality, that β = 1 (for more details see Badía et al. (2018)). A repair on the system is said to be a GPP repair with parameter α if {N(t), t ≥ 0} is a GPP with parameter set (λ(t), α, 1).
In the present study, we assume that the system under consideration is coher-ent and consists of n components whose lifetimes are independent and identi-cally distributed according to a distribution function F with the correspondingprobability density function f . Using the notion of the signature of the system,introduced by Samaniego (1985), the reliability function of the system lifetimecan be represented as mixture of the reliability functions of the ordered life-times of the components. Let X1:n,X2:n, . . . ,Xn:n denote the ordered lifetimesof components. Then the reliability function of the system’s lifetime T , at anytime t, may be expressed as
\[
P(T > t) = \sum_{i=0}^{n-1} S_i \binom{n}{i} F^i(t)\,(1 - F(t))^{n-i},
\]
where $s_i = P(T = X_{i:n})$, $i = 1, 2, \ldots, n$, and $S_i = \sum_{j=i+1}^{n} s_j$; see also Samaniego
(2007). The aim of the present study is to propose PM models for complex coherent systems consisting of n ≥ 1 components.
2 Maintenance models for coherent systems
In this section, we propose a new maintenance model for a multi-component coherent system based on the concepts discussed in the previous sections. Assume that a new coherent system consisting of n components begins to operate at time t = 0. Upon the failure of each component of the system in the interval (0,τ], a GPP repair is immediately carried out. After τ, we apply an age-based PM policy for the system. If the system fails in the interval (τ,TPM), the operator performs a CM on the failed components and a PM on the unfailed components. If the age of the system reaches TPM, the operator performs a PM on the whole system.
The 6th Seminar on Reliability Theory and its Applications 101
Although the system is still working at TPM, it should be noted that some components may have failed in the interval (τ,TPM). After a perfect maintenance, either a CM/PM on components or a PM at TPM, the process repeats. The maintenance model is drawn in Figure 1.
Figure 1: The proposed maintenance model (GPP repairs on (0,τ]; a system failure in (τ,TPM) triggers CM with PM on components; reaching TPM triggers PM on the system).
In the following, we investigate the optimal values of the decision variables by introducing an expected cost function based on the repair costs of failed and unfailed items. In the above-mentioned maintenance policy, τ and TPM are the decision variables that influence the maintenance cost.
2.1 Expected cost function
First, we evaluate the expected cost of the system maintenance per renewal period. To assess the mean cost in the time interval (0,τ], we utilize a general cost function proposed by Sheu (1991) to investigate the optimal PM time for a system whose components undergo minimal repair at failures (see also Pham and Wang (2000)). The cited author defined a cost function of the form h(c1(t,i), c2(t)), which is nondecreasing in t and i. The function contains a deterministic part c1(t,i), corresponding to the ith minimal repair at age t, and an age-dependent random part c2(t). In what follows, we use the same cost function h(c1(t,i), c2(t)) in the case where a GPP repair is performed on each failed component. Let N(τ) denote the total number of GPP repairs performed on each component in the interval (0,τ]. Suppose that S1, S2, ... are the successive failure times at which GPP repairs have been performed. Thus, the expected cost of repairs for the whole system in a renewal cycle is
\[
c_\tau = n\,E\bigg[\sum_{i=1}^{N(\tau)} h(c_1(S_i, i), c_2(S_i))\bigg]. \tag{3}
\]
Under the GPP repair model, cτ is given by (see Hashemi and Asadi (2020) for the proof)
\[
c_\tau = n\int_0^\tau E_{N(t)}E_{c_2(t)}\big[\lambda_t\,h(c_1(t, N(t)+1), c_2(t))\big]\,dt. \tag{4}
\]
In the special case where h(c1(t,i), c2(t)) is a constant, say cGPP, the expected cost of GPP repair of the whole system, cτ in Equation (3), simplifies to
\[
c_\tau = n\,c_{GPP}\,E(N(\tau)) = n\,c_{GPP}\,\frac{1}{\alpha}\big(\exp\{\alpha\Lambda(\tau)\} - 1\big).
\]
In the time interval (τ,TPM), the system fails at the rth component failure with conditional probability $P(T_\tau = (X_\tau)_{r:n} \mid T_\tau < T_{PM} - \tau)$, r = 1, 2, ..., n, where $T_\tau$ denotes the residual lifetime of the system whose components are under GPP repair on (0,τ], and $(X_\tau)_{r:n}$ is the rth order statistic from a population with reliability function $\bar{F}_\tau(t)$, which, from Equations (1) and (2), is given by
\[
\bar{F}_\tau(t) = \exp\bigg\{-\int_\tau^{t+\tau} \frac{\lambda(u)\,e^{\alpha\Lambda(u)}}{1 + e^{\alpha\Lambda(u)} - e^{\alpha\Lambda(\tau)}}\,du\bigg\}
= \Big(1 + e^{\alpha\Lambda(t+\tau)} - e^{\alpha\Lambda(\tau)}\Big)^{-\frac{1}{\alpha}}, \quad t \ge 0. \tag{5}
\]
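As a numerical sanity check on Equation (5), the closed form can be compared with its integral representation; the sketch below assumes the illustrative intensity λ(u) = 0.01(u/300)², for which Λ(t) = t³/(2.7×10⁷), and arbitrary values of α and τ.

```python
import math

alpha, tau = 0.2, 100.0

def lam(u):
    return 0.01 * (u / 300.0) ** 2

def Lam(t):
    return t ** 3 / 2.7e7          # integral of lam from 0 to t

def Fbar_tau_closed(t):
    return (1.0 + math.exp(alpha * Lam(t + tau))
            - math.exp(alpha * Lam(tau))) ** (-1.0 / alpha)

def Fbar_tau_integral(t, steps=20000):
    # trapezoidal rule applied to the exponent in Equation (5)
    a, b = tau, tau + t
    h = (b - a) / steps
    def g(u):
        return (lam(u) * math.exp(alpha * Lam(u))
                / (1.0 + math.exp(alpha * Lam(u)) - math.exp(alpha * Lam(tau))))
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, steps))
    return math.exp(-h * s)

for t in (10.0, 50.0, 150.0):
    assert abs(Fbar_tau_closed(t) - Fbar_tau_integral(t)) < 1e-6
```

Both forms agree to numerical precision, confirming the reconstruction of Equation (5).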
Let cCM, cPM and cPMS be the cost of CM for each component, the cost of PM for each component, and the cost of PM for the entire system, respectively. If the system fails in the interval (τ,TPM), the mean cost is Hτ(TPM−τ)S(TPM), where
\[
H_\tau(t) = 1 - P(T_\tau > t) = 1 - \sum_{j=0}^{n-1} S_j \binom{n}{j}\big(1 - \bar{F}_\tau(t)\big)^j\,\bar{F}_\tau^{\,n-j}(t), \tag{6}
\]
and
\[
S(T_{PM}) = \sum_{r=1}^{n} \big(r\,c_{CM} + (n-r)\,c_{PM}\big)\,P\big(T_\tau = (X_\tau)_{r:n} \mid T_\tau < T_{PM} - \tau\big). \tag{7}
\]
From Equation (7) of Mahmoudi and Asadi (2011), we have
\[
P\big(T_\tau = (X_\tau)_{r:n} \mid T_\tau < T_{PM} - \tau\big) = \frac{s_r\,F_{(X_\tau)_{r:n}}(T_{PM} - \tau)}{\sum_{i=1}^{n} s_i\,F_{(X_\tau)_{i:n}}(T_{PM} - \tau)},
\]
where $F_{(X_\tau)_{r:n}}$ is the distribution function of $(X_\tau)_{r:n}$, given by
\[
F_{(X_\tau)_{r:n}}(t) = \sum_{j=r}^{n} \binom{n}{j}\big(1 - \bar{F}_\tau(t)\big)^j\,\bar{F}_\tau^{\,n-j}(t), \quad t \ge 0. \tag{8}
\]
On the other hand, if the system lifetime reaches TPM after τ, the mean cost is $\bar{H}_\tau(T_{PM}-\tau)\,c_{PMS}$, where $\bar{H}_\tau = 1 - H_\tau$.
Thus, assuming that all maintenance actions take negligible times, the meancost rate for the coherent system under this maintenance model is
\[
\eta(\tau, T_{PM}) = \frac{c_\tau + H_\tau(T_{PM}-\tau)\,S(T_{PM}) + \bar{H}_\tau(T_{PM}-\tau)\,c_{PMS}}{\tau + E\big(\min(T_{PM}-\tau,\,T_\tau)\big)}, \tag{9}
\]
where
\[
E\big(\min(T_{PM}-\tau,\,T_\tau)\big) = \int_0^{T_{PM}-\tau} \bar{H}_\tau(t)\,dt.
\]
The aim here is to minimize η(τ,TPM) with respect to the decision variables (τ,TPM); that is, we should find the values τ∗ and T∗PM such that
\[
\eta(\tau^*, T_{PM}^*) = \min_{\tau < T_{PM}} \eta(\tau, T_{PM}).
\]
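In practice this minimization can be carried out by a straightforward grid search. The sketch below assembles η(τ,TPM) from Equations (5)–(9) for the constant-cost case h = cGPP; the signature used (that of a five-component bridge structure), the intensity λ(t) = 0.01(t/300)² and all cost values are illustrative assumptions, not the settings of the numerical example in Section 3, and a much finer grid would be needed for accurate optima.

```python
from math import comb, exp

# illustrative inputs (assumptions for the sketch)
alpha, c_gpp, c_cm, c_pm, c_pms = 0.1, 0.7, 4.0, 2.0, 10.0
s = [0.0, 1/5, 3/5, 1/5, 0.0]                 # signature of a bridge system
n = len(s)
Sbar = [sum(s[i:]) for i in range(n)]         # Sbar[i] = sum_{j > i} s_j

def Lam(t):
    return t ** 3 / 2.7e7                     # cumulative intensity

def Fbar_tau(t, tau):
    # Equation (5): component survival after GPP repairs on (0, tau]
    return (1.0 + exp(alpha * Lam(t + tau)) - exp(alpha * Lam(tau))) ** (-1.0 / alpha)

def Hbar_tau(t, tau):
    # P(T_tau > t) via the signature representation (Equation (6))
    Fb = Fbar_tau(t, tau)
    return sum(Sbar[i] * comb(n, i) * (1 - Fb) ** i * Fb ** (n - i) for i in range(n))

def order_stat_cdf(r, t, tau):
    # Equation (8): cdf of the r-th ordered component lifetime after tau
    Fb = Fbar_tau(t, tau)
    return sum(comb(n, j) * (1 - Fb) ** j * Fb ** (n - j) for j in range(r, n + 1))

def eta(tau, t_pm, grid=100):
    u = t_pm - tau
    c_tau = n * c_gpp * (exp(alpha * Lam(tau)) - 1.0) / alpha
    h = u / grid                              # E[min(T_PM - tau, T_tau)], trapezoid
    e_min = h * (0.5 * (Hbar_tau(0.0, tau) + Hbar_tau(u, tau))
                 + sum(Hbar_tau(i * h, tau) for i in range(1, grid)))
    w = [s[r - 1] * order_stat_cdf(r, u, tau) for r in range(1, n + 1)]
    tot = sum(w)                              # normalizer of the conditional probs
    s_cost = sum((r * c_cm + (n - r) * c_pm) * w[r - 1] / tot for r in range(1, n + 1))
    h_u = 1.0 - Hbar_tau(u, tau)              # H_tau(T_PM - tau)
    return (c_tau + h_u * s_cost + (1.0 - h_u) * c_pms) / (tau + e_min)

best = min(((tau, t_pm) for tau in range(50, 401, 25)
            for t_pm in range(75, 526, 25) if t_pm > tau),
           key=lambda p: eta(*p))
print("coarse-grid optimum (tau, T_PM):", best, "cost rate:", round(eta(*best), 5))
```

The printed pair is only the coarse-grid minimizer; it illustrates the workflow rather than reproducing the paper's reported optima.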
Remark 2.1. In the model described above, the special case α → 0 corresponds to minimal repair in the interval (0,τ]. In this case, using Equation (4), the mean cost of repair on (0,τ], cτ, is given by
\[
c_\tau = n\int_0^\tau E_{N(t)}E_{c_2(t)}\big[h(c_1(t, N(t)+1), c_2(t))\big]\,\lambda(t)\,dt,
\]
(see also Sheu (1991)). In particular, if h(c1(t,i), c2(t)) = cmin (a constant), then
\[
c_\tau = n\,c_{\min} \lim_{\alpha \to 0} \frac{1}{\alpha}\big(\exp\{\alpha\Lambda(\tau)\} - 1\big) = n\,c_{\min}\,\Lambda(\tau).
\]
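This limit is easy to confirm numerically: as α → 0, the GPP repair cost with a common repair cost cmin approaches the minimal-repair cost n·cmin·Λ(τ). The values below are illustrative.

```python
import math

n, c_min = 10, 1.0
Lam_tau = 1.7                        # illustrative value of Λ(τ)
minimal_repair_cost = n * c_min * Lam_tau

for alpha in (1e-2, 1e-4, 1e-6):
    gpp_cost = n * c_min * (math.exp(alpha * Lam_tau) - 1.0) / alpha
    # the GPP cost approaches n * c_min * Λ(τ) as α → 0 (error is O(α))
    assert abs(gpp_cost - minimal_repair_cost) < 20 * alpha
```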
On the other hand, as α → 0,
\[
\bar{F}_\tau(t) = \exp\Big\{-\int_\tau^{\tau+t} \lambda(u)\,du\Big\} = \frac{\bar{F}(t+\tau)}{\bar{F}(\tau)}, \quad t \ge 0.
\]
Hence, the mean cost rate of the system repair is given by (9) with the following changes:
\[
H_\tau(t) = 1 - \sum_{j=0}^{n-1} S_j \binom{n}{j}\bigg(1 - \frac{\bar{F}(t+\tau)}{\bar{F}(\tau)}\bigg)^j \bigg(\frac{\bar{F}(t+\tau)}{\bar{F}(\tau)}\bigg)^{n-j}, \quad t \ge 0,
\]
and
\[
F_{(X_\tau)_{r:n}}(t) = \sum_{j=r}^{n} \binom{n}{j}\bigg(1 - \frac{\bar{F}(t+\tau)}{\bar{F}(\tau)}\bigg)^j \bigg(\frac{\bar{F}(t+\tau)}{\bar{F}(\tau)}\bigg)^{n-j}, \quad t \ge 0.
\]
2.2 Availability criterion
Another well-known criterion in maintenance policies is the stationary availability of the system, defined as the ratio of the average time that the system is in an operating state to the average length of a cycle. Assume that in the above model CM and PM repairs on each component take wCM and wPM time units, respectively, and PM on the system takes wPMS time units. The stationary availability is then given by
\[
A(\tau, T_{PM}) = \frac{\tau + E\big(\min(T_{PM}-\tau,\,T_\tau)\big)}{\tau + E\big(\min(T_{PM}-\tau,\,T_\tau)\big) + H_\tau(T_{PM}-\tau)\,W(T_{PM}) + \bar{H}_\tau(T_{PM}-\tau)\,w_{PMS}},
\]
where
\[
W(T_{PM}) = \sum_{r=1}^{n} \big(r\,w_{CM} + (n-r)\,w_{PM}\big)\,P\big(T_\tau = (X_\tau)_{r:n} \mid T_\tau < T_{PM} - \tau\big). \tag{10}
\]
We maximize A(τ,TPM) with respect to (τ,TPM); that is, we find the values τ∗ and T∗PM such that
\[
A(\tau^*, T_{PM}^*) = \max_{\tau < T_{PM}} A(\tau, T_{PM}).
\]
3 Numerical example
The following example illustrates the theoretical results given above.
Example 3.1. Consider a coherent system consisting of 10 components with the structure pictured in Figure 2.
Figure 2: A system with 10 components.
The signature of the system is computed, using a Mathematica program, as
\[
\mathbf{s} = \Big(0,\ \tfrac{1}{45},\ \tfrac{37}{360},\ \tfrac{57}{280},\ \tfrac{163}{630},\ \tfrac{143}{630},\ \tfrac{19}{140},\ \tfrac{1}{20},\ 0,\ 0\Big).
\]
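A quick check with exact rational arithmetic confirms that these coordinates form a valid signature (nonnegative, summing to one); the system reliability can then be evaluated from the signature representation of Section 1. The argument F below stands for the common component distribution function F(t) at an arbitrary time.

```python
from fractions import Fraction
from math import comb

s = [Fraction(0), Fraction(1, 45), Fraction(37, 360), Fraction(57, 280),
     Fraction(163, 630), Fraction(143, 630), Fraction(19, 140),
     Fraction(1, 20), Fraction(0), Fraction(0)]
assert sum(s) == 1 and all(p >= 0 for p in s)   # a valid signature

n = len(s)
Sbar = [float(sum(s[i:])) for i in range(n)]    # S_i = sum_{j > i} s_j

def system_reliability(F):
    # P(T > t) = sum_i S_i C(n, i) F^i (1 - F)^(n - i), with F = F(t)
    return sum(Sbar[i] * comb(n, i) * F ** i * (1 - F) ** (n - i)
               for i in range(n))

assert abs(system_reliability(0.0) - 1.0) < 1e-12   # new system is working
assert abs(system_reliability(1.0) - 0.0) < 1e-12   # all components failed
```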
The lifetimes of the components are assumed to be independent and to have a common Weibull distribution with failure rate λ(t) = 0.01(t/300)². In the interval (0,τ], the components fail according to the GPP and are assumed to be repaired immediately. Suppose that all maintenance actions take negligible time. The effects of different values of α, cGPP, cPMS, cPM and cCM on both the optimal time until terminating GPP repair and the optimal PM time are analyzed. Table 1 contains the optimum values (τ∗,T∗PM) and the optimum cost η(τ∗,T∗PM). It is seen that when α increases, both τ∗ and T∗PM decrease, although η(τ∗,T∗PM) increases. This may be justified by noting that an increase in α results in a worse state of the system after a GPP repair. On the other hand, when cPMS increases, τ∗, T∗PM and η(τ∗,T∗PM) all increase. This means that an increase in the cost of preventive maintenance of the system makes the operator postpone both the time until terminating GPP repair and the time of the PM action. A similar behaviour is observed when cPM increases. Note that an increase in cCM results in an increase in η(τ∗,T∗PM). It is also interesting to note that when cCM increases, then, as expected, the length of the time interval (τ∗,T∗PM) declines and hence, after τ∗, an earlier PM action is carried out.
Table 1: Optimal PM times (τ∗,T ∗PM) for different values of α , cPMS, cPM and cCM with cGPP = 0.7.
cPM = 2, cCM = 4 cPM = 1, cCM = 4 cPM = 2, cCM = 3
α cPMS τ∗ T ∗PM η(τ∗,T ∗PM) τ∗ T ∗PM η(τ∗,T ∗PM) τ∗ T ∗PM η(τ∗,T ∗PM)
0.1 8 224.07 239.88 0.0470 219.27 239.11 0.0467 218.84 239.05 0.0467
10 243.21 257.69 0.0550 238.33 256.92 0.0546 237.86 256.85 0.0545
12 259.04 272.87 0.0624 253.71 272.07 0.0619 253.18 271.99 0.0618
0.2 8 219.41 235.41 0.0474 214.49 234.67 0.0471 214.05 234.62 0.0470
10 237.27 251.91 0.0555 232.28 251.20 0.0552 231.79 251.14 0.0551
12 251.82 265.81 0.0631 246.37 265.07 0.0626 245.82 265.01 0.0626
0.3 8 215.25 231.45 0.0478 210.20 230.74 0.0474 209.75 230.70 0.0474
10 232.11 246.93 0.0560 226.98 246.25 0.0556 226.49 246.20 0.0555
12 245.69 259.84 0.0637 240.07 259.16 0.0632 239.50 259.10 0.0632
In Table 2, the optimum times (τ∗,T∗PM) and the optimum cost η(τ∗,T∗PM) are presented for several values of α and cGPP, with fixed values cPM = 2, cCM = 4 and cPMS = 10. As observed, when cGPP increases, both τ∗ and T∗PM decrease. Note that for larger α, an increase in cGPP leads to a smaller reduction in τ∗ and T∗PM.
Table 2: Optimal PM times (τ∗,T∗PM) for different values of α and cGPP with cPM = 2, cCM = 4, cPMS = 10.
cPMS = 10, cPM = 2, cCM = 4
α cGPP τ∗ T ∗PM η(τ∗,T ∗PM)
0.1 0.5 278.79 287.19 0.0500
0.7 243.21 257.69 0.0550
0.9 214.12 236.51 0.0586
0.2 0.5 270.37 278.78 0.0507
0.7 237.27 251.91 0.0555
0.9 209.39 232.24 0.0591
0.3 0.5 263.38 271.81 0.0513
0.7 232.11 246.93 0.0560
0.9 205.11 228.45 0.0594
In order to compare the GPP and NHPP repairs, Table 3 gives the optimal PM times when the components are minimally repaired in the interval (0,τ]. When cMIN gets bigger, so does the optimal cost, but both τ∗ and T∗PM decrease. The effect of increasing the other cost parameters, i.e., cPMS, cPM and cCM, on τ∗, T∗PM and η(τ∗,T∗PM) is similar to the case where the GPP repair is performed in (0,τ]; see Table 1. A
comparison of Tables 1 and 3 with cPMS = 10, cPM = 2 and cCM = 4 shows that GPP repair is preferable to minimal repair, since the values of η(τ∗,T∗PM) in Table 1 are less than the corresponding ones in Table 3.
Table 3: Optimal PM times (τ∗,T ∗PM) for different values of cPMS, cPM and cCM with cMIN = 1 and cMIN = 1.2.
cPM = 2, cCM = 4 cPM = 1, cCM = 4 cPM = 2, cCM = 3
cMIN cPMS τ∗ T ∗PM η(τ∗,T ∗PM) τ∗ T ∗PM η(τ∗,T ∗PM) τ∗ T ∗PM η(τ∗,T ∗PM)
1 8 178.06 213.36 0.0504 156.52 210.78 0.0488 168.64 212.35 0.0498
10 199.02 230.98 0.0590 178.42 228.41 0.0572 189.48 229.94 0.0583
12 215.61 246.17 0.0670 194.08 243.58 0.0649 204.99 245.10 0.0661
1.2 8 145.86 198.19 0.0519 106.38 195.59 0.0497 128.58 197.08 0.0510
10 168.97 215.00 0.0611 135.99 212.47 0.0586 152.97 213.91 0.0600
12 186.05 229.46 0.0696 153.67 227.10 0.0666 168.95 228.45 0.0682
Figure 3 depicts the three-dimensional plot of the cost function in terms of (τ,TPM) for α = 0.1, cGPP = 0.7, cPM = 2, cCM = 4 and cPMS = 10.

Figure 3: The plot of the cost function for α = 0.1, cGPP = 0.7, cPM = 2, cCM = 4 and cPMS = 10.

In Figure 4(a), the graphs of the cost function are presented for different values of τ and fixed values α = 0.1, cGPP = 0.7, cPM = 2, cCM = 4 and cPMS = 10. The cost function is also plotted in Figure 4(b) for different values of α and a fixed value τ = 220.
Figure 4: (a) The cost function for τ = 100 (bold line), τ = 150 (dotted line), τ = 200 (dashed line); (b) The cost function for α = 0.1,0.2,0.3
from down to up.
The stationary availability is plotted in Figure 5(a) for different values ofτ and fixed values of α = 0.1 and the repair times wPM = 0.01, wCM = 0.02and wPMS = 0.1. Also, the stationary availability is plotted in Figure 5(b) fordifferent values of α and fixed value of τ = 220.
Figure 5: (a) The stationary availability for τ = 100,150,200 from down to up; (b) The stationary availability for α = 0.05,0.3,0.5 from up to
down.
References
[1] Aven, T. and Jensen, U. (1999). Stochastic models in reliability. Springer,New York.
[2] Badía, F.G., Berrade, M.D., Cha, J.H. and Lee, H. (2018). Optimal replacement policy under a general failure and repair model: Minimal versus worse than old repair. Reliability Engineering & System Safety, 180, 362–372.

[3] Cha, J.H. (2014). Characterizations of the generalized Polya process and its applications. Advances in Applied Probability, 46, 1148–1171.

[4] Cha, J.H. and Finkelstein, M. (2018). On preventive maintenance under different assumptions on the failure/repair processes. Quality and Reliability Engineering International, 34, 66–77.

[5] Hashemi, M. and Asadi, M. (2020). New approaches to optimal preventive maintenance of coherent systems. Submitted.
[6] Lee, H. and Cha, J.H. (2016). New stochastic models for preventive maintenance and maintenance optimization. European Journal of Operational Research, 255, 80–90.

[7] Mahmoudi, M. and Asadi, M. (2011). The dynamic signature of coherent systems. IEEE Transactions on Reliability, 60(4), 817–822.

[8] Pham, H. and Wang, H. (2000). Optimal (τ,T) opportunistic maintenance of a k-out-of-n:G system with imperfect PM and partial failure. Naval Research Logistics, 47, 223–239.
[9] Samaniego, F.J. (1985). On closure of the IFR class under formation ofcoherent systems. IEEE Transactions on Reliability, 34(1), 69–72.
[10] Samaniego, F.J. (2007). System Signatures and Their Applications in Engineering Reliability. Springer, New York.

[11] Sheu, S. (1991). Generalized block replacement policy with minimal repair and general random repair costs for a multi-unit system. Journal of the Operational Research Society, 42(4), 331–341.
Optimal Design of Accelerated Life Tests Under Periodic Inspection and Type-I Censoring for Burr Type-X Distribution
Hakamipour, N.1
1 Department of Mathematics, Buein Zahra Technical University, Buein Zahra, Qazvin, Iran
Abstract: For Burr-Type X distributed lifetimes, optimal accelerated life test plans are determined under the assumptions of periodic inspection and Type I censoring. Computational results indicate that, for the range of parameter values considered, the asymptotic variance of the estimated mean or pth quantile at the use stress is not sensitive to the number of inspections at overstress levels. Sensitivity analyses are also conducted to see how sensitive the asymptotic variance of the estimated mean is with respect to the uncertainties involved in the guessed failure probabilities at the use and high stress levels.
Keywords: Accelerated Life Testing, Burr-Type X Distribution, OptimumDesign, Periodic Inspection, Step Stress.
1 Introduction
Previous studies on accelerated life testing (ALT) assumed continuous inspection of test items. However, further reductions in testing effort and administrative convenience may be achieved by employing periodic inspection, in which test items are checked only at certain points in time. The information obtained from a periodic inspection consists of the number of failures in each inspection period, resulting in "grouped" or "interval" data.
1Hakamipour, N.: [email protected]
Optimal ALT plans have been developed by several authors under the assumption of continuous inspection (e.g., see [4], [5], [6], [10] and [14]). On the other hand, studies on the statistical analysis of grouped data or on the design of a periodic inspection have been largely concerned with life tests conducted at the use condition (e.g., [1], [2], [9], [11]). The present investigation is an attempt to combine these interesting and important features of life tests, namely, acceleration and periodic inspection.
This paper considers ALT planning for items whose lifetimes follow a two-parameter Burr-Type X distribution.
Burr [3] introduced twelve different forms of cumulative distribution functions for modelling data. Among those twelve distribution functions, Burr-Type X and Burr-Type XII received the maximum attention. Several aspects of the one-parameter Burr-Type X distribution were studied by Sartawi and Abu-Salih [13], Jaheen [7, 8] and Raqab and Kundu [12]. Recently, Surles and Padgett [15] proposed the Burr-Type X distribution and observed that it can be used quite effectively in modelling strength data and also general lifetime data.
This paper is a generalization over previous works on the design of ALT plans for Type I censoring and periodic inspection. The Burr-Type X distribution is considered to describe the failure mechanism of the units under test. Statistically optimal ALT plans are developed for the Burr-Type X distribution under Type I censoring and periodic inspection at two test stress levels. It is assumed that a log-linear function relates the Burr-Type X scale parameter to the stress and that the shape parameter is constant and independent of the stress. The unknown parameters in the log-linear relationship are estimated by the maximum likelihood (ML) method. Under the assumption of a known shape parameter, the low test stress and the associated proportion of test units are optimally determined. The optimal test plans are derived by minimizing the asymptotic variance (AV) of the maximum likelihood estimator of the log mean life or of the qth quantile at the design stress.
Hakamipour, N. 112
Computational studies are conducted for various combinations of parameters to examine how the optimal plans vary with respect to these parameters at the design and high test stresses. Sensitivity analyses have also been performed for various combinations of parameters to assess the effect of misspecification of the imputed failure probabilities on the optimal plan at the design and high test stresses.
2 The Model
• Three test stress levels s0, s1, s2 are used such that s0 < s1 < s2, where s0 is the design stress level representing the use condition, and s1 and s2 are higher-than-usual stresses representing accelerated conditions.
• The lifetimes T of the test items are independently and identically distributed following the Burr-Type X distribution, whose probability density function is
\[
f(t) = (2\sigma t/\theta^2)\big(1 - e^{-(t/\theta)^2}\big)^{\sigma-1} e^{-(t/\theta)^2}, \quad t \ge 0,
\]
where σ > 0 is a shape parameter and θ > 0 is a scale parameter.
• The log-linear relationship between the mean lifetime θ and the stress level s is given by
\[
\theta = e^{\beta_0 + \beta_1 s}, \tag{1}
\]
where β0 and β1 are unknown parameters to be estimated. This relationship is frequently used in ALT; it includes the inverse power model and the Arrhenius reaction rate model.
• The shape parameter σ is independent of stress (constant for any stress).
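This density integrates to the Burr-Type X distribution function $F(t) = (1 - e^{-(t/\theta)^2})^\sigma$, which is easy to confirm numerically (illustrative parameter values):

```python
import math

def burr_x_pdf(t, sigma, theta):
    z = (t / theta) ** 2
    return (2 * sigma * t / theta ** 2) * (1 - math.exp(-z)) ** (sigma - 1) * math.exp(-z)

def burr_x_cdf(t, sigma, theta):
    return (1 - math.exp(-(t / theta) ** 2)) ** sigma

sigma, theta = 1.5, 2.0
t, steps = 3.0, 20000
h = t / steps
# trapezoidal integration of the density from 0 to t
area = h * (0.5 * (burr_x_pdf(0, sigma, theta) + burr_x_pdf(t, sigma, theta))
            + sum(burr_x_pdf(i * h, sigma, theta) for i in range(1, steps)))
assert abs(area - burr_x_cdf(t, sigma, theta)) < 1e-6
```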
Censoring plans are often used to shorten the time of life testing. We use a Type I censoring plan, which involves running each unit for a predetermined time. In this case, the censoring time is fixed while the number of failures is random.
Three stress levels are considered. That is, the use stress level s0, the lowstress level s1, and the high stress level s2. It is assumed that s0 and s2 areprespecified, while s1 is to be optimally determined.
The numbers of test items allocated to s1 and s2 are respectively given by
\[
n_1 = \alpha_1 N, \qquad n_2 = \alpha_2 N = (1 - \alpha_1)N,
\]
where N is the total number of test items and α1 is to be optimally determined. At $s_i$, $n_i$ units are put on test at time 0 and run until a prespecified time $t_{ci}$ (i.e., Type I censoring is assumed), and inspections are conducted only at specified points in time $t_{i1}, t_{i2}, \ldots, t_{iK(i)}$, where $t_{iK(i)} = t_{ci}$. In addition, let $t_{i0} = 0$ and $t_{i,K(i)+1} = \infty$; at stress level $s_i$, the number of failures $x_{ij}$ and the corresponding probability of failure $P_{ij}$ in the respective intervals $(t_{i,j-1}, t_{ij})$ are recorded for i = 1,2 and j = 1,2,...,K(i)+1.
The grouped data $\{x_{ij}\}$, i = 1,2; j = 1,2,...,K(i)+1, are used to estimate β0 and β1 in Equation (1). The estimated relationship is then extrapolated to estimate some quantities at the use condition. Of particular interest is the logarithm of the mean lifetime at the use condition, which is defined by
\[
\mu_0 = \ln\theta_0 = \beta_0 + \beta_1 s_0.
\]
Note that $t_q$, the qth quantile of the Burr-Type X distribution at the use condition, is related to $\mu_0$ as follows:
\[
y_q = \ln t_q = \beta_0 + \beta_1 s_0 + \tfrac{1}{2}\ln\big[-\ln(1 - q^{1/\sigma})\big].
\]
Let $\hat\beta_0$ and $\hat\beta_1$ be the ML estimates of β0 and β1, respectively. Then
\[
\hat\mu_0 = \hat\beta_0 + \hat\beta_1 s_0,
\]
and
\[
\hat{y}_q = \hat\mu_0 + \tfrac{1}{2}\ln\big[-\ln(1 - q^{1/\sigma})\big]. \tag{2}
\]
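The quantile relation can be verified by substituting $t_q = e^{y_q}$ back into the Burr-Type X distribution function (illustrative values; at the use condition, μ0 is just ln θ):

```python
import math

sigma, theta = 2.0, 5.0
mu0 = math.log(theta)

def burr_x_cdf(t):
    return (1 - math.exp(-(t / theta) ** 2)) ** sigma

for q in (0.1, 0.5, 0.9):
    yq = mu0 + 0.5 * math.log(-math.log(1 - q ** (1 / sigma)))
    tq = math.exp(yq)
    # the qth quantile recovered from y_q satisfies F(t_q) = q
    assert abs(burr_x_cdf(tq) - q) < 1e-12
```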
Let quantities with and without a prime represent the original and standardized scales, respectively. Then the following transformation makes the design and high stresses equal to 0 and 1, respectively, in the standardized scale:
\[
s = \frac{s' - s'_0}{s'_2 - s'_0},
\]
or equivalently,
\[
s' = s(s'_2 - s'_0) + s'_0.
\]
We also standardize all time-related variables with respect to the censoring time $t'_c$ (say $t_{c1} = t_{c2} = t'_c$); for instance, $t = t'/t'_c$ and $\theta = \theta'/t'_c$. Under the above standardization, the mean lifetime is represented by
\[
\theta = \frac{e^{\beta'_0 + \beta'_1 s'}}{t'_c}.
\]
Because $\theta = e^{\beta_0 + \beta_1 s}$, we have
\[
\beta_0 = \beta'_0 + \beta'_1 s'_0 - \ln t'_c, \qquad \beta_1 = \beta'_1(s'_2 - s'_0).
\]
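These relations simply re-express the same model on the standardized scale, as a quick numerical check confirms (all primed, original-scale values below are illustrative):

```python
import math

# original-scale quantities (illustrative)
b0p, b1p = 4.0, -0.05         # beta'_0, beta'_1
s0p, s2p = 10.0, 50.0         # design and high stress, original scale
tcp = 200.0                   # censoring time, original scale

b0 = b0p + b1p * s0p - math.log(tcp)
b1 = b1p * (s2p - s0p)

for sp in (10.0, 25.0, 50.0):               # original-scale stress s'
    s = (sp - s0p) / (s2p - s0p)            # standardized stress
    theta_std = math.exp(b0 + b1 * s)       # standardized mean life theta'/t'_c
    theta_orig = math.exp(b0p + b1p * sp)   # original-scale mean life theta'
    assert abs(theta_std - theta_orig / tcp) < 1e-9
```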
Then, from Equation (2), it can be shown that
\[
y'_q = \beta_0 + \ln t'_c + \tfrac{1}{2}\ln\big[-\ln(1 - q^{1/\sigma})\big].
\]
Note that $t'_c$ becomes 1 in the standardized time scale, so
\[
y_q = \beta_0 + \tfrac{1}{2}\ln\big[-\ln(1 - q^{1/\sigma})\big] = y'_q.
\]
Therefore, no generality is lost under the above transformation.
3 Maximum Likelihood Estimation and Optimal Plans
The likelihood function of the set of observations $\{x_{ij}\}_{j=1}^{K(i)+1}$, which are multinomially distributed with $n_i$ and $\{P_{ij}\}_{j=1}^{K(i)+1}$ at stress level $s_i$, is given by
\[
L' = \prod_{i=1}^{2} L'_i = \prod_{i=1}^{2} n_i!\bigg(\prod_{j=1}^{K(i)+1} x_{ij}!\bigg)^{-1}\bigg(\prod_{j=1}^{K(i)+1} P_{ij}^{x_{ij}}\bigg).
\]
Taking the logarithm of both sides, we get
\[
L = \ln L' = \sum_{i=1}^{2} \ln L'_i = C + \sum_{i=1}^{2}\sum_{j=1}^{K(i)+1} x_{ij}\ln P_{ij},
\]
where C is constant with respect to β0 and β1 and
\[
P_{ij} = \Big[1 - e^{-(t_{ij}/\theta_i)^2}\Big]^\sigma - \Big[1 - e^{-(t_{i,j-1}/\theta_i)^2}\Big]^\sigma,
\]
for i = 1,2 and j = 1,2,...,K(i)+1. Then the ML estimates of β0 and β1 are obtained from the following set of equations:
\[
\frac{\partial L}{\partial \beta_r} = \sum_{i=1}^{2} s_i^{\,r}\sum_{j=1}^{K(i)+1} \frac{x_{ij}(A_{i,j-1} - A_{ij})}{P_{ij}} = 0, \quad \text{for } r = 0,1,
\]
where
\[
A_{ij} = 2\sigma(t_{ij}/\theta_i)^2\, e^{-(t_{ij}/\theta_i)^2}\big(1 - e^{-(t_{ij}/\theta_i)^2}\big)^{\sigma-1},
\]
for i = 1,2 and j = 0,1,2,...,K(i)+1. The Fisher information matrix is
\[
F = N(f_{gh}), \quad g,h = 0,1,
\]
where
\[
f_{gh} = \sum_{i=1}^{2}\alpha_i s_i^{\,g+h}\sum_{j=1}^{K(i)+1}\frac{(A_{i,j-1} - A_{ij})^2}{P_{ij}}, \quad \text{for } g,h = 0,1.
\]
Note that $\partial P_{ij}/\partial\beta_1 = s_i\,\partial P_{ij}/\partial\beta_0$. The asymptotic covariance matrix of the ML estimates $\hat\beta_0$ and $\hat\beta_1$ is the inverse of the Fisher information matrix F (i.e., $V = \frac{1}{N}(f_{gh})^{-1}$).
The optimization problem is to determine $s_1$ and $\alpha_1$ that minimize $AV(\hat\mu_0)$. The AV of the MLE of the log mean life ($\hat\mu_0$) is
\[
AV(\hat\mu_0) = (1, s_0)\,V\,(1, s_0)' = AV(\hat\beta_0) + s_0^2\,AV(\hat\beta_1) + 2s_0\,ACov(\hat\beta_0, \hat\beta_1)
= N^{-1}(f_{00}f_{11} - f_{01}^2)^{-1}(f_{11} + s_0^2 f_{00} - 2s_0 f_{01}), \tag{3}
\]
which is also the AV of $\hat{y}_q$ (i.e., $AV(\hat{y}_q)$) for any q. The optimal plans are determined with the following simplifying assumptions and standardization:
1. The number of inspections at each stress level is the same, that is, K(1) =K(2) = K (known).
2. Parameters are standardized such that the common censoring time, as well as the high test stress, becomes 1 and the design stress becomes 0; that is, $t_c = s_2 = 1$ and $s_0 = 0$. Such standardization does not alter the nature of our problem.
Based upon the above assumptions and standardization, Equation (3) reduces to
\[
AV(\hat\mu_0) = AV(\hat\beta_0) = N^{-1}(f_{00}f_{11} - f_{01}^2)^{-1} f_{11}.
\]
In actual experiments, the following quantities are used instead of β0 and β1:

$P_u$ = probability that an item fails in $(0, t_u)$ at the use condition;

$P_h$ = probability that an item fails in $(0, t_h)$ at the high stress.

It is believed that $P_u$ and $P_h$ are more familiar to the experimenter and easier to estimate than β0 and β1. In the computational experiments, $t_u$ and $t_h$ are set to 1. Then, the corresponding β0 and β1 can be determined as follows:
\[
\beta_0 = \frac{1}{2}\ln\bigg(\frac{-1}{\ln(1 - P_u^{1/\sigma})}\bigg), \qquad
\beta_1 = \frac{1}{2}\ln\bigg(\frac{\ln(1 - P_u^{1/\sigma})}{\ln(1 - P_h^{1/\sigma})}\bigg).
\]
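As a round-trip check, substituting β0 and β1 computed from these expressions back into the Burr-Type X model with tu = th = 1, s0 = 0 and s2 = 1 recovers the assumed failure probabilities (illustrative values):

```python
import math

def betas_from_probs(Pu, Ph, sigma):
    b0 = 0.5 * math.log(-1.0 / math.log(1 - Pu ** (1 / sigma)))
    b1 = 0.5 * math.log(math.log(1 - Pu ** (1 / sigma))
                        / math.log(1 - Ph ** (1 / sigma)))
    return b0, b1

def fail_prob(s, b0, b1, sigma):
    # P(failure by t = 1) at standardized stress s
    theta = math.exp(b0 + b1 * s)
    return (1 - math.exp(-(1 / theta) ** 2)) ** sigma

Pu, Ph, sigma = 0.001, 0.9, 1.0
b0, b1 = betas_from_probs(Pu, Ph, sigma)
assert abs(fail_prob(0.0, b0, b1, sigma) - Pu) < 1e-12   # use condition
assert abs(fail_prob(1.0, b0, b1, sigma) - Ph) < 1e-12   # high stress
```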
The 6th Seminar on Reliability Theory and its Applications 117
For given values of K, $P_u$, $P_h$ and σ, the optimal values of $s_1$ and $\alpha_1$ are determined by a two-step procedure that minimizes $AV(\hat\mu_0)$. First, for a given $s_1$, we optimize $\alpha_1$ (say $\alpha_1^*$); that is, from Equation (3),
\[
\frac{\partial AV(\hat\mu_0)}{\partial \alpha_1} = N^{-1}\,\frac{(s_1^2 Q_1 - Q_2)\alpha_1^2 + 2Q_2\alpha_1 - Q_2}{Q_1 Q_2 (s_1 - 1)^2(-\alpha_1^2 + \alpha_1)^2} = 0,
\]
where
\[
Q_i = \sum_{j=1}^{K(i)+1} \frac{(A_{i,j-1} - A_{ij})^2}{P_{ij}}, \quad \text{for } i = 1,2.
\]
The optimum value of $\alpha_1$, $0 < \alpha_1 < 1$, is given by
\[
\alpha_1^* = \frac{-Q_2 + \sqrt{s_1^2 Q_1 Q_2}}{s_1^2 Q_1 - Q_2}.
\]
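The closed form for $\alpha_1^*$ is the root in (0,1) of the quadratic $(s_1^2 Q_1 - Q_2)\alpha_1^2 + 2Q_2\alpha_1 - Q_2 = 0$ appearing in the derivative above; under the standardization $s_0 = 0$, $s_2 = 1$, the objective $N\cdot AV(\hat\mu_0)$ reduces to $f_{11}/(f_{00}f_{11} - f_{01}^2)$. The sketch below (illustrative $Q_1$, $Q_2$ and $s_1$) checks the closed form against a direct grid minimization:

```python
import math

def alpha1_star(Q1, Q2, s1):
    # root in (0, 1) of (s1^2 Q1 - Q2) a^2 + 2 Q2 a - Q2 = 0
    return (-Q2 + s1 * math.sqrt(Q1 * Q2)) / (s1 ** 2 * Q1 - Q2)

def n_av(a, Q1, Q2, s1):
    # N * AV(mu0_hat) = f11 / (f00 f11 - f01^2) with s0 = 0, s2 = 1
    f00 = a * Q1 + (1 - a) * Q2
    f01 = a * s1 * Q1 + (1 - a) * Q2
    f11 = a * s1 ** 2 * Q1 + (1 - a) * Q2
    return f11 / (f00 * f11 - f01 ** 2)

Q1, Q2, s1 = 0.9, 3.1, 0.6          # illustrative values
a_star = alpha1_star(Q1, Q2, s1)
grid = [k / 10000 for k in range(1, 10000)]
a_grid = min(grid, key=lambda a: n_av(a, Q1, Q2, s1))
assert 0 < a_star < 1
assert abs(a_star - a_grid) < 2e-4   # closed form matches the grid minimizer
```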
4 Computational Results and Discussions
The results of the analysis of the optimal plans are presented in Tables 1 and 2 for various combinations of $P_u$, $P_h$, K and σ.
The following trends are observed:
• For given values of $P_u$, $P_h$ and σ, $AV(\hat\mu_0)$ is almost the same over the number of inspections (K). Thus, increasing K has little effect on the AV. This implies that the number of inspections need not be too large.
• When Pu and Ph are fixed, s∗1 and α∗1 are fairly stable over K in all the casesunder study.
• For selected values of σ and $P_h$, as $P_u$ increases, $AV(\hat\mu_0)$ decreases, and it attains its minimum when $P_u$ equals 0.01.
• For given σ, $s_1^*$ gets close to zero (the design stress) and $\alpha_1^*$ to 1 as $P_u$ increases and/or $P_h$ decreases. For instance, when $P_u = 0.1$ and $P_h \le 0.99$, $s_1^* \cong 0$ and $\alpha_1^* \cong 1$. Similar trends are also observed when $P_u$ is less than 0.1 for small values of $P_h$. This implies that there is almost no need for an ALT.
• For each Pu and Ph, AV (µ0) decreases as σ increases.
5 Conclusion
In this paper, we develop optimal ALT plans for minimizing AV (µ0) underthe assumptions of Burr-Type X lifetime distribution, periodic inspection, andType I censoring.
Based upon the computational results, it can also be concluded that the number of inspections need not be large and that the plan is insensitive to misspecification of the imputed failure probabilities at the design and high stress levels.
We also observe that the schemes with equally spaced inspection times ateach stress level are administratively convenient and statistically optimal.
Table 1: Optimal ALT plans when σ = 2.0
Pu Ph β0 β1 K s∗1 α∗1 NAV (µ0)
0.0001 0.99 9.200 -12.534 2 0.600 0.699 141.766
5 0.620 0.736 119.417
0.5 9.200 -9.611 2 0.652 0.811 358.275
5 0.652 0.813 356.495
0.01 9.200 -4.700 2 0.438 0.888 5360.032
5 0.438 0.888 5359.944
0.001 0.99 6.876 -10.210 2 0.510 0.732 85.672
5 0.532 0.766 73.509
0.5 6.876 -7.286 2 0.540 0.838 192.861
5 0.542 0.839 192.026
0.1 6.876 -4.941 2 0.424 0.885 546.183
5 0.424 0.885 546.049
0.01 0.99 4.501 -7.835 2 0.360 0.795 42.874
5 0.390 0.817 38.053
0.5 4.502 -4.911 2 0.318 0.898 76.344
5 0.320 0.898 76.120
Table 2: Optimal ALT plans when σ = 1.0
Pu Ph β0 β1 K s∗1 α∗1 NAV (µ0)
0.0001 0.99 18.421 -21.475 2 0.684 0.721 537.225
5 0.698 0.751 472.114
10 0.704 0.760 453.407
∞ 0.706 0.767 441.469
0.9 18.421 -20.089 2 0.706 0.784 665.891
5 0.710 0.790 647.541
10 0.710 0.794 642.377
∞ 0.710 0.795 639.355
0.5 18.421 -17.688 2 0.698 0.823 1383.409
5 0.698 0.824 1380.529
10 0.698 0.824 1379.807
∞ 0.698 0.824 1379.427
0.1 18.421 -13.920 2 0.630 0.849 5075.474
5 0.630 0.849 5075.270
10 0.630 0.849 5075.221
∞ 0.630 0.849 5075.197
0.01 18.421 -9.220 2 0.444 0.890 21099.466
5 0.444 0.890 21099.460
10 0.444 0.890 21099.459
∞ 0.444 0.890 21099.458
0.001 0.99 13.815 -16.869 2 0.598 0.747 308.596
5 0.616 0.774 274.442
10 0.622 0.782 264.528
∞ 0.626 0.788 258.185
0.9 13.815 -15.483 2 0.618 0.806 374.619
5 0.622 0.812 365.295
10 0.624 0.814 362.670
∞ 0.624 0.816 361.129
0.5 13.815 -13.081 2 0.590 0.847 716.401
5 0.590 0.847 715.105
10 0.592 0.846 714.780
∞ 0.592 0.846 714.607
0.1 13.815 -9.314 2 0.446 0.888 2077.101
5 0.446 0.888 2077.039
10 0.446 0.888 2077.024
∞ 0.446 0.888 2077.016
0.01 0.99 9.200 -12.255 2 0.446 0.799 142.702
5 0.472 0.817 129.854
10 0.480 0.823 126.037
∞ 0.484 0.828 123.573
0.9 9.200 -10.868 2 0.456 0.849 166.254
5 0.462 0.854 163.009
10 0.464 0.855 162.091
∞ 0.466 0.855 161.551
0.5 9.200 -8.467 2 0.368 0.898 266.305
5 0.368 0.899 265.980
10 0.368 0.899 265.899
∞ 0.368 0.899 265.856
References
[1] Ahmad, N. and Islam, A. (1996), Optimal accelerated life designs for Burr type XII distributions under periodic inspection and type I censoring, Naval Research Logistics, 43(8), 1049–1077.
[2] Ahmad, N., Islam, A., Kumar, R. and Tuteja, R.K. (1994), Optimal designof accelerated life test plans under periodic inspection and type I censoring:the case of Rayleigh failure law, South African Statistical Journal, 28(2),27–35.
[3] Burr, I. W. (1942), Cumulative frequency functions, The Annals of Mathe-
matical Statistics, 13(2), 215–232.
[4] Hakamipour, N. (2019), Time and cost constrained optimal designs of mul-tiple step stress tests under progressive censoring, International Journal of
Quality & Reliability Management, 36(10), pp. 1721–1733.
[5] Han, D. and Ng, H. T. (2014), Asymptotic comparison between constant-stress testing and step-stress testing for Type-I censored data from expo-nential distribution, Communications in Statistics-Theory and Methods,43(10–12), 2384–2394.
[6] Hu, C.H., Plante, R.D. and Tang, J. (2013), Statistical equivalency andoptimality of simple step-stress accelerated test plans for the exponentialdistribution, Naval Research Logistics, 60(1), 19–30.
[7] Jaheen, Z. F. (1995), Bayesian approach to prediction with outliers fromthe Burr type X model, Microelectronics Reliability, 35(4), 703–705.
[8] Jaheen, Z. F. (1996), Empirical Bayes estimation of the reliability and fail-ure rate functions of the Burr type X failure model, Journal of Applied
Statistical Science, 3(4), 281–288.
[9] Meeker, W. Q. (1986), Planning life tests in which units are inspected forfailure, IEEE Transactions on Reliability, 35(5), 571–578.
[10] Nelson, W. B. (1990), Accelerated testing: statistical models, test plans,
and data analysis, John Wiley and Sons, New York.
[11] Nelson, W. (1977), Optimum demonstration tests with grouped inspec-tion data from an exponential distribution, IEEE Transactions on Reliabil-
ity, 26(3), 226–231.
[12] Raqab, M. Z., and Kundu, D. (2006), Burr type X distribution: revisited,Journal of probability and statistical sciences, 4(2), 179–193.
[13] Ahmad Sartawi, H. and Abu-Salih, M. S. (1991), Bayesian predictionbounds for the Burr type X model. Communications in Statistics-Theory
and Methods, 20(7), 2307–2330.
[14] Sharon, V. A. and Vaidyanathan, V. S. (2016), Analysis of simple step-stress accelerated life test data from Lindley distribution under type-I cen-soring, Statistica, 76(3), 233–248.
[15] Surles, J. G. and Padgett, W. J. (2001), Inference for reliability and stress-strength for a scaled Burr type X distribution. Lifetime Data Analysis, 7(2),187–200.
Optimal Warranty Length for a Repairable System with Frailty Random Variable
Hooti, F.1, and Ahmadi, J.1
1 Department of Statistics, Ferdowsi University of Mashhad, Mashhad, Iran
Abstract: In many real-life applications there is substantial heterogeneity between apparently identical repairable systems, which cannot be described by observed covariates. This unobservable heterogeneity is often called frailty in the survival analysis literature. The main purpose of this paper is to discuss the optimal allocation of minimal repairs and the time duration of the service in such systems. A total expected cost function is introduced and the optimization problem is studied based on it.
Keywords: Frailty Model, Minimal Repair, Optimization, Cost Function.
1 Introduction
Vaupel et al. (1979) introduced the term frailty and used it to show that individuals differ in their risks even when observable characteristics such as height, weight and age are the same. Frailty models are extensively used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. A frailty model is a random effect model for time-to-event data; it takes into account that the population is not homogeneous. Heterogeneity is usually explained by covariates, but when important covariates have not been observed, this leads to unobserved heterogeneity. For more details, we refer the readers to the books by Duchateau and Janssen
1Hooti, F.: [email protected]
(2008), Hanagal (2011) and Wienke (2011), and the references therein. In addition to using random effect models in survival analysis, it is also necessary to investigate repairable systems in heterogeneous populations. Engelhardt and Bain (1987) and Lawless (1987) studied parameter estimation for repairable systems based on a compound power law model and a Poisson process regression model, respectively, without using the word frailty. Finkelstein (2004) studied minimal repairs in heterogeneous populations. Cha and Finkelstein (2011) extended the notion of minimal repair to items from heterogeneous populations. Slimacek and Lindqvist (2016) developed a method for estimating the parameters of a non-homogeneous Poisson process (NHPP) with unobserved heterogeneity. They showed that there is no need for parametric assumptions about the heterogeneity, so their proposed method avoids the numerical problems frequently encountered with standard models of unobserved heterogeneity. On the other hand, one of the important issues in a warranty plan issued by sellers is determining the plan duration and the frequency of service provision. Such studies focus on minimizing the costs of providing these services or maximizing profits. The purpose of this work is to study the optimization problem for a repairable system in heterogeneous populations. A frailty model is used to explain the unobserved heterogeneity, and we try to find the optimal number of repairs and the time duration of the service by introducing an expected cost function.

Model description and basic results are presented in Section 2. In Section 3, an expected cost function is introduced to find the optimal number of minimal repairs and the time duration of the service. Numerical optimization results are studied in Section 4.
2 Model description
Let us consider a repairable system that can be minimally repaired after each failure with negligible repair time. Also suppose that the system can be profitable only over a limited time. We carry out our study under the following assumptions and notation.
1) A new system is put into the operation at time t = 0.
2) All failures are detected immediately, and the repair times are negligible.
3) A minimal repair does not change the failure rate.
We assume that the system is used until the nth (n ≥ 1) minimal repair occurs or a predetermined time τ has been reached. This means that the time duration of the service stops at T = min{Tn, τ}, where Tn is the time at which the nth minimal repair is done. The purpose of this study is to find the optimal values of n and τ by minimizing the total expected cost of the duration of the system operation. Assume that we have an NHPP such that the intensity function of the system is given by
λ(t|z) = zλ0(t), z, t ≥ 0, (1)
where z and λ0(·) are the frailty of the system and the baseline intensity function, respectively. Suppose that the baseline intensity is a power law intensity function; then (1) becomes
λ(t|z) = zabt^{b−1}, t ≥ 0, a, b > 0, (2)
where a and b are the scale and shape parameters of the baseline intensity, respectively. Without loss of generality, let us take a = 1. Assume that Z is distributed as a gamma random variable with E(Z) = 1 and Var(Z) = α, i.e., the probability density function of Z is
h(z) = z^{1/α − 1} exp(−z/α) / (α^{1/α} Γ(1/α)), z > 0. (3)
Therefore, from (2) and (3), we can compute the unconditional (mixture) intensity function, that is,
λ(t) = bt^{b−1} / (1 + αt^b). (4)
Note that if b < 1, i.e., λ0(t) is a decreasing function of t, then (4) is decreasing in t. On the other hand, if b > 1, i.e., the baseline intensity function is increasing, then (4) is an inverted bathtub-shaped function with mode at t_max = ((b − 1)/α)^{1/b}.
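As a quick numerical illustration (a Python sketch; the function name and parameter values are ours, not the authors'), the mixture intensity (4) can be evaluated directly and, for b > 1, checked against the stated mode:

```python
def mixture_intensity(t, b, alpha):
    """Unconditional intensity (4): lambda(t) = b t^(b-1) / (1 + alpha t^b)."""
    return b * t ** (b - 1) / (1 + alpha * t ** b)

# For b > 1 the intensity is inverted bathtub-shaped with mode ((b-1)/alpha)^(1/b).
b, alpha = 2.0, 0.5
t_max = ((b - 1) / alpha) ** (1 / b)

# The intensity should increase up to t_max and decrease after it.
left = mixture_intensity(0.9 * t_max, b, alpha)
peak = mixture_intensity(t_max, b, alpha)
right = mixture_intensity(1.1 * t_max, b, alpha)
print(t_max, left < peak, right < peak)
```

For b = 2 and α = 0.5 the mode is √2, and evaluating the intensity slightly to either side of it confirms the inverted bathtub shape.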
In the following, we present the relationships that we need in the next section. The cumulative distribution function of Tn is given by
F_{T_n}(t) = 1 − ∑_{i=0}^{n−1} [(Λ(t))^i / i!] exp(−Λ(t)), (5)
where Λ(t) = t^b is the cumulative intensity function of the system. By the positivity of T, the expected value of T is
E(T) = ∫_0^τ (1 − F_{T_n}(x)) dx. (6)
On the other hand, the expected number of minimal repairs in [0,T ] is
E(N(T)) = nF_{T_n}(τ) + Λ(τ)(1 − F_{T_n}(τ)). (7)
To find the optimal values of n and τ that minimize the costs or maximize the utility of the warranty duration, we have to determine the optimality criterion, which will be studied in the next section.
3 Cost function
Many factors are involved in minimizing the system's costs or maximizing its utility in order to find the optimal values of n and τ. For instance, a poor estimate of the model parameters, or of functions of them that affect the system's performance or its failures, increases the costs of the system. Bhattacharya et al. (2014) considered the cost of the imprecision (variance) of the estimates of the unknown parameters of the lifetime distribution under consideration. Assume that
1) C0, C1, and C3 are the costs of system start-up, of each minimal repair, and of estimating the variance of the frailty random variable, respectively;
2) C2 is the utility of the duration of the experiment;
3) The costs Ci (i = 0, 1, 2, 3) are known.
We consider the following cost function
C(n, τ) = C0 + C1 E(N(T)) − C2 E(T) + C3 MSE(T), (8)
where MSE(T) = E[(T − α)^2]. We use the following algorithm to find the optimal values based on (8).
1) The first step is to specify C0, C1, C2, C3, and the parameters of the ROCOF (rate of occurrence of failures) of the system.
2) Take n = 1.
3) Compute the value τ∗ that minimizes (8) for the current n, and compute C(n, τ∗).
4) If the conditions C(n − 1, τ∗) > C(n, τ∗) and C(n, τ∗) ≤ C(n + 1, τ∗) are satisfied, then n∗ = n.
5) Else, put n = n+1 and return to step 3).
6) The optimal warranty length, the optimal number of minimal repairs, and the optimal expected cost are τ∗, n∗, and C(n∗, τ∗), respectively.
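The algorithm above can be sketched numerically (a Python sketch under our reconstruction: a = 1 and Λ(t) = t^b as in Section 2, E(T) and E(N(T)) from (6) and (7), MSE(T) = E[(T − α)²], and a simple grid search over τ in place of step 3; all function names and grids are ours):

```python
import math

def F_Tn(t, n, b):
    """CDF (5) of the time of the nth minimal repair, with Lambda(t) = t**b."""
    L = t ** b
    return 1.0 - math.exp(-L) * sum(L ** i / math.factorial(i) for i in range(n))

def expected_cost(n, tau, b, C0, C1, C2, C3, alpha, grid=400):
    """Total expected cost (8), with E(T) from (6), E(N(T)) from (7),
    and MSE(T) = E[(T - alpha)^2] = E[T^2] - 2*alpha*E(T) + alpha^2."""
    h = tau / grid
    xs = [(k + 0.5) * h for k in range(grid)]       # midpoint rule
    surv = [1.0 - F_Tn(x, n, b) for x in xs]        # P(T > x) for x < tau
    ET = h * sum(surv)
    ET2 = 2.0 * h * sum(x * s for x, s in zip(xs, surv))
    EN = n * F_Tn(tau, n, b) + tau ** b * (1.0 - F_Tn(tau, n, b))
    mse = ET2 - 2.0 * alpha * ET + alpha ** 2
    return C0 + C1 * EN - C2 * ET + C3 * mse

def optimize(b, C0=20, C1=25, C2=50, C3=30, alpha=0.5):
    """Steps 2-6: grow n until the best-tau cost stops improving."""
    taus = [0.1 * k for k in range(1, 101)]          # grid search over tau
    best = None                                      # (cost, n, tau)
    for n in range(1, 100):
        cost, tau = min((expected_cost(n, t, b, C0, C1, C2, C3, alpha), t)
                        for t in taus)
        if best is not None and cost >= best[0]:
            break                                    # previous n was optimal
        best = (cost, n, tau)
    return best

print(optimize(0.5))
```

With b = 0.5 this search returns a small optimal n and a moderate τ; the exact figures depend on the grids and on how MSE(T) is discretized, so they need not match Table 1 digit for digit.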
4 Numerical results
To illustrate the results of the previous sections, we present some graphical and numerical computations. Throughout this section, we assume that C0 = 20, C1 = 25, C2 = 50, C3 = 30 and α = 0.5. Figure 1 shows that the total expected cost function (8) has an optimal solution in n and τ for both b = 0.5 and b = 2. Figure 2 shows that the minimum of C(n, τ) is unique in τ for n = 3, 5, 8, for both b = 0.5 and b = 2. From Figure 3, it is clear that the minimum of the cost function is unique in n for selected values of τ. The behaviour of the cost function for different values of b is shown in Figure 4. We have determined the optimal values of (n∗, τ∗) and C(n∗, τ∗) for the Weibull model for different values of b; the results are presented in Tables 1 and 2.
Table 1: Optimum values of (n∗, τ∗) and C(n∗, τ∗)
b (n∗,τ∗) C(n∗,τ∗)
0.1 (12, 4.85) -535.8250
0.3 (10, 4.85) -535.8251
0.5 (4, 5.24) -547.1239
0.9 (3, 7.2) -809.8351
(C0,C1,C2,C3) = (20,25,50,30) and α = 0.5.
Table 2: Optimum values of (n∗, τ∗) and C(n∗, τ∗)
b (n∗,τ∗) C(n∗,τ∗)
2 (6, 9.542) -6033.2592
2.5 (11, 9.6) -10572.8796
3 (14, 9.643) -2439.5381
3.5 (15, 9.66) -112329.400
(C0,C1,C2,C3) = (20,25,50,30) and α = 0.5.
Figure 1: Plot of C(n, τ); panels: (i) b = 0.5, (ii) b = 2.
Figure 2: Plot of C(n, τ) for selected values of n; panels: (i) b = 0.5, (ii) b = 2.
Figure 3: Plot of C(n, τ) for selected values of τ; panels: (i) b = 0.5, (ii) b = 2.
Figure 4: Plot of C(n, τ) for selected values of b; panels: (i) b = 0.5, (ii) b = 2.
5 Conclusion
In real life, most populations of manufactured systems are heterogeneous, because not all factors, such as instability of the production process, environmental conditions, etc., can be controlled. In reliability theory, a frailty random variable is used to describe the heterogeneity of populations of systems. Ignoring this random variable may affect the behaviour of the failure rate. For instance, taking the frailty random variable into account can result in a decreasing failure rate, as opposed to the increasing failure rate obtained when this randomness is neglected. Therefore, decisions about optimizing the number of repairs or warranty periods for such populations need to take the frailty variable into account.
References
[1] Bhattacharya, R., Pradhan, B., and Dewanji, A. (2014), Optimum life testing plans in presence of hybrid censoring: a cost function approach, Applied Stochastic Models in Business and Industry, 30, 519–528.

[2] Cha, J. H. and Finkelstein, M. (2011), Stochastic intensity for minimal repairs in heterogeneous populations, Journal of Applied Probability, 48, 868–876.

[3] Duchateau, L. and Janssen, P. (2008), The Frailty Model, Springer, New York.

[4] Engelhardt, M. and Bain, L. J. (1987), Statistical analysis of a compound power-law model for repairable systems, IEEE Transactions on Reliability, R-36, 392–396.

[5] Finkelstein, M. (2004), Minimal repair in heterogeneous populations, Journal of Applied Probability, 41, 281–286.

[6] Hanagal, D. D. (2011), Modeling Survival Data Using Frailty Models, Chapman and Hall/CRC, New York.
[7] Lawless, J. F. (1987), Regression methods for Poisson process data, Journal of the American Statistical Association, 82, 808–815.

[8] Slimacek, V. and Lindqvist, B. H. (2016), Nonhomogeneous Poisson process with nonparametric frailty, Reliability Engineering and System Safety, 149, 14–23.

[9] Vaupel, J. W., Manton, K. G. and Stallard, E. (1979), The impact of heterogeneity in individual frailty on the dynamics of mortality, Demography, 16, 439–454.

[10] Wienke, A. (2011), Frailty Models in Survival Analysis, Chapman and Hall/CRC, New York.
Relationships Between Redundancy, Optimal Allocation and Component Importance in Coherent Systems
Khanjari Sadegh, M.1
1 Department of Statistics, University of Birjand, Birjand, Iran
Abstract: In this paper, the connections between optimal redundancy allocation problems and component importance in a coherent system consisting of n independent components and m identical redundant components are studied. The effect of improving one or two components on both the system reliability and the system failure rate is also discussed. Using these results, a new measure of component importance is introduced. This measure is useful for both active and standby redundancy problems in coherent systems. The particular cases m = 1, 2 in series systems and m = 1 in parallel systems are studied in detail.
Keywords: Coherent Systems, Redundancy, Importance Measures, StochasticOrders.
1 Introduction
Consider a system consisting of n components in which the components and the system are either in a working or a failed state. The state of the system is completely determined by the states of the components. Let φ(x1, . . . , xn) denote the state of the system, where xi denotes the state of the ith component for i = 1, . . . , n (xi = 1 means that the ith component is working and xi = 0 that it is not). φ is called the system structure function. The system is coherent if φ is increasing, that
1Khanjari Sadegh, M.: [email protected]
is, when the state of a component is improved, the state of the system cannot become worse, and every component is relevant to the system, that is, φ is strictly increasing in each variable at at least one point. For details on coherent systems, refer to Barlow and Proschan (1975).

The use of redundancy mechanisms is an important and effective way to improve the performance of a system. Two common schemes for allocating redundant components to the system are called active and standby redundancy. In the former, the redundant components are put in parallel with the original components of the system, while in the latter, a redundant component starts working immediately after the corresponding component fails. In fact, the use of redundancy improves system performance via the improvement of its components. It is therefore important to study the effect of improving system components on the improvement of the whole system. The relevant and essential problem related to redundancy in systems is how to find the optimal allocation strategy such that the performance of the system is optimal in the sense of some stochastic order. In the last three decades the redundancy allocation problem has been widely studied by many authors; see, for example, Boland et al. (1992), Kotz et al. (2003), Hu and Wang (2009), Valdes and Zequeira (2003, 2006), Belzunce et al. (2011, 2013) and Jeddi and Doostparast (2016).

At a fixed point of time, consider a coherent system with n components and structure φ and let
h(p) = E[φ(X)] = P(φ(X) = 1)
be the reliability function of the system, where X = (X1, . . . , Xn), p = (p1, . . . , pn) and pi = P(Xi = 1) is the reliability of the ith component. In the dynamic setting, we denote the system lifetime by T = φ(T1, . . . , Tn), where Ti is the lifetime of the ith component. We assume that the system components are independent, that is, the Xi's or Ti's are independent random variables. Also assume that the Ti's are nonnegative and absolutely continuous. We denote by Fi(t) = P(Ti ≤ t), F̄i(t) = 1 − Fi(t), fi(t), hi(t) = fi(t)/F̄i(t) and ri(t) = fi(t)/Fi(t) the distribution, reliability, density, hazard rate and reversed hazard rate functions
of Ti, respectively.
The rest of this paper is organized as follows. In Section 2, we obtain a formula for the system reliability when the reliabilities of two components i and j are increased to pi + δi and pj + δj, respectively; it extends a result of Xie and Shen (1989). The main result of the paper is a new measure of component importance, which is an extension of the Birnbaum measure of importance and is useful in both active and standby redundancy problems in coherent systems. The effect of a reduction of hi(t), the hazard rate function of the ith component lifetime, on hT(t), the hazard rate function of the system lifetime, is also considered. It is known that, in general, a reduction in hi(t) does not necessarily imply a reduction in hT(t). We show that the reductions in hT(t) and hi(t) are the same if and only if the ith component is in series with the other system components; using this, a characterization of series systems is given. Similar results for the effect of the reversed hazard rates of the components on the reversed hazard rate of the system are also obtained. Finally, in Section 3, the active and standby redundancy problems in series and parallel systems are discussed, respectively. It is shown for a series system that, if we want to allocate all m identical redundant components to a single original component, the optimal allocation is obtained by allocating the spare components to the weakest component, in the sense of the usual stochastic order. If the spares can be allocated to different components, a necessary and sufficient condition for the optimal allocation is obtained when m = 2. Also, for a parallel system with n components and m = 1 standby spare, a relative mutual importance measure for component i with respect to component j is proposed.
We recall the two stochastic orders that will be used in the sequel. A random variable X is said to be less than Y in the usual stochastic order, denoted by X ≤st Y, if F̄X(t) ≤ F̄Y(t) for all t. Also, X is said to be less than Y in the hazard rate order, denoted by X ≤hr Y, if hX(t) ≥ hY(t) for all t.
2 A new measure of component importance useful in active and standby redundancies
In this section, we consider a coherent system with n independent components, study the effect of improving system components on the system reliability, and then give our new measure of component importance, which is useful in redundancy problems for coherent systems. For the sake of completeness, we first give the effect of improving one component on the system reliability, a result obtained by Xie and Shen (1989).
Lemma 2.1. Let ∆i denote the increase in system reliability due to increasing the reliability of the ith component by δi. Then
∆i = δiIB(i) (1)
where
IB(i) = P(φ(1i, X) − φ(0i, X) = 1) = h(1i, p) − h(0i, p) = ∂h(p)/∂pi

is the well-known Birnbaum importance measure of component i.
Now we extend the above lemma when two components i and j are improved.
Lemma 2.2. Let ∆ij denote the increase in system reliability due to increasing the reliabilities of the ith and jth components by δi and δj, respectively. Then
∆ij = δij ∂²h(p)/(∂pi ∂pj) + δi ∂h(0j, p)/∂pi + δj ∂h(0i, p)/∂pj, (2)
where δij = (pi + δi)(pj + δj) − pi pj.
Proof. Using the double pivotal decomposition

φ(X) = XiXj φ(1i, 1j, X) + Xi(1 − Xj) φ(1i, 0j, X) + (1 − Xi)Xj φ(0i, 1j, X) + (1 − Xi)(1 − Xj) φ(0i, 0j, X),

we have

h(p) = pi pj h(1i, 1j, p) + pi(1 − pj) h(1i, 0j, p) + (1 − pi)pj h(0i, 1j, p) + (1 − pi)(1 − pj) h(0i, 0j, p).

In view of the given formula for IB(i) and noting that ∆ij = h(pi + δi, pj + δj, p) − h(p), the proof of the lemma follows.
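Formula (2) can be sanity-checked numerically; the sketch below (Python; the three-component series structure h(p) = p1p2p3 and all numbers are our illustrative choices) compares (2) with a direct evaluation of the reliability increase:

```python
# Reliability function of a 3-component series system: h(p) = p1 * p2 * p3.
def h(p):
    return p[0] * p[1] * p[2]

p = [0.6, 0.7, 0.8]
d_i, d_j = 0.1, 0.05        # improvements of components 1 and 2 (0-based 0 and 1)

# Direct increase of system reliability.
q = [p[0] + d_i, p[1] + d_j, p[2]]
delta_ij_direct = h(q) - h(p)

# Formula (2): delta_ij * d2h/(dp1 dp2) + d_i * dh(0_2, p)/dp1 + d_j * dh(0_1, p)/dp2.
# For this series structure d2h/(dp1 dp2) = p3, and h vanishes whenever one
# argument is 0, so the two single-derivative terms are 0 here.
delta_ij = (p[0] + d_i) * (p[1] + d_j) - p[0] * p[1]
formula = delta_ij * p[2] + d_i * 0.0 + d_j * 0.0
print(abs(delta_ij_direct - formula) < 1e-12)
```

The two quantities agree exactly for this structure; for a general coherent system the single-derivative terms in (2) do not vanish.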
Remark 2.1. Obviously ∆ij ≥ 0, as the system is coherent, but the first term in Equation (2) may be negative. Also note that when δj = 0, it can be shown that (2) reduces to (1).

We now return to Equation (1) and consider three special cases.

Case 1. If δi = δ, i = 1, . . . , n, that is, the improvement of all components is the same, then from (1) we see that, in view of the Birnbaum measure of importance, improvement of the most important component causes the largest increase in system reliability. In other words, the Birnbaum measure of importance is the key to finding the best component for increasing the system reliability. This is not the case if δi ≠ δ, and another measure of importance must be used. One may use ∆i = δi IB(i) as a new measure of importance for component i, but it is not applicable in general, as it depends on δi, and the components are then not comparable under the same conditions. Now consider the following case.

Case 2. Suppose we want to allocate one active redundant and independent component with reliability p to a single system component. The question is how to find the optimal allocation. If we allocate it to component i, then pi will be increased to 1 − (1 − pi)(1 − p) = 1 − qi q, and therefore δi = qi − qi q = qi p, where qi = 1 − pi and q = 1 − p. Hence ∆i = qi p IB(i). Based on this, we now introduce our new measure of importance for component i as follows:
IAR(i) = (1− pi)IB(i). (3)
The notation AR in IAR(i) refers to active redundancy. This measure is a generalization of IB(i), as it depends on the reliability of component i while IB(i) does not. Also note that in this case the redundant component is offered to all system components under the same conditions. Hence the optimal allocation is to the component with the largest IAR(·).
Case 3. In this case we want to allocate one independent standby component with reliability p and lifetime S to a single system component and find the optimal allocation. If we allocate it to component i with lifetime Ti, then pi will be increased to pi ∗ p. By pi ∗ p we mean F̄i ∗ F̄(t) = P(Ti + S > t), the convolution of F̄i and F̄, the reliability functions of Ti and S, respectively. Therefore
δi = pi ∗ p− pi = P(Ti +S > t)−P(Ti > t)
and our new measure of importance for component i is
ISR(i) = (pi ∗ p− pi)IB(i). (4)
The notation SR in ISR(i) refers to standby redundancy. In this case the optimal allocation is to the component with the largest ISR(·).
Remark 2.2. If the system components are identical, that is, p1 = · · · = pn, then in both Cases 2 and 3 we have δ1 = · · · = δn, and therefore, in order to find the optimal allocations, IAR(i) and ISR(i) equivalently reduce to IB(i).
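The measures in Cases 2 and 3 can be sketched as follows (Python; the structure, a component in series with a parallel pair, the exponential lifetimes, and all rates are our illustrative choices; for pi ∗ p we use the closed-form convolution of two non-identical exponentials):

```python
import math

def h(p1, p2, p3):
    """Reliability of component 1 in series with the parallel pair (2, 3)."""
    return p1 * (1 - (1 - p2) * (1 - p3))

def birnbaum(i, p):
    """I_B(i) = h(1_i, p) - h(0_i, p)."""
    hi = h(*[1.0 if j == i else p[j] for j in range(3)])
    lo = h(*[0.0 if j == i else p[j] for j in range(3)])
    return hi - lo

# Active redundancy, measure (3): I_AR(i) = (1 - p_i) I_B(i).
p = [0.9, 0.7, 0.8]
IAR = [(1 - p[i]) * birnbaum(i, p) for i in range(3)]
best_active = max(range(3), key=lambda i: IAR[i])

# Standby redundancy, measure (4), at mission time t, with exponential
# component lifetimes (rates lam[i]) and an exponential spare (rate mu):
# P(T_i + S > t) = (mu*e^{-lam t} - lam*e^{-mu t}) / (mu - lam) for mu != lam.
t, lam, mu = 1.0, [0.5, 1.0, 0.8], 0.9
p_t = [math.exp(-l * t) for l in lam]          # component reliabilities at t
conv = [(mu * math.exp(-l * t) - l * math.exp(-mu * t)) / (mu - l) for l in lam]
ISR = [(conv[i] - p_t[i]) * birnbaum(i, p_t) for i in range(3)]
best_standby = max(range(3), key=lambda i: ISR[i])

print(best_active, best_standby)
```

As expected, both measures favour the series component here, since a series component has the largest Birnbaum importance in this structure.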
We now consider the relationship between the component failure rates and the system failure rate. Let T = φ(T1, . . . , Tn) denote the lifetime of a coherent system with structure φ, where the component lifetimes Ti are independent and absolutely continuous random variables. It is known that

F̄T(t) = P(T > t) = h(F̄1(t), . . . , F̄n(t)) = h(F̄(t)) = h(p)|_{p=F̄(t)},

where F̄i(t) = P(Ti > t). Note that h is multilinear in its arguments. Let hi(t)
be the failure rate function of component i and hT(t) the failure rate function of the system. By the chain rule for differentiation, we have
hT(t) = ∑_{i=1}^n hi(t) F̄i(t) [∂h(p)/∂pi]|_{p=F̄(t)} / h(F̄1(t), . . . , F̄n(t)). (5)
Now assume that T′i is an improved lifetime for component i such that h′i(t) ≤ hi(t) for all t ≥ 0, that is, Ti ≤hr T′i.
If T′ = φ(T1, . . . , Ti−1, T′i, Ti+1, . . . , Tn), then in general T ≤hr T′ does not hold, except for series systems; see, for example, Boland et al. (1994). As we have already seen, Ti ≤st T′i implies T ≤st T′, but this is not true in general for the hazard rate order. In other words, a reduction in the failure rate of component i
does not necessarily imply a reduction in the system failure rate. Here we have the following result.
Lemma 2.3. The reduction in the failure rate function of component i implies the same reduction in the system failure rate if and only if component i is in series with the remaining components.

Proof. Equation (5) can be written as
hT(t) = ∑_{i=1}^n hi(t) ci(t),

where ci(t) = F̄i(t) [∂h(p)/∂pi]|_{p=F̄(t)} / h(F̄1(t), . . . , F̄n(t)). It is easy to show that component i is in series with the other components if and only if ci(t) = 1. This completes the proof of the lemma.
Remark 2.3. Similar to Equation (5), we have the following expression for rT(t), the reversed hazard rate function of the system:

rT(t) = ∑_{i=1}^n ri(t) Fi(t) [∂h(p)/∂pi]|_{p=F̄(t)} / [1 − h(F̄1(t), . . . , F̄n(t))], (6)
where ri(t) is the reversed hazard rate of component i. Regarding Equation (6), we have the following result.

Lemma 2.4. The reduction in the reversed hazard rate function of component i implies the same reduction in the reversed hazard rate of the system if and only if component i is in parallel with the remaining components.

Proof. The proof is similar to that of Lemma 2.3.
Remark 2.4. From Lemmas 2.3 and 2.4 we have a simple characterization of series and parallel systems, respectively: if the reduction in the failure rate (reversed failure rate) of each component implies the same reduction in the failure rate (reversed failure rate) of the system, then the system is series (parallel). It is known that in a series system hT(t) = ∑_{i=1}^n hi(t) and in a parallel system rT(t) = ∑_{i=1}^n ri(t).
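As a small check of formula (5) and Lemma 2.3 (a Python sketch; the two-component series system and the exponential rates are our choices), the weights ci(t) equal 1 for a series structure, so the system failure rate is the sum of the component failure rates:

```python
import math

# Two-component series system: h(p1, p2) = p1 * p2, so dh/dp1 = p2, dh/dp2 = p1.
# Formula (5): h_T(t) = sum_i h_i(t) * c_i(t), with
# c_i(t) = Fbar_i(t) * (dh/dp_i at p = Fbar(t)) / h(Fbar_1(t), Fbar_2(t)).
lam = (0.4, 1.3)                       # constant exponential hazard rates (ours)
t = 2.0
Fbar = [math.exp(-l * t) for l in lam]

c1 = Fbar[0] * Fbar[1] / (Fbar[0] * Fbar[1])   # dh/dp1 evaluated at Fbar is Fbar[1]
c2 = Fbar[1] * Fbar[0] / (Fbar[0] * Fbar[1])   # dh/dp2 evaluated at Fbar is Fbar[0]
hT = lam[0] * c1 + lam[1] * c2

# As Lemma 2.3 states for series structures, c_i(t) = 1 and h_T = h_1 + h_2.
print(c1, c2, hT)
```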
3 Redundancy problems in series and parallel systems
In this section, we first consider the active redundancy problem in a series system with n independent components and m = 2 independent and identical spares, and then the standby redundancy problem in a parallel system with n components and m = 1 spare. Suppose that the original and spare components are also independent. We want to find the optimal allocation in the sense of the usual stochastic order.
Let h(p) = p1p2 · · · pn denote the reliability function of a series system and p
be the common reliability of the two spares. Without loss of generality, we assume that p1 ≤ p2 ≤ · · · ≤ pn. When m = 1, it is easy to show that
IAR(1)≥ IAR(2)≥ ·· · ≥ IAR(n)
and therefore the optimal allocation is obtained when the spare component is added to component 1, the weakest component.

Now let m = 2. Note that different allocation strategies may be used: one may allocate both spare components to a single component of the system, or allocate one spare component to one original component and the other spare to another. We use r = (r1, r2, . . . , rn) as the allocation vector, where ri is the number of spares allocated to component i and r1 + r2 + · · · + rn = 2. The following lemma gives a necessary and sufficient condition for the optimal allocation.
Lemma 3.1. Under the above assumptions, only the following two allocations can be optimal:

(a) The allocation vector r1 = (2, 0, . . . , 0) is optimal if and only if q2/q1 < q ≤ 1, where qi = 1 − pi and q = 1 − p.

(b) The allocation vector r2 = (1, 1, 0, . . . , 0) is optimal if and only if 0 < q ≤ q2/q1.

Proof. In view of the optimal allocation in the case m = 1, and since p1 ≤ p2 ≤ · · · ≤ pn, it is easy to show that no allocation vector other than r1 and r2 can be optimal. We have h_{r1}(p, p) = (1 − q1q²) ∏_{i=2}^n pi and h_{r2}(p, p) = ∏_{i=1}^2 (1 − qi q) ∏_{i=3}^n pi as the reliability functions of the redundant systems under the allocation vectors r1 and r2, respectively. Simplifying the resulting algebraic inequality shows that h_{r1}(p, p) ≤ h_{r2}(p, p) if and only if 0 < q ≤ q2/q1. This completes the proof of the lemma.
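Lemma 3.1 can be verified by brute force over all allocations of two identical active spares (a Python sketch; the component reliabilities are our choice):

```python
import itertools

def series_rel(ps):
    out = 1.0
    for v in ps:
        out *= v
    return out

def with_spares(p, q, r):
    """Series-system reliability after putting r_i identical active spares
    (each with unreliability q) in parallel with component i."""
    return series_rel([1 - (1 - pi) * q ** ri for pi, ri in zip(p, r)])

p = [0.6, 0.7, 0.9]                    # p1 <= p2 <= p3 (our choice)
q1, q2 = 1 - p[0], 1 - p[1]
threshold = q2 / q1                    # Lemma 3.1 cut-off for q = 1 - p_spare

allocs = [r for r in itertools.product(range(3), repeat=3) if sum(r) == 2]
for q in (0.2, 0.5, 0.8, 0.95):
    best = max(allocs, key=lambda r: with_spares(p, q, r))
    expect = (2, 0, 0) if q > threshold else (1, 1, 0)
    # The brute-force optimum agrees with the lemma's prediction.
    assert abs(with_spares(p, q, best) - with_spares(p, q, expect)) < 1e-12
print("threshold q2/q1 =", threshold)
```

For these reliabilities the cut-off is q2/q1 = 0.75: weaker spares (large q) are stacked on the weakest component, while stronger spares are spread over the two weakest components.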
Remark 3.1. In the special case when the system components are identical, that is, p1 = · · · = pn, part (b) of Lemma 3.1 shows that r2 = (1, 1, 0, . . . , 0) is the only optimal allocation.
Now consider the standby redundancy problem in a parallel system with n
components and m = 1 standby component. It is easy to see that
IAR(1) = IAR(2) = · · ·= IAR(n)
that is, regardless of the values of the pi's, it makes no difference to which original component of the system a single active redundant component is allocated. In fact, active redundancy in a parallel system with n components is equivalent to a parallel system with n + 1 components.

Let T = max{T1, . . . , Tn} denote the system lifetime, where Ti is the lifetime of component i, and suppose S is the lifetime of the single standby component. We define a relative mutual measure of importance for components i and j as follows.
Definition 3.1. In the standby redundancy problem of a parallel system with a single standby component, we say that component i is more important than component j if

max{Ti + S, Tj} ≥st max{Ti, Tj + S},

and component i∗ is the most important one if it is more important than each of the other components. Therefore S should be added to Ti∗.
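Definition 3.1 can be explored by Monte Carlo (a Python sketch; the exponential rates are our choice): we estimate the two survival curves and check empirically which allocation dominates:

```python
import random
random.seed(1)

def surv_curve(samples, ts):
    """Empirical survival function of the samples at the grid points ts."""
    n = len(samples)
    return [sum(x > t for x in samples) / n for t in ts]

# Parallel pair with lifetimes T1 ~ Exp(1) (stronger) and T2 ~ Exp(2),
# plus one standby spare S ~ Exp(1.5); all rates are our choice.
N = 20000
T1 = [random.expovariate(1.0) for _ in range(N)]
T2 = [random.expovariate(2.0) for _ in range(N)]
S = [random.expovariate(1.5) for _ in range(N)]

spare_on_1 = [max(t1 + s, t2) for t1, t2, s in zip(T1, T2, S)]
spare_on_2 = [max(t1, t2 + s) for t1, t2, s in zip(T1, T2, S)]

ts = [0.25 * k for k in range(1, 17)]
c1 = surv_curve(spare_on_1, ts)
c2 = surv_curve(spare_on_2, ts)

# Per Definition 3.1, component 1 is "more important" if the first curve
# dominates the second (here checked only up to Monte Carlo noise).
dominates = all(a >= b - 0.01 for a, b in zip(c1, c2))
print(dominates)
```

In this example the curve for the spare on the stronger component tends to lie above the other, which is consistent with the pointwise observation that adding S to the larger realization always yields the larger maximum.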
References
[1] Barlow, R.E. and Proschan, F. (1975), Statistical Theory of Reliability and
Life Testing, Holt, Rinehart and Winston.
[2] Belzunce, F., Martinez-Puertas, H. and Ruiz, J.M. (2013), On allocation of redundant components for systems with dependent components, European Journal of Operational Research, 230, 573–580.
[3] Boland, P.J., El-Neweihi, E. and Proschan, F. (1992), Stochastic order for redundancy allocations in series and parallel systems, Advances in Applied Probability, 24, 161–171.
[4] Boland, P.J., El-Neweihi, E. and Proschan, F. (1994), Applications of the hazard rate ordering in reliability and order statistics, Journal of Applied Probability, 31, 180–192.
[5] Hu, T. and Wang, Y. (2009), Optimal allocation of active redundancies in r-out-of-n systems, Journal of Statistical Planning and Inference, 139, 3733–3737.
[6] Valdes, J.E. and Zequeira, R.I. (2006), On the optimal allocation of two active redundancies in a two-component series system, Operations Research Letters, 34, 49–52.
[7] Xie, M. and Shen, K. (1989), On ranking of system components with respect to different improvement actions, Microelectronics Reliability, 29, 159–164.
On Component Redundancy Versus System Redundancy for a System Composed of Different Types of Components
Kelkinnama, M.1
1 Department of Mathematical Sciences, Isfahan University of Technology, Isfahan, Iran
Abstract: In this note, we explore the problem of stochastic comparison of active redundancy at the component level versus the system level. In other words, we want to obtain conditions under which redundancy at the component level is superior to redundancy at the system level for a coherent system with multiple active redundancies. The system is supposed to consist of components of several different types, which are possibly dependent. Conditions are presented to compare component and system redundancies by means of the usual stochastic, hazard rate and reversed hazard rate orders. Some numerical examples are also provided to illustrate the theoretical results.
Keywords: Coherent System, Active Redundancy, Component Level, SystemLevel, Stochastic Orders.
1 Introduction
Consider a coherent system consisting of n components of K different types, such that there are ni components of type i, i = 1, . . . , K, where ∑_{i=1}^K ni = n. Let T^(i)_j be the lifetime of the jth component of type i, j = 1, . . . , ni, i = 1, . . . , K. Denote the lifetime of the coherent system by τ(T), where

T = (T^(1)_1, . . . , T^(1)_{n1}, . . . , T^(K)_1, . . . , T^(K)_{nK}). (1)

1Kelkinnama, M.: [email protected]
The reliability function of the system can be represented as

F̄τ(t) = ∑_{m1=0}^{n1} · · · ∑_{mK=0}^{nK} Φ(m1, . . . , mK) Pr(C1(t) = m1, . . . , CK(t) = mK), (2)
where Ci(t) denotes the number of components of type i working at time t, and Φ, called the survival signature, represents the probability that the system is working when exactly mi components of type i are alive; see Coolen and Coolen-Maturi [1]. They obtained the following representation for the case where the components are independent:
F̄τ(t) = ∑_{m1=0}^{n1} · · · ∑_{mK=0}^{nK} Φ(m1, . . . , mK) ∏_{i=1}^K (ni choose mi) [F̄i(t)]^{mi} [Fi(t)]^{ni−mi},
where Fi and F̄i are the common distribution and reliability functions of the components of type i, respectively. Consider a more general case in which the random failure times of components of the same type are exchangeable and the random failure times of components of different types are dependent. Eryilmaz et al. [2], using the survival copula C for the dependence structure of the components, proved that (2) can be represented as
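The independent-components representation above can be sketched as follows (Python; the example system, two type-1 components in parallel, in series with one type-2 component, and its survival signature are our illustrative choices):

```python
import math
from itertools import product

# Example system (our choice): two type-1 components in parallel, in series
# with a single type-2 component. Its survival signature is
# Phi(m1, m2) = 1 iff at least one type-1 and the type-2 component work.
def Phi(m1, m2):
    return 1.0 if (m1 >= 1 and m2 >= 1) else 0.0

n = (2, 1)  # n1 = 2, n2 = 1

def system_rel(Fbar1, Fbar2):
    """Reliability at a fixed time via the survival-signature sum for
    independent components, as in the representation displayed above."""
    total = 0.0
    for m1, m2 in product(range(n[0] + 1), range(n[1] + 1)):
        total += (Phi(m1, m2)
                  * math.comb(n[0], m1) * Fbar1 ** m1 * (1 - Fbar1) ** (n[0] - m1)
                  * math.comb(n[1], m2) * Fbar2 ** m2 * (1 - Fbar2) ** (n[1] - m2))
    return total

# Cross-check against the closed form (1 - (1 - Fbar1)^2) * Fbar2.
Fbar1, Fbar2 = 0.8, 0.9
direct = (1 - (1 - Fbar1) ** 2) * Fbar2
print(system_rel(Fbar1, Fbar2), direct)
```

Both evaluations give 0.864 here, confirming that the signature sum reproduces the structure-function reliability.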
F̄τ(t) = ∑_{m1=0}^{n1} · · · ∑_{mK=0}^{nK} ∑_{l1=0}^{m1} · · · ∑_{lK=0}^{mK} (−1)^{m1−l1+···+mK−lK} (n1 choose l1) · · · (nK choose lK) × (n1−l1 choose m1−l1) · · · (nK−lK choose mK−lK) Φ(l1, . . . , lK) C(F̄1(t), . . . , F̄1(t), 1, . . . , 1, . . . , F̄K(t), . . . , F̄K(t), 1, . . . , 1),

where, in the argument of C, F̄i(t) appears mi times and 1 appears ni − mi times for each type i.
Note that F̄τ(t) is, in fact, a generalized distorted reliability function,

F̄τ(t) = H(F̄1(t), . . . , F̄K(t)), (3)
where H : [0,1]^K → [0,1] is an increasing continuous multivariate distortion function such that H(0, . . . , 0) = 0 and H(1, . . . , 1) = 1; see Navarro et al. [4]. The reliability characteristics of a system can often be enhanced by incorporating redundancies (spares) into the system. One commonly used type of redundancy is active redundancy. In active redundancy, the original components and the redundant ones work together in parallel, and hence the system lifetime is equal to the maximum of their lifetimes. This strategy is mostly used when replacement of the components during the operation time of the system is impossible. The allocation of redundant components in the system can generally be done at two levels: redundancy at the component level and redundancy at the system level. In the former case, some spares are provided for each component, while in the latter, the original coherent system is attached to some copies of itself. Here, suppose that for each component there are m active redundancies whose common distribution is the same as that of the original component (this situation is called matched redundancy in the literature). Let Y^(i)_{j,l} be the lifetime of the lth redundant component for the jth component of type i, i = 1, . . . , K, j = 1, . . . , ni, and l = 1, . . . , m. In this paper, it is assumed that the active redundancies are independent of the original components. Assume that
Yl = (Y^(1)_{1,l}, . . . , Y^(1)_{n1,l}, . . . , Y^(K)_{1,l}, . . . , Y^(K)_{nK,l}), (4)
and let τ(Yl) be the lifetime of the coherent system with component lifetimes Yl. Under this setup, τ(T) and τ(Yl), l = 1, . . . , m, are independent. For the system with redundancy at the component level, denote the lifetime by τc = τ(T ∨ Y1 ∨ · · · ∨ Ym), and for the system with redundancy at the system level let the corresponding lifetime be τs = τ(T) ∨ τ(Y1) ∨ · · · ∨ τ(Ym), where '∨' denotes the maximum operator. Using (3), the reliability function of τc can be written as
F̄τc(t) = H(1 − (1 − F̄1(t))^{m+1}, . . . , 1 − (1 − F̄K(t))^{m+1}) = Hτc(F̄1(t), . . . , F̄K(t)), (5)

where Hτc(u1, . . . , uK) = H(1 − (1 − u1)^{m+1}, . . . , 1 − (1 − uK)^{m+1}). For the system with redundancy at the system level, we have
Fτs(t) = 1− [1− H(F1(t), · · · , FK(t))]m+1
= Hτs(F1(t), · · · , FK(t)), (6)
where Hτs(u1, · · · ,uK) = 1− [1− H(u1, · · · ,uK)]m+1.
It is well known in reliability engineering that the lifetime of a coherent system consisting of independent components with active redundancy at the
Kelkinnama, M. 144
component level dominates, in the usual stochastic order, that of a coherent system having redundancy at the system level. Hence it is an interesting and important problem whether this principle holds under other assumptions for coherent systems and under other stochastic orderings. In this regard, for example, Gupta and Kumar [3] provided necessary and sufficient conditions for stochastic comparisons between the component and system active redundancies for a coherent system with possibly dependent components, using some well-known orderings. Afterwards, Zhang et al. [5] generalized their results to multiple redundancies and possibly multiple non-matching spares. For other related papers, the interested reader can refer to [5] and the references therein.
This paper investigates the aforementioned problem for a coherent system consisting of components of different types. In other words, for such a system we conduct stochastic comparisons of the component and system redundancies in the sense of the usual stochastic, hazard rate, and reversed hazard rate orders. Hence, we first recall the definitions of these stochastic orderings.
Definition 1.1. The random variable X is said to be smaller than Y in the
• usual stochastic order (denoted by $X \le_{st} Y$) if $\bar F(x) \le \bar G(x)$ for all $x$, where $F$ and $G$ are the cdfs of $X$ and $Y$, and $\bar F = 1-F$, $\bar G = 1-G$,

• hazard rate order (denoted by $X \le_{hr} Y$) if $\bar G(x)/\bar F(x)$ is increasing in $x$,

• reversed hazard rate order (denoted by $X \le_{rhr} Y$) if $G(x)/F(x)$ is increasing in $x$.
2 Main results
Theorem 2.1. For any fixed $m \in \mathbb{N}^+$, it holds that $\tau_c \ge_{st} \tau_s$ for all $\bar F_1,\dots,\bar F_K$ if and only if
$$H\bigl(1-(1-u_1)^{m+1},\dots,1-(1-u_K)^{m+1}\bigr) \ge 1-[1-H(u_1,\dots,u_K)]^{m+1} \qquad (7)$$
for all $u_1,\dots,u_K \in (0,1)$.
Example 2.2. Consider a series system with independent components, consisting of $n_i$ components of type $i$ with common reliability function $\bar F_i$,
$i = 1,\dots,K$. The reliability function of this system is
$$\bar F(t) = [\bar F_1(t)]^{n_1}\times\dots\times[\bar F_K(t)]^{n_K} = H(\bar F_1(t),\dots,\bar F_K(t)),$$
where
$$H(u_1,\dots,u_K) = u_1^{n_1}\times\dots\times u_K^{n_K}. \qquad (8)$$
Hence from (5) and (6) we have
$$H_{\tau_c}(u_1,\dots,u_K) = \bigl(1-(1-u_1)^{m+1}\bigr)^{n_1}\times\dots\times\bigl(1-(1-u_K)^{m+1}\bigr)^{n_K},$$
$$H_{\tau_s}(u_1,\dots,u_K) = 1-\bigl[1-(u_1^{n_1}\times\dots\times u_K^{n_K})\bigr]^{m+1}.$$
Now, suppose that a series system has five components of two types, with $n_1 = 3$ and $n_2 = 2$, and let $m = 2$. For checking the condition in (7), the 3D plot of $H_{\tau_c}(u_1,u_2)-H_{\tau_s}(u_1,u_2)$ for all values of $u_1,u_2 \in (0,1)$ is given in Figure 1. As can be seen, this function is positive, and hence $\tau_c \ge_{st} \tau_s$.
Let us now assume that the components of a series system are dependent with a
Figure 1: Hτc (u1,u2)− Hτs (u1,u2) for independent case in Example 2.2
Figure 2: Hτc (u1,u2)− Hτs (u1,u2) for dependent case with θ = 0.5 in Example 2.2
common FGM survival copula
$$C(u_1,\dots,u_K) = \prod_{i=1}^{K} u_i\Bigl(1+\theta\prod_{i=1}^{K}(1-u_i)\Bigr),$$
where $\theta \in [-1,1]$. Note that independence corresponds to $\theta = 0$. For the considered series system we have
$$H(u_1,u_2) = u_1^{3}u_2^{2}\bigl(1+\theta(1-u_1)^{3}(1-u_2)^{2}\bigr). \qquad (9)$$
The required condition for the "st" ordering between $\tau_c$ and $\tau_s$ is illustrated in Figure 2, which shows that redundancy at the component level is superior to redundancy at the system level in the sense of the usual stochastic order.
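The positivity shown in Figures 1 and 2 can also be checked numerically. The Python sketch below (our own illustration, not part of the original paper) evaluates $H_{\tau_c}-H_{\tau_s}$ on a grid for both the independent case ($\theta = 0$) and the FGM-dependent case ($\theta = 0.5$):

```python
import numpy as np

n1, n2, m = 3, 2, 2  # series system of Example 2.2 with five components

def H(u1, u2, theta):
    # distortion function (8) for theta = 0, and (9) for the FGM copula
    return u1**n1 * u2**n2 * (1 + theta * (1 - u1)**n1 * (1 - u2)**n2)

u = np.linspace(0.01, 0.99, 99)
U1, U2 = np.meshgrid(u, u)
for theta in (0.0, 0.5):
    V1, V2 = 1 - (1 - U1)**(m + 1), 1 - (1 - U2)**(m + 1)
    H_c = H(V1, V2, theta)                      # component-level redundancy, eq. (5)
    H_s = 1 - (1 - H(U1, U2, theta))**(m + 1)   # system-level redundancy, eq. (6)
    assert np.all(H_c - H_s >= -1e-12)          # condition (7): tau_c >=_st tau_s
```

The grid check confirms condition (7) for both values of $\theta$, in agreement with the plots.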
In the following theorem, we explore the hazard rate ordering between $\tau_c$ and $\tau_s$.
Theorem 2.3. For any fixed $m \in \mathbb{N}^+$, it holds that $\tau_c \ge_{hr} \tau_s$ for all $\bar F_1,\dots,\bar F_K$ if
$$\Gamma(u_1,\dots,u_K) := \frac{H\bigl(1-(1-u_1)^{m+1},\dots,1-(1-u_K)^{m+1}\bigr)}{1-[1-H(u_1,\dots,u_K)]^{m+1}} \qquad (10)$$
is decreasing in $(0,1)^K$.
The next theorem provides a sufficient condition on the generalized distortion function for the hr ordering in Theorem 2.3. First note the following lemma from Zhang et al. [5].
Lemma 2.4. For any positive integer $m$, the function $\omega(u) = \dfrac{u(1-u)^m}{1-(1-u)^{m+1}}$ is decreasing and positive in $u \in (0,1)$.
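Lemma 2.4 is easy to verify numerically; the short Python check below (illustrative only, for $m = 2$) evaluates $\omega$ on a fine grid:

```python
import numpy as np

m = 2
u = np.linspace(0.01, 0.99, 99)
omega = u * (1 - u)**m / (1 - (1 - u)**(m + 1))  # the function of Lemma 2.4

assert np.all(omega > 0)           # positive on (0,1)
assert np.all(np.diff(omega) < 0)  # decreasing on (0,1)
```

Indeed, writing $s = 1-u$ gives $\omega = s^2/(1+s+s^2)$, which is increasing in $s$ and hence decreasing in $u$.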
In the sequel, we use the notation
$$D_i H(u_1,\dots,u_K) = \frac{\partial}{\partial u_i} H(u_1,\dots,u_K).$$
Theorem 2.5. Suppose that

(i) $\alpha_i^H(u_1,\dots,u_K) = \dfrac{u_i D_i H(u_1,\dots,u_K)}{H(u_1,\dots,u_K)}$ is decreasing in $(0,1)^K$ for all $i = 1,\dots,K$,

(ii) $H(u_1,\dots,u_K) \le u_i$ for all $i = 1,\dots,K$,
then, we have τc ≥hr τs.
Proof. It must be shown that the partial derivative of (10) w.r.t. $u_i$ is non-positive for all $i = 1,\dots,K$. Let $v_i = 1-(1-u_i)^{m+1}$. We have
$$\frac{\partial}{\partial u_i}\Gamma(u_1,\dots,u_K) \overset{sgn}{=} \frac{(1-u_i)^m}{1-(1-u_i)^{m+1}}\cdot\frac{v_i D_i H(v_1,\dots,v_K)}{H(v_1,\dots,v_K)} - \frac{(1-H(u_1,\dots,u_K))^m D_i H(u_1,\dots,u_K)}{1-(1-H(u_1,\dots,u_K))^{m+1}}$$
$$\le \frac{(1-u_i)^m}{1-(1-u_i)^{m+1}}\cdot\frac{u_i D_i H(u_1,\dots,u_K)}{H(u_1,\dots,u_K)} - \frac{(1-H(u_1,\dots,u_K))^m D_i H(u_1,\dots,u_K)}{1-(1-H(u_1,\dots,u_K))^{m+1}}$$
$$= \frac{u_i(1-u_i)^m}{1-(1-u_i)^{m+1}}\cdot\frac{D_i H(u_1,\dots,u_K)}{H(u_1,\dots,u_K)} - \frac{D_i H(u_1,\dots,u_K)}{H(u_1,\dots,u_K)}\cdot\frac{H(u_1,\dots,u_K)(1-H(u_1,\dots,u_K))^m}{1-(1-H(u_1,\dots,u_K))^{m+1}}$$
$$= \frac{D_i H(u_1,\dots,u_K)}{H(u_1,\dots,u_K)}\,\omega(u_i) - \frac{D_i H(u_1,\dots,u_K)}{H(u_1,\dots,u_K)}\,\omega\bigl(H(u_1,\dots,u_K)\bigr)$$
$$\le \frac{D_i H(u_1,\dots,u_K)}{H(u_1,\dots,u_K)}\,\omega\bigl(H(u_1,\dots,u_K)\bigr) - \frac{D_i H(u_1,\dots,u_K)}{H(u_1,\dots,u_K)}\,\omega\bigl(H(u_1,\dots,u_K)\bigr) = 0,$$
where the first inequality follows from condition (i) and the fact that $v_i \ge u_i$, and the second inequality follows from Lemma 2.4 and condition (ii). Thus, the proof is finished.
Example 2.6. For the system in Example 2.2, we check the conditions given in Theorem 2.5. From (8) we have
$$D_i H(u_1,\dots,u_K) = n_i u_i^{n_i-1}\prod_{j=1, j\ne i}^{K} u_j^{n_j},$$
hence
$$\alpha_i^H(u_1,\dots,u_K) = n_i,$$
which shows that condition (i) holds. Condition (ii) also clearly holds. Next, for the series system with dependent components and $\theta = 0.5$, the function $\alpha_1^H$ based on (9) is plotted for all values of $u_1,u_2$ in Figure 3. As can be seen, $\alpha_1^H$ is decreasing-increasing in $u_1$ and increasing in $u_2$, so condition (i) of Theorem 2.5 does not hold. Hence, for the "hr" ordering between $\tau_c$ and $\tau_s$, we must use Theorem 2.3 directly. The plot of (10) is depicted in Figure 4; it shows that (10) is decreasing in $u_1$ and $u_2$, so we can conclude that $\tau_c \ge_{hr} \tau_s$ in this case.
Figure 3: Plot of the function α H1 in Example 2.6
Figure 4: Plot of the function (10) in Example 2.6
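The monotonicity of (10) displayed in Figure 4 can also be checked on a grid. The following Python sketch is illustrative only and assumes the setup $n_1 = 3$, $n_2 = 2$, $m = 2$, $\theta = 0.5$ of Example 2.6:

```python
import numpy as np

n1, n2, m, theta = 3, 2, 2, 0.5

def H(u1, u2):
    # FGM-based distortion function (9)
    return u1**n1 * u2**n2 * (1 + theta * (1 - u1)**n1 * (1 - u2)**n2)

u = np.linspace(0.05, 0.95, 19)
U1, U2 = np.meshgrid(u, u)              # u1 varies along axis=1, u2 along axis=0
V1, V2 = 1 - (1 - U1)**(m + 1), 1 - (1 - U2)**(m + 1)
Gamma = H(V1, V2) / (1 - (1 - H(U1, U2))**(m + 1))  # the ratio (10)

assert np.all(np.diff(Gamma, axis=1) <= 1e-12)  # decreasing in u1
assert np.all(np.diff(Gamma, axis=0) <= 1e-12)  # decreasing in u2
```

The grid check supports the conclusion $\tau_c \ge_{hr} \tau_s$ drawn from Figure 4.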
The following theorem is devoted to the reversed hazard rate ordering between $\tau_c$ and $\tau_s$.
Theorem 2.7. For any fixed $m \in \mathbb{N}^+$, it holds that $\tau_c \ge_{rhr} \tau_s$ for all $\bar F_1,\dots,\bar F_K$ if
$$\Delta(u_1,\dots,u_K) := \frac{1-H\bigl(1-(1-u_1)^{m+1},\dots,1-(1-u_K)^{m+1}\bigr)}{[1-H(u_1,\dots,u_K)]^{m+1}} \qquad (11)$$
is decreasing in $(0,1)^K$.
In the next theorem, a sufficient condition on the generalized distortion function is obtained for the rhr ordering between $\tau_c$ and $\tau_s$.

Theorem 2.8. If $\beta(u_i) = \dfrac{(1-u_i)D_i H(u_1,\dots,u_K)}{1-H(u_1,\dots,u_K)}$ is increasing in $(0,1)^K$ for all $i = 1,\dots,K$, then $\tau_c \ge_{rhr} \tau_s$.
Proof. The desired result holds if we can show that the partial derivative of (11) w.r.t. $u_i$ is non-positive for all $i = 1,\dots,K$:
$$\frac{\partial}{\partial u_i}\Delta(u_1,\dots,u_K) \overset{sgn}{=} -\frac{(1-u_i)^m D_i H\bigl(1-(1-u_1)^{m+1},\dots,1-(1-u_K)^{m+1}\bigr)}{1-H\bigl(1-(1-u_1)^{m+1},\dots,1-(1-u_K)^{m+1}\bigr)} + \frac{D_i H(u_1,\dots,u_K)}{1-H(u_1,\dots,u_K)}$$
$$\overset{sgn}{=} -\frac{(1-u_i)^{m+1} D_i H\bigl(1-(1-u_1)^{m+1},\dots,1-(1-u_K)^{m+1}\bigr)}{1-H\bigl(1-(1-u_1)^{m+1},\dots,1-(1-u_K)^{m+1}\bigr)} + \frac{(1-u_i)D_i H(u_1,\dots,u_K)}{1-H(u_1,\dots,u_K)}$$
$$= -\beta\bigl(1-(1-u_i)^{m+1}\bigr) + \beta\bigl(1-(1-u_i)\bigr) \le 0,$$
where the inequality follows from the assumption, since $\beta$ is increasing and $1-(1-u_i)^{m+1} \ge 1-(1-u_i) = u_i$. Thus, the proof is finished.
Example 2.9. Continuing Example 2.2, we now examine the validity of the condition in Theorem 2.8. We have
$$\frac{(1-u_i)D_i H(u_1,\dots,u_K)}{1-H(u_1,\dots,u_K)} = \frac{(1-u_i)\,n_i u_i^{n_i-1}\prod_{j=1,j\ne i}^{K} u_j^{n_j}}{1-\prod_{j=1}^{K} u_j^{n_j}} := g_i(u_1,\dots,u_K). \qquad (12)$$
For the special case $m = 2$, $n_1 = 3$ and $n_2 = 2$, the plot of $g_1(u_1,u_2)$ is given in Figure 5. As we can see, $g_1(u_1,u_2)$ is increasing-decreasing in $u_1$, so the condition of Theorem 2.8 is not satisfied. We therefore plot the function (11) directly and see that it is not decreasing in $u_1$ and $u_2$; hence we conclude that $\tau_c \not\ge_{rhr} \tau_s$; see Figure 6.
Figure 5: Plot of the function g1(u1,u2) in Example 2.9
Figure 6: Plots of the function in (11) in Example 2.9
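The failure of (11) to be decreasing, visible in Figure 6, can be confirmed with a two-point numerical check. The Python snippet below (our own illustration, for the independent series system of Example 2.2) shows that $\Delta$ increases in $u_1$ near $(0.9, 0.1)$:

```python
n1, n2, m = 3, 2, 2  # independent series system of Example 2.2

def Delta(u1, u2):
    # the ratio (11) with H(u1,u2) = u1^n1 * u2^n2 and v_i = 1 - (1-u_i)^(m+1)
    H = lambda a, b: a**n1 * b**n2
    v1, v2 = 1 - (1 - u1)**(m + 1), 1 - (1 - u2)**(m + 1)
    return (1 - H(v1, v2)) / (1 - H(u1, u2))**(m + 1)

# Delta increases in u1 between these two points, so (11) is not decreasing
assert Delta(0.99, 0.1) > Delta(0.9, 0.1)
```

A single counterexample of this kind suffices to rule out the sufficient condition of Theorem 2.7.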
References
[1] Coolen, F.P.A. and Coolen-Maturi, T. (2013), Generalizing the signature
to systems with multiple types of components. In: Complex systems and
dependability. Springer, Berlin.
[2] Eryilmaz, S., Coolen, F.P.A., and Coolen-Maturi, T. (2018), Mean residual life of coherent systems consisting of multiple types of dependent components, Naval Research Logistics, 65(1), 86-97.
[3] Gupta, N. and Kumar, S. (2014), Stochastic comparisons of component and system redundancies with dependent components, Operations Research Letters, 42, 284-289.

[4] Navarro, J., del Aguila, Y., Sordo, M.A., and Suarez-Llorens, A. (2016), Preservation of stochastic orders under the formation of generalized distorted distributions. Applications to coherent systems, Methodology and Computing in Applied Probability, 18(2), 529-545.
[5] Zhang, Y., Amini Seresht, E., and Ding, W. (2017), Component and system active redundancies for coherent systems with dependent components, Applied Stochastic Models in Business and Industry, 33(4), 409-421.
On the Maximum Likelihood Prediction of a Future Record Based on Records and Inter-Record Times: A Corrigendum
Khoshkhoo Amiri, Z.1, and MirMostafaee, S.M.T.K.1
1 Department of Statistics, University of Mazandaran, Babolsar, Iran
Abstract: This paper provides the correct general expression for the predictive likelihood function of a future record based on records and inter-record times. It is proved that the relation that has been used by many authors for this predictive likelihood function is wrong, and the correct one is obtained. A real data example is provided for the purpose of illustration.
Keywords: Inter-Record Times, Record Values, Maximum Likelihood Pre-diction, Predictive Likelihood Function.
1 Introduction
Let $\{X_n, n = 1,2,\dots\}$ be a sequence of independent and identically distributed (iid) random variables from an absolutely continuous distribution whose cumulative distribution function (cdf) and probability density function (pdf) are denoted by $F(x;\theta)$ and $f(x;\theta)$, respectively, where $\theta$ is an unknown parameter or the vector of unknown parameters. Define the sequence $\{L(n), n \ge 1\}$ as follows: $L(1) = 1$ and $L(n+1) = \min\{j > L(n) : X_j < X_{L(n)}\}$ for $n \ge 1$. Then $\{R_n, n \ge 1\}$ with $R_n = X_{L(n)}$, $n = 1,2,\dots$, is the sequence of lower record values extracted from $\{X_n, n \ge 1\}$. In addition, $\{L(n), n \ge 1\}$ is called the sequence of lower record times. The $n$-th lower inter-record time is defined as
1Khoshkhoo Amiri, Z.: [email protected]
follows:
$$T_n = L(n+1)-L(n), \quad n = 1,2,\dots.$$
We note that $R_1$ equals $X_1$, which is called the trivial record. Similar definitions can be given for the upper records, upper record times and upper inter-record times. Record data are widely applied in many practical situations (for example, when access to all data is restricted). Some of these situations are industrial stress testing, finance, meteorological analysis, hydrology, seismology, sporting and athletic events, and mining surveys; see [2] and the references therein for more details.
The joint pdf of the lower record values $R_1,R_2,\dots,R_n$ is given by (see [2])
$$f_{R_1,\dots,R_n}(r_1,\dots,r_n;\theta) = f(r_n;\theta)\prod_{i=1}^{n-1}\frac{f(r_i;\theta)}{F(r_i;\theta)}, \quad r_1 > r_2 > \dots > r_n.$$
The marginal pdf of $R_m$, $m \ge 1$, is then given by
$$f_{R_m}(x;\theta) = \frac{1}{(m-1)!}\bigl[-\ln(F(x;\theta))\bigr]^{m-1} f(x;\theta).$$
The joint pdf of the lower record values $R_1,R_2,\dots,R_n$ and inter-record times $T_1,\dots,T_{n-1}$ is also given by (see [8])
$$f_{R_1,\dots,R_n,T_1,\dots,T_{n-1}}(r_1,\dots,r_n,t_1,\dots,t_{n-1};\theta) = f(r_n;\theta)\prod_{i=1}^{n-1} f(r_i;\theta)\bigl[1-F(r_i;\theta)\bigr]^{t_i-1}, \qquad (1)$$
where $r_1 > r_2 > \dots > r_n$ and the $t_i$'s are positive integers, for $i = 1,\dots,n-1$.
Predicting future observations is of crucial importance in many phenomena. The problem of predicting future records based on observed records has attracted many authors' attention, and consequently much research has been done in this regard in recent decades; see for example [2] and the references therein. However, prediction of a future record based on both records and inter-record times is a newer topic that has been discussed only in recent years. In addition, [1] focused on the problem of interval prediction of order statistics based on records by employing inter-record times for the exponential distribution. One of the methods of prediction was proposed by [3] and is
well known as maximum likelihood prediction. Maximum likelihood prediction applies the maximum likelihood principle to the joint prediction of a future observation and estimation of the unknown parameter(s). The predictor and estimator(s) obtained through this approach are then called the maximum likelihood predictor (MLP) and the predictive maximum likelihood estimator(s), respectively. The problem of maximum likelihood prediction of a future record based on observed records and inter-record times was perhaps first considered by [5] and then followed by [4] and [6]. However, the relation for the predictive likelihood function that was provided by [5] is not correct. In what follows, we obtain the correct predictive likelihood function of a future record based on records and inter-record times in Section 2. A real data example is provided in Section 3 for the purpose of illustration.
2 Main results
Let $\{R_1,T_1,R_2,T_2,\dots,R_{n-1},T_{n-1},R_n\}$ be the set of the first $n$ records and $n-1$ inter-record times from a population with pdf $f(\cdot;\theta)$ and cdf $F(\cdot;\theta)$. Suppose that $\mathbf{r} = (r_1,\dots,r_n)$ and $\mathbf{t} = (t_1,\dots,t_{n-1})$ are the observed vectors of $\mathbf{R} = (R_1,\dots,R_n)$ and $\mathbf{T} = (T_1,\dots,T_{n-1})$, respectively. Here, we wish to predict the $m$-th record value, $Y = R_m$ $(m > n)$. In order to apply the maximum likelihood prediction approach, we must first obtain the predictive likelihood function of the future record and the unknown parameter(s). We note that the predictive likelihood function is obtained by considering the joint pdf of the future record and the information sample, where the information sample includes the records and inter-record times. Nadar and Kızılaslan [5] considered the problem of maximum likelihood prediction of a future record statistic based on records and inter-record times for the Burr type XII distribution and then reported the following relation for the predictive likelihood function:
$$L^{*}(y,\theta;\mathbf{r},\mathbf{t}) = \frac{f(y;\theta)\,[Q(y;\theta)-Q(r_n;\theta)]^{m-n-1}}{\Gamma(m-n)}\prod_{i=1}^{n} f(r_i;\theta)\bigl(1-F(r_i;\theta)\bigr)^{t_i-1}, \qquad (2)$$
where $t_n = 1$ and $Q(r;\theta) = -\ln(F(r;\theta))$.
In the following theorem, we prove that relation (2) is wrong by providing the correct relation.
Theorem 2.1. Let $\{R_1,T_1,R_2,T_2,\dots,R_{n-1},T_{n-1},R_n\}$ be the set of the first $n$ records and $n-1$ inter-record times from a population with pdf $f(\cdot;\theta)$ and cdf $F(\cdot;\theta)$, in which $\theta$ is the vector of the unknown parameter(s). Suppose that $\mathbf{r} = (r_1,\dots,r_n)$ and $\mathbf{t} = (t_1,\dots,t_{n-1})$ are the observed vectors of $\mathbf{R} = (R_1,\dots,R_n)$ and $\mathbf{T} = (T_1,\dots,T_{n-1})$, respectively. The predictive likelihood function of the $m$-th future record value, $Y = R_m$ $(m > n)$, and $\theta$ is given by
$$L^{**}(y,\theta;\mathbf{r},\mathbf{t}) = \frac{f(y;\theta)\,[Q(y;\theta)-Q(r_n;\theta)]^{m-n-1}}{\Gamma(m-n)\,F(r_n;\theta)}\prod_{i=1}^{n} f(r_i;\theta)\bigl(1-F(r_i;\theta)\bigr)^{t_i-1}, \qquad (3)$$
where $t_n = 1$ and $Q(r;\theta) = -\ln(F(r;\theta))$.
Proof. Here, we provide two proofs, Proof (a) and Proof (b).
Proof (a): From (1), the joint pdf of the lower record values $R_1,\dots,R_n$ and inter-record times $T_1,\dots,T_{n-1}$ can be rewritten as
$$f_{R_1,\dots,R_n,T_1,\dots,T_{n-1}}(r_1,\dots,r_n,t_1,\dots,t_{n-1};\theta) = \prod_{i=1}^{n} f(r_i;\theta)\bigl[1-F(r_i;\theta)\bigr]^{t_i-1},$$
where $t_n = 1$.
Samaniego and Whitaker [8] emphasized that the conditional distribution of $R_n$ given $\{r_1,t_1,r_2,t_2,\dots,r_{n-1},t_{n-1}\}$ depends only on $r_{n-1}$. Therefore we can similarly conclude that the conditional distribution of $Y = R_m$ given $\{r_1,t_1,r_2,t_2,\dots,r_{n-1},t_{n-1},r_n,t'_n\}$ depends only on the value of $r_n$, where $t'_n$ is the observed value of the $n$-th inter-record time. Of course, the conditional distribution of $Y = R_m$ given $\{r_1,t_1,r_2,t_2,\dots,r_{n-1},t_{n-1},r_n\}$ depends only on the value of $r_n$ as well. The
conditional distribution of $Y = R_m$ given $R_n = r_n$ is (see [2])
$$f_Y(y\mid r_n) = \frac{[Q(y;\theta)-Q(r_n;\theta)]^{m-n-1}}{\Gamma(m-n)\,F(r_n;\theta)}\,f(y;\theta).$$
The predictive likelihood function must be the joint probability function of $R_1,T_1,R_2,T_2,\dots,R_{n-1},T_{n-1},R_n$ and $Y = R_m$, which is given by
$$L^{**}(y,\theta;\mathbf{r},\mathbf{t}) = f_{R_1,\dots,R_n,T_1,\dots,T_{n-1},Y}(r_1,\dots,r_n,t_1,\dots,t_{n-1},y;\theta)$$
$$= f_Y(y\mid\mathbf{r},\mathbf{t})\,f_{R_1,\dots,R_n,T_1,\dots,T_{n-1}}(r_1,\dots,r_n,t_1,\dots,t_{n-1};\theta)$$
$$= f_Y(y\mid r_n)\,f_{R_1,\dots,R_n,T_1,\dots,T_{n-1}}(r_1,\dots,r_n,t_1,\dots,t_{n-1};\theta)$$
$$= \frac{[Q(y;\theta)-Q(r_n;\theta)]^{m-n-1}}{\Gamma(m-n)\,F(r_n;\theta)}\,f(y;\theta)\times\prod_{i=1}^{n} f(r_i;\theta)\bigl(1-F(r_i;\theta)\bigr)^{t_i-1},$$
which gives the result.
Proof (b): From (1), the joint probability function of $R_1,T_1,R_2,T_2,\dots,R_{n-1},T_{n-1},R_n,T_n,R_{n+1},\dots,R_{m-1},T_{m-1},R_m$ is given by
$$f^{**}_{\theta}(r_1,t_1,\dots,t_{n-1},r_n,t_n,r_{n+1},\dots,t_{m-1},r_m) = f(r_m;\theta)\prod_{i=1}^{m-1} f(r_i;\theta)\bigl(1-F(r_i;\theta)\bigr)^{t_i-1},$$
where $r_m < r_{m-1} < \dots < r_{n+1} < r_n < \dots < r_1$.
The joint distribution of $R_1,T_1,R_2,T_2,\dots,R_{n-1},T_{n-1},R_n$ and $R_m$ is obtained as follows:
$$f(r_1,t_1,\dots,r_n,r_m) = \int_{r_m}^{r_n}\!\!\cdots\!\int_{r_{n+2}}^{r_n}\sum_{t_n=1}^{\infty}\cdots\sum_{t_{m-1}=1}^{\infty} f(r_m;\theta)\prod_{i=1}^{m-1} f(r_i;\theta)\bigl(1-F(r_i;\theta)\bigr)^{t_i-1}\,dr_{n+1}\cdots dr_{m-1}$$
$$= f(r_m;\theta)\prod_{i=1}^{n-1} f(r_i;\theta)\bigl(1-F(r_i;\theta)\bigr)^{t_i-1}\int_{r_m}^{r_n}\!\!\cdots\!\int_{r_{n+2}}^{r_n}\sum_{t_n=1}^{\infty} f(r_n;\theta)\bigl(1-F(r_n;\theta)\bigr)^{t_n-1}\times\cdots\times\sum_{t_{m-1}=1}^{\infty} f(r_{m-1};\theta)\bigl(1-F(r_{m-1};\theta)\bigr)^{t_{m-1}-1}\,dr_{n+1}\cdots dr_{m-1}$$
$$= f(r_m;\theta)\prod_{i=1}^{n-1} f(r_i;\theta)\bigl(1-F(r_i;\theta)\bigr)^{t_i-1}\int_{r_m}^{r_n}\!\!\cdots\!\int_{r_{n+2}}^{r_n}\prod_{j=n}^{m-1}\frac{f(r_j;\theta)}{F(r_j;\theta)}\,dr_{n+1}\cdots dr_{m-1}$$
$$= f(r_m;\theta)\,\frac{f(r_n;\theta)}{F(r_n;\theta)}\prod_{i=1}^{n-1} f(r_i;\theta)\bigl(1-F(r_i;\theta)\bigr)^{t_i-1}\int_{r_m}^{r_n}\!\!\cdots\!\int_{r_{n+2}}^{r_n}\prod_{j=n+1}^{m-1}\frac{f(r_j;\theta)}{F(r_j;\theta)}\,dr_{n+1}\cdots dr_{m-1}$$
$$= f(r_m;\theta)\,\frac{f(r_n;\theta)}{F(r_n;\theta)}\prod_{i=1}^{n-1} f(r_i;\theta)\bigl(1-F(r_i;\theta)\bigr)^{t_i-1}\,\frac{[\ln(F(r_n;\theta))-\ln(F(r_m;\theta))]^{m-n-1}}{(m-n-1)!},$$
and the required result follows.
A random variable $X$ possesses a Rayleigh distribution with scale parameter $\theta$ if its pdf is given by
$$f(x;\theta) = 2\theta x e^{-\theta x^2}, \quad x > 0,\ \theta > 0.$$
Suppose that $\{R_1,T_1,R_2,T_2,\dots,R_{n-1},T_{n-1},R_n\}$ is the set of the first $n$ records and $n-1$ inter-record times from a Rayleigh distribution with scale parameter $\theta$, and that $\mathbf{r} = (r_1,\dots,r_n)$ and $\mathbf{t} = (t_1,\dots,t_{n-1})$ are the observed vectors of $\mathbf{R} = (R_1,\dots,R_n)$ and $\mathbf{T} = (T_1,\dots,T_{n-1})$, respectively. We want to predict the $m$-th record value, $Y = R_m$, using the maximum likelihood prediction approach. The predictive likelihood function is then given by
$$L^{**}(y,\theta;\mathbf{r},\mathbf{t}) = \frac{2^{n+1}\theta^{n+1} y\,\bigl(-\ln[1-\exp(-\theta y^2)]+\ln[1-\exp(-\theta r_n^2)]\bigr)^{m-n-1}}{\Gamma(m-n)\,\bigl(1-\exp(-\theta r_n^2)\bigr)}\times\exp\Bigl(-\theta\Bigl[y^2+\sum_{i=1}^{n} t_i r_i^2\Bigr]\Bigr)\prod_{i=1}^{n} r_i, \quad \theta > 0,\ y < r_n. \qquad (4)$$
We note that if $m = n+1$, then (4) simplifies to
$$L^{**}(y,\theta;\mathbf{r},\mathbf{t}) = \frac{2^{n+1}\theta^{n+1} y\,\exp\bigl(-\theta[y^2+\sum_{i=1}^{n} t_i r_i^2]\bigr)}{1-\exp(-\theta r_n^2)}\prod_{i=1}^{n} r_i, \qquad (5)$$
where $\theta > 0$ and $y < r_n$.
Upon maximizing (4) with respect to (w.r.t.) $\theta$ and $y$, we can find the MLP of $Y$ and the predictive maximum likelihood (PML) estimator of $\theta$.
3 A real data example
Here, we consider the following data on the amount of rainfall (in inches) recorded at the Los Angeles Civic Center in February from 1998 to 2012; see the website of the Los Angeles Almanac: www.laalmanac.com/weather/we08aa.php.
0.56, 5.54, 8.87, 0.29, 4.64, 4.89, 11.02, 2.37, 0.92, 1.64, 3.57,
4.27, 3.29, 0.16.
We used the formal Kolmogorov-Smirnov (K-S) goodness-of-fit test to see ifthe Rayleigh distribution fits the above data and observed that the K-S p-valuewas greater than 0.29. Thus we may conclude that the Rayleigh distribution issuitable for modeling the above data.
Table 1: The lower record values and inter-record times extracted from the data of the example.

i    | 1    | 2    | 3
r_i  | 0.56 | 0.29 | 0.16
t_i  | 3    | 10   | 1
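The entries of Table 1 can be reproduced directly from the rainfall data. The following Python snippet (our own illustration, not part of the original paper) extracts the lower record values and their record times $L(n)$:

```python
rainfall = [0.56, 5.54, 8.87, 0.29, 4.64, 4.89, 11.02, 2.37, 0.92,
            1.64, 3.57, 4.27, 3.29, 0.16]

def lower_records(data):
    """Return the lower record values and their record times L(n)."""
    records, times = [data[0]], [1]           # R_1 = X_1 is the trivial record
    for j, x in enumerate(data[1:], start=2):
        if x < records[-1]:                   # a new lower record occurs
            records.append(x)
            times.append(j)
    return records, times

records, L = lower_records(rainfall)
inter = [L[i + 1] - L[i] for i in range(len(L) - 1)]  # T_n = L(n+1) - L(n)

assert records == [0.56, 0.29, 0.16]
assert inter == [3, 10]  # together with the convention t_3 = 1, this is Table 1
```

Here $L = [1, 4, 14]$, so the observed inter-record times are $3$ and $10$, and $t_3 = 1$ is set by convention.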
Now, we want to derive the MLPs of $R_4$, $R_5$ and $R_6$. Setting $Y = R_4$, from (5) we see that, for fixed $\theta$, $L^{**}$ is increasing w.r.t. $y$ for $y < 1/\sqrt{2\theta}$ and decreasing w.r.t. $y$ for $y > 1/\sqrt{2\theta}$. Therefore, if $\theta > \frac{1}{2r_n^2} = 19.53125$, then $L^{**}$ achieves its maximum in $(0, 0.16)$; otherwise $L^{**}$ is increasing w.r.t. $y$ in $(0, 0.16)$. Maximizing $L^{**}$ w.r.t. $y$ and $\theta$ jointly, we arrive at $\theta = 1.6481$ as the PML estimate of $\theta$, and for this value of $\theta$, $L^{**}$ is increasing w.r.t. $y$ in $(0, 0.16)$. Are we allowed to take the maximum likelihood prediction of $R_4$ to be 0.16? Or must we state here that the MLP of $R_4$ does not exist, as $y < r_3 = 0.16$? (We note that $L^{*}$ and $L^{**}$ behave similarly w.r.t. $y$ when $m = n+1$. Using the wrong relation (2), the PML estimate of $\theta$ is derived as 2.1822.) Next, we set $Y = R_5$. Maximizing $L^{**}$ w.r.t. $y$ and $\theta$ jointly, we arrive at $\theta = 1.6634$ as the PML estimate of $\theta$ and $y = 0.05783$ as the maximum likelihood prediction of $R_5$. We note that if we use the wrong relation (2), then the PML estimate of $\theta$ and the maximum likelihood prediction of $R_5$ are obtained as 2.2024 and 0.057505, respectively. Finally, we set $Y = R_6$ and find $\theta = 1.6654$ as the PML estimate of $\theta$ and $y = 0.02138$ as the maximum likelihood prediction of $R_6$. Using the wrong relation (2), the PML estimate of $\theta$ and the maximum likelihood prediction of $R_6$ are calculated as 2.205 and 0.02129, respectively. Summing up, we see that the correct relation and the wrong one lead to different numerical results, and surely we must apply the correct one. The computations were done using Maple 16 and the statistical software R (see [7]).
References
[1] Amini, M. and MirMostafaee, S.M.T.K. (2016), Interval prediction of order statistics based on records by employing inter-record times: A study under two parameter exponential distribution, Metodoloski Zvezki, 13(1), 1-15.
[2] Arnold, B.C., Balakrishnan, N. and Nagaraja, H.N. (1998), Records, JohnWiley and Sons, New York.
[3] Kaminsky, K.S. and Rhodin, L.S. (1985), Maximum likelihood prediction,Annals of the Institute of Statistical Mathematics, 37, 507-517.
[4] Kızılaslan, F. and Nadar, M. (2016), Estimation and prediction of the Kumaraswamy distribution based on record values and inter-record times, Journal of Statistical Computation and Simulation, 86(12), 2471-2493.
[5] Nadar, M. and Kızılaslan, F. (2015), Estimation and prediction of the Burr type XII distribution based on record values and inter-record times, Journal of Statistical Computation and Simulation, 85(16), 3297-3321.
[6] Pak, A. and Dey, S. (2019), Statistical inference for the power Lindley model based on record values and inter-record times, Journal of Computational and Applied Mathematics, 347, 156-172.
[7] R Core Team (2018), R: A language and environment for statistical computing, R Foundation for Statistical Computing, Vienna, Austria.
[8] Samaniego, F.J. and Whitaker, L.R. (1986), On estimating population characteristics from record-breaking observations. I. Parametric results, Naval Research Logistics, 33(3), 531-543.
E-Bayesian and Hierarchical Bayesian Estimation in a Family of Distributions
Kiapour, A.1, and Naghizadeh Qomi, M.2
1 Department of Statistics, Islamic Azad University, Babol branch, Babol,Iran
2 Department of Statistics, University of Mazandaran, Babolsar, Iran
Abstract: In this paper, we deal with Bayesian, E-Bayesian and hierarchical Bayesian estimation in a family of distributions under a squared log error loss function. In particular, E-Bayesian and hierarchical Bayesian estimators for the shape parameter of a Pareto distribution are provided when the scale parameter is known. A Monte Carlo simulation is conducted for comparison of the Bayes and E-Bayesian estimators. A real data set is used to illustrate the proposed estimators.
Keywords: E-Bayesian Estimation, Hierarchical Bayes, Pareto Distribution.
1 Introduction
A Bayesian approach to a statistical problem requires defining a prior distribution over the parameter space and a loss function. Many Bayesians believe that just one prior can be elicited. In practice, the prior knowledge is vague and any elicited prior distribution is only an approximation to the true one. So, we elect to restrict attention to a given flexible family of priors. Various solutions to this problem have been proposed. One of the proposed solutions is the E-Bayesian approach, which has been applied over the last decades. The E-Bayesian method was first introduced by Han (1997). The E-Bayesian estimator of an unknown parameter is obtained on the basis of the distribution of the hyperparameter(s); for more details, see Han (2007, 2009, 2011), Jaheen and Okasha (2011) and Kiapour (2018). In some situations, the prior distribution parameters may depend on hyperparameters. In this situation, we often use the hierarchical Bayesian estimation method. The hierarchical Bayes method was first introduced by Lindley and Smith (1972).

1 Kiapour, A.: azadeh [email protected]
In Bayesian inference, the most commonly used loss function is the convex and symmetric squared error loss (SEL) function, which is widely used in decision theory due to its simple mathematical properties. But in some cases, it does not represent the true loss structure. For example, it is not useful for estimation of the scale parameter, and it assigns the same penalties to overestimation and underestimation. For estimation of the scale parameter $\theta$, Brown (1968) proposed the squared log error loss (SLEL) function, which is given by
$$L(\theta,\delta) = (\ln\delta-\ln\theta)^2 = \Bigl[\ln\frac{\delta}{\theta}\Bigr]^2, \qquad (1)$$
where both $\theta$ and $\delta$ are positive. This loss is neither symmetric nor convex: it is convex when $\delta/\theta \le e$ and concave otherwise, but it has a unique minimum at $\delta = \theta$, and $L(\theta,\delta)$ increases as $\delta$ moves away from $\theta$ in either direction. In estimation problems where underestimation is more serious than overestimation, this loss is appropriate to use; see Kiapour and Nematollahi (2011).
In this paper, Bayes, E-Bayesian and hierarchical Bayesian estimators in a family of distributions are obtained under the loss function (1). In Section 2, we state preliminary definitions and formulas for Bayesian, E-Bayesian and hierarchical Bayesian estimation of an unknown parameter. In Section 3, we find the Bayes estimator of the parameter $\theta$ in a family of distributions under the loss function (1). E-Bayesian estimators are developed in Section 4. A Monte Carlo simulation is used for a comparison of the E-Bayesian estimators of the shape parameter of a Pareto distribution in Section 5. Hierarchical
Bayesian estimators are obtained in Section 6. The golfers' income data are used for practical illustration in Section 7. Finally, we end the paper with a discussion.
2 Preliminaries
Let $X^n = (X_1,\dots,X_n)$ be independent and identically distributed (i.i.d.) random variables from a distribution $p_\theta$ indexed by a real unknown parameter $\theta$. Also, let $(\chi, B, p)$ denote the probability space generated by $X$, where $\chi \subset \mathbb{R}^n$, $B$ is the $\sigma$-field of $\chi$, $p = \{p_\theta(x)\mid\theta\in\Theta\}$ and $\Theta$ is the parameter space. In estimation of $\theta$, let $L(\theta,\delta)$ be the loss function (1). Then, the posterior risk of $\delta$ based on observations $x^n = (x_1,\dots,x_n)$ can be expressed as
$$\rho(\pi,\delta) = \ln^2\delta(x^n) + E[\ln^2\theta\mid x^n] - 2\ln\delta(x^n)\,E[\ln\theta\mid x^n]. \qquad (2)$$
The Bayes estimate of $\theta$ based on observations $x^n$ is any estimate $\delta^B(x^n)$ that minimizes the posterior risk (2), which is given by
$$\delta^B(x^n) = e^{E[\ln\theta\mid x^n]}. \qquad (3)$$
Information on the appropriate prior is often inadequate to unambiguously specify a prior distribution. The problem of expressing uncertainty regarding prior information can be solved by using a class of prior distributions.

E-Bayesian inference deals with such a problem by constructing methods that are stable against such a lack of information. Consider a prior $\pi(\theta\mid a,b)$ for $\theta$
with hyperparameters $a$ and $b$. The E-Bayesian estimator of $\theta$ is the expectation of the Bayes estimator over the hyperparameters and is defined as
$$\delta^{EB}(x^n) = \int\!\!\int_D \delta^B(x^n)\,\pi(a,b)\,da\,db = E\bigl(\delta^B(x^n)\bigr), \qquad (4)$$
where $\pi(a,b)$ is the prior density function of the hyperparameters $a$ and $b$.
According to Lindley and Smith (1972), a prior distribution may be assigned to the hyperparameters when the prior distribution of $\theta$ involves hyperparameters. The corresponding hierarchical prior density function of $\theta$ is
$$\pi(\theta) = \int\!\!\int_D \pi(\theta\mid a,b)\,\pi(a,b)\,da\,db. \qquad (5)$$
Therefore, the hierarchical Bayesian estimator is obtained from the hierarchical posterior distribution using (3) as $\delta^{HB}(x^n) = e^{E[\ln\theta\mid x^n]}$.
3 Bayesian estimation strategy
Let $\{p_\theta\mid\theta\in\Theta\}$ be a one-parameter family of distributions with probability density function (p.d.f.)
$$f_\theta(x) = c(x,n)\,\theta^{s(x)}e^{-t(x)\theta}, \quad x \in \mathbb{R},$$
where $c(x,n)$ is a function of $x$ and $n$, and $t$, $s$ are fixed functions. Examples of such distributions are given in Table 1.
Let $X_1,X_2,\dots,X_n$ be a sequence of i.i.d. random variables with distribution $f_\theta$, and set $X = (X_1,\dots,X_n)$. Also, let $\pi_{a,b}$ be a conjugate family of distributions with p.d.f.
$$\pi(\theta\mid a,b) = \frac{b^a}{\Gamma(a)}\theta^{a-1}e^{-b\theta}, \quad \theta > 0, \qquad (6)$$
where $\Gamma(a) = \int_0^\infty x^{a-1}e^{-x}dx$ is the gamma function, and the hyperparameters satisfy $a > 0$ and $b > 0$. It is easy to verify that the posterior distribution of $\theta$ given $x$ is $Gamma(S+a, T+b)$, where $S = \sum_{i=1}^{n} s(x_i)$ and $T = \sum_{i=1}^{n} t(x_i)$. Therefore, the Bayes estimator of $\theta$ under the loss function (1) is given by
$$\delta^B(x) = \frac{e^{\psi(S+a)}}{T+b}, \qquad (7)$$
where $\psi(\nu) = \frac{d}{d\nu}\ln\Gamma(\nu) = \frac{\Gamma'(\nu)}{\Gamma(\nu)}$ is the digamma function.
Table 1: Representation of the family p

Distribution          | p_θ(x)                 | s(x) | t(x)
Poisson               | Poi(θ)                 | x    | 1
Exponential           | E(θ)                   | 1    | x
Gamma                 | G(α,θ), α > 0 known    | α    | x
Pareto                | Par(α,θ), α > 0 known  | 1    | ln(x/α)
Power                 | P(λ,θ), λ > 0 known    | 1    | ln(λ/x)
Negative exponential  | NE(µ,θ), µ > 0 known   | 1    | x − µ
Inverse gamma         | IG(α,θ), α > 0 known   | α    | 1/x
Inverse Gaussian      | IGa(µ,θ), µ > 0 known  | 1/2  | (x−µ)²/(2µ²x)
4 E-Bayesian estimation

According to Han (1997), $a$ and $b$ should be selected to guarantee that $\pi(\theta\mid a,b)$ is a decreasing function of $\theta$. If we take the conjugate prior (6), the hyperparameters $a$ and $b$ should be in the ranges $0 < a < 1$ and $b > 0$, respectively, so that $\frac{d\pi(\theta\mid a,b)}{d\theta} < 0$. A prior distribution with a thinner tail would worsen the robustness of the Bayesian analysis. Accordingly, $b$ should not be too big while $0 < a < 1$. It is better to choose $0 < a < 1$ and $0 < b < c$ ($c > 0$, where $c$ is a constant).
Suppose that the prior distributions of $a$ and $b$ are uniform on $(0,1)$ and uniform on $(0,c)$, respectively, with $a$ and $b$ independent. Then the joint prior distribution of $a$ and $b$ is given by
$$\pi_1(a,b) = \frac{1}{c}, \quad 0 < a < 1,\ 0 < b < c. \qquad (8)$$
In the following theorem, we obtain the E-Bayesian estimator of $\theta$ under the loss function (1) and the prior distribution (8).
Theorem 4.1. Let $x^n = (x_1,x_2,\dots,x_n)$ be the sample observations from the one-parameter exponential family. Then the E-Bayesian estimator of $\theta$ corresponding to the prior given in (8) under the loss function (1) is
$$\delta^{EB_1}(x^n) = \frac{1}{c}\ln\Bigl(1+\frac{c}{T}\Bigr)\int_0^1 e^{\psi(S+a)}\,da. \qquad (9)$$

Proof. For $\pi_1(a,b)$, the E-Bayesian estimator under the loss function (1) is given by
$$\delta^{EB_1}(x^n) = \int_0^1\!\!\int_0^c \frac{e^{\psi(S+a)}}{c(T+b)}\,db\,da = \frac{1}{c}\ln\Bigl(1+\frac{c}{T}\Bigr)\int_0^1 e^{\psi(S+a)}\,da,$$
which ends the proof.
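The closed form (9) can be checked against direct numerical integration of the double integral in (4). The following Python sketch is illustrative, with hypothetical values of $S$, $T$ and $c$:

```python
import numpy as np
from scipy.special import digamma
from scipy.integrate import quad, dblquad

S, T, c = 10.0, 4.0, 2.5  # hypothetical sufficient statistics and constant c

# closed form (9)
inner, _ = quad(lambda a: np.exp(digamma(S + a)), 0, 1)
closed = (1 / c) * np.log(1 + c / T) * inner

# direct double integral of the Bayes estimator (7) against pi_1(a,b) = 1/c
direct, _ = dblquad(lambda b, a: np.exp(digamma(S + a)) / (c * (T + b)),
                    0, 1, 0, c)

assert abs(closed - direct) < 1e-6
```

The agreement follows from $\int_0^c \frac{db}{T+b} = \ln\bigl(1+\frac{c}{T}\bigr)$, which is exactly the step used in the proof.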
Also, suppose that the prior distribution of a is Beta distribution Beta(u,v),and the prior distribution of b is uniform distribution in (0,c), when a and b
are independent. Then, the joint prior distribution of a and b is given by
π2(a,b) =1
cB(u,v)au−1(1−a)v−1, 0 < a < 1, 0 < b < c, (10)
where B(u,v) =∫ 1
0 xu−1(1− x)v−1dx is the beta function. In the followingtheorem, we obtain the E-Bayesian estimator of θ under the loss function (1)and prior distribution (10).
Theorem 4.2. If x_n = (x_1, x_2, ..., x_n) are the sample observations from the one-parameter exponential family, then the E-Bayesian estimator of θ corresponding to the prior given in (10) under the loss function (1) is equal to
\[
\delta_{EB_2}(x_n)=\frac{1}{c}\,\ln\!\left(1+\frac{c}{T}\right)\int_0^1 e^{\psi(S+a)}\,\frac{a^{u-1}(1-a)^{v-1}}{B(u,v)}\,da. \tag{11}
\]
Proof. The E-Bayesian estimator under the loss function (1) is given by
\[
\delta_{EB_2}(x_n)=\int_0^1\!\!\int_0^c \frac{e^{\psi(S+a)}}{T+b}\,\frac{a^{u-1}(1-a)^{v-1}}{c\,B(u,v)}\,db\,da
=\frac{1}{c}\,\ln\!\left(1+\frac{c}{T}\right)\int_0^1 e^{\psi(S+a)}\,\frac{a^{u-1}(1-a)^{v-1}}{B(u,v)}\,da,
\]
which ends the proof.
5 Simulation study
In this section, we perform a numerical comparison between the Bayes and E-Bayesian estimators of the shape parameter of a Pareto distribution. For
The 6th Seminar on Reliability Theory and its Applications 167
this purpose, we generate independent random samples of size n from a Pareto distribution with true parameter values α = 200 and θ = 3. Let δ^k_i, k = 1, 2, 3, stand for the Bayes estimate δ_B(x_n) with a = 0.6 and b = 2 given by (7), and the E-Bayesian estimates δ_{EB_i}(x_n), i = 1, 2, given by (9) and (11) with u = 3, v = 2 and selected values c = 2.5, 3, 3.5, computed in the ith replication. We repeat this procedure M = 10^4 times and calculate the estimated risk (ER) using the following formula:
\[
ER(\delta^k)=\frac{1}{M}\sum_{i=1}^{M}\left(\ln\delta_i^k-\ln\theta\right)^2. \tag{12}
\]
The results are summarized in Table 2. It is seen from Table 2 that the E-Bayesian estimators perform noticeably better than the Bayes estimator. Moreover, the estimated risk decreases as the sample size increases.
Table 2: Results of ER for Bayes and E-Bayesian estimators
n c δ B δ EB1 δ EB2
20 2.5 0.10209 0.07367 0.07223
3 0.08048 0.07875
3.5 0.08870 0.08669
50 2.5 0.03120 0.02572 0.02545
3 0.02718 0.02686
3.5 0.02898 0.02860
100 2.5 0.01910 0.01637 0.01624
3 0.01715 0.01699
3.5 0.01807 0.01789
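The simulation behind Table 2 can be reproduced approximately with a short stdlib-only Python sketch (our own illustration, not the authors' code). It assumes the standard summaries for a Pareto(α, θ) sample with known scale: S = n and T = Σ ln(x_i/α), so that δ_B = e^{ψ(S+a)}/(T+b) as in (7) and δ_EB1 is given by (9); the digamma function ψ is approximated numerically from the log-gamma function.

```python
import math, random

def digamma(x, h=1e-6):
    # psi(x) approximated as the numerical derivative of log-gamma (stdlib only)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

n, M = 20, 2000                  # sample size and number of replications
alpha, theta = 200.0, 3.0        # true Pareto scale and shape
a, b, c = 0.6, 2.0, 2.5          # hyperparameters used in Section 5

# int_0^1 exp(psi(S+a)) da with S = n, by a midpoint rule (depends only on n here)
S = n
I = sum(math.exp(digamma(S + (k + 0.5) / 200)) for k in range(200)) / 200

rng = random.Random(1)
er_b_hat = er_eb1_hat = 0.0
for _ in range(M):
    # for a Pareto(alpha, theta) sample, the terms ln(x_i/alpha) are i.i.d. Exp(theta)
    T = sum(-math.log(rng.random()) / theta for _ in range(n))
    d_b = math.exp(digamma(S + a)) / (T + b)   # Bayes estimate, eq. (7)
    d_eb1 = math.log(1.0 + c / T) / c * I      # E-Bayesian estimate, eq. (9)
    er_b_hat += (math.log(d_b) - math.log(theta)) ** 2
    er_eb1_hat += (math.log(d_eb1) - math.log(theta)) ** 2

er_b_hat /= M
er_eb1_hat /= M
print(round(er_b_hat, 4), round(er_eb1_hat, 4))  # near the n = 20, c = 2.5 row of Table 2
```

With a smaller M than in the paper, the estimated risks fluctuate slightly around the tabulated values, but the ordering ER(δ_EB1) < ER(δ_B) is stable.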
6 Hierarchical Bayesian estimation
In this section, we obtain hierarchical Bayesian estimators of θ based on the two proposed prior distributions π₁(a,b) and π₂(a,b). First, consider the prior distribution π₁(a,b). Then, the hierarchical prior distribution is given by
\[
\pi_1(\theta)=\int_0^1\!\!\int_0^c \pi(\theta|a,b)\,\pi_1(a,b)\,db\,da
=\frac{1}{c}\int_0^1\!\!\int_0^c \frac{b^{a}}{\Gamma(a)}\,\theta^{a-1}e^{-b\theta}\,db\,da,\qquad \theta>0. \tag{13}
\]
In the following theorem, we obtain the hierarchical Bayesian estimator ofθ under the loss function (1) and the hierarchical prior distribution of θ in(13).
Theorem 6.1. Let x_n = (x_1, x_2, ..., x_n) be the sample observations from the one-parameter exponential family. Then, the hierarchical Bayesian estimator of θ under the loss function (1) is equal to
\[
\delta_{HB_1}(x_n)=\exp\!\left(\frac{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,\Gamma(S+a)}{\Gamma(a)\,(T+b)^{S+a}}\,\big(\psi(S+a)-\ln(T+b)\big)\,db\,da}{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,\Gamma(S+a)}{\Gamma(a)\,(T+b)^{S+a}}\,db\,da}\right). \tag{14}
\]
Proof. The hierarchical posterior density function of θ is given by
\[
\pi_1(\theta|x_n)=\frac{\pi_1(\theta)L(\theta|x_n)}{\int_0^{\infty}\pi_1(\theta)L(\theta|x_n)\,d\theta}
=\frac{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}}{\Gamma(a)}\,\theta^{S+a-1}e^{-(T+b)\theta}\,db\,da}{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,\Gamma(S+a)}{\Gamma(a)\,(T+b)^{S+a}}\,db\,da}. \tag{15}
\]
We have
\[
E[\ln\theta|x_n]=\int_0^{\infty}(\ln\theta)\,\pi_1(\theta|x_n)\,d\theta
=\frac{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}}{\Gamma(a)}\int_0^{\infty}(\ln\theta)\,\theta^{S+a-1}e^{-(T+b)\theta}\,d\theta\,db\,da}{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,\Gamma(S+a)}{\Gamma(a)\,(T+b)^{S+a}}\,db\,da}
=\frac{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,\Gamma(S+a)}{\Gamma(a)\,(T+b)^{S+a}}\,\big(\psi(S+a)-\ln(T+b)\big)\,db\,da}{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,\Gamma(S+a)}{\Gamma(a)\,(T+b)^{S+a}}\,db\,da}. \tag{16}
\]
Thus, the proof is completed.
Now, consider the prior distribution π₂(a,b). Then, the hierarchical prior distribution is given by
\[
\pi_2(\theta)=\frac{1}{c\,B(u,v)}\int_0^1\!\!\int_0^c \frac{b^{a}}{\Gamma(a)}\,\theta^{a-1}e^{-b\theta}\,a^{u-1}(1-a)^{v-1}\,db\,da,\qquad \theta>0. \tag{17}
\]
In the following theorem, we obtain the hierarchical Bayesian estimator of θ
under the loss function (1) and the hierarchical prior distribution of θ in (17).
Theorem 6.2. Let x_n = (x_1, x_2, ..., x_n) be the sample observations from the one-parameter exponential family. Then, the hierarchical Bayesian estimator of θ under the loss function (1) is equal to
\[
\delta_{HB_2}(x_n)=\exp\!\left(\frac{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,\Gamma(S+a)\,a^{u-1}(1-a)^{v-1}}{\Gamma(a)\,(T+b)^{S+a}}\,\big(\psi(S+a)-\ln(T+b)\big)\,db\,da}{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,\Gamma(S+a)\,a^{u-1}(1-a)^{v-1}}{\Gamma(a)\,(T+b)^{S+a}}\,db\,da}\right). \tag{18}
\]
Proof. The hierarchical posterior density function of θ is given by
\[
\pi_2(\theta|x_n)=\frac{\pi_2(\theta)L(\theta|x_n)}{\int_0^{\infty}\pi_2(\theta)L(\theta|x_n)\,d\theta}
=\frac{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,a^{u-1}(1-a)^{v-1}}{\Gamma(a)}\,\theta^{S+a-1}e^{-(T+b)\theta}\,db\,da}{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,\Gamma(S+a)\,a^{u-1}(1-a)^{v-1}}{\Gamma(a)\,(T+b)^{S+a}}\,db\,da}. \tag{19}
\]
We have
\[
E[\ln\theta|x_n]=\int_0^{\infty}(\ln\theta)\,\pi_2(\theta|x_n)\,d\theta
=\frac{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,a^{u-1}(1-a)^{v-1}}{\Gamma(a)}\int_0^{\infty}(\ln\theta)\,\theta^{S+a-1}e^{-(T+b)\theta}\,d\theta\,db\,da}{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,\Gamma(S+a)\,a^{u-1}(1-a)^{v-1}}{\Gamma(a)\,(T+b)^{S+a}}\,db\,da}
=\frac{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,\Gamma(S+a)\,a^{u-1}(1-a)^{v-1}}{\Gamma(a)\,(T+b)^{S+a}}\,\big(\psi(S+a)-\ln(T+b)\big)\,db\,da}{\displaystyle\int_0^1\!\!\int_0^c \frac{b^{a}\,\Gamma(S+a)\,a^{u-1}(1-a)^{v-1}}{\Gamma(a)\,(T+b)^{S+a}}\,db\,da}, \tag{20}
\]
which ends the proof.
7 A real example
Consider the golfers income data (Arnold, 2015). The incomes of 50 golfers, each earning more than 700,000 dollars by the end of 1980, are shown in Table 3 (unit: 1000 dollars). A Pareto distribution with scale parameter α = 703 and shape parameter θ = 2.23 gives a good fit to these data. The Bayes estimates with a = 0.6 and b = 2, and the E-Bayesian and hierarchical Bayesian estimates with u = 3, v = 2 and selected values c = 2.5, 3, 3.5, are summarized in Table 4. It is observed that the E-Bayesian and hierarchical Bayesian estimates are very close to each other. Also, these estimates are all robust with respect to the choice of c.
Table 3: the golfers income data
3581 1960 1433 1184 1066 1005 883 841 778 753
2474 1684 1410 1171 1056 1001 878 825 778 746
2202 1627 1374 1109 1051 965 871 820 771 729
1858 1537 1338 1095 1031 944 849 816 769 712
1829 1519 1208 1092 1016 912 844 814 759 708
Table 4: Results for Bayes, E-Bayesian and hierarchical estimates
c δ B δ EB1 δ EB2 δ HB1 δ HB2
2.5 2.1084 2.1749 2.1793 2.2327 2.2318
3 2.1524 2.1567 2.2306 2.2301
3.5 2.1305 2.1348 2.2296 2.2293
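As a rough numerical check of Table 4 (a hedged sketch, not the authors' code), the snippet below computes δ_B from (7) and δ_HB1 from (14) for the golfer data, taking S = n = 50 and T = Σ ln(x_i/α) with α = 703; the double integrals in (14) are approximated by a midpoint rule on a 200 × 200 grid.

```python
import math

# golfer incomes (unit: 1000 dollars), transcribed from Table 3
income = [3581, 1960, 1433, 1184, 1066, 1005, 883, 841, 778, 753,
          2474, 1684, 1410, 1171, 1056, 1001, 878, 825, 778, 746,
          2202, 1627, 1374, 1109, 1051, 965, 871, 820, 771, 729,
          1858, 1537, 1338, 1095, 1031, 944, 849, 816, 769, 712,
          1829, 1519, 1208, 1092, 1016, 912, 844, 814, 759, 708]

alpha = 703.0                                  # fitted Pareto scale parameter
S = len(income)                                # S = n for the Pareto shape parameter
T = sum(math.log(x / alpha) for x in income)   # T = sum of ln(x_i / alpha)

def digamma(x, h=1e-6):
    # psi(x) as the numerical derivative of log-gamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

# Bayes estimate (7) with a = 0.6, b = 2
a0, b0 = 0.6, 2.0
d_b = math.exp(digamma(S + a0)) / (T + b0)

def d_hb1(c=2.5, grid=200):
    # hierarchical Bayes estimate (14): ratio of double integrals, in log-weights
    num = den = 0.0
    for i in range(grid):
        a = (i + 0.5) / grid
        for j in range(grid):
            b = (j + 0.5) * c / grid
            logw = (a * math.log(b) + math.lgamma(S + a)
                    - math.lgamma(a) - (S + a) * math.log(T + b))
            w = math.exp(logw)
            num += w * (digamma(S + a) - math.log(T + b))
            den += w
    return math.exp(num / den)

print(round(d_b, 4), round(d_hb1(), 4))   # compare with the c = 2.5 row of Table 4
```

The values land close to the tabulated δ_B and δ_HB1 entries; small differences come from the crude digamma approximation and the quadrature grid.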
8 Discussion
The aim of this paper is to study Bayes, E-Bayesian and hierarchical Bayesian estimation of the unknown scale parameter of an exponential family of distributions under the SLEL function. First, we derive the Bayes estimator by choosing an explicit prior distribution for the parameter of interest. In practical situations, prior knowledge is vague and any elicited prior distribution is only an approximation to the true one, so E-Bayesian and hierarchical Bayesian analyses can be employed. Therefore, we investigated
the performance of the E-Bayesian estimators for selected values of c (an upper bound for b) in comparison with the Bayes estimator. Our findings in a simulation study showed that the E-Bayesian estimators work better than the Bayes estimator. We also considered the golfers income data; in this case too, the E-Bayesian estimators performed better than the other estimators.
References
[1] Arnold, B. C. (2015), Pareto distributions, Chapman and Hall/CRC Press.
[2] Brown, L. D. (1968), Inadmissibility of the usual estimator of scale parameters in problems with unknown location and scale parameters, Annals of Mathematical Statistics, 39, 29-48.
[3] Han, M. (1997), The structure of hierarchical prior distribution and its ap-plications, Chin. Oper. Res. Manag. Sci. 6, 31-40.
[4] Han, M. (2007), E-Bayesian estimation of failure probability and its application, Math. Comput. Model. 45, 1272-1279.
[5] Han, M. (2009), E-Bayesian estimation and hierarchical Bayesian estimation of failure rate, Appl. Math. Model. 33, 1915-1922.
[6] Han, M. (2011), E-Bayesian estimation and hierarchical Bayesian estimation of failure probability, Commun. Stat. Theory Methods. 40, 3303-3314.
[7] Jaheen, Z. F. and Okasha, H. M. (2011), E-Bayesian estimation for the Burr type XII model based on type-2 censoring, Appl. Math. Model. 35, 4730-4737.
[8] Kiapour, A. (2018), Bayes, E-Bayes and robust Bayes premium estimationand prediction under the squared log error loss function, Journal of the
Iranian Statistical Society, 17, 33-47.
[9] Kiapour, A., and Nematollahi, N. (2011), Robust Bayesian prediction andestimation under a square error loss function, Statistics and Probability
Letters, 81, 1717-1724.
[10] Lindley, D. V. and Smith, A. F. M. (1971), Bayes estimates for the linear model, J. Stat. Soc. Ser. B, 41, 141.
Statistical Bayesian Inference on the Reliability Parameter Under AdaptiveType-II Hybrid Progressive Censoring Samples for Burr Type XII
Distribution
Kohansal, A.1, and Shoaee, S.2
1 Department of Statistics, Imam Khomeini International University, Qazvin,Iran
2 Department of Actuarial Science, Faculty of Mathematical Sciences, ShahidBeheshti University, Tehran, Iran
Abstract: In this paper, the Bayesian inference of R = P(X < Y) for the Burr type XII distribution under adaptive Type-II hybrid progressive censored samples is considered. We solve the problem in three cases. In the first case, assuming that X and Y have an unknown common first shape parameter and different second shape parameters, the Bayes estimate of R is derived by two approximation methods: Lindley's approximation and the MCMC method. In the second case, assuming that X and Y have a known common first shape parameter and unknown different second shape parameters, the exact Bayes estimate of R is derived. In the third case, assuming that all parameters are different and unknown, the Bayesian inference of R is derived by the MCMC method. We use a Monte Carlo simulation study to compare the performance of the different methods.
Keywords: Adaptive Type-II Hybrid Progressive Censored Sample, Stress-Strength Model, Burr Type-XII Distribution, Bayesian Inference.
1Kohansal, A.: [email protected]
Kohansal, A., and Shoaee, S. 174
1 Introduction
Statistical inference about the stress-strength parameter, R = P(X < Y), is one of the most important problems in reliability theory and statistics, and has been carried out from both the frequentist and Bayesian viewpoints. Although many papers have studied stress-strength models for complete samples, much less attention has been paid to censored data (see [3]).
Type-I and Type-II censoring are the two most fundamental schemes, and mixing these two schemes yields the hybrid scheme. Unfortunately, none of the above schemes allows the removal of active units during the experiment, which motivates the progressive censoring scheme. Combining the hybrid and progressive schemes gives the hybrid progressive scheme. One of the main drawbacks of this scheme is that the effective sample size is random and may turn out to be very small. Therefore, Ng et al. [5] introduced the adaptive hybrid progressive scheme, in which the sample size is fixed. The adaptive Type-II hybrid progressive censoring (AT-II HPC) scheme can be described as follows. Suppose that X_{1:n:N}, ..., X_{n:n:N} is a progressively censored sample and T > 0 is fixed. If X_{n:n:N} < T, the experiment ends at time X_{n:n:N}, and the n failures X_{1:n:N}, ..., X_{n:n:N} with the progressive censoring scheme (R_1, ..., R_n) are obtained. Also, if X_{J:n:N} < T < X_{J+1:n:N}, then we do not withdraw any items from the experiment after time T, setting
\[
R_{J+1}=\dots=R_{n-1}=0,\qquad R_n=N-n-\sum_{i=1}^{J}R_i.
\]
We denote an AT-II HPC sample {X_1, ..., X_n} under the scheme {N, n, T, J, R_1, ..., R_n}. The likelihood function of the AT-II HPC sample is as follows:
\[
L(\theta)\propto\prod_{i=1}^{n}f(x_i)\prod_{i=1}^{J}\big[1-F(x_i)\big]^{R_i}\,\big[1-F(x_n)\big]^{R_n}.
\]
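As a concrete illustration, this likelihood is easy to code once f and F are supplied. The sketch below uses the Burr XII distribution considered in this paper, with hypothetical data and a hypothetical censoring scheme; it is not code from the paper.

```python
import math

def atii_hpc_loglik(x, scheme, J, f, F):
    # log-likelihood of an AT-II HPC sample, up to an additive constant:
    # sum_i log f(x_i) + sum_{i<=J} R_i log(1-F(x_i)) + R_n log(1-F(x_n))
    n = len(x)
    ll = sum(math.log(f(xi)) for xi in x)
    ll += sum(scheme[i] * math.log(1.0 - F(x[i])) for i in range(J))
    ll += scheme[n - 1] * math.log(1.0 - F(x[-1]))
    return ll

# Burr XII with lambda = alpha = 2 (hypothetical choices for illustration)
lam, alp = 2.0, 2.0
f = lambda t: lam * alp * t ** (lam - 1) * (1 + t ** lam) ** (-alp - 1)
F = lambda t: 1.0 - (1 + t ** lam) ** (-alp)

x = [0.2, 0.4, 0.5, 0.9, 1.3]   # n = 5 ordered failure times (hypothetical)
scheme = [1, 1, 0, 0, 3]        # R_1..R_n; here N = 10, J = 2, R_n = N - n - R_1 - R_2
ll = atii_hpc_loglik(x, scheme, J=2, f=f, F=F)
print(round(ll, 3))
```

Note that the scheme satisfies the AT-II HPC constraint R_{J+1} = ... = R_{n-1} = 0 with R_n = N - n - Σ_{i≤J} R_i.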
The Burr type XII (Bur) distribution with first and second shape parameters λ and α, respectively, has the probability density function
\[
f(x)=\lambda\alpha x^{\lambda-1}(1+x^{\lambda})^{-\alpha-1},\qquad x,\lambda,\alpha>0.
\]
In this paper, we obtain the Bayesian inference of R = P(X < Y) based on AT-II HPC samples, when X and Y are two independent
random variables from the Bur distribution.
2 Bayesian inference of R with unknown common λ
If X ∼ Bur(λ,α) and Y ∼ Bur(λ,β), then the stress-strength parameter can be obtained as
\[
R=P(X<Y)=\frac{\alpha}{\alpha+\beta}.
\]
In this section, the Bayesian inference of R is considered under the squared error loss function, when α, β and λ are independent gamma random variables. Based on the observed censored samples, the joint posterior density function is as follows:
π(α,β ,λ |data) ∝ L(data|α,β ,λ )π1(α)π2(β )π3(λ ) (1)
where π₁(α) ∝ α^{a₁−1}e^{−b₁α}, α, a₁, b₁ > 0, π₂(β) ∝ β^{a₂−1}e^{−b₂β}, β, a₂, b₂ > 0, and π₃(λ) ∝ λ^{a₃−1}e^{−b₃λ}, λ, a₃, b₃ > 0. As seen from equation (1), the Bayes estimate cannot be obtained in closed form, so we approximate it by applying two methods:
• Lindley’s approximation,
• MCMC method.
2.1 Lindley’s approximation
Lindley [4] introduced one of the most widely used numerical techniques for deriving Bayes estimates. If u(θ) is a function of θ = (θ₁, θ₂, θ₃), Lindley's approximation of its posterior expectation, I(data), is
\[
I(\text{data})=u+(u_1d_1+u_2d_2+u_3d_3+d_4+d_5)+\frac{1}{2}\big[A(u_1\sigma_{11}+u_2\sigma_{12}+u_3\sigma_{13})+B(u_1\sigma_{21}+u_2\sigma_{22}+u_3\sigma_{23})+C(u_1\sigma_{31}+u_2\sigma_{32}+u_3\sigma_{33})\big],
\]
calculated at θ̂ = (θ̂₁, θ̂₂, θ̂₃), where ℓ(θ) is the logarithm of the likelihood function and ρ(θ) is the logarithm of the prior density of θ. Also, u_i = ∂u(θ)/∂θ_i, u_{ij} = ∂²u(θ)/∂θ_i∂θ_j, ℓ_{ijk} = ∂³ℓ(θ)/∂θ_i∂θ_j∂θ_k, ρ_j = ∂ρ(θ)/∂θ_j, and σ_{ij} is the (i,j)th element of the inverse of the matrix [−ℓ_{ij}], all evaluated at the MLEs of the parameters. Moreover,
\[
d_i=\rho_1\sigma_{i1}+\rho_2\sigma_{i2}+\rho_3\sigma_{i3},\ i=1,2,3,\qquad d_4=u_{12}\sigma_{12}+u_{13}\sigma_{13}+u_{23}\sigma_{23},
\]
\[
d_5=\frac{1}{2}\,(u_{11}\sigma_{11}+u_{22}\sigma_{22}+u_{33}\sigma_{33}),
\]
\[
A=\ell_{111}\sigma_{11}+2\ell_{121}\sigma_{12}+2\ell_{131}\sigma_{13}+2\ell_{231}\sigma_{23}+\ell_{221}\sigma_{22}+\ell_{331}\sigma_{33},
\]
\[
B=\ell_{112}\sigma_{11}+2\ell_{122}\sigma_{12}+2\ell_{132}\sigma_{13}+2\ell_{232}\sigma_{23}+\ell_{222}\sigma_{22}+\ell_{332}\sigma_{33},
\]
\[
C=\ell_{113}\sigma_{11}+2\ell_{123}\sigma_{12}+2\ell_{133}\sigma_{13}+2\ell_{233}\sigma_{23}+\ell_{223}\sigma_{22}+\ell_{333}\sigma_{33}.
\]
In our case, for (θ₁, θ₂, θ₃) ≡ (α, β, λ), we have
\[
\rho_1=\frac{a_1-1}{\alpha}-b_1,\quad \rho_2=\frac{a_2-1}{\beta}-b_2,\quad \rho_3=\frac{a_3-1}{\lambda}-b_3,\quad \ell_{11}=-\frac{n}{\alpha^2},\quad \ell_{22}=-\frac{m}{\beta^2},\quad \ell_{12}=0,
\]
\[
\ell_{13}=-\sum_{i=1}^{n}\frac{x_i^{\lambda}\log(x_i)}{1+x_i^{\lambda}}-\sum_{i=1}^{j_1}\frac{r_i x_i^{\lambda}\log(x_i)}{1+x_i^{\lambda}}-\frac{r_n x_n^{\lambda}\log(x_n)}{1+x_n^{\lambda}},
\]
\[
\ell_{23}=-\sum_{j=1}^{m}\frac{y_j^{\lambda}\log(y_j)}{1+y_j^{\lambda}}-\sum_{j=1}^{j_2}\frac{s_j y_j^{\lambda}\log(y_j)}{1+y_j^{\lambda}}-\frac{s_m y_m^{\lambda}\log(y_m)}{1+y_m^{\lambda}},
\]
\[
\ell_{33}=-\frac{n+m}{\lambda^2}-(\alpha+1)\sum_{i=1}^{n}x_i^{\lambda}\Big(\frac{\log(x_i)}{1+x_i^{\lambda}}\Big)^2-(\beta+1)\sum_{j=1}^{m}y_j^{\lambda}\Big(\frac{\log(y_j)}{1+y_j^{\lambda}}\Big)^2
-\alpha\Big(\sum_{i=1}^{j_1}r_i x_i^{\lambda}\Big(\frac{\log(x_i)}{1+x_i^{\lambda}}\Big)^2+r_n x_n^{\lambda}\Big(\frac{\log(x_n)}{1+x_n^{\lambda}}\Big)^2\Big)
-\beta\Big(\sum_{j=1}^{j_2}s_j y_j^{\lambda}\Big(\frac{\log(y_j)}{1+y_j^{\lambda}}\Big)^2+s_m y_m^{\lambda}\Big(\frac{\log(y_m)}{1+y_m^{\lambda}}\Big)^2\Big).
\]
The σ_{ij}, i, j = 1, 2, 3, are obtained from the ℓ_{ij}, and
\[
\ell_{111}=\frac{2n}{\alpha^3},\qquad \ell_{222}=\frac{2m}{\beta^3},
\]
\[
\ell_{133}=-\sum_{i=1}^{n}x_i^{\lambda}\Big(\frac{\log(x_i)}{1+x_i^{\lambda}}\Big)^2-\sum_{i=1}^{j_1}r_i x_i^{\lambda}\Big(\frac{\log(x_i)}{1+x_i^{\lambda}}\Big)^2-r_n x_n^{\lambda}\Big(\frac{\log(x_n)}{1+x_n^{\lambda}}\Big)^2,
\]
\[
\ell_{233}=-\sum_{j=1}^{m}y_j^{\lambda}\Big(\frac{\log(y_j)}{1+y_j^{\lambda}}\Big)^2-\sum_{j=1}^{j_2}s_j y_j^{\lambda}\Big(\frac{\log(y_j)}{1+y_j^{\lambda}}\Big)^2-s_m y_m^{\lambda}\Big(\frac{\log(y_m)}{1+y_m^{\lambda}}\Big)^2,
\]
\[
\ell_{333}=\frac{2(n+m)}{\lambda^3}-(\alpha+1)\sum_{i=1}^{n}x_i^{\lambda}(1-x_i^{\lambda})\Big(\frac{\log(x_i)}{1+x_i^{\lambda}}\Big)^3-(\beta+1)\sum_{j=1}^{m}y_j^{\lambda}(1-y_j^{\lambda})\Big(\frac{\log(y_j)}{1+y_j^{\lambda}}\Big)^3
-\alpha\Big(\sum_{i=1}^{j_1}r_i x_i^{\lambda}(1-x_i^{\lambda})\Big(\frac{\log(x_i)}{1+x_i^{\lambda}}\Big)^3+r_n x_n^{\lambda}(1-x_n^{\lambda})\Big(\frac{\log(x_n)}{1+x_n^{\lambda}}\Big)^3\Big)
-\beta\Big(\sum_{j=1}^{j_2}s_j y_j^{\lambda}(1-y_j^{\lambda})\Big(\frac{\log(y_j)}{1+y_j^{\lambda}}\Big)^3+s_m y_m^{\lambda}(1-y_m^{\lambda})\Big(\frac{\log(y_m)}{1+y_m^{\lambda}}\Big)^3\Big),
\]
and all other ℓ_{ijk} = 0. Hence,
\[
A=\ell_{111}\sigma_{11}+\ell_{331}\sigma_{33},\qquad B=\ell_{222}\sigma_{22}+\ell_{332}\sigma_{33},
\]
\[
C=2\ell_{133}\sigma_{13}+2\ell_{233}\sigma_{23}+\ell_{333}\sigma_{33},\qquad d_4=u_{12}\sigma_{12},\qquad d_5=\frac{1}{2}(u_{11}\sigma_{11}+u_{22}\sigma_{22}).
\]
The approximate Bayes estimate of R under the squared error loss function is obtained by setting u(θ) = R = α/(α+β). Then u₃ = 0, u_{i3} = 0, i = 1, 2, 3, and
\[
u_1=\frac{\beta}{(\alpha+\beta)^2},\quad u_2=-\frac{\alpha}{(\alpha+\beta)^2},\quad u_{11}=-\frac{2\beta}{(\alpha+\beta)^3},\quad u_{12}=\frac{\alpha-\beta}{(\alpha+\beta)^3},\quad u_{22}=\frac{2\alpha}{(\alpha+\beta)^3}.
\]
Consequently, under the squared error loss function, the Bayes estimate of R is
\[
\hat{R}_{LB}=E(u(\theta)|\text{data})=u(\hat{\theta})+[u_1d_1+u_2d_2+d_4+d_5]+\frac{1}{2}\big[A(u_1\sigma_{11}+u_2\sigma_{12})+B(u_1\sigma_{21}+u_2\sigma_{22})+C(u_1\sigma_{31}+u_2\sigma_{32})\big]. \tag{2}
\]
Notice that all quantities are evaluated at (α̂, β̂, λ̂).
2.2 MCMC method
From equation (1), the posterior pdfs of α, β and λ can be derived as:
\[
\alpha|\lambda,\text{data}\sim\Gamma\big(n+a_1,\,b_1+V(\lambda)\big),\qquad
\beta|\lambda,\text{data}\sim\Gamma\big(m+a_2,\,b_2+U(\lambda)\big),
\]
\[
\pi(\lambda|\alpha,\beta,\text{data})\propto\lambda^{\,n+m+a_3-1}e^{-\lambda b_3}
\Big(\prod_{i=1}^{n}x_i^{\lambda-1}(1+x_i^{\lambda})^{-\alpha-1}\Big)
\Big(\prod_{j=1}^{m}y_j^{\lambda-1}(1+y_j^{\lambda})^{-\beta-1}\Big)
\Big(\prod_{i=1}^{j_1}(1+x_i^{\lambda})^{-\alpha r_i}\Big)
\Big(\prod_{j=1}^{j_2}(1+y_j^{\lambda})^{-\beta s_j}\Big)
(1+x_n^{\lambda})^{-\alpha r_n}(1+y_m^{\lambda})^{-\beta s_m},
\]
where
\[
V(\lambda)=\sum_{i=1}^{n}\log(1+x_i^{\lambda})+\sum_{i=1}^{j_1}r_i\log(1+x_i^{\lambda})+r_n\log(1+x_n^{\lambda}),
\]
\[
U(\lambda)=\sum_{j=1}^{m}\log(1+y_j^{\lambda})+\sum_{j=1}^{j_2}s_j\log(1+y_j^{\lambda})+s_m\log(1+y_m^{\lambda}). \tag{3}
\]
It is observed that generating samples from the posterior pdf of λ must be done by the Metropolis-Hastings method. So, we propose the following Gibbs sampling algorithm:
1. Start with initial values (α(0), β(0), λ(0)).
2. Set t = 1.
3. Generate λ(t) from π(λ |α(t−1),β(t−1),data), using Metropolis-Hastings method.
4. Generate α(t) from Γ(n+a1,b1 +V (λ(t−1))).
5. Generate β(t) from Γ(m+a2,b2 +U(λ(t−1))).
6. Evaluate R_t = α_{(t)}/(α_{(t)} + β_{(t)}).
7. Set t = t +1.
8. Repeat T times, steps 3-7.
Therefore, the Bayes estimate of R under the squared error loss function is:
\[
\hat{R}_{MB}=\frac{1}{T}\sum_{t=1}^{T}R_t. \tag{4}
\]
Also, the 100(1−γ)% HPD credible interval of R can be constructed using the method of Chen and Shao [1].
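Steps 1-8 can be sketched in stdlib-only Python. The illustration below makes simplifying assumptions that are ours, not the authors': complete samples (all removals zero, so V(λ) = Σ log(1+x_i^λ) and U(λ) = Σ log(1+y_j^λ)) and a random-walk Metropolis step for λ.

```python
import math, random

rng = random.Random(7)
lam0, alp0, bet0 = 2.0, 2.0, 2.0           # true (lambda, alpha, beta); true R = 0.5
a1 = a2 = a3 = 1.0; b1 = b2 = b3 = 0.1     # hyperparameters as in Prior 2

def burr(lam, alp):
    # Burr XII draw by inverse cdf: F(x) = 1 - (1 + x^lam)^(-alp)
    u = rng.random()
    return ((1.0 - u) ** (-1.0 / alp) - 1.0) ** (1.0 / lam)

x = [burr(lam0, alp0) for _ in range(50)]
y = [burr(lam0, bet0) for _ in range(50)]
n, m = len(x), len(y)

def log_post_lam(lam, alp, bet):
    # full conditional of lambda (complete-sample simplification), up to a constant
    if lam <= 0:
        return -math.inf
    ll = (n + m + a3 - 1) * math.log(lam) - b3 * lam
    ll += sum((lam - 1) * math.log(t) - (alp + 1) * math.log1p(t ** lam) for t in x)
    ll += sum((lam - 1) * math.log(t) - (bet + 1) * math.log1p(t ** lam) for t in y)
    return ll

lam, alp, bet, Rs = 1.0, 1.0, 1.0, []
for t in range(2000):
    # Metropolis-Hastings step for lambda with a random-walk proposal
    prop = lam + rng.gauss(0.0, 0.3)
    if math.log(rng.random()) < log_post_lam(prop, alp, bet) - log_post_lam(lam, alp, bet):
        lam = prop
    V = sum(math.log1p(xi ** lam) for xi in x)
    U = sum(math.log1p(yj ** lam) for yj in y)
    alp = rng.gammavariate(n + a1, 1.0 / (b1 + V))   # scale = 1 / rate
    bet = rng.gammavariate(m + a2, 1.0 / (b2 + U))
    if t >= 500:                                     # discard burn-in draws
        Rs.append(alp / (alp + bet))

R_mb = sum(Rs) / len(Rs)
print(round(R_mb, 3))   # near the true value R = 0.5
```

The posterior draws of R can also be sorted to form the HPD interval of Chen and Shao [1].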
3 Bayesian inference of R with known common λ
In this section, the Bayesian inference of R is considered under the squared error loss function, when α and β are independent gamma random variables and the common parameter λ is known. Based on the observed censored samples, the joint posterior density function is as follows:
\[
\pi(\alpha,\beta|\lambda,\text{data})=\frac{\big(\alpha(V(\lambda)+b_1)\big)^{n+a_1}\big(\beta(U(\lambda)+b_2)\big)^{m+a_2}}{\alpha\beta\,\Gamma(n+a_1)\Gamma(m+a_2)}\,e^{-\alpha(V(\lambda)+b_1)-\beta(U(\lambda)+b_2)}, \tag{5}
\]
where V(λ) and U(λ) are given in (3). So, the Bayes estimate of R under the squared error loss function is obtained by solving the following integral:
\[
\hat{R}_B=\int_0^{\infty}\!\!\int_0^{\infty}\frac{\alpha}{\alpha+\beta}\,\pi(\alpha,\beta|\lambda,\text{data})\,d\alpha\,d\beta.
\]
By applying the idea of Kizilaslan and Nadar [2], the exact Bayes estimate is obtained as:
\[
\hat{R}_B=\begin{cases}
\dfrac{(1-z)^{n+a_1}(n+a_1)}{w}\,{}_2F_1\big(w,\,n+a_1+1;\,w+1;\,z\big), & |z|<1,\\[3mm]
\dfrac{(n+a_1)}{(1-z)^{m+a_2}\,w}\,{}_2F_1\!\Big(w,\,m+a_2;\,w+1;\,\dfrac{z}{z-1}\Big), & z<-1,
\end{cases} \tag{6}
\]
where \(w=n+m+a_1+a_2\), \(z=1-\dfrac{V(\lambda)+b_1}{U(\lambda)+b_2}\), and
\[
{}_2F_1(\alpha,\beta;\gamma;z)=\frac{1}{B(\beta,\gamma-\beta)}\int_0^1 t^{\beta-1}(1-t)^{\gamma-\beta-1}(1-tz)^{-\alpha}\,dt,\qquad |z|<1,
\]
is the Gauss hypergeometric function, which is quickly calculated and readily available in standard software such as MATLAB. Furthermore, the 100(1−γ)% Bayesian interval of R can be constructed as (L,U), where L and U satisfy, respectively,
\[
\int_0^{L}f_R(R)\,dR=\frac{\gamma}{2},\qquad \int_0^{U}f_R(R)\,dR=1-\frac{\gamma}{2}, \tag{7}
\]
where f_R(R), using the change-of-variable method, can be obtained from (5) as
\[
f_R(R)=\frac{(1-z)^{n+a_1}R^{n+a_1-1}(1-R)^{m+a_2-1}(1-Rz)^{-w}}{B(n+a_1,\,m+a_2)},\qquad 0<R<1.
\]
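For |z| < 1, (6) is straightforward to evaluate: the hypergeometric function is available in standard software (MATLAB's hypergeom, SciPy's scipy.special.hyp2f1) or can be computed directly from the Euler integral representation above. A stdlib-only sketch with hypothetical values of V(λ) and U(λ):

```python
import math

def hyp2f1(a, b, c, z, grid=4000):
    # 2F1(a, b; c; z) for |z| < 1 via its Euler integral representation
    B = math.exp(math.lgamma(b) + math.lgamma(c - b) - math.lgamma(c))
    s = 0.0
    for k in range(grid):
        t = (k + 0.5) / grid                    # midpoint rule on (0, 1)
        s += t ** (b - 1) * (1 - t) ** (c - b - 1) * (1 - t * z) ** (-a)
    return s / grid / B

def exact_RB(n, m, a1, a2, V, U, b1=0.1, b2=0.1):
    # eq. (6), |z| < 1 branch
    w = n + m + a1 + a2
    z = 1.0 - (V + b1) / (U + b2)
    assert abs(z) < 1
    return (1 - z) ** (n + a1) * (n + a1) / w * hyp2f1(w, n + a1 + 1, w + 1, z)

# hypothetical censored-sample summaries; with V = U and a1 = a2 we get z = 0
print(round(exact_RB(n=20, m=20, a1=1.0, a2=1.0, V=10.0, U=10.0), 4))  # → 0.5
```

When V(λ) = U(λ) and a₁ = a₂, z = 0 and ₂F₁(·; 0) = 1, so R̂_B = (n+a₁)/w, which equals 1/2 for the symmetric inputs above.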
4 Bayesian inference of R in general case
If X ∼ Bur(λ₁,α) and Y ∼ Bur(λ₂,β), then the stress-strength parameter can be obtained as
\[
R=P(X<Y)=1-\int_0^{\infty}\beta\lambda_2\,y^{\lambda_2-1}(1+y^{\lambda_2})^{-\beta-1}(1+y^{\lambda_1})^{-\alpha}\,dy.
\]
In this section, the Bayesian inference of R is considered under the squared error loss function, when α, β, λ₁ and λ₂ are independent gamma random variables. As in Section 2, the Bayes estimate of R cannot be evaluated in closed form, so it is approximated by the MCMC method. From the joint posterior density function, the posterior pdfs of α, β, λ₁ and λ₂ can be derived as follows:
\[
\alpha|\lambda_1,\text{data}\sim\Gamma\big(n+a_1,\,b_1+V(\lambda_1)\big),\qquad
\beta|\lambda_2,\text{data}\sim\Gamma\big(m+a_2,\,b_2+U(\lambda_2)\big),
\]
\[
\pi(\lambda_1|\alpha,\text{data})\propto\lambda_1^{\,n+a_3-1}e^{-\lambda_1 b_3}
\Big(\prod_{i=1}^{n}x_i^{\lambda_1-1}(1+x_i^{\lambda_1})^{-\alpha-1}\Big)
\Big(\prod_{i=1}^{j_1}(1+x_i^{\lambda_1})^{-\alpha r_i}\Big)(1+x_n^{\lambda_1})^{-\alpha r_n},
\]
\[
\pi(\lambda_2|\beta,\text{data})\propto\lambda_2^{\,m+a_4-1}e^{-\lambda_2 b_4}
\Big(\prod_{j=1}^{m}y_j^{\lambda_2-1}(1+y_j^{\lambda_2})^{-\beta-1}\Big)
\Big(\prod_{j=1}^{j_2}(1+y_j^{\lambda_2})^{-\beta s_j}\Big)(1+y_m^{\lambda_2})^{-\beta s_m}.
\]
It is observed that generating samples from the posterior pdfs of λ₁ and λ₂ must be done by the Metropolis-Hastings method. So, we propose the following Gibbs sampling algorithm:
1. Start with initial values (α(0), β(0), λ1(0),λ2(0)).
2. Set t = 1.
3. Generate λ1(t) from π(λ1|α(t−1),data), using Metropolis-Hastings method.
4. Generate λ2(t) from π(λ2|β(t−1),data), using Metropolis-Hastings method.
5. Generate α(t) from Γ(n+a1,b1 +V (λ1(t−1))).
6. Generate β(t) from Γ(m+a2,b2 +U(λ2(t−1))).
7. Evaluate
\[
R_t=1-\int_0^{\infty}\beta_{(t)}\lambda_{2(t)}\,y^{\lambda_{2(t)}-1}\big(1+y^{\lambda_{2(t)}}\big)^{-\beta_{(t)}-1}\big(1+y^{\lambda_{1(t)}}\big)^{-\alpha_{(t)}}\,dy.
\]
8. Set t = t +1.
9. Repeat T times, steps 3-8.
Therefore, the Bayes estimate of R under the squared error loss function is:
\[
\hat{R}_{MB}=\frac{1}{T}\sum_{t=1}^{T}R_t. \tag{8}
\]
Also, the 100(1−γ)% HPD credible interval of R can be constructed using the method of Chen and Shao [1].
5 Simulation Study
We study the performance of the different Bayes estimates under AT-II HPC schemes using Monte Carlo simulations. The point estimates are compared in terms of mean squared errors (MSEs), and the interval estimates are compared in terms of average confidence lengths and coverage percentages. All results are based on 3000 replications with T = 0.9. The censoring schemes used are:
Scheme 1: r₁ = ... = r_n = (N−n)/n,
Scheme 2: r_{2k} = (N−n)/n − 1, r_{2k−1} = (N−n)/n + 1, k = 1, ..., n/2,
Scheme 3: r_{2k} = 2(N−n)/n, r_{2k−1} = 0, k = 1, ..., n/2.
In the first case, with unknown common λ, the parameter values (α, β, λ) = (2, 2, 2) are used. The Bayesian inference is carried out under two priors, Prior 1: a_j = 0, b_j = 0, j = 1, 2, 3, and Prior 2: a_j = 1, b_j = 0.1, j = 1, 2, 3. Under these hypotheses, the MSEs of the Bayesian estimates of R via Lindley's approximation and the MCMC method are derived from (2) and (4), respectively. We also derived the 95% HPD intervals for R. The simulation results are given in Table 1.
In the second case, with known common λ, the parameter values (α, β) = (2, 2) are used. The Bayesian inference is carried out under two priors, Prior 3: a_j = 0, b_j = 0, j = 1, 2, and Prior 4: a_j = 1, b_j = 0.1, j = 1, 2. Under these hypotheses, the Bayes estimate and Bayesian intervals of R are derived from (6) and (7), respectively. The results are provided in Table 1.
In the third case, with unknown and different λ₁ and λ₂, the parameter values (α, β, λ₁, λ₂) = (2, 2, 2, 2) are used. The Bayesian inference is carried out under two priors, Prior 5: a_j = 0, b_j = 0, j = 1, 2, 3, 4, and Prior 6: a_j = 1, b_j = 0.1, j = 1, 2, 3, 4. Under these hypotheses, the MSEs of the Bayesian estimates of R are derived from (8). We also derived the 95% HPD intervals for R. The simulation results are given in Table 1.
From Table 1, we observe that the best performance in terms of MSE belongs to the informative priors (Priors 2, 4 and 6). Furthermore, in the first case, the Bayes estimates obtained by the MCMC method generally perform better than those obtained by Lindley's approximation. Also, the best performance among the different intervals belongs to the HPD intervals based on the informative priors (Priors 2, 4 and 6).
Table 1: Simulation results
Unknown common λ
Bayes(MCMC) Bayes(Lindley)
(N,n) CS Prior 1 Prior 2 Prior 1 Prior 2
MSE C.I C.P MSE C.I C.P MSE MSE
(40,10) (1,1) 0.0150 0.4018 0.933 0.0103 0.3962 0.939 0.0164 0.0155
(1,2) 0.0181 0.4082 0.932 0.0113 0.4000 0.940 0.0195 0.0178
(2,3) 0.0156 0.4054 0.931 0.0132 0.3980 0.939 0.0180 0.0150
(60,10) (1,1) 0.0150 0.4044 0.934 0.0149 0.3997 0.938 0.0186 0.0164
(1,2) 0.0118 0.4051 0.933 0.0105 0.3989 0.940 0.0196 0.0194
(2,3) 0.0141 0.4069 0.932 0.0136 0.3997 0.939 0.0193 0.0179
(40,20) (1,1) 0.0065 0.3007 0.938 0.0057 0.2993 0.943 0.0095 0.0087
(1,2) 0.0050 0.3009 0.937 0.0044 0.2987 0.945 0.0074 0.0046
(2,3) 0.0061 0.3002 0.938 0.0052 0.2969 0.945 0.0083 0.0055
(60,20) (1,1) 0.0105 0.2988 0.937 0.0092 0.2963 0.945 0.0114 0.0099
(1,2) 0.0114 0.2997 0.938 0.0112 0.2970 0.944 0.0120 0.0118
(2,3) 0.0087 0.2990 0.939 0.0076 0.2964 0.945 0.0145 0.0130
Known common λ
Bayes(Exact)
(N,n) CS Prior 3 Prior 4
MSE C.I C.P MSE C.I C.P
(40,10) (1,1) 0.0176 0.4245 0.930 0.0126 0.4085 0.931
(1,2) 0.0165 0.4234 0.926 0.0151 0.4049 0.931
(2,3) 0.0155 0.4161 0.927 0.0147 0.3996 0.932
(60,10) (1,1) 0.0160 0.4198 0.929 0.0121 0.4071 0.933
(1,2) 0.0172 0.4158 0.930 0.0122 0.3983 0.934
(2,3) 0.0115 0.4177 0.927 0.0106 0.3990 0.935
(40,20) (1,1) 0.0093 0.3058 0.936 0.0075 0.3013 0.937
(1,2) 0.0102 0.3054 0.935 0.0082 0.2999 0.940
(2,3) 0.0080 0.3051 0.937 0.0062 0.3007 0.940
(60,20) (1,1) 0.0123 0.3042 0.937 0.0103 0.2979 0.938
(1,2) 0.0137 0.3057 0.937 0.0114 0.2981 0.940
(2,3) 0.0107 0.3044 0.936 0.0095 0.2985 0.938
General case
Bayes(MCMC)
(N,n) CS Prior 5 Prior 6
MSE C.I C.P MSE C.I C.P
(40,10) (1,1) 0.0073 0.4015 0.940 0.0063 0.3932 0.943
(1,2) 0.0088 0.3983 0.938 0.0079 0.3934 0.942
(2,3) 0.0065 0.3937 0.937 0.0057 0.3881 0.943
(60,10) (1,1) 0.0117 0.3998 0.937 0.0107 0.3931 0.941
(1,2) 0.0126 0.3943 0.938 0.0113 0.3849 0.942
(2,3) 0.0100 0.3945 0.940 0.0089 0.3895 0.943
(40,20) (1,1) 0.0011 0.2971 0.944 0.0009 0.2943 0.948
(1,2) 0.0009 0.2958 0.945 0.0007 0.2952 0.948
(2,3) 0.0010 0.2982 0.942 0.0008 0.2927 0.949
(60,20) (1,1) 0.0022 0.2950 0.945 0.0019 0.2935 0.951
(1,2) 0.0023 0.2959 0.944 0.0020 0.2939 0.950
(2,3) 0.0021 0.2966 0.941 0.0018 0.2930 0.948
6 Conclusion
In this paper, we obtained the Bayesian inference of R for the Bur distribution based on AT-II HPC samples. The problem is solved in three cases: first, when X and Y have an unknown common first shape parameter and different second shape parameters; second, when the common first shape parameter is known; and third, when X and Y have completely different parameters. In the first case, we approximate R by two methods, in the second case we derive the exact Bayes estimate, and in the third case we obtain the estimate of R by the MCMC method. All methods are
compared via simulation.
References
[1] Chen, M. H. and Shao, Q. M. (1999), Monte Carlo estimation of BayesianCredible and HPD intervals. Journal of Computational and Graphical
Statistics, 8, 69-92.
[2] Kizilaslan, F. and Nadar, M. (2018), Estimation of reliability in a multi-component stress-strength model based on a bivariate Kumaraswamy dis-tribution. Statistical Papers, 59, 307-340.
[3] Kohansal, A. (2019), On estimation of reliability in a multicomponentstress-strength model for a Kumaraswamy distribution based on progres-sively censored sample, Statistical Papers, 60, 2185-2224.
[4] Lindley, D. V. (1980), Approximate Bayesian methods. Trabajos de Es-
tadistica, 3, 281-288.
[5] Ng, H. K. T., Kundu, D. and Chan, P. S. (2009), Statistical analysis ofexponential lifetimes under an adaptive Type-II progressively censoringscheme. Naval Research Logistics, 56, 687-698.
Residual Varentropy of Lifetime Distributions
Maadani, S.1, Mohtashami Borzadaran, G.R.1, and Rezaei Roknabadi,A.H.1
1 Department of Statistics, Ferdowsi University of Mashhad, Mashhad, Iran
Abstract: This paper deals with the varentropy of residual lifetime random variables. The influence of the system's age on the residual varentropy is investigated. It is shown that for some distributions, such as the uniform, exponential and generalized Pareto, the residual varentropy is independent of the system's age. These distributions are characterized using the residual varentropy, and a new class of distributions is also introduced.
Keywords: Characterization, Generalized Pareto Family, Residual Varentropy,Varentropy.
1 Introduction
The Shannon entropy (1948) of a continuous random variable X with density function f is defined as follows:
\[
h(X)=-\int_S f(x)\log f(x)\,dx, \tag{1}
\]
where h(X) is called the differential entropy and S is the support of X. It is obvious that the Shannon differential entropy of X is the expectation of the information content −log f(X).
In applied statistics, the moments of a random variable, such as the mean and variance, play important roles in data analysis. Since −log f(X) is itself a random
1Maadani, S.: [email protected]
Maadani, S., Mohtashami Borzadaran, G.R., and Rezaei Roknabadi, A.H. 186
variable, looking at its statistics, including its variance and higher moments, can be valuable. The variance of the information content −log f(X) has been studied in several recent papers, and considerable results in finite-blocklength information theory have been achieved. This variance is called the varentropy, and it is an important parameter for estimating the performance of optimal coding and for determining the dispersion of sources and channel capacity in computer science. There are few papers about the varentropy in statistical studies. Song (2001) investigated the varentropy for comparing measures of kurtosis in heavy-tailed distributions. Liu (2007) presented some mathematical properties of the varentropy. Zografos (2008) and Enomoto et al. (2013) proposed goodness-of-fit tests based on the varentropy. See also Kontoyiannis and Verdu (2013), Fradelizi et al. (2016), and Erdal (2016).
Let X be a continuous random variable with density function f. The varentropy of X is defined as follows:
\[
VE(X)=\mathrm{Var}\big(-\log f(X)\big)=E\big[-\log f(X)-h(X)\big]^2, \tag{2}
\]
where VE(X) denotes the varentropy of the random variable X. The varentropy is the expectation of the squared deviation of the information content −log f(X) from its mean; it measures how the information content is dispersed around the entropy. Song (2001) showed that the varentropy can be used to compare the tails and shapes of different densities as an intrinsic measure of the shape of a distribution. For density functions possessing a fourth moment μ₄ and variance σ², the varentropy provides information similar to the well-known kurtosis measure μ₄/σ⁴. When the standard kurtosis measure cannot be calculated, as for several heavy-tailed distributions such as Student's t with fewer than four degrees of freedom, the Cauchy and the Pareto distributions, the varentropy is a good substitute for μ₄/σ⁴.

The varentropy can provide a partial order on distribution tails. For example, if X has a Student's t distribution with ν = 1, 2, 3, 4, 5 degrees of freedom, the varentropies are 3.2899, 1.5978, 1.1595, 0.9661, and 0.8588, respectively. Therefore, as ν increases, the tails become lighter and, consequently, the varentropy decreases.

Liu (2007), in his Ph.D. thesis, introduced some mathematical characteristics of the varentropy. Liu called the varentropy "information volatility" and showed that this measure characterizes the uniform distribution; he used the varentropy to separate the normal, the gamma, and a subfamily of the beta distributions.
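The Student's t values quoted above are easy to spot-check: for ν = 1 (the Cauchy distribution) the varentropy equals π²/3 ≈ 3.2899, and a plain Monte Carlo estimate recovers it (our own stdlib-only sketch, not from the paper):

```python
import math, random

rng = random.Random(0)

def cauchy_varentropy(N=200_000):
    # for the standard Cauchy density f(x) = 1/(pi (1+x^2)),
    # the information content is -log f(X) = log(pi) + log(1 + X^2)
    vals = []
    for _ in range(N):
        x = math.tan(math.pi * (rng.random() - 0.5))   # standard Cauchy draw
        vals.append(math.log(math.pi) + math.log1p(x * x))
    mean = sum(vals) / N
    return sum((v - mean) ** 2 for v in vals) / N      # sample varentropy

print(round(cauchy_varentropy(), 2))   # close to pi^2 / 3 = 3.2899
```

The location and scale drop out of the variance, so this is an intrinsic shape quantity, exactly as Song's measure intends.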
2 Residual varentropy and characterization
Shannon entropy is used as a measure of uncertainty for a random variable in information theory. Nonetheless, if two random variables have the same entropy, a natural question arises: for which of them is the entropy the more suitable criterion for measuring uncertainty? For instance, the Shannon entropy is zero for the standard uniform distribution and also for the exponential distribution with parameter e. In reality, the question is: do both entropies measure the uncertainty equally accurately? If the concentration of the information content around the entropy is higher, then the entropy is more appropriate for measuring the amount of uncertainty. This concentration can be calculated with the variance of −log f(X). It can be shown that the varentropy of the uniform distribution is zero while that of the exponential distribution is 1, so for the uniform distribution the entropy is the more appropriate measure of uncertainty.
In lifetime studies, we usually have knowledge about the age of the system
and we know that the system is still operating at the moment. If a system is known to have survived to age t, clearly (2) is no longer useful for measuring the uncertainty about the remaining lifetime of the system. Ebrahimi (1996) introduced a measure of uncertainty for residual lifetime distributions as follows:
\[
h(X,t)=-\int_t^{\infty}\frac{f(x)}{\bar{F}(t)}\,\log\frac{f(x)}{\bar{F}(t)}\,dx, \tag{3}
\]
where h(X,t) is the residual entropy, \(\bar{F}(\cdot)\) is the survival function, and (3) is the Shannon entropy of the random variable {X−t | X ≥ t}.
For further study, see also Ebrahimi and Kirmani (1996), Sankaran and Gupta (1999), Asadi and Ebrahimi (2000), and Abraham and Sankaran (2006). This entropy is the expectation of the random variable −log(f(X)/\bar{F}(t)) with respect to the density function g(x,t) = f(x)/\bar{F}(t), x > t. In this section, we introduce the residual varentropy for lifetime distributions. The residual varentropy is the variance of −log(f(X)/\bar{F}(t)) and is denoted by VE(X,t).

Now the earlier question is raised again: if two residual lifetime random variables have the same uncertainty, which of them indicates the uncertainty more accurately? It is clear that the answer must be found through the residual varentropy. The residual varentropy indicates the concentration of the information content −log(f(X)/\bar{F}(t)) around the residual entropy h(X,t), and thus it answers this question.
On the other hand, similar to Song's measure, the residual varentropy can compare lifetime distributions in terms of tail heaviness, and it gives information analogous to a kurtosis measure for residual lifetime distributions.
Definition 2.1. Let X be a non-negative random variable with density function f, and let {X − t | X ≥ t} be the residual lifetime random variable. The residual varentropy is defined as

VE(X − t | X ≥ t) = VE(X, t) = Var( −log(f(X)/\bar{F}(t)) | X ≥ t ).
It is clear that VE(X, 0) is the varentropy of X.
The 6th Seminar on Reliability Theory and its Applications 189

In general, calculating the variance of the random variable −log(f(X)/\bar{F}(t)) is not simple, and for residual lifetime random variables it is particularly difficult. We therefore propose using the moment generating function (MGF) of −log(f(X)/\bar{F}(t)) to calculate VE(X, t).
Proposition 2.2. Define the MGF of log(f(X)/\bar{F}(t)) as

L(X, t, λ) = E\left( e^{(λ−1) \log\frac{f(X)}{\bar{F}(t)}} \right) = \int_t^{\infty} \left( \frac{f(x)}{\bar{F}(t)} \right)^{λ} dx.   (4)
Then,

VE(X, t) = L''(X, t, 1) − (L'(X, t, 1))²,   (5)

where L'(X, t, 1) and L''(X, t, 1) are the first- and second-order derivatives of L(X, t, λ) with respect to λ, evaluated at λ = 1.
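Proposition 2.2 can be checked numerically. The sketch below (our own illustration; function names are hypothetical) evaluates L(X, t, λ) of (4) for a unit exponential lifetime by the midpoint rule and recovers VE(X, t) = 1 from (5) via central finite differences in λ:

```python
import math

def mgf_L(t, lam, upper=40.0, n=200_000):
    """L(X, t, λ) = ∫_t^∞ (f(x)/F̄(t))^λ dx for a unit exponential lifetime,
    where f(x)/F̄(t) = e^{−(x−t)}; evaluated by the midpoint rule."""
    total, step = 0.0, (upper - t) / n
    for i in range(n):
        x = t + (i + 0.5) * step
        total += math.exp(-lam * (x - t)) * step
    return total

def ve_from_mgf(t, eps=1e-3):
    """VE(X, t) = L''(X, t, 1) − (L'(X, t, 1))²; λ-derivatives by central differences."""
    l_m, l_0, l_p = mgf_L(t, 1 - eps), mgf_L(t, 1.0), mgf_L(t, 1 + eps)
    d1 = (l_p - l_m) / (2 * eps)
    d2 = (l_p - 2 * l_0 + l_m) / eps ** 2
    return d2 - d1 ** 2
```

Since the exponential distribution is memoryless, the same value is obtained at every age t.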
Using this proposition, we calculated the residual varentropy for some lifetime distributions and compared it with the varentropy. For the uniform, exponential, Laplace, and generalized Pareto distributions, the residual varentropy is independent of the system's age, while for other distributions, such as the gamma, Weibull, and lognormal, it depends on t.
Example 2.3. Let X have a gamma distribution with parameters θ and λ and density function

f(x) = \frac{λ^{θ}}{Γ(θ)} x^{θ−1} e^{−λx},   θ > 0, λ > 0, x > 0.

Using (5), the residual varentropy is

VE(X, t) = M[λt − (θ−1)(2\log(λt) − 1)] − M² + 2M(θ−1)Ψ(θ, λt) + (θ−1)² Ψ'(θ, λt) + 2 − θ,   with   M = \frac{Γ(θ+1, λt)}{Γ(θ, λt)} − θ,

where Γ(a, b), Ψ(a, b), and Ψ'(a, b) are the incomplete gamma, incomplete digamma, and incomplete trigamma functions, respectively.
This example implies that if θ = 2 and λ = 1, then VE(X) = 0.63, but if t = 1, then VE(X, 1) = 0.76. Therefore, the residual varentropy depends on the system's age for this distribution. Differentiating VE(X, t) with respect to t gives

VE'(X, t) = r(t)\left[ VE(X, t) − \left( \log f(t) − E(\log f(X) \mid X ≥ t) \right)^{2} \right],   (6)
It can also be shown that

VE'(X, t) = r(t)\left[ VE(X, t) − (\log r(t) + h(X, t))^{2} \right],   (7)

where r(t) = f(t)/\bar{F}(t) is the hazard rate function. Using (7), we have the following proposition.
Proposition 2.4. The residual varentropy is a constant function with respect to t if

VE(X, t) = (\log r(t) + h(X, t))^{2},   (8)

where h(X, t) is the residual entropy of X and r(t) is its hazard rate function.
We show that the residual varentropy can characterize some distributions. The following theorems give these characterizations.
Theorem 2.5. X has a uniform distribution if and only if VE(X, t) = 0 for all t > 0.

Proof. If X ∼ U(a, b), then f(X)/\bar{F}(t) = 1/(b − t) is constant given X ≥ t, so VE(X, t) = 0. Conversely, if VE(X, t) = 0, then it can be shown that f(x) = \bar{F}(t) e^{−h(X,t)} = c.
Theorem 2.6. X has an exponential distribution if and only if VE(X, t) = 1.

Proof. If X ∼ Exp(θ), then the random variable {X − t | X ≥ t} is equal in distribution to X, so VE(X, t) = VE(X) = 1. Conversely, if VE(X, t) = 1, then using (7) and some mathematical computation, it can be shown that r(t) = c; therefore X has an exponential distribution.
One of the important distributions in reliability theory and survival analysis is the generalized Pareto distribution (GPD), introduced by Pickands (1975). Its applications include the analysis of extreme events, the modeling of large insurance claims, use as a failure-time distribution in reliability studies, and any situation in which the exponential distribution might be used but some robustness against heavier- or lighter-tailed alternatives is required. If X has a GPD, its distribution function is

F(x, k, σ) = 1 − \left( 1 − \frac{kx}{σ} \right)^{1/k},   k ≠ 0, σ > 0,   (9)

where k and σ are the shape and scale parameters, respectively. The support of X is x > 0 if k ≤ 0, and 0 ≤ x ≤ σ/k if k > 0. In the special cases: as k → 0, the GPD reduces to the exponential distribution with mean σ; when k = 1, the GPD is the uniform distribution; and if k < 0, it is the Pareto distribution of the second kind.
Theorem 2.7. The continuous non-negative random variable X has a GPD with distribution function (9) if and only if VE(X, t) = c ≥ 0, c ≠ 1.

Proof. If the random variable X has a generalized Pareto distribution, then the conditional distribution of X − t given X ≥ t is also generalized Pareto with the same value of k. It can be shown that VE(X, t) = VE(X) = (k − 1)² = c, k ≠ 0. Conversely, if VE(X, t) = c, then VE'(X, t) = 0, and by (7) the quantity h(X, t) + \log r(t) is constant in t. Asadi and Ebrahimi (2000) showed that this implies F is a generalized Pareto distribution.
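The constancy in Theorem 2.7 is easy to verify numerically. In the sketch below (our illustration; the quadrature scheme and names are assumptions, not from the paper), the conditional variance of −log(f(X)/\bar{F}(t)) for a GPD with k = −0.5 and σ = 1 is computed at two ages and matches (k − 1)² = 2.25 at both:

```python
import math

def gpd_sf(x, k, s):
    """GPD survival function F̄(x) = (1 − kx/σ)^{1/k}; valid here for k < 0, x > 0."""
    return (1 - k * x / s) ** (1.0 / k)

def gpd_pdf(x, k, s):
    """GPD density f(x) = (1/σ)(1 − kx/σ)^{1/k − 1}."""
    return (1.0 / s) * (1 - k * x / s) ** (1.0 / k - 1)

def gpd_residual_varentropy(k, sigma, t, n=200_000):
    """Var(−log(f(X)/F̄(t)) | X ≥ t) by midpoint quadrature over the conditional
    survival probability s = F̄(x)/F̄(t) ∈ (0, 1) (inverse-CDF substitution)."""
    st = gpd_sf(t, k, sigma)
    m1 = m2 = 0.0
    for i in range(n):
        s = (i + 0.5) / n
        # conditional quantile: solve F̄(x) = s · F̄(t) for x
        x = (sigma / k) * (1 - (s * st) ** k)
        g = -math.log(gpd_pdf(x, k, sigma) / st)
        m1 += g / n
        m2 += g * g / n
    return m2 - m1 ** 2

v0 = gpd_residual_varentropy(-0.5, 1.0, 0.0)
v1 = gpd_residual_varentropy(-0.5, 1.0, 1.0)
```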
3 A class of distributions
Ebrahimi (1996) introduced a class of lifetime distributions based on the measure of uncertainty of residual lifetime random variables as follows:

Definition 3.1. F has decreasing (increasing) uncertainty of residual life, DURL (IURL), if h(X, t) is decreasing (increasing) in t.

He showed that if F has an increasing (decreasing) failure rate, IFR (DFR), then it is also DURL (IURL) and

r(t) ≤ (≥) \exp(1 − h(X, t)).   (10)
Parallel to the work of Ebrahimi (1996), we introduce a class of lifetime distributions using the residual varentropy. Various properties of this class are also provided.
Definition 3.2. F has increasing (decreasing) residual varentropy, IRVE (DRVE), if VE(X, t) is increasing (decreasing) in t, t ≥ 0.

Remark 3.3. It is clear that if F is IRVE (DRVE), then

VE(X, t) ≥ (≤) VE(X),   (11)

with equality when (8) holds. Thus (11) gives a lower (upper) bound for the residual varentropy in these situations.
Proposition 3.4. For a non-negative random variable X, F is IRVE (DRVE) if

VE(X, t) ≥ (≤) (\log r(t) + h(X, t))^{2}.   (12)

Proof. Using (7), (12) is easily obtained.
Corollary 3.5. Suppose that F is both IRVE and DRVE and 0 < f(0) < ∞. Then

VE(X) = (\log f(0) + h(X))^{2}.
Corollary 3.6. If F is IRVE (DRVE) in t, then

VE(X) ≥ (≤) (\log f(0) + h(X))^{2}.   (13)

Therefore, (13) is a lower (upper) bound for the VE for these distributions.
Corollary 3.7. If F is IFR and DRVE (DFR and IRVE), then

VE(X, t) ≤ (≥) 1.

Proof. If X is IFR, then using (10) we have

(\log r(t) + h(X, t))^{2} ≤ 1,   (14)

and if X is DRVE, (12) and (14) imply VE(X, t) ≤ 1. The other inequality is proved similarly.
Conclusion
In this paper, a measure called the residual varentropy has been proposed for evaluating the uncertainty of the residual lifetime distribution. This measure plays a role analogous to a kurtosis measure for residual lifetime distributions. It has been shown that in some distributions, such as the uniform, exponential, and generalized Pareto families, the residual varentropy is independent of the system's age, and that the residual varentropy characterizes these distributions. Moreover, a new class of lifetime distributions has been introduced using the residual varentropy. Future work in this direction may focus on characterizations based on the past varentropy.
References
[1] Abraham, B. and Sankaran, P. G. (2006). Renyi's entropy for residual lifetime distribution. Statist. Papers, 47, 17–29.

[2] Asadi, M. and Ebrahimi, N. (2000). Residual entropy and its characterizations in terms of hazard function and mean residual life time function. Statist. Probab. Lett., 49, 263–269.

[3] Ebrahimi, N. (1996). How to measure uncertainty in the life time distributions. Sankhya Ser. A, 58, 48–57.

[4] Ebrahimi, N. and Kirmani, S. N. U. A. (1996). Some results on ordering of survival functions through uncertainty. Statist. Probab. Lett., 29, 167–176.

[5] Ebrahimi, N. and Pellerey, F. (1995). New partial ordering of survival functions based on the notion of uncertainty. J. Appl. Probab., 32, 202–211.

[6] Enomoto, R., Okamoto, N. and Seo, T. (2013). On the asymptotic normality of test statistics using Song's kurtosis. J. Stat. Theory Pract., 7, 102–119.

[7] Arıkan, E. (2016). Varentropy decreases under the polar transform. IEEE Trans. Inform. Theory, 62, 3390–3400.

[8] Fradelizi, M., Madiman, M. and Wang, L. (2016). Optimal concentration of information content for log-concave densities. High Dimensional Probability VII, Progr. Probab., 71, Springer, Cham, 45–60.

[9] Kontoyiannis, I. and Verdu, S. (2013). Optimal lossless compression: Source varentropy and dispersion. IEEE Int. Symposium on Information Theory, 1739–1743.

[10] Liu, J. (2007). Information Theoretic Content and Probability. PhD Thesis, University of Florida. ProQuest LLC, Ann Arbor, MI.

[11] Pickands, J. (1975). Statistical inference using extreme order statistics. Ann. Statist., 3, 119–131.

[12] Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 623–656.

[13] Sankaran, P. G. and Gupta, R. P. (1999). Characterization of the life time distributions using measure of uncertainty. Calcutta Statistical Association Bulletin, 49, 159–166.

[14] Song, K. S. (2001). Renyi information, loglikelihood and an intrinsic distribution measure. J. Stat. Plan. Inference, 93, 51–69.

[15] Zografos, K. (2008). On Mardia's and Song's measures of kurtosis in elliptical distributions. J. Multivariate Anal., 99, 858–879.
Semiparametric Inference for a Class of Mean Residual Life Regression Models with Right-Censored Length-Biased Data
Mansourvar, Z.1
1 Department of Statistics, Faculty of Mathematics and Statistics, University of Isfahan, Isfahan 81746-73441, Iran
Abstract: A general class of mean residual life models is studied for analysing right-censored length-biased data, which arise frequently in observational studies. Martingale estimating equations are proposed for estimation of the regression parameters and the baseline mean residual life function. It is shown that the resulting regression estimators are asymptotically normal.
Keywords: Censored Length-Biased Data, Estimating Equation, Martingale Theory, Mean Residual Life Model.
1 Introduction
Length-biased data often arise in observational studies such as cancer screening trials ([13]) and HIV prevalent cohort studies ([5]). The observations collected via a length-biased sampling scheme are left-truncated, as subjects who failed before the enrollment time cannot be observed. In addition, the observations are usually subject to right-censoring due to loss of follow-up. Hence, the observations are not selected randomly from the underlying population. In fact, under length-biased sampling, the observed individuals tend to live longer than those randomly selected from the underlying population. As a real-data example subject to length bias, we can consider the Canadian
1Mansourvar, Z.: [email protected]
Study of Health and Aging (CSHA) data set ([4]). In this data set, about 10,000 Canadians over 65 years old were recruited and screened for dementia. For the individuals found to have dementia in the study population, the approximate date of onset of dementia and the time of death or censoring were recorded. The CSHA data set is subject to length-biased sampling because individuals with dementia who died before the examination time were not included in the study; only those with dementia who survived beyond the examination time could have been observed.
There are two challenges in the analysis of right-censored length-biased data. First, the observed failure time and the right-censoring time are dependent, i.e., the censoring mechanism is informative. Second, the observed length-biased data change the model structure assumed for the underlying population. The problem of regression analysis with right-censored length-biased data has been studied by various researchers. To estimate the covariate effects for length-biased data under the proportional hazards regression model, [11] used a bias-adjusted risk set method. Later, [8] developed an inverse probability weighted approach based on the proportional hazards model for length-biased data.
The mean residual life function (MRLF) is of interest in biomedical and reliability research. It measures the remaining life expectancy of an individual who has survived up to a certain time. The MRLF can sometimes serve as a more desirable tool than the survival function and the hazard function. For instance, a prostate cancer patient may care much more about how long he can survive from the time of diagnosis than about his instantaneous survival chance. A comprehensive review of previous research on the MRLF is given by [9].
To study the effects of covariates on the MRLF, the proportional mean residual life model of [7] may be used:
m(t | Z) = m_0(t) \exp(Z^{⊤} β),   (1)
where m(t|Z) = E(T − t | T > t;Z) is the MRLF corresponding to the p-vector
covariate Z, m_0(t) is an unknown baseline MRLF when Z = 0, and β is an unknown vector of regression parameters. For the analysis of right-censored length-biased survival data under model (1), [2] developed a composite partial likelihood estimation of β through a natural relationship between the proportional mean residual life model and a length-biased outcome. With their methodology, several nonstandard data structures, including censoring of the onset time and cross-sectional data without follow-up, can also be handled. Later, [1] proposed an inverse probability weighted approach for inference on the parameters of model (1) with right-censored length-biased data. More recently, [12] proposed estimating equations for the additive mean residual life model ([3]) through the unique structure of length-biased data.
In this paper, we consider a more general class of mean residual life regression models, proposed by [10], as
m(t | Z) = m_0(t) g(Z^{⊤} β),   (2)
where g(·) is a pre-specified non-negative link function, assumed to be twice continuously differentiable. Choices of g include g(t) = 1 + t, g(t) = exp(t), and g(t) = log(1 + exp(t)). Selection of an appropriate link function g may be based on prior data or the desired interpretation of the regression parameters. The proposed models generalize the proportional mean residual life model with more choices of the link function g(·). Under the right-censored length-biased sampling scheme, we discuss inference procedures for estimating the parameters of model (2) through the martingale technique. The rest of the paper is organized as follows. In Section 2, we introduce the notation and assumptions used in the paper. Section 3 is devoted to the inference procedure for estimating the parameters of model (2) by applying martingale estimating equations under the length-biased sampling design.
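For concreteness, the three candidate link functions can be written down together with the ratio h(t) = g^{(1)}(t)/g(t) that appears later in the score function; this is our own illustrative sketch (the names are hypothetical), with a finite-difference check of each h:

```python
import math

# Candidate link functions g from model (2), paired with h(t) = g'(t)/g(t).
links = {
    "linear":   (lambda t: 1 + t,                        # g(t) = 1 + t
                 lambda t: 1 / (1 + t)),
    "exp":      (lambda t: math.exp(t),                  # g(t) = exp(t)
                 lambda t: 1.0),
    "softplus": (lambda t: math.log1p(math.exp(t)),      # g(t) = log(1 + exp(t))
                 lambda t: math.exp(t) / ((1 + math.exp(t)) * math.log1p(math.exp(t)))),
}

def numeric_h(g, t, eps=1e-6):
    """Central-difference check of h(t) = g'(t)/g(t)."""
    return (g(t + eps) - g(t - eps)) / (2 * eps) / g(t)
```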
2 Notation and relations between models
Assume T is the true survival time, that is, the time from the initiating event (diagnosis or onset of the disease) to the failure event; A is the time from the initiating event to enrollment; V is the time from enrollment to the failure event; and C is the time from enrollment to censoring. Under length-biased sampling, T can only be observed when T > A. Note that T = A + V, where A is the truncation time and V is the residual survival time. Let Z be a p-vector of baseline covariates, and assume that C is independent of (A, V) given Z.
The observed data for a random sample of n independent subjects consist of {(A_i, X_i, δ_i, Z_i), i = 1, 2, …, n}, where X_i = min(T_i, A_i + C_i), T_i = A_i + V_i, and δ_i = I(T_i ≤ A_i + C_i) = I(V_i ≤ C_i). Here I(·) is the indicator function.
Under length-biased sampling, the truncation variable A follows a uniform distribution, and the joint density function of (T, A) given Z = z, evaluated at (t, a), is

f(t, a \mid z) = \frac{f(t \mid z)}{μ(z)} I(t ≥ a),   (3)

where μ(z) = \int_0^{\infty} u f(u \mid z) \, du = \int_0^{\infty} S(u \mid z) \, du is the conditional mean of T given Z = z ([6, Chapter 3]). In the absence of censoring, it follows from equation (3) that the random variables (A, V) have an exchangeable joint density function f(a + v \mid z)/μ(z), for a ≥ 0 and v ≥ 0, and the common marginal density function ([4]) is

f_A(t \mid z) = f_V(t \mid z) = \frac{S(t \mid z)}{μ(z)}.   (4)
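A small Monte Carlo sketch (ours; the unit-exponential choice is an assumption for illustration) makes (3)–(4) concrete: for T ~ Exp(1), the length-biased observation has density t e^{−t} (a Gamma(2,1) draw, i.e., the sum of two unit exponentials), A is uniform on (0, T) given T, and both A and V then have marginal density S(t)/μ = e^{−t}:

```python
import random

random.seed(1)
n = 200_000
mean_A = mean_V = mean_T = 0.0
for _ in range(n):
    # length-biased draw: density t·e^{−t} (Gamma(2,1)) = sum of two Exp(1) draws
    t_star = random.expovariate(1.0) + random.expovariate(1.0)
    a = random.uniform(0.0, t_star)      # truncation time, uniform given T (eq. (3))
    v = t_star - a                       # residual survival time
    mean_A += a / n
    mean_V += v / n
    mean_T += t_star / n
# By (4), A and V share the marginal density e^{−t}, so both means are near 1,
# while the length-biased mean E[T*] = 2 exceeds the population mean E[T] = 1.
```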
Note that the conditional survival function of T given Z = z is

S(t \mid z) = \frac{m(0 \mid z)}{m(t \mid z)} \exp\left\{ -\int_0^t \frac{1}{m(u \mid z)} \, du \right\},   (5)

and the conditional mean residual life function of T given Z = z is

m(t \mid z) = \frac{\int_t^{\infty} S(u \mid z) \, du}{S(t \mid z)}.   (6)
Then, substituting (6) into (5), we have

S(t \mid z) = \frac{S(t \mid z)}{\int_t^{\infty} f_A(u \mid z) \, du} \exp\left\{ -\int_0^t \frac{1}{m(u \mid z)} \, du \right\},   (7)

where f_A(t \mid z) is the density function of the left-truncation time A, given in (4). From equation (7), it follows that

S_A(t \mid z) = \int_t^{\infty} f_A(u \mid z) \, du = \exp\left\{ -\int_0^t \frac{1}{m(u \mid z)} \, du \right\},   (8)

where S_A(t \mid z) is the conditional survival function of A. On the other hand, we know that

S_A(t \mid z) = \exp\{ -Λ_A(t \mid z) \},   (9)

where Λ_A(t \mid z) is the cumulative hazard function of A. Therefore, equating (8) and (9), it can be derived that

dΛ_A(t \mid z) = \frac{1}{m(t \mid z)} \, dt.   (10)
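Relation (10) can be checked numerically for any smooth lifetime model. The sketch below (our illustration; the Weibull choice and helper names are assumptions, not from the paper) computes the mean residual life m(t) and the hazard of A from S_A(t) ∝ ∫_t^∞ S(u) du, and verifies that their product is 1:

```python
import math

S = lambda t: math.exp(-t * t)      # survival of T: Weibull, shape 2 (illustrative)

def tail_integral(t, upper=8.0, n=100_000):
    """∫_t^upper S(u) du by the midpoint rule (the tail beyond `upper` is negligible)."""
    step = (upper - t) / n
    return sum(S(t + (i + 0.5) * step) for i in range(n)) * step

def mrl(t):
    """m(t) = ∫_t^∞ S(u) du / S(t), the mean residual life of T."""
    return tail_integral(t) / S(t)

def hazard_A(t, eps=1e-4):
    """Hazard of the truncation time A via −d/dt log S_A(t), with S_A(t) ∝ ∫_t^∞ S(u) du."""
    return (math.log(tail_integral(t)) - math.log(tail_integral(t + eps))) / eps
```

By (10) the product hazard_A(t) · mrl(t) is 1 at every t, up to discretization error.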
3 Estimation method
To avoid a lengthy technical discussion of the tail behavior of the limiting distributions, we assume that the upper support of the censoring variable C is longer than that of V, and denote 0 < τ = inf{t : Pr(T ≥ t) = 0} < ∞. This assumption guarantees that the survival function of T is estimable. Denote

N_{Ai}(t) = I(A_i ≤ t),   Y_{Ai}(t) = I(A_i ≥ t),   i = 1, 2, …, n.

Following [12], since

E[N_{Ai}(t)] = E[I(A_i ≤ t)] = \int_0^t f_A(u \mid z_i) \, du = \int_0^t S_A(u \mid z_i) \, dΛ_A(u \mid z_i) = E\left[ \int_0^t I(A_i ≥ u) \, dΛ_A(u \mid z_i) \right],

then, regarding equation (10), a zero-mean stochastic process M_{Ai}(t) can be defined as

M_{Ai}(t) = N_{Ai}(t) − \int_0^t Y_{Ai}(u) \frac{1}{m(u \mid z_i)} \, du.
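That M_{Ai}(t) has mean zero can be seen in a short simulation (ours; it assumes a unit-exponential lifetime, for which m(u) ≡ 1 and A is itself unit exponential by (4)):

```python
import random

random.seed(2)
n, t = 200_000, 1.3
acc = 0.0
for _ in range(n):
    # under a unit-exponential lifetime, A has marginal density S(u)/µ = e^{−u}
    a = random.expovariate(1.0)
    # M_A(t) = N_A(t) − ∫_0^t Y_A(u)/m(u) du: here m(u) ≡ 1, so the integral is min(A, t)
    m_at = (1.0 if a <= t else 0.0) - min(a, t)
    acc += m_at / n
# acc estimates E[M_A(t)], which should be ≈ 0
```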
Thus, under model (2) and following [10], the estimating equations for m_0(t) and β, respectively, can be taken as

\sum_{i=1}^{n} \left[ m_0(t) g(Z_i^{⊤} β) \, dN_{Ai}(t) − Y_{Ai}(t) \, dt \right] = 0   (0 ≤ t ≤ τ),   (11)

\sum_{i=1}^{n} \int_0^{τ} \frac{g^{(1)}(Z_i^{⊤} β)}{g(Z_i^{⊤} β)} Z_i \left[ m_0(t) g(Z_i^{⊤} β) \, dN_{Ai}(t) − Y_{Ai}(t) \, dt \right] = 0,   (12)
where g^{(1)}(t) = dg(t)/dt. [4] pointed out that under length-biased sampling, the two random variables A and V have the same distribution. Therefore, an alternative zero-mean stochastic process can be derived as

M_{Xi}(t) = N_{Xi}(t) − \int_0^t Y_{Xi}(u) \, dΛ_A(u \mid z_i),

where N_{Xi}(t) = I(X_i ≤ t, δ_i = 1) and Y_{Xi}(t) = I(X_i ≥ t), i = 1, 2, …, n. Hence, the following estimating equations for m_0(t) and β, respectively, can be obtained:
\sum_{i=1}^{n} \left[ m_0(t) g(Z_i^{⊤} β) \, dN_{Xi}(t) − Y_{Xi}(t) \, dt \right] = 0   (0 ≤ t ≤ τ),   (13)

\sum_{i=1}^{n} \int_0^{τ} \frac{g^{(1)}(Z_i^{⊤} β)}{g(Z_i^{⊤} β)} Z_i \left[ m_0(t) g(Z_i^{⊤} β) \, dN_{Xi}(t) − Y_{Xi}(t) \, dt \right] = 0.   (14)
Note that (A, V) are correlated in general, but as discussed in [2], we can treat {(A_i, 1), (X_i, δ_i) : i = 1, …, n} as paired survival data, and methods for multivariate survival data can be applied to estimate β. Thus, composite estimating equations for bivariate survival data can be developed as
\sum_{i=1}^{n} \left[ m_0(t) g(Z_i^{⊤} β) \, dN_i(t) − Y_i(t) \, dt \right] = 0   (0 ≤ t ≤ τ),   (15)

\sum_{i=1}^{n} \int_0^{τ} \frac{g^{(1)}(Z_i^{⊤} β)}{g(Z_i^{⊤} β)} Z_i \left[ m_0(t) g(Z_i^{⊤} β) \, dN_i(t) − Y_i(t) \, dt \right] = 0,   (16)
where N_i(t) = N_{Ai}(t) + N_{Xi}(t) and Y_i(t) = Y_{Ai}(t) + Y_{Xi}(t) for i = 1, 2, …, n. Therefore, in view of the estimating equation (15), m_0(t) can be estimated as

\hat{m}_0(t; β) = \frac{ \sum_{i=1}^{n} Y_i(t) \, dt }{ \sum_{i=1}^{n} g(Z_i^{⊤} β) \, dN_i(t) }.   (17)
To obtain an estimator for β, we replace m_0(t) with \hat{m}_0(t; β) in equation (16). Then it is straightforward to show that the resulting equation is equivalent to

U(β) = −\sum_{i=1}^{n} \int_0^{τ} \left\{ h(Z_i^{⊤} β) Z_i − \bar{Z}(t) \right\} Y_i(t) \, dt,   (18)

where h(t) = g^{(1)}(t)/g(t) and

\bar{Z}(t) = \frac{ \sum_{i=1}^{n} h(Z_i^{⊤} β) g(Z_i^{⊤} β) Z_i \, dN_i(t) }{ \sum_{i=1}^{n} g(Z_i^{⊤} β) \, dN_i(t) }.

Let \hat{β} denote the solution to U(β) = 0. The corresponding estimator of m_0(t) is given by \hat{m}_0(t) = \hat{m}_0(t; \hat{β}).
In order to discuss the asymptotic normality of \hat{β}, we assume that the following conditions hold:
(C1) Pr(C ≥ τ) > 0;
(C2) the covariate Z is bounded;
(C3) m_0(t) is continuously differentiable on [0, τ].
Then it can be shown that under conditions (C1)–(C3), n^{1/2}(\hat{β} − β) is asymptotically normal with zero mean.
References
[1] Bai, F., Huang, J., and Zhou, Y. (2016), Semiparametric inference for the proportional mean residual life model with right-censored length-biased data, Statistica Sinica, 26(3), 1129-1158.

[2] Chan, K. C. G., Chen, Y. Q., and Di, C. Z. (2012), Proportional mean residual life model for right-censored length-biased data, Biometrika, 99(4), 995-1000.

[3] Chen, Y. Q., and Cheng, S. (2006), Linear life expectancy regression with censored data, Biometrika, 93(2), 303-313.

[4] Huang, C. Y., and Qin, J. (2012), Composite partial likelihood estimation under length-biased sampling, with application to a prevalent cohort study of dementia, Journal of the American Statistical Association, 107(499), 946-957.

[5] Lagakos, S. W., Barraj, L. M., and De Gruttola, V. (1988), Nonparametric analysis of truncated survival data, with applications to AIDS, Biometrika, 75, 515-523.

[6] Lancaster, T. (1990), The Econometric Analysis of Transition Data, Cambridge University Press.

[7] Oakes, D. and Dasu, T. (1990), A note on residual life, Biometrika, 77(2), 409-410.

[8] Qin, J. and Shen, Y. (2010), Statistical methods for analyzing right-censored length-biased data under the Cox model, Biometrics, 66, 382-392.

[9] Sun, L., and Zhang, Z. (2009), A class of transformed mean residual life models with censored survival data, Journal of the American Statistical Association, 104(486), 803-815.

[10] Sun, L., and Zhao, Q. (2010), A class of mean residual life regression models with censored survival data, Journal of Statistical Planning and Inference, 140(11), 3425-3441.

[11] Wang, M. C. (1996), Hazards regression analysis for length-biased data, Biometrika, 83, 343-354.

[12] Wu, H., Cao, X., and Du, C. (2019), Estimating equations of additive mean residual life model with censored length-biased data, Statistics and Probability Letters, 154, 108552.

[13] Zelen, M., and Feinleib, M. (1969), On the theory of screening for chronic diseases, Biometrika, 56, 601-614.
An Optimal Preventive Policy for Networks Consisting of Heterogeneous Components
Memari, M.1, Zarezadeh, S.1, and Asadi, M.2
1 Department of Statistics, Shiraz University, Shiraz 71454, Iran
2 Department of Statistics, University of Isfahan, Isfahan 81744, Iran
Abstract: In today's life, networks, such as communication and computer networks, have many applications. One of the most important policies for keeping a network in optimal working condition is preventive maintenance (PM). The PM strategy is applied to reduce the likelihood of failure of an operational network. In this article, we propose an optimal PM model for an operating network. The criterion of interest to be optimized is the cost of renewing the network during its operation. We consider the situation in which the network, with known structure, includes nonidentical components. To illustrate the proposed models, the results are presented numerically and graphically for a network.
Keywords: Maintenance, Reliability, Survival Signature, Cost Function, Availability.
1 Introduction
Currently, networks have a wide range of applications in human life. Examples are communication networks, railways, oil transmission networks, computer networks, etc. Usually, a network is considered as a collection of nodes and
1Memari, M.: [email protected]
links. A network may be subject to failure over time. Of course, the performance of the network over time depends on the performance of its nodes and links. In this paper, we assume that the nodes of the network are absolutely reliable but that the links (which we call components) may be subject to failure. As the failure of networks may cause a substantial increase in costs, maintaining them in optimal condition is an important goal for network designers and users. An important strategy to keep networks in good working condition is the preventive maintenance (PM) strategy.
In recent years, most of the literature on PM policies was limited to one-unit systems (or the entire system was treated as a single unit). For some classical textbooks on this topic and related subjects, we refer the reader to Gertsbakh (1977, 2000), Wang (2002), Wang and Pham (2006), and Nakagawa (2008).
Recently, attempts have been made to apply PM strategies to complex systems (networks) consisting of several components. Finkelstein and Gertsbakh (2015) provided some time-free PM strategies for systems based on the notion of signature. Caballe and Castro (2017) investigated reliability and maintenance policies for systems subject to degradation and shocks. Cha et al. (2018) explored the optimal PM policy for a system affected by an external shock process when the system is replaced either on failure, on a predetermined number of shocks, or at a predetermined replacement time, whichever occurs first. Zarezadeh and Ashrafi (2019) considered a network whose components are subject to shocks arriving according to a counting process, under the assumption that more than one component may fail at each shock. Zarezadeh and Asadi (2019) proposed some optimal PM policies for systems consisting of several groups of components, where the failures of the components in the groups occur due to external shocks.
The present paper is a further attempt to propose new optimal PM models for networks consisting of multi-type components, under the condition that component failures occur due to aging over time.
2 Notation
Assume that a network has n components with i.i.d. lifetimes X_1, …, X_n, where the X_i have a common distribution function F(t) and reliability function \bar{F}(t) = 1 − F(t). Let X_{1:n} ≤ … ≤ X_{n:n} denote the ordered lifetimes corresponding to X_1, …, X_n, and let T = T(X_1, …, X_n) denote the lifetime of the network. Let \bar{H}(t) denote the reliability of the network at time t. Then, from Samaniego (1985), \bar{H}(t) has the following mixture representation in terms of the reliability functions of the X_{i:n}:

\bar{H}(t) = P(T > t) = \sum_{i=1}^{n} s_i P(X_{i:n} > t),   (1)

where s_i = P(T = X_{i:n}), i = 1, …, n. The vector s = (s_1, …, s_n) is called the "signature vector". The signature vector s depends only on the structure of the network and not on the lifetimes of the components.
The survival signature, denoted by Φ(i), i = 1, …, n, is defined as "the probability that the network functions when exactly i components of the network function". The survival signature can be obtained using the concept of path sets of the network [Boland (2001)]. A path set is a set of components whose working ensures the functioning of the network [Barlow and Proschan (1975)]. If r_n(i) denotes the number of path sets of size i of the network, then the survival signature is given as

Φ(i) = \frac{r_n(i)}{\binom{n}{i}}.

If we define S(i) = Φ(n − i), then one can show that the signature and the survival signature are related as S(i) = \sum_{k=i+1}^{n} s_k, or equivalently s_i = S(i − 1) − S(i), i = 1, …, n.
One can easily show that an equivalent representation to (1) is as follows:

\bar{H}(t) = \sum_{i=0}^{n} S(i) Q_i(t),   (2)

where Q_i(t) = \binom{n}{i} (F(t))^{i} (\bar{F}(t))^{n−i}, i = 0, …, n.
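Representation (2) is easy to verify by brute force on a toy network. The sketch below (our illustration; the series–parallel structure is an arbitrary choice) enumerates path sets of a 4-component network to get Φ(i), then checks (2) against the directly computed reliability (1 − F²)²:

```python
from itertools import combinations
from math import comb

n = 4
def works(up):
    """Structure function: components (1 or 2) in series with (3 or 4)."""
    return (1 in up or 2 in up) and (3 in up or 4 in up)

# survival signature Φ(i) = r_n(i)/C(n, i): fraction of size-i subsets that are path sets
phi = [sum(works(set(c)) for c in combinations(range(1, n + 1), i)) / comb(n, i)
       for i in range(n + 1)]

def Hbar(F):
    """Eq. (2): S(i) = Φ(n − i), with i counting the FAILED components."""
    Fb = 1 - F
    return sum(phi[n - i] * comb(n, i) * F**i * Fb**(n - i) for i in range(n + 1))
```

Here phi comes out as [0, 0, 2/3, 1, 1], and Hbar(F) agrees with the direct calculation for the two parallel pairs in series.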
Let T_{PM} denote the time of the PM of the network. We assume that the PM, when applied, is complete (perfect), i.e., a working network is replaced with a new one or is repaired such that it becomes as good as new. If the network fails before T_{PM}, i.e., T < T_{PM}, then an emergency repair (replacement) (ER) is executed. Therefore, each cycle of renewing the network lasts min{T, T_{PM}}.
Consider an n-component network consisting of L, 2 ≤ L ≤ n, types of components, where there are n_j components of type j, j = 1, …, L, and \sum_{j=1}^{L} n_j = n. Assume that all components of the network operate independently and that the lifetimes of the components of group j are identically distributed with distribution function F_j and reliability function \bar{F}_j = 1 − F_j, j = 1, …, L. Coolen and Coolen-Maturi (2013) defined the survival signature of such networks, denoted by Φ(i_1, …, i_L), as

Φ(i_1, …, i_L) = P(B),

where, for each i_j = 0, 1, …, n_j, j = 1, 2, …, L, B denotes the event that "the network operates given that precisely i_j out of the n_j components of type j operate".
Coolen and Coolen-Maturi (2013) showed that the reliability function of the network can be represented as

\bar{H}(t) = P(T > t) = \sum_{r_1=0}^{n_1} \cdots \sum_{r_L=0}^{n_L} S(r_1, …, r_L) \prod_{j=1}^{L} Q_{r_j}(t),   (3)

in which Q_{r_j}(t) = \binom{n_j}{r_j} (F_j(t))^{r_j} (\bar{F}_j(t))^{n_j − r_j} and S(r_1, …, r_L) = Φ(n_1 − r_1, …, n_L − r_L).
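Formula (3) can likewise be checked on a toy two-type network (our sketch; the structure, two parallel pairs in series, is an arbitrary choice), where Φ(i₁, i₂) = 1 exactly when at least one component of each type works:

```python
from math import comb

n1 = n2 = 2                          # two components of each type

def works(up1, up2):
    """Type-1 pair in parallel, in series with the type-2 pair in parallel."""
    return up1 >= 1 and up2 >= 1

# Φ(i1, i2): probability the network works when exactly i_j type-j components work
phi = {(i1, i2): float(works(i1, i2))
       for i1 in range(n1 + 1) for i2 in range(n2 + 1)}

def Hbar(F1, F2):
    """Eq. (3): sum over the numbers r_j of failed components of each type."""
    total = 0.0
    for r1 in range(n1 + 1):
        for r2 in range(n2 + 1):
            S = phi[(n1 - r1, n2 - r2)]
            q1 = comb(n1, r1) * F1**r1 * (1 - F1)**(n1 - r1)
            q2 = comb(n2, r2) * F2**r2 * (1 - F2)**(n2 - r2)
            total += S * q1 * q2
    return total
```

For this structure the direct reliability is (1 − F₁²)(1 − F₂²), and (3) reproduces it.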
In the next section, we propose two models to assess the optimal PM timesof networks composed of multiple types of components.
3 Networks with multiple types of components
Consider a network with n independent components partitioned into L ≥ 2 different groups such that n_j, j = 1, …, L, components are of type j and \sum_{j=1}^{L} n_j = n. In this case, the reliability function of the network is as given in (3).
We utilize age-based PM strategies to propose our optimal PM models for the above-described network, using the notion of the survival signature.
3.1 PM model for the unused networks with multiple types of components
The network is renewed at time T_{PM} after the last renewal point or at failure, whichever comes first. If T_{PM} < T, then each failed component of type j is replaced with a new one at cost c_0^{(j)}, j = 1, …, L, and an additional cost c_{PM} is paid for repairing the network so that it becomes as good as new. If T < T_{PM}, let R^{(j)} denote the number of components of type j whose failures caused the failure of the network. Then the long-run expected cost per unit of time is obtained as
C(T_{PM}) = \frac{V(T_{PM})}{M(T_{PM})},   (4)

where

V(T_{PM}) = \sum_{r_1=0}^{n_1} \cdots \sum_{r_L=0}^{n_L} S(r_1, …, r_L) \left( \sum_{j=1}^{L} c_0^{(j)} r_j + c_{PM} \right) \prod_{j=1}^{L} Q_{r_j}(T_{PM}) + \left( \sum_{j=1}^{L} c_0^{(j)} E(R^{(j)}) + c_{ER} \right) H(T_{PM}),   (5)

in which H is the distribution function of the network lifetime, M(T_{PM}) = \int_0^{T_{PM}} \bar{H}(t) \, dt, and

E(R^{(j)}) = \sum_{r_j=1}^{n_j} r_j \left( S^{(j)}(r_j − 1) − S^{(j)}(r_j) \right),   j = 1, …, L,

where S^{(j)}(r_j) = S(0, …, 0, r_j, 0, …, 0). The optimal PM time of the network is obtained from

C(T^{*}_{PM}) = \inf\{ C(T_{PM}),\ T_{PM} > 0 \}.   (6)

If there is no need to apply PM, we have

C(∞) = \frac{ \sum_{j=1}^{L} c_0^{(j)} E(R^{(j)}) + c_{ER} }{ μ },

where μ = M(∞). So, the efficiency of the PM policy is given by EC(T^{*}_{PM}) = C(∞)/C(T^{*}_{PM}).
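As a simplified illustration of the optimization in (6) (our own sketch, not the paper's full model: the component replacement costs c_0^{(j)} are set to zero, so only c_{PM}, c_{ER}, and the network reliability \bar{H} enter), the optimal PM time can be found by a grid search:

```python
import math

c_pm, c_er = 1.0, 10.0
Hbar = lambda t: math.exp(-t ** 2)   # illustrative IFR network lifetime (Weibull, shape 2)

def cost(T, n=2000):
    """Long-run cost per unit time: (c_PM·H̄(T) + c_ER·H(T)) / ∫_0^T H̄(u) du."""
    step = T / n
    M = sum(Hbar((i + 0.5) * step) for i in range(n)) * step
    return (c_pm * Hbar(T) + c_er * (1 - Hbar(T))) / M

grid = [0.05 * i for i in range(1, 81)]   # candidate PM times on (0, 4]
T_star = min(grid, key=cost)
```

Because the lifetime is IFR and an ER is much costlier than a PM, a finite optimal PM time exists well before the mean lifetime is reached.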
3.2 Availability of the network
Suppose the times of performing ER and PM last tER and tPM, respectively,and that the time of replacing of a failed component lasts t0. Using the samearguments as used to write the expected cost function in (3.2), we can showthat the average time of a renewal period is W (TPM) := V (TPM) +M(TPM),
where V (TPM) is the required average time for renewing the failed items in arenewal cycle which can be easily verified by replacing, in (3.2), c0 with t0,cPM with tPM, and cER with tER, respectively. The stationary availability of thenetwork is defined as the probability that the network is in up state at someremote time moment. Hence, the network stationary availability is given as
$$A(T_{PM}) = \frac{M(T_{PM})}{W(T_{PM})} = \Big(1 + \frac{V(T_{PM})}{M(T_{PM})}\Big)^{-1}. \qquad (7)$$

The optimal PM time $T^*_{PM}$ is obtained by maximizing the availability. That is,

$$A(T^*_{PM}) = \sup\{A(T_{PM}),\; T_{PM} > 0\}. \qquad (8)$$

The efficiency of the suggested model is $EA(T^*_{PM}) = A(T^*_{PM})/A(\infty)$, where
$$A(\infty) = \frac{\mu}{\mu + \sum_{j=1}^{L} t_0^{(j)} E(R^{(j)}) + t_{ER}}$$
and $\mu$ is the mean of the network lifetime.
3.3 PM of used networks under partial information
In this subsection, we again consider a network consisting of $L$ types of components. Suppose that the network is alive at time $t$ and that, at that time, exactly $k_j$ components of type $j$ have failed. Let the random variable $N_t^{(j)}$ denote the number of failed components of type $j$ at time $t$. Then the residual lifetime of the network is

$$T_{t,k_1,\ldots,k_L} = \big(T - t \mid T > t;\; N_t^{(1)} = k_1, \ldots, N_t^{(L)} = k_L\big).$$

In the sequel, we obtain the optimal PM policy for the network under these settings. We first note that the reliability function of the residual lifetime of the network can be represented as
$$\bar H_{t,k_1,\ldots,k_L}(x) := P\big(T - t > x \mid T > t,\; N_t^{(1)} = k_1, \ldots, N_t^{(L)} = k_L\big)$$
$$= \sum_{r_1=0}^{n_1-k_1} \cdots \sum_{r_L=0}^{n_L-k_L} \frac{S(r_1 + k_1, \ldots, r_L + k_L)}{S(k_1,\ldots,k_L)} \prod_{j=1}^{L} Q_{r_j|t,k_j}(x), \qquad (9)$$

where $Q_{r_j|t,k_j}(x) = \binom{n_j-k_j}{r_j} (F_{j|t}(x))^{r_j} (\bar F_{j|t}(x))^{n_j-k_j-r_j}$, $\bar F_{j|t}(x) = \bar F_j(t+x)/\bar F_j(t)$ is the reliability function of the residual lifetime $(X^{(j)} - t \mid X^{(j)} > t)$, and $F_{j|t}(x) = 1 - \bar F_{j|t}(x)$.
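As a sanity check on (9), the following sketch (our code, with the signature again indexed by numbers of failed components) evaluates the conditional reliability for a single component type; in the degenerate case of one exponential component it must reduce to $e^{-\lambda x}$ by memorylessness.

```python
import math

# Sketch (ours) of the conditional reliability (9) for a single
# component type: n components, k of which are known to have failed by
# time t.  S is indexed by the number of failed components.

def residual_reliability(x, t, k, n, S, Fbar):
    Fbar_t = Fbar(t + x) / Fbar(t)       # component residual reliability
    F_t = 1.0 - Fbar_t
    total = 0.0
    for r in range(n - k + 1):           # r additional failures in (t, t+x]
        Q = math.comb(n - k, r) * F_t ** r * Fbar_t ** (n - k - r)
        total += S[r + k] / S[k] * Q
    return total

# Degenerate check: n = 1, k = 0 with an exponential component.  The
# network is the component itself, so (9) must reduce to exp(-lam * x)
# by the memoryless property.
lam = 2.0
Fbar = lambda u: math.exp(-lam * u)
S = {0: 1.0, 1: 0.0}                     # works iff the component works
val = residual_reliability(x=0.3, t=1.0, k=0, n=1, S=S, Fbar=Fbar)
print(val, math.exp(-lam * 0.3))         # the two numbers coincide
```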
Now, let $c_0^{(j)}$ be the cost of replacing a failed component of type $j$, $c_{PM}$ the cost of applying PM, and $c_{ER}$ the cost of ER. Then the mean cost per unit of time for applying PM is obtained as

$$C_{t,k_1,\ldots,k_L}(T_{PM}) = \frac{V_{t,k_1,\ldots,k_L}(T_{PM})}{t + M_{t,k_1,\ldots,k_L}(T_{PM})}, \qquad (10)$$
where $V_{t,k_1,\ldots,k_L}(T_{PM})$ is the mean cost up to the time of applying PM:

$$V_{t,k_1,\ldots,k_L}(T_{PM}) = \sum_{r_1=0}^{n_1-k_1} \cdots \sum_{r_L=0}^{n_L-k_L} \frac{S(r_1+k_1,\ldots,r_L+k_L)}{S(k_1,\ldots,k_L)} \Big(\sum_{j=1}^{L} (r_j + k_j)\, c_0^{(j)} + c_{PM}\Big) \prod_{j=1}^{L} Q_{r_j|t,k_j}(T_{PM} - t)$$
$$\qquad + B_{k_1,\ldots,k_L}(c_0^{(1)},\ldots,c_0^{(L)},c_{ER})\, H_{t,k_1,\ldots,k_L}(T_{PM} - t), \qquad (11)$$
where

$$B_{k_1,\ldots,k_L}(c_0^{(1)},\ldots,c_0^{(L)},c_{ER}) = \sum_{j=1}^{L} c_0^{(j)} E(R^{(j)}_{k_j}) + \sum_{j=1}^{L} k_j c_0^{(j)} + c_{ER}$$

and $M_{t,k_1,\ldots,k_L}(T_{PM})$ is the mean of the remaining time to renew the network.
The optimal PM time $T^*_{PM}$ is obtained by minimizing $C_{t,k_1,\ldots,k_L}(T_{PM})$ over $T_{PM}$. The efficiency of applying the PM policy is given as

$$EC_{t,k_1,\ldots,k_L}(T^*_{PM}) = \frac{C_{t,k_1,\ldots,k_L}(\infty)}{C_{t,k_1,\ldots,k_L}(T^*_{PM})},$$

where $C_{t,k_1,\ldots,k_L}(\infty) = \dfrac{B_{k_1,\ldots,k_L}(c_0^{(1)},\ldots,c_0^{(L)},c_{ER})}{t + \mu_{t,k_1,\ldots,k_L}}$ and $\mu_{t,k_1,\ldots,k_L} = M_{t,k_1,\ldots,k_L}(\infty)$ is the mean of the remaining lifetime of the network.
3.4 Availability of the network
We assume that the times needed to replace failed items are non-negligible. Let $t_0^{(j)}$, $t_{ER}$ and $t_{PM}$ be, respectively, the times needed for renewing a component of type $j$, for the ER action, and for the PM action (usually $t_{ER} > t_{PM}$). Then the mean time needed to renew the network is obtained from (11) by replacing $c_0^{(j)}$ by $t_0^{(j)}$, $c_{PM}$ by $t_{PM}$, and $c_{ER}$ by $t_{ER}$, as follows:

$$W_{t,k_1,\ldots,k_L}(T_{PM}) := V_{t,k_1,\ldots,k_L}(T_{PM}) + M_{t,k_1,\ldots,k_L}(T_{PM}) + t.$$
Thus, the network stationary availability is given by
$$A_{t,k_1,\ldots,k_L}(T_{PM}) = \frac{M_{t,k_1,\ldots,k_L}(T_{PM}) + t}{W_{t,k_1,\ldots,k_L}(T_{PM})} = \Big(1 + \frac{V_{t,k_1,\ldots,k_L}(T_{PM})}{M_{t,k_1,\ldots,k_L}(T_{PM}) + t}\Big)^{-1}. \qquad (12)$$
The optimal PM time $T^*_{PM}$ is such that

$$A_{t,k_1,\ldots,k_L}(T^*_{PM}) = \sup\{A_{t,k_1,\ldots,k_L}(T_{PM}),\; T_{PM} \ge 0\}. \qquad (13)$$
The efficiency of the suggested model is computed using

$$EA_{t,k_1,\ldots,k_L}(T^*_{PM}) = \frac{A_{t,k_1,\ldots,k_L}(T^*_{PM})}{A_{t,k_1,\ldots,k_L}(\infty)},$$

where $A_{t,k_1,\ldots,k_L}(\infty) = \dfrac{t + \mu_{t,k_1,\ldots,k_L}}{t + \mu_{t,k_1,\ldots,k_L} + B_{k_1,\ldots,k_L}(t_0^{(1)},\ldots,t_0^{(L)},t_{ER})}$ and $\mu_{t,k_1,\ldots,k_L}$ is the mean of the remaining lifetime of the network.
4 Experimental Results
A mesh network is a well-known layout in which each node (e.g., a computer) is interconnected with the others. All nodes cooperate in the transmission of data in the network. Consider a wired mesh network, depicted in Figure 1, consisting of 5 computers (as the nodes) and 7 cables (as components) connecting the computers. Assume that the components of the network are independent but nonidentical. Components 1, 2, 5, 6 and 7 are of type 1 and components 3 and 4 are of type 2. The components of type 1 are identically distributed with DF $F_1(t) = 1 - e^{-t-t^2}$, $t > 0$, and the components of type 2 have a gamma distribution with DF $F_2(t) = 1 - (t+1)e^{-t}$, $t > 0$. The non-zero elements of the survival signature of the network are shown in Table 1.
Figure 1: A computer network with 5 computers and 7 cables.
Table 1: The non-zero elements of the survival signature.

r1  r2  S(r1,r2)      r1  r2  S(r1,r2)
3   0   0.4           1   0   1
2   1   0.6           0   2   1
2   0   0.8           0   1   1
1   2   1             0   0   1
1   1   1
Under these assumptions, it can be shown that the mean time to failure of the network is $\mu = 0.5155$. Table 2 shows the optimal PM times of the network and the corresponding efficiencies for the unused network according to the cost-based models. The costs of replacing failed components of types 1 and 2 are assumed to be $c_0^{(1)} = 0.2$ and $c_0^{(2)} = 0.5$, respectively. It can be concluded that the efficiency of applying the PM strategy increases when the cost of PM (ER) decreases (increases). Table 3 shows the optimal PM times and their efficiencies based on the availability criterion when $t_0^{(1)} = 0.01$, $t_0^{(2)} = 0.03$. We see that when the time of PM (ER) increases, the efficiency decreases. Tables 4 and 5 show the same results for the used network.
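Assuming the survival signature in Table 1 is indexed by the numbers of failed components of each type, the reported mean time to failure can be reproduced by numerically integrating the network reliability; the following Python sketch (ours, with our own integration scheme) does so.

```python
import math

# Our numerical check of the reported MTTF: with the survival signature
# of Table 1 read as S(r1, r2) = P(network works | r1 type-1 and r2
# type-2 components failed), mu = integral of Hbar over [0, infinity).

n1, n2 = 5, 2
S = {(3, 0): 0.4, (2, 1): 0.6, (2, 0): 0.8, (1, 2): 1.0, (1, 1): 1.0,
     (1, 0): 1.0, (0, 2): 1.0, (0, 1): 1.0, (0, 0): 1.0}   # zero elsewhere

F1 = lambda t: 1.0 - math.exp(-t - t * t)         # type-1 component DF
F2 = lambda t: 1.0 - (t + 1.0) * math.exp(-t)     # type-2 component DF

def Hbar(t):
    """Network reliability: sum over the signature of the probabilities
    of exactly (r1, r2) failures by time t."""
    out = 0.0
    for (r1, r2), s in S.items():
        q1 = math.comb(n1, r1) * F1(t) ** r1 * (1.0 - F1(t)) ** (n1 - r1)
        q2 = math.comb(n2, r2) * F2(t) ** r2 * (1.0 - F2(t)) ** (n2 - r2)
        out += s * q1 * q2
    return out

step, T = 1e-3, 15.0                    # truncate the integral; Hbar(15) ~ 0
vals = [Hbar(i * step) for i in range(int(T / step) + 1)]
mu = sum((u + v) / 2.0 * step for u, v in zip(vals, vals[1:]))
print(round(mu, 4))                     # close to the reported 0.5155
```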
Table 2: The optimal PM times based on the cost function when $c_0^{(1)} = 0.2$, $c_0^{(2)} = 0.5$.

cPM  cER  T*PM    EC(T*PM)
1    5    0.2847  1.2550
1    10   0.1984  1.7440
2    5    0.4595  0.9851
2    10   0.2875  1.2898
Table 3: The optimal PM times based on the availability criterion when $t_0^{(1)} = 0.01$, $t_0^{(2)} = 0.03$.

tPM  tER  T*PM    EA(T*PM)
0.2  1.5  0.2321  1.3660
0.2  2    0.2006  1.2117
0.3  1.5  0.2893  1.2117
0.3  2    0.2472  1.3541
Table 4: The optimal PM times based on the cost function for the used network, when $t = 0.05$, $k_1 = 2$, $k_2 = 1$.

cPM  cER  T*PM    EC(T*PM)
1    5    0.3238  1.0341
1    10   0.1166  1.1777
2    5    0.5990  1.0030
2    10   0.2559  1.0593
Table 5: The optimal PM times based on the availability criterion for the used network, when $t = 0.05$, $k_1 = 2$, $k_2 = 1$.

tPM  tER  T*PM    EA(T*PM)
0.2  1.5  0.0961  1.1726
0.2  2    0.0500  1.4472
0.3  1.5  0.2051  1.0750
0.3  2    0.1163  1.1545
5 Conclusions
In this article, we proposed some optimal preventive maintenance (PM) policies for a network consisting of $n$ components. The criteria of interest for optimization were cost functions defined on the basis of the cost of replacing failed components and of replacing the network during its operation. We studied the case in which the network is built from several nonidentical groups of components. In the first part, we considered a network that starts operating at time $t = 0$ with no information on the status of its components. In the second part, we considered a used network that is still working at a time $t$, at which a number of its components are known to have failed. Optimal PM policies based on the stationary availability of the network were also investigated in both parts.
Reliability Analysis of Weighted-k-out-of-n Systems Consisting of Multiple Types of Components
Meshkat, R.S.1, and Mahmoudi, E.1
1 Department of Statistics, Yazd University, 89175-741, Yazd, Iran
Abstract: In this paper, we introduce a weighted-k-out-of-n:G system consisting of multiple types of components, in which components of the same type are assumed to have the same reliability. If the total weight of the functioning components exceeds a pre-specified threshold k, the system is supposed to work. The reliability of this system is studied and an illustrative example is presented.
Keywords: Reliability, Total weight, Weighted-k-out-of-n:G System.
1 Introduction
In most real-life systems, the total contribution of the components plays an important role and must exceed a predefined performance level. In many situations, the components contribute differently to the system's capacity. Weighted systems, with unequal weights for the components, were introduced by Wu and Chen [11] to deal with this situation and have since been studied in the literature. A system of n components with different positive integer weights is known as a weighted-k-out-of-n:G system when it works if and only if the total weight of the functioning components exceeds a given threshold k. Chen and Yang [1] extended the existing algorithms for computing the system reliability of the one-stage weighted-k-out-of-n model to two-stage weighted-k-out-of-n models. Samaniego and Shaked [10] presented a review of weighted k-out-of-n systems. Navarro et al. [8] extended the signature-based representations of the reliability functions of coherent systems to systems with heterogeneous components. Eryilmaz [2] studied the reliability properties of a k-out-of-n system with random weights for components. Rahmani et al. [9] defined the weighted importance (WI) measure for a k-out-of-n system with random weights, which depends only on the distribution of the component weights; Meshkat and Mahmoudi [7] generalized this measure to pairs of components i and j and investigated its relation to the Birnbaum reliability importance measure. Eryilmaz and Sarikaya [3] studied the special case of a weighted k-out-of-n:G system containing two types of components, each group having different weights and reliabilities, such that one group has the common weight ω and reliability p1, while the other has the common weight ω* and reliability p2. They also obtained non-recursive expressions for the system reliability, the survival function and the Mean Time To Failure (MTTF). Mahmoudi and Meshkat [5] proposed a special case of the weighted-k-out-of-n:G system formed from two types of non-identical components with different weights, in which one group consists of n1 components, each with its own positive integer-valued weight ωi and reliability p1i, while the other group consists of n2 components, each with its own positive integer-valued weight ω*j and reliability p2j (n = n1 + n2); the survival function and mean time to failure were obtained.

1 Meshkat, R.S.: [email protected]
The ordinary k-out-of-n system operates if at least k components work. In this kind of system all components perform the same task and contribute equally to the performance of the entire system. In a more general setting, a system may consist of multiple types of components having different functions, so that different numbers of components of each type may be required for the proper operation of the whole system. Recently, Eryilmaz [4] introduced the $(k_1,k_2,\cdots,k_m)$-out-of-n system including $n_i$ components of type $i$ for $i = 1,\cdots,m$ and $n = \sum_{i=1}^{m} n_i$. The corresponding system is assumed to work if at least $k_1$ components of type 1, $k_2$ components of type 2, $\cdots$, $k_m$ components of type $m$ function. The setup and reliability of the weighted-$(k_1,k_2,\cdots,k_m)$-out-of-n system are also defined and studied. In this system, it is assumed that the random lifetimes of components of the same type are exchangeable and dependent, and that the random lifetimes of components of different types are dependent. That is, there are two levels of dependence: the first defines the dependence between components of the same type, and the second is the dependence among different types of components. Mahmoudi et al. [6] investigated the copula-based reliability of weighted-k-out-of-n systems consisting of m types of dependent components that are chosen randomly; the component importance in each class is also obtained and illustrative examples are presented.
In this paper, we propose a weighted-k-out-of-n:G system consisting of multiple types of components, in which type $i$ has $n_i$ components, each with positive integer-valued weight $\omega_i$ and reliability $p_i$. That is, components of the same type are assumed to have the same reliability. If the total weight of the functioning components exceeds a pre-specified threshold $k$, the system is supposed to work. The aim of this paper is to investigate the reliability of the proposed system.
The remainder of the paper is arranged as follows. In Section 2, the description of the proposed weighted-k-out-of-n system is presented. The reliability of the system lifetime is derived in Section 3. Finally, concluding remarks are given in Section 4.
2 The system model
In this section, some notation and the main modelling assumptions are first provided for this general setup of a weighted-k-out-of-n:G system with m types of components.
2.1 Notations
n    Number of components in the system
Xi   State of the i-th component of the system: Xi = 1 if the component is functioning; Xi = 0 if it has failed
k    Minimum total weight (capacity) of all working components needed to operate the system
C    The set of all components
Ci   The i-th type of components
ni   Number of components in Ci
ωi   Weight of the components in Ci
pi   Reliability of the components in Ci
2.2 System description
Consider a setup of the weighted-k-out-of-n system with $m \ge 2$ types of components. The system includes $n (= n_1 + \cdots + n_m)$ independent components, which are placed in $m \ge 2$ groups with respect to their duties and services. Components of the same type are assumed to have the same weight. The system is supposed to work with performance level $k$ if and only if the total weight of the functioning components of all types is at least $k$.
Let the state of the i-th component $X_i$ be an independent binary random variable such that $p_j = P(X_i = 1)$ if $i \in C_j$ $(j = 1,\cdots,m)$, with corresponding weight $\omega_j$. Obviously, from the above representation, $C = \bigcup_{j=1}^{m} C_j$, $\bigcap_{j=1}^{m} C_j = \emptyset$ and $|C_j| = n_j$ $(j = 1,\cdots,m)$. Assume that $\phi(X)$ denotes the structure function of the system, where $X = (X_1,\cdots,X_n)$ is the state vector of the components. The structure function of this system is defined by

$$\phi(X) = \begin{cases} 1 & \text{if } \sum_{i\in C_1} \omega_1 X_i + \cdots + \sum_{i\in C_m} \omega_m X_i \ge k, \\ 0 & \text{otherwise.} \end{cases}$$
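The structure function above can be sketched in a few lines of Python; the weights and states below are illustrative values of ours (they match Example 3.1 of Section 3).

```python
# Minimal sketch (ours) of the structure function phi(X) of the
# weighted-k-out-of-n:G system.

def phi(states, weights, k):
    """states[i] in {0, 1}; returns 1 iff the total weight of the
    functioning components is at least k."""
    total = sum(w for x, w in zip(states, weights) if x == 1)
    return 1 if total >= k else 0

# Three components of weight 1, two of weight 2, three of weight 3,
# two of weight 2 -- total weight 20.
w = [1, 1, 1, 2, 2, 3, 3, 3, 2, 2]
print(phi([1] * 10, w, 17))                   # 1: all up, 20 >= 17
print(phi([0, 0, 0] + [1] * 7, w, 17))        # 1: weight 17 left, still >= 17
print(phi([1] * 5 + [0] + [1] * 4, w, 19))    # 0: a weight-3 component down
```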
Evidently, the reliability of the system can be obtained by

$$R = P\Big(\sum_{i\in C_1} \omega_1 X_i + \cdots + \sum_{i\in C_m} \omega_m X_i \ge k\Big). \qquad (1)$$
3 Reliability evaluation
Now, taking independent components with reliability $p_j$ for component $i \in C_j$ and corresponding weight $\omega_j$ for $j = 1,\cdots,m$, the sum $\sum_{i\in C_j} X_i$ has a binomial distribution, i.e. $\sum_{i\in C_j} X_i \sim B(n_j, p_j)$. Hence, the reliability defined by (1) can be computed as

$$R = \sum \cdots \sum_{\substack{\omega_1 x_1 + \cdots + \omega_m x_m \ge k \\ 0 \le x_j \le n_j,\; j=1,\cdots,m}} P\Big(\sum_{i\in C_1} X_i = x_1\Big) \cdots P\Big(\sum_{i\in C_m} X_i = x_m\Big)$$
$$= \sum \cdots \sum_{\substack{\omega_1 x_1 + \cdots + \omega_m x_m \ge k \\ 0 \le x_j \le n_j,\; j=1,\cdots,m}} \binom{n_1}{x_1} p_1^{x_1} (1-p_1)^{n_1-x_1} \cdots \binom{n_m}{x_m} p_m^{x_m} (1-p_m)^{n_m-x_m}. \qquad (2)$$
Note that nm = n− (n1 + · · ·+nm−1).
In the following, illustrative results are presented to observe the value of R
for a weighted-k-out-of-10:G system with respect to different values of k, component weights and reliabilities.
Example 3.1. Consider a weighted-k-out-of-10:G system with four types of components. Suppose that $n_1 = 3$, $n_2 = 2$, $n_3 = 3$ and $n_4 = 2$, with corresponding component weights and reliabilities:

     C1    C2    C3    C4
ωi   1     2     3     2
pi   0.90  0.97  0.95  0.85
Then for k = 17,

$$R = (1-p_1)^3 p_2^2 p_3^3 p_4^2 + \binom{3}{1} p_1 (1-p_1)^2 p_2^2 p_3^3 p_4^2 + \binom{3}{2} p_1^2 (1-p_1) p_2^2 p_3^3 p_4^2$$
$$+ \binom{3}{2} p_1^2 (1-p_1) \binom{2}{1} p_2 (1-p_2) p_3^3 p_4^2 + \binom{3}{2} p_1^2 (1-p_1) p_2^2 p_3^3 \binom{2}{1} p_4 (1-p_4)$$
$$+ p_1^3 \binom{2}{1} p_2 (1-p_2) p_3^3 p_4^2 + p_1^3 p_2^2 \binom{3}{2} p_3^2 (1-p_3) p_4^2$$
$$+ p_1^3 p_2^2 p_3^3 \binom{2}{1} p_4 (1-p_4) + p_1^3 p_2^2 p_3^3 p_4^2$$
$$= p_2^2 p_3^3 p_4^2 + 6 p_1^2 p_2 p_3^3 p_4^2 - 4 p_1^3 p_2 p_3^3 p_4^2 + 6 p_1^2 p_2^2 p_3^3 p_4 - 4 p_1^3 p_2^2 p_3^3 p_4$$
$$- 12 p_1^2 p_2^2 p_3^3 p_4^2 + 3 p_1^3 p_2^2 p_3^2 p_4^2 + 5 p_1^3 p_2^2 p_3^3 p_4^2,$$

and for k = 19,

$$R = \binom{3}{2} p_1^2 (1-p_1) p_2^2 p_3^3 p_4^2 + p_1^3 p_2^2 p_3^3 p_4^2 = 3 p_1^2 p_2^2 p_3^3 p_4^2 - 2 p_1^3 p_2^2 p_3^3 p_4^2.$$
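As a check, the reliability formula (2) can be evaluated by direct enumeration; the following Python sketch (our code) reproduces, up to rounding, the two values derived above for Example 3.1.

```python
import math
from itertools import product

# Direct enumeration of (2) (our sketch); the weights and reliabilities
# are those of Example 3.1.

def reliability(ns, ws, ps, k):
    """ns[j]: number of type-j components; ws[j]: weight; ps[j]: reliability."""
    R = 0.0
    for xs in product(*(range(n + 1) for n in ns)):    # xs[j] working comps
        if sum(w * x for w, x in zip(ws, xs)) < k:
            continue
        term = 1.0
        for n, p, x in zip(ns, ps, xs):
            term *= math.comb(n, x) * p ** x * (1.0 - p) ** (n - x)
        R += term
    return R

ns, ws, ps = (3, 2, 3, 2), (1, 2, 3, 2), (0.90, 0.97, 0.95, 0.85)
r17 = reliability(ns, ws, ps, 17)
r19 = reliability(ns, ws, ps, 19)
print(r17, r19)       # approx. 0.8850 and 0.5665, matching Table 1 up to rounding
```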
Table 1: Reliability of the weighted-k-out-of-n:G system.

n   (ω1,ω2,ω3,ω4)  (n1,n2,n3,n4)  k   R
10  (1,2,3,2)      (3,2,3,2)      17  0.8849
                                  19  0.5665
                   (2,3,3,2)      17  0.9363
                                  19  0.7695
                   (2,2,3,3)      17  0.9168
                                  19  0.7327
14  (3,2,4,1)      (3,4,2,5)      20  0.9965
                                  28  0.5989
                   (4,3,2,5)      20  0.9975
                                  28  0.6877
                   (3,2,4,5)      20  0.9995
                                  28  0.9161
As observed in Table 1, the system reliability is sensitive to the values of $n_i$, which determine the number of components of each type. Indeed, the system reliability depends on the combination of the weights and the reliabilities of the components. Hence, their values, as well as the value of the threshold k, play a significant role in the system reliability.
Remark 3.2. In the weighted-k-out-of-n:G system consisting of multiple types of components, setting m = 2 yields the special case of the weighted-k-out-of-n:G system containing two types of components presented by Eryilmaz and Sarikaya [3].
4 Conclusion
In many situations, components contribute differently to the capacity of the system, and the weighted-k-out-of-n:G system is used to deal with this. Such a system includes n components with different positive integer weights and works if and only if the total weight of the working components is above a given threshold k.
In this paper, we proposed a weighted-k-out-of-n:G system consisting of multiple types of components, in which type $i$ has $n_i$ components, each with positive integer-valued weight $\omega_i$ and reliability $p_i$; that is, components of the same type are assumed to have the same reliability. If the total weight of the functioning components exceeds a pre-specified threshold k, the system is supposed to work. This setup of a weighted-k-out-of-n:G system containing $m \ge 2$ types of components might be useful in practice. The reliability of the proposed system was investigated. The results show that the system reliability is sensitive to the values of $n_i$, which determine the number of components of each type. Indeed, the system reliability depends on the combination of the weight and reliability of each class.
References
[1] Chen, Y. and Yang, Q. (2005), Reliability of two-stage weighted k-out-of-n systems with components in common, IEEE Transactions on Reliability,54, 431-440.
[2] Eryilmaz, S. (2013), On reliability analysis of a k-out-of-n system withcomponents having random weights, Reliability Engineering and System
Safety, 109, 41-44.
[3] Eryilmaz, S. and Sarikaya, K. (2014), Modeling and analysis of weighted-k-out-of-n:G system consisting of two different types of components, Pro-
ceedings of the Institution of Mechanical Engineers, Part O: Journal of
Risk and Reliability, 228(3), 265-271.
[4] Eryilmaz, S. (2019), (k1,k2, · · · ,km)-out-of-n system and its reliability,Journal of Computational and Applied Mathematics, 346, 591-598.
[5] Mahmoudi, E. and Meshkat, R.S. (2020), Reliability analysis of weighted-k-out-of-n:G system consisting of two different types of non-identicalcomponents each with its own positive integer-valued weight, Submitted.
[6] Mahmoudi, E., Meshkat, R.S. and Torabi, H. (2020), Copula-based relia-bility and component importance of weighted-k-out-of-n systems consist-ing m types of components with randomly chosen components, Submitted.
[7] Meshkat, R.S. and Mahmoudi, E. (2017), Joint reliability and weighted im-portance measures of a k-out-of-n system with random weights for compo-nents, Journal of Computational and Applied Mathematics, 326, 273-283.
[8] Navarro, J., Samaniego, F.J. and Balakrishnan, N. (2011), Signature-basedrepresentations for the reliability of systems with heterogeneous compo-nents. Journal of Applied Probability, 48, 856-867.
[9] Rahmani, R.A., Izadi, M. and Khaledi, B.E. (2016), Importance of com-ponents in k-out-of-n system with components having random weights,Journal of Computational and Applied Mathematics, 296, 1-9.
[10] Samaniego, F.J. and Shaked, M. (2008), Systems with weighted compo-nents, Statistics and Probability Letters, 78, 815-823.
[11] Wu, J.S. and Chen, R.J. (1994), An algorithm for computing the reliabil-ity of a weighted-k-out-of-n system, IEEE Transactions on Reliability, 43,327-328.
Analysis of Masked Competing Risks Data Using Machine Learning Imputation Methods
Misaii, H.,1 Eftekhari Mahabadi, S.1, Jafari, N.1, and Haghighi, F.1
1 Department of Statistics, Faculty of Mathematics, Statistics and Computer Science, University of Tehran, Tehran, Iran
Abstract: The analysis of data with masked causes of failure is an important area of reliability analysis. Prior research has mostly included the masking probability as part of the likelihood function to handle masked competing risks. In this paper, a new two-step approach is presented: in the first step, the masked causes of failure are imputed via machine learning algorithms; in the second step, the filled-in competing risks data are analyzed using the standard maximum likelihood approach. The superiority of the proposed method over the prior ones is evaluated, in terms of maximum likelihood estimation (MLE) of the lifetime parameters, via several simulation studies.
Keywords: Competing Risks, Masked Data, Machine Learning, StatisticalModels, Imputation.
1 Introduction
In the reliability analysis of series systems, the time to failure and the exact cause of failure are collected in order to perform statistical analyses such as estimation of the reliability function. But sometimes the exact cause of failure is unidentifiable (because of a shortage of proper diagnostic equipment, or time and cost restrictions) and we only know that the exact cause of failure belongs to a Minimum
1Eftekhari Mahabadi, S.: [email protected]
Random Subset (MRS) of all possible causes. Such data are said to be masked. Previous research on masked data can be divided into three categories. The first category adopts the symmetry assumption (the masking probability is independent of the failure time and cause). Many authors have considered this assumption: Hodgson [1], Miyakawa [2] and Sen et al. [5] presented classical analyses, while Reiser et al. [3] and Berger and Sun [4] used Bayesian analysis to handle masked data. Furthermore, a comprehensive parametric model was presented by Basu et al. [6]. Although the symmetry assumption simplifies the model, it increases the bias. In the second category, researchers allow the masking probability to be cause-dependent in order to increase the model's accuracy. For example, Lin and Guess [7] and Guttman et al. [8] presented a proportional probability model with an s-dependent masking probability for two-component series systems; in their model, the masking probability was assumed to be proportional and independent of time. Also, Kuo and Yang [9] and Mukhopadhyay and Basu [10] developed a Bayesian model for two-component systems with exponential and Weibull distributions. Craiu and Duchesne [11] considered an expectation-maximization (EM) algorithm in order to calculate maximum likelihood estimates (MLEs), and a corrective method for the EM algorithm was presented by Mukhopadhyay [12] using the bootstrap. Furthermore, a Bayesian analysis of a cause-dependent model was proposed for two-component systems under the Pareto distribution by Xu and Tang [13]; they also considered a non-parametric model to handle masked data [14]. The third category includes a time- and cause-dependent masking probability as part of the competing risks model. There is not much research in this area; one can refer to Misaii et al. [15], who proposed a cause- and time-dependent masking mechanism based on multinomial logit GLMs.

Prior studies on masked data intertwined the masking probability and the lifetime model, which leads to major complexities in model estimation. In this paper, a two-step procedure is presented to overcome the incompleteness of masked competing risks data. This approach does not require a complex modelling phase and allows the researcher to analyze the data using standard complete-data methods. The first step consists of imputing the exact cause of failure using appropriate classification algorithms: the completely observed part of the data is used to train the algorithm, which is then applied to predict (impute) the cause of failure for the masked part. In the second step, the completed competing risks data are analyzed using simpler standard likelihood methods, which do not require the masking probability to be included. It should be noticed that the classification algorithm of the first step plays an important role in the accuracy of the lifetime parameter estimation in the second step. To obtain more accurate estimation results, we compare well-known machine learning algorithms through simulation studies. The rest of the paper is arranged as follows. The likelihood function for masked competing risks data is given in Section 2. In Section 3, simulation studies are presented in order to compare and justify the proposed two-step method. Finally, concluding remarks are given in Section 4.
2 Masked Data: Likelihood function
Suppose $n$ series systems with $J$ components are put on a test. In this case, $T_i = \min(T_{i1},\ldots,T_{iJ})$ is the failure time of the $i$-th system, where $T_{ij}$ is the failure time of the $i$-th system due to the $j$-th component. Suppose that the exact cause of failure cannot be observed for some systems and we only know that it belongs to the MRS of all components (denoted by $M_i$ for the $i$-th system). Hence, the observed masked data can be written as follows:

$$(t_1, M_1), (t_2, M_2), \ldots, (t_n, M_n), \qquad (1)$$
where, for completely observed systems, $M_i$ is a singleton. The typical approach to handling masked data is to include the masking probability in the likelihood function as follows:

$$L(\theta, \gamma) = \prod_{i=1}^{n} \sum_{k_i \in M_i} p(M_i \mid t_i, k_i, \theta)\, p(t_i, k_i \mid \gamma), \qquad (2)$$
where $p(M_i \mid t_i, k_i)$ is the masking probability and $\gamma$ and $\theta$ are the sets of parameters of the lifetime distribution and the masking probability, respectively. Different masking probability models can be considered, such as the independent model ($p(M_i \mid t_i, k_i) = p(M_i)$), the cause-dependent model ($p(M_i \mid t_i, k_i) = p(M_i \mid k_i)$) and the time- and cause-dependent model ($p(M_i \mid t_i, k_i) = g(t_i, k_i)$). It has been shown that the time- and cause-dependent model has less bias than the others. We consider this model as the ordinary model (ORD) and make a comprehensive comparison with our proposed two-step approach. To impute the masked causes of failure (when the MRS has cardinality greater than one), we apply machine learning algorithms to perform a classification. Therefore, the imputed data (1) can be rewritten as follows:
$$(t_1, k_1), (t_2, k_2), \ldots, (t_n, k_n), \qquad (3)$$

and the completed-data likelihood function (2) reduces to

$$L(\gamma) = \prod_{i=1}^{n} p(t_i, k_i \mid \gamma). \qquad (4)$$
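To illustrate how much simpler the completed-data likelihood (4) is once every cause $k_i$ is known, consider (as an assumption of ours, simpler than the Weibull model used later) exponential cause-specific lifetimes: the log-likelihood separates over causes, and each rate's MLE is the number of failures from that cause divided by the total time on test.

```python
import random

# Completed-data MLE sketch (ours) under exponential cause-specific
# lifetimes: with every cause known, (4) factorises and the MLE of
# lam_j is d_j / total time on test, where d_j counts cause-j failures.

random.seed(1)
lam = (1.0, 2.0)                        # true cause-specific rates
n = 20000
data = []                               # (t_i, k_i) pairs, causes known
for _ in range(n):
    t1 = random.expovariate(lam[0])
    t2 = random.expovariate(lam[1])
    data.append((min(t1, t2), 1 if t1 < t2 else 2))

total_time = sum(t for t, _ in data)
d = {j: sum(1 for _, k in data if k == j) for j in (1, 2)}
lam_hat = {j: d[j] / total_time for j in (1, 2)}
print(lam_hat)                          # close to the true rates 1.0 and 2.0
```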
In the next section we utilize machine learning algorithms such as Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), Naive Bayes (NB) and the logistic regression model (GLM) to predict and impute the exact cause of failure for masked data (for more information about these models, refer to James et al. [16]).
3 Numerical Analysis
Suppose $n = 100$ two-component series systems are put under test such that each component's lifetime follows a Weibull distribution. The simulation study is conducted as follows:
• Generate $t_{ij}$ (the failure time of the $j$-th component of the $i$-th system) from Weibull($\alpha_j$, $\beta_j$) for $i = 1,2,\ldots,n$ and $j = 1,2$.
• Set $t_i = \min(t_{i1}, t_{i2})$ (the observed failure time of the $i$-th system) and $k_i = 1$ if $t_{i1} < t_{i2}$, otherwise $k_i = 2$.
• $100p_1\%$ and $100p_2\%$ of the first- and second-component failures, respectively, are masked at random such that, for $i = 1,2,\ldots,n$,

$$p_{i1} = \frac{\exp(\theta_{01} + \theta t_i)}{1 + \exp(\theta_{01} + \theta t_i)}, \qquad p_{i2} = \frac{\exp(\theta_{02} + \theta t_i)}{1 + \exp(\theta_{02} + \theta t_i)},$$

where the tuning parameters $\theta_{01}$, $\theta_{02}$ and $\theta$ lead to different masking rates for the two components.
• Partition the data into train and test subsets such that the test set onlyincludes records with masked cause of failure.
• Fit the models on the training sample, then predict the cause of failure for the masked data (testing sample).
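The steps above can be sketched end to end in Python; here a plain 1-nearest-neighbour rule on the failure time stands in for the ML classifiers compared in the text. The parameter values follow the simulation design; the classifier choice and the code itself are assumptions of ours.

```python
import math
import random

# End-to-end sketch (ours) of the simulation protocol above, with a
# 1-nearest-neighbour imputation of the masked causes.

random.seed(7)

def rweibull(shape, scale):
    """Draw from Weibull(shape, scale) by inversion."""
    return scale * (-math.log(1.0 - random.random())) ** (1.0 / shape)

a, b = (0.5, 0.6), (2.5, 1.5)           # Weibull shapes and scales
th01, th02, th = -1.0, -1.5, 0.2        # masking-model parameters

obs = []                                # (t_i, k_i, masked?)
for _ in range(100):
    t1, t2 = rweibull(a[0], b[0]), rweibull(a[1], b[1])
    t, k = min(t1, t2), 1 if t1 < t2 else 2
    z = (th01 if k == 1 else th02) + th * t
    p_mask = 1.0 / (1.0 + math.exp(-z))      # logistic masking probability
    obs.append((t, k, random.random() < p_mask))

train = [(t, k) for t, k, m in obs if not m]   # cause observed
test = [(t, k) for t, k, m in obs if m]        # cause masked

def impute(t):
    """Impute the cause of the nearest completely observed failure."""
    return min(train, key=lambda tk: abs(tk[0] - t))[1]

acc = sum(impute(t) == k for t, k in test) / len(test)
print(len(test), acc)                   # number masked, imputation accuracy
```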
The true parameter values for the shape and scale parameters of the Weibull distributions are taken to be $(\alpha_1, \beta_1) = (0.5, 2.5)$ and $(\alpha_2, \beta_2) = (0.6, 1.5)$. Also, the parameters of the masking probabilities are set to $\theta_{01} = -1$, $\theta_{02} = -1.5$ and $\theta = 0.2$, leading to a 24 percent masking rate with $p_1 = 0.27$ and $p_2 = 0.20$. To visualize descriptive properties of one randomly simulated sample, Figures 1 and 2 are given. These figures show that the failure time and cause are dependent and that the estimated kernel density of the failure time differs between the two causes. Hence, under the above simulation scenario, the failure time is predictive of its cause, and the first component is more likely to fail earlier than the second one. In the first step, different classification algorithms, with the failure time as a feature variable and the cause of failure as a binary target variable, are implemented on the training sample and their accuracy on the testing sample is evaluated.

Figure 1: Box plots of failure times given causes.

Figure 2: Kernel densities of the failure time for the two causes.

Actual and imputed causes for the masked part of a randomly simulated sample are given in Table 1 (misclassifications are shown in red). Also, the accuracy and kappa (similar to accuracy, except that it is normalized) metrics of the different imputation algorithms are shown in Figure 3. The figure shows that the SVM, KNN and CART algorithms created more accurate imputations (predictions) for the randomly simulated sample.
Table 1: Actual and predicted causes and the Error Rates (ER) of different imputation algorithms for one randomly simulated sample (rows below "Actual" are the predicted causes).

Actual  2 2 2 1 1 1 1 2 2 2 1 1 1 1 2 1 2 1 1 1 2 1 2 1 1 2 2    ER
RF      1 2 1 2 1 2 2 2 2 1 1 1 1 1 2 2 1 2 1 1 1 1 2 1 2 2 1    0.44
SVM     1 2 1 2 1 1 2 2 2 1 1 1 1 1 2 1 2 2 1 1 2 1 2 1 2 2 1    0.29
KNN     1 1 1 2 1 1 1 2 2 1 2 1 1 1 2 1 2 1 1 1 2 1 2 1 2 1 2    0.29
LDA     1 2 1 2 1 1 1 2 2 1 1 1 1 2 2 1 2 2 1 1 2 1 2 2 2 1 1    0.33
CART    1 1 1 2 1 1 2 2 2 1 2 1 1 1 2 1 2 1 1 1 2 1 2 1 2 2 2    0.29
NB      1 2 1 2 1 1 2 2 2 1 2 1 1 2 2 1 2 2 1 1 2 1 2 2 2 2 2    0.37
GLM     1 2 1 2 1 1 1 2 2 1 1 1 1 2 2 1 2 2 1 1 2 1 2 2 2 1 1    0.37
Misaii, H., Eftekhari Mahabadi, S., Jafari, N., and Haghighi, F. 230
Figure 3: Accuracy and Kappa metrics of different machine learning algorithms for one randomly simulated sample (true values are α1 = 0.5, α2 = 0.6, β1 = 2.5, β2 = 1.5)
To avoid randomness error, the rest of the simulation study is repeated 100 times. After imputing the masked causes of failure, in the second step, the MLEs of the lifetime model's parameters are calculated based on the imputed data. Table 2 gives the average ML estimates and their corresponding biases using the proposed two-step approach with different classification algorithms, alongside the ordinary method of masked competing risks data analysis. The results indicate that some of the machine learning algorithms lead to less bias than the ordinary method (ORD). Also, the average and standard deviation (SD) of the imputation models' accuracy are presented in Table 3, which shows that the LDA and GLM algorithms on average produce more accurate imputations than the other ones.
Table 2: MLE (bias) of parameters for 100 simulated competing risks data sets with true values (α1 = 0.5, α2 = 0.6, β1 = 2.5, β2 = 1.5), with overall masking rate 24.2%, and 27.24% and 20.17% masking rates for the first and second components, respectively.
Par β1 β2 α1 α2
RF 2.610(0.110) 1.577(0.077) 0.506(0.006) 0.599(-0.001)
SVM 2.703(0.203) 1.506(0.006) 0.4996(-0.0004) 0.640(0.040)
KNN 2.728 (0.228) 1.502(0.002) 0.502(0.002) 0.625(0.025)
LDA 2.833(0.333) 1.451(-0.049) 0.501(0.001) 0.640(0.040)
CART 2.715(0.215) 1.5003(0.0003) 0.50001(0.00001) 0.636(0.036)
NB 2.896(0.396) 1.506(0.006) 0.518(0.018) 0.574(-0.026)
GLM 2.833(0.333) 1.449(-0.051) 0.501(0.001) 0.640(0.040)
ORD 2.550(0.050) 1.585(0.085) 0.502(0.002) 0.630(0.030)
Table 3: Average and Standard Deviation (SD) of imputations accuracy over 100 simulated samples
Algorithm Mean SD
RF 0.56 0.091
SVM 0.60 0.111
KNN 0.61 0.105
LDA 0.63 0.098
CART 0.61 0.123
NB 0.57 0.098
GLM 0.63 0.099
4 Conclusion
In this paper, we have proposed a two-step approach to the analysis of masked competing risks data. The first step imputes the incompletely observed causes of failure through different machine learning methods. In the second step, the MLEs of the lifetime model parameters are derived using the standard complete competing risks data likelihood. A simulation study has been performed to justify the proposed approach. The results show that using classification algorithms to impute masked causes of failure leads to more accurate and simpler estimation of the parameters compared with direct methods of handling masked data.
References

[1] Usher, J.S. and Hodgson, T.J. (1988), Maximum likelihood analysis of component reliability using masked system life data, IEEE Trans. Rel., 37(5), 550-555.

[2] Miyakawa, M. (1984), Analysis of incomplete data in a competing risks model, IEEE Trans. Rel., 33(4), 293-296.

[3] Reiser, B., Guttman, I., Lin, D.K.J., Usher, J.S. and Guess, F.M. (1995), Bayesian inference for masked system lifetime data, Appl. Statist., 44, 79-90.

[4] Berger, J.O. and Sun, D. (1993), Bayesian analysis for the poly-Weibull distribution, J. Amer. Statist. Assoc., 88, 1412-1418.

[5] Sen, A., Banerjee, M. and Basu, S. (2001), Analysis of masked failure data under competing risks, in Balakrishnan, N. and Rao, C.R. (Eds.), Handbook of Statistics, 20, North-Holland, Amsterdam, 523-540.

[6] Basu, S., Sen, A. and Banerjee, M. (2003), Bayesian analysis of competing risks with partially masked cause of failure, Appl. Statist., 52, 77-93.

[7] Lin, D.K.J. and Guess, F.M. (1994), System life data analysis with dependent partial knowledge on the exact cause of system failure, Microelectron. Rel., 34, 535-544.

[8] Guttman, I., Lin, D.K.J., Reiser, B. and Usher, J.S. (1995), Dependent masking and system life data analysis: Bayesian inference for two-component systems, Lifetime Data Anal., 1, 87-100.

[9] Kuo, L. and Yang, T.E. (2000), Bayesian reliability modeling for masked system lifetime data, Statist. Probab. Lett., 47, 229-241.

[10] Mukhopadhyay, C. and Basu, A.P. (2000), Masking without the symmetry assumption: A Bayesian approach, in Proc. Abstract Book 2nd Int. Conf. Math. Methods Rel., Bordeaux, France, Universite Victor Segalen, 2, 784-787.

[11] Craiu, R.V. and Duchesne, T. (2004), Using EM and DA for the competing risk model, in Gelman, A. and Meng, X.-L. (Eds.), Applied Bayesian Modeling and Causal Inference From an Incomplete-Data Perspective, Wiley, New York, 234-245.

[12] Mukhopadhyay, C. (2006), Maximum likelihood analysis of masked series system lifetime data, J. Statist. Plann. Inference, 136, 803-838.

[13] Xu, A. and Tang, Y. (2009), Bayesian analysis of Pareto reliability with dependent masked data, IEEE Trans. Rel., 58(4), 583-588.

[14] Xu, A. and Tang, Y. (2011), Non-parametric Bayesian analysis of competing risks problem with masked data, Commun. Statist. Theory Methods, 40, 2326-2336.

[15] Misaii, H., Haghighi, F. and Eftekhari Mahabadi, S. (2019), Bayesian analysis of masked data with non-ignorable missing mechanism, 5th Seminar on Reliability Theory and its Applications.

[16] James, G., Witten, D., Hastie, T. and Tibshirani, R. (2013), An Introduction to Statistical Learning: with Applications in R, Springer-Verlag, New York.
A Two-Parameter Distribution by Mixing Weibull and Lindley Models
Saadati Nik, A.1, Asgharzadeh, A.1, and Bakouch, H.S.2
1 Department of Statistics, University of Mazandaran, Babolsar, Iran
2 Mathematics Department, Faculty of Science, Tanta University, Tanta, Egypt
Abstract: In this paper, we introduce a new lifetime distribution by mixing the Weibull and Lindley distributions. We assume that the scale parameter of the Weibull distribution is a random variable having the Lindley distribution. The shapes of the density and hazard rate functions are discussed. Further, some properties of the distribution are obtained, involving quantiles and moments. The distribution parameters are estimated by the maximum likelihood method, whose performance is evaluated by a simulation study. The applicability of the distribution among other competitive distributions is illustrated by fitting a practical data set and using some goodness-of-fit statistics.
Keywords: Statistical Distributions, Hazard Rate Function, Estimation, Simulation.
1 Introduction
In several practical situations, objects in a certain population differ substantially from each other; hence, the heterogeneity of such objects should be considered for accurate data analysis of this population. Therefore, a mixture distribution is a recommended model for analyzing the heterogeneity.
1Saadati Nik, A.: [email protected]
Another issue that must be taken into account is the varied nature of practical data, which requires introducing new distributions with various hazard rate (hr) shapes to model and analyze such data. The two aims above are addressed by introducing a new mixture distribution, named the Weibull-Lindley distribution, obtained by mixing the Lindley and Weibull distributions in a different manner than that used in Asgharzadeh et al. [2]. The new distribution has decreasing and unimodal hazard rate shapes, and its construction is explained as follows. Let X | λ follow the Weibull distribution with probability density function (pdf)

f(x | λ) = αλ x^(α−1) e^(−λx^α), x > 0; α, λ > 0,

and let λ | β follow the Lindley distribution (Lindley [8]) with pdf

f(λ | β) = (β² / (1+β)) (1+λ) e^(−βλ), λ > 0; β > 0.
Hence, the marginal distribution of X is called the Weibull-Lindley (WeL) distribution. The pdf of X is obtained as

f(x) = (αβ² x^(α−1) / (1+β)) ∫₀^∞ λ(1+λ) e^(−(β+x^α)λ) dλ,

and after some algebra, we get the WeL pdf as

f(x) = (αβ² x^(α−1) / (1+β)) · (2+β+x^α) / (β+x^α)³, x > 0; α, β > 0. (1)
Moreover, the cumulative distribution function (cdf) of the WeL distribution is

F(x) = 1 − (β² / (1+β)) · (1+β+x^α) / (β+x^α)², (2)

hence the corresponding reliability (survival) function is given by

R(x) = (β² / (1+β)) · (1+β+x^α) / (β+x^α)². (3)
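Equations (1) and (2) can be checked numerically; the following sketch (with illustratively chosen α and β) integrates the density by the midpoint rule and compares the result with the cdf.

```python
def wel_pdf(x, a, b):
    # WeL density, eq. (1)
    return a * b**2 * x ** (a - 1) * (2 + b + x**a) / ((1 + b) * (b + x**a) ** 3)

def wel_cdf(x, a, b):
    # WeL cdf, eq. (2)
    return 1.0 - b**2 * (1 + b + x**a) / ((1 + b) * (b + x**a) ** 2)

# The midpoint-rule integral of the pdf over [0, u] should reproduce F(u).
a, b = 1.5, 1.0
steps, upper = 200000, 3.0
h = upper / steps
area = sum(wel_pdf((i + 0.5) * h, a, b) * h for i in range(steps))
print(abs(area - wel_cdf(upper, a, b)))  # should be tiny
```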
In reliability analysis, the usefulness of the model (1) comes from noting that X can be the lifetime of a component and λ is the scale parameter of its distribution. If the population has some variability in its scale parameter, then this
variability can be explained by the distribution of λ. Moreover, comparing the WeL distribution with the Weibull and Lindley distributions shows the flexibility of the WeL in terms of its hazard rate shapes, as will be seen later. We shall also see that it has decreasing and unimodal (upside-down bathtub) hazard rates. Decreasing and unimodal hazard rates have many applications in reliability and survival analysis. It may be difficult to know why the lifetime of an object has a decreasing hazard rate; however, it would seem to correspond to some physical mechanism of improvement with time. In reliability, this may happen in situations where the product manufacturer continues to improve an in-service product by implementing corrective actions. On the other hand, as mentioned by Lai and Xie [7], when the main causes of product failure are fatigue and corrosion, the failure rates of those products will exhibit unimodal shapes. Further, in some medical situations, such as breast cancer and infection with some new viruses, the hazard rate has a unimodal shape; see Demicheli et al. [4]. Another example in epidemiology is that patients with tuberculosis have a risk which initially increases and then decreases after treatment. The Weibull Lindley distribution proposed by Asgharzadeh et al. [2] does not allow a unimodal hazard rate shape, so that distribution is not suitable for modeling data with unimodal hazard rates.
2 Shape characteristics
In this section, we discuss the shape characteristics of the pdf and hrf of the WeL distribution.
2.1 Shape of pdf
We can see from (1) that

lim_{x→0} f(x) = ∞ if α < 1; (2+β)/(β(1+β)) if α = 1; 0 if α > 1,

and lim_{x→∞} f(x) = 0. Figure 1 shows the pdf of the WeL distribution for some selected choices of α and β. From it, we see that the pdf of the WeL distribution is decreasing for α ≤ 1 and unimodal for α > 1.
Figure 1: Plots of the WeL density for (α, β) = (0.5, 0.5), (1.0, 0.5), (1.0, 1.0), (1.5, 0.5), (1.5, 1.0), (1.5, 3.0).
Features of the pdf of the WeL distribution are discussed theoretically in the next theorem.
Theorem 2.1. The pdf of the WeL distribution given by (1) is decreasing for α ≤ 1 and unimodal for α > 1.
Proof. The logarithm of (1) is

ln f(x) = constant + (α−1) ln x + ln(2+β+x^α) − 3 ln(β+x^α).

We have

(d/dx) ln f(x) = (α−1)/x + αx^(α−1)/(2+β+x^α) − 3αx^(α−1)/(β+x^α)
              = (α−1)/x − 2αx^(α−1)(3+β+x^α) / ((β+x^α)(2+β+x^α)).
If α ≤ 1, we easily see that (d/dx) ln f(x) < 0; hence f(x) is decreasing for all x. For α > 1, f(x) has a global maximum at some point x0, where x0 is the root of the equation (d/dx) ln f(x) = 0.
2.2 Hazard rate shape
The hazard rate function (hrf) corresponding to (1) and (3) is given by

h(x) = αx^(α−1)(2+β+x^α) / ((β+x^α)(1+β+x^α)). (4)

The behavior of h(x) as x→0 and x→∞, respectively, is given by

lim_{x→0} h(x) = ∞ if α < 1; (2+β)/(β(1+β)) if α = 1; 0 if α > 1,

and lim_{x→∞} h(x) = 0.
Figure 2 shows the hrf h(x) of the WeL distribution for some choices of α and β.

Figure 2: Plots of the WeL hrf for (α, β) = (0.5, 0.5), (1.0, 0.5), (1.0, 1.0), (1.5, 0.5), (1.5, 1.0), (1.5, 3.0).

The next theorem investigates the shape of the hazard rate function of the WeL distribution.
Theorem 2.2. The hazard rate function of the WeL distribution in (4) is decreasing for α ≤ 1 and unimodal for α > 1.
Proof. Set η(x) = ln h(x) = (α−1) ln x + ln(2+β+x^α) − ln(β+x^α) − ln(1+β+x^α). The first derivative of η is

η′(x) = (α−1)/x + αx^(α−1) [ 1/(2+β+x^α) − 1/(β+x^α) − 1/(1+β+x^α) ].
If α ≤ 1, it follows that η′(x) < 0 for all x > 0, which implies that h(x) is decreasing. If α > 1, the equation η′(x) = 0 has a unique positive solution x = x0, such that η′(x) > 0 for x < x0 and η′(x) < 0 for x > x0. Hence, h(x) is unimodal with mode at x = x0.
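Theorem 2.2 can be illustrated numerically; in the sketch below the grid and the parameter values α = 1.5, β = 1 are arbitrary choices, and the hrf is checked to attain its maximum at an interior point of the grid.

```python
def wel_hrf(x, a, b):
    # WeL hazard rate, eq. (4)
    return a * x ** (a - 1) * (2 + b + x**a) / ((b + x**a) * (1 + b + x**a))

# For alpha > 1 the hrf should rise to an interior mode and then decay.
a, b = 1.5, 1.0
xs = [0.01 * i for i in range(1, 1000)]
hs = [wel_hrf(x, a, b) for x in xs]
mode = max(range(len(hs)), key=hs.__getitem__)
print(xs[mode])  # location of the numerical mode on this grid
```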
3 Some properties of the WeL distribution
In this section, we obtain some properties of the WeL distribution, involving quantiles and moments.
3.1 Quantiles and moments
For the WeL distribution, the pth quantile xp is the solution of F(xp) = p; hence

xp = ( (1+β) [ (1 + xp^α/β)² (1−p) − 1 ] )^(1/α),

an implicit equation which is the basis for generating WeL random variates.
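Because the quantile equation is implicit in xp, a practical way to generate WeL variates is to invert F numerically. The bisection routine below is one simple sketch of such an inversion; the bracketing strategy and iteration count are implementation choices, not from the paper.

```python
def wel_cdf(x, a, b):
    # WeL cdf, eq. (2)
    return 1.0 - b**2 * (1 + b + x**a) / ((1 + b) * (b + x**a) ** 2)

def wel_quantile(p, a, b):
    # Invert F(x) = p by bisection, since x_p has no closed form.
    lo, hi = 0.0, 1.0
    while wel_cdf(hi, a, b) < p:  # grow the upper bracket until it covers p
        hi *= 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if wel_cdf(mid, a, b) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Generating a WeL variate amounts to plugging a uniform draw into wel_quantile.
q = wel_quantile(0.9, 1.5, 1.0)
print(q, wel_cdf(q, 1.5, 1.0))  # F(x_p) recovers p = 0.9
```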
Now, we obtain the moments of the WeL distribution. The rth moment of the WeL distribution is

E(X^r) = E(E(X^r | λ)) = Γ(1 + r/α) E(λ^(−r/α))
       = Γ(1 + r/α) · (β² / (1+β)) · [ Γ(1 − r/α)/β^(1−r/α) + Γ(2 − r/α)/β^(2−r/α) ], r < α, r = 1, 2, · · · .
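The moment formula can be verified against direct numerical integration of (1). In the sketch below, α = 3 and β = 2 are arbitrary values chosen so that the first moment exists (r = 1 < α); the upper limit and step count are tuned only to make the truncation error small.

```python
import math

def wel_pdf(x, a, b):
    # WeL density, eq. (1)
    return a * b**2 * x ** (a - 1) * (2 + b + x**a) / ((1 + b) * (b + x**a) ** 3)

def wel_moment(r, a, b):
    # closed-form r-th moment (valid for r < a)
    s = r / a
    return (math.gamma(1 + s) * b**2 / (1 + b)
            * (math.gamma(1 - s) / b ** (1 - s) + math.gamma(2 - s) / b ** (2 - s)))

a, b = 3.0, 2.0
steps, upper = 200000, 200.0
h = upper / steps
numeric = sum(((i + 0.5) * h) * wel_pdf((i + 0.5) * h, a, b) * h for i in range(steps))
print(abs(numeric - wel_moment(1, a, b)))  # should be small
```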
3.2 Stochastic ordering
The comparative behavior of positive continuous random variables can be judged by stochastic ordering; therefore, let us recall the following concepts.
A random variable X1 is said to be smaller than a random variable X2 in the
(i) stochastic order (X1 ≺st X2) if FX1(x)≥ FX2(x) for all x,
(ii) hazard rate order (X1 ≺hr X2) if hX1(x)≥ hX2(x) for all x,
(iii) likelihood ratio order (X1 ≺lr X2) if fX1(x)/fX2(x) decreases in x.
The likelihood ratio order implies the hazard rate order, which in turn implies the stochastic order; see Shaked and Shanthikumar [10] for additional details. The following theorem presents the stochastic ordering for the WeL distribution. The proof is easy and omitted.
Theorem 3.1. Let Xi ∼ WeL(αi, βi), i = 1, 2, be two random variables. If α1 = α2 = α and β1 ≤ β2, or if β1 = β2 = β ≥ 1 and α1 ≤ α2, then X1 ≺lr X2 ⇒ X1 ≺hr X2 ⇒ X1 ≺st X2.
4 Maximum Likelihood Estimation
Let x1, x2, · · · , xn be the observed values of a random sample taken from the WeL(α, β) distribution; then the log-likelihood function is

ln L(α, β) = n ln α + 2n ln β − n ln(1+β) + Σ_{i=1}^{n} ln(2+β+xi^α) + (α−1) Σ_{i=1}^{n} ln xi − 3 Σ_{i=1}^{n} ln(β+xi^α). (5)
The maximum likelihood estimates (MLEs) of α and β, say α̂ and β̂, are the solutions of the equations

n/α + Σ_{i=1}^{n} (xi^α ln xi)/(2+β+xi^α) + Σ_{i=1}^{n} ln xi − 3 Σ_{i=1}^{n} (xi^α ln xi)/(β+xi^α) = 0,

and

2n/β − n/(1+β) + Σ_{i=1}^{n} 1/(2+β+xi^α) − 3 Σ_{i=1}^{n} 1/(β+xi^α) = 0.
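To guard against transcription errors, the first likelihood equation can be checked by comparing the analytic derivative of (5) with a central finite difference; the small data vector and evaluation point below are arbitrary assumptions for the sketch.

```python
import math

def loglik(a, b, xs):
    # log-likelihood (5) for a WeL(alpha, beta) sample xs
    n = len(xs)
    return (n * math.log(a) + 2 * n * math.log(b) - n * math.log(1 + b)
            + sum(math.log(2 + b + x**a) for x in xs)
            + (a - 1) * sum(math.log(x) for x in xs)
            - 3 * sum(math.log(b + x**a) for x in xs))

def score_alpha(a, b, xs):
    # analytic derivative of (5) with respect to alpha
    n = len(xs)
    return (n / a
            + sum(x**a * math.log(x) / (2 + b + x**a) for x in xs)
            + sum(math.log(x) for x in xs)
            - 3 * sum(x**a * math.log(x) / (b + x**a) for x in xs))

xs = [0.3, 0.8, 1.2, 2.5, 0.6]  # arbitrary positive data for the check
a, b, eps = 1.4, 0.9, 1e-5
fd = (loglik(a + eps, b, xs) - loglik(a - eps, b, xs)) / (2 * eps)
print(abs(fd - score_alpha(a, b, xs)))  # should be tiny
```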
5 Monte Carlo simulation study
In this section, we assess the performance of the MLEs of the parameters of the WeL(α, β) distribution with respect to the sample size n. The assessment is based on a Monte Carlo simulation study. Let α̂ and β̂ be the MLEs of α and β, respectively. We compute the mean square error (MSE) and bias of the MLEs of α and β based on N = 2000 independent replications. The results are summarized in Table 1 for different values of n, α and β. The results verify that the MSEs of the MLEs decrease as the sample size n increases; hence the MLEs of α and β are consistent estimators.
Table 1: MSEs and average biases (values in parentheses) of the simulated estimates.

             α = 0.5          β = 0.5          α = 1.0          β = 1.5
n = 30   0.038 (0.162)    1.038 (0.939)    0.025 (0.004)    0.714 (-0.830)
n = 50   0.028 (0.146)    0.880 (0.896)    0.013 (-0.013)   0.709 (-0.833)
n = 100  0.023 (0.141)    0.822 (0.885)    0.006 (-0.030)   0.704 (-0.835)
n = 200  0.020 (0.137)    0.800 (0.884)    0.004 (-0.032)   0.700 (-0.835)

             α = 0.5          β = 1.0          α = 1.5          β = 0.5
n = 30   0.029 (0.138)    0.055 (-0.091)   0.072 (-0.148)   6.352 (2.290)
n = 50   0.022 (0.128)    0.036 (-0.097)   0.058 (-0.177)   5.195 (2.166)
n = 100  0.018 (0.123)    0.023 (-0.097)   0.050 (-0.193)   4.612 (2.098)
n = 200  0.015 (0.117)    0.016 (-0.103)   0.048 (-0.204)   4.348 (2.060)

             α = 1.0          β = 0.5          α = 1.5          β = 1.0
n = 30   0.042 (0.097)    2.982 (1.591)    0.101 (-0.247)   0.081 (0.075)
n = 50   0.023 (0.078)    2.716 (1.568)    0.098 (-0.277)   0.046 (0.075)
n = 100  0.013 (0.064)    2.414 (1.521)    0.093 (-0.288)   0.023 (0.064)
n = 200  0.007 (0.055)    2.243 (1.481)    0.093 (-0.297)   0.013 (0.062)
6 Practical data application
In this section, we apply the WeL model to a practical data set to illustrate its flexibility among a set of competitive models.
The data set is the Cancer Patients data: an uncensored data set corresponding to the remission times (in months) of a random sample of 128 bladder cancer patients reported in Lee and Wang [6]. The data are given in Table 2.
Table 2: The data set
0.08 2.09 3.48 4.87 6.94 8.66 13.11 23.63 0.20 2.23 3.52 4.98 6.97 9.02 13.29 0.40 2.26 3.57 5.06
7.09 9.22 13.80 25.74 0.50 2.46 3.64 5.09 7.26 9.47 14.24 25.82 0.51 2.54 3.70 5.17 7.28 9.74 14.76
26.31 0.81 2.62 3.82 5.32 7.32 10.06 14.77 32.15 2.64 3.88 5.32 7.39 10.34 14.83 34.26 0.90 2.69 4.18
5.34 7.59 10.66 15.96 36.66 1.05 2.69 4.23 5.41 7.62 10.75 16.62 43.01 1.19 2.75 4.26 5.41 7.63 17.12
46.12 1.26 2.83 4.33 5.49 7.66 11.25 17.14 79.05 1.35 2.87 5.62 7.87 11.64 17.36 1.40 3.02 4.34 5.71
7.93 11.79 18.10 1.46 4.40 5.85 8.26 11.98 19.13 1.76 3.25 4.50 6.25 8.37 12.02 2.02 3.31 4.51 6.54
8.53 12.03 20.28 2.02 3.36 6.76 12.07 21.73 2.07 3.36 6.93 8.65 12.63 22.69
We compare the WeL model with a set of competitive models, namely the Lindley distribution (Lindley [8]), the Weibull Lindley (WL) distribution (Asgharzadeh et al. [2]), a new weighted Lindley (NWL) distribution (Asgharzadeh et al. [1]), the power Lindley (PL) distribution (Ghitany et al. [5]), the extended Lindley (EL) distribution (Bakouch et al. [3]), and the Weibull and Gamma distributions.
Table 3: Parameter estimates, standard errors, log-likelihood values and goodness-of-fit measures for the considered models in this example.
Model Parameter Estimation(s.e) − log(L) K-S p-value AIC BIC
WeL(α,β ) α = 1.7244(0.1281) 411.4565 0.039 0.9874 826.9131 832.6172
β = 23.4951(6.3406)
Lindley(β ) β = 0.1960(0.0123) 419.5299 0.116 0.0623 841.0598 843.9118
WL(α,λ ,β ) α = 1.0479×10(0.0675) 414.0869 0.070 0.5555 834.1738 842.7298
λ = 9.2678×10−6 (0.0161)
β = 1.0457×10−1 (0.0093)
NWL(α,λ ) α = 240.1998(588.0903) 419.4645 0.116 0.0615 842.9289 848.633
λ = 0.1961(0.0123)
PL(α,β ) α = 0.8303(0.0471) 413.3538 0.068 0.5889 830.7077 836.4117
β = 0.2942(0.0369)
EL(α,λ ,β ) α =−2.0349(3.7241) 413.5721 0.088 0.2736 833.1442 841.7003
λ = 0.0444(0.0521)
β = 1.2240(0.2494)
Weibull(α,β ) α = 1.0477(0.0675) 414.0869 0.069 0.5576 832.1738 837.8778
β = 9.5600(0.8528)
Gamma(α,θ) α = 1.1725(0.1308) 413.3678 0.073 0.4985 830.7356 836.4396
θ = 0.1252(0.0173)
For each model, the MLEs and − log L values are computed. In addition, the goodness-of-fit measures, namely the Kolmogorov-Smirnov (K-S) statistic with its p-value, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), are evaluated. The required computations are carried out using the R software [9]. The best model corresponds to the lowest values of − log L, K-S, AIC and BIC, and the largest p-value associated with the K-S test. Table 3 lists the MLEs of the parameters, their corresponding standard errors (in parentheses), and the values of the goodness-of-fit measures for each model.
The values of the mentioned measures indicate that the WeL distribution is a strong competitor to the other considered distributions; moreover, it has the best fit among them. To assess whether the WeL distribution is appropriate, Figure 3 displays the histogram of the data set and the fitted density functions, together with plots of the empirical and estimated cumulative distribution functions of the fitted distributions. From these graphical measures, we can conclude that the WeL distribution is a very suitable model for the data set of this example.
Figure 3: Empirical and theoretical CDFs (left panel) and histogram with theoretical densities (right panel) for the WeL, Lindley, WL, NWL, PL, EL, Weibull and Gamma fits to the Cancer Patients data set.
References

[1] Asgharzadeh, A., Bakouch, H.S., Nadarajah, S. and Sharafi, F. (2016), A new weighted Lindley distribution with application, Brazilian Journal of Probability and Statistics, 30(1), 1-27.

[2] Asgharzadeh, A., Nadarajah, S. and Sharafi, F. (2018), Weibull Lindley distribution, REVSTAT, 16(1), 87-113.

[3] Bakouch, H.S., Al-Zahrani, B.M., Al-Shomrani, A.A., Marchi, V.A.A. and Louzada, F. (2012), An extended Lindley distribution, Journal of the Korean Statistical Society, 41(1), 75-85.

[4] Demicheli, R., Bonadonna, G., Hrushesky, W.J., Retsky, M.W. and Valagussa, P. (2004), Menopausal status dependence of the timing of breast cancer recurrence after surgical removal of the primary tumour, Breast Cancer Res, 6(6), 689-696.

[5] Ghitany, M.E., Al-Mutairi, D.K., Balakrishnan, N. and Al-Enezi, L.J. (2013), Power Lindley distribution and associated inference, Computational Statistics and Data Analysis, 64, 20-33.

[6] Lee, E.T. and Wang, J.W. (2003), Statistical Methods for Survival Data Analysis, Wiley, New York.

[7] Lai, C.D. and Xie, M. (2006), Stochastic Ageing and Dependence for Reliability, Springer, New York.

[8] Lindley, D.V. (1958), Fiducial distributions and Bayes' theorem, Journal of the Royal Statistical Society, Series B, 20, 102-107.

[9] R Core Team (2018), R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna.

[10] Shaked, M. and Shanthikumar, J.G. (1994), Stochastic Orders and Their Applications, Academic Press, New York.
Inference on Multicomponent Stress-Strength Parameter in Lomax Distribution
Sadeqi, N.1, and Kohansal, A.1
1 Department of Statistics, Imam Khomeini International University, Qazvin, Iran
Abstract: Different estimations of the multicomponent stress-strength parameter for the Lomax distribution are considered, in view of frequentist and Bayesian inference. We derive the maximum likelihood estimation (MLE) and asymptotic confidence interval of the multicomponent stress-strength parameter. Also, due to the lack of an explicit form, the Bayes estimation of this parameter is obtained using two approximation methods: Lindley's approximation and the MCMC method. We compare the different estimation methods using a Monte Carlo simulation.
Keywords: Multicomponent Stress-Strength, Lindley's Approximation, MCMC Method, Lomax Distribution.
1 Introduction
Statistical inference on the stress-strength parameter R = P(Y < X) is a general problem of interest in reliability theory. The random variables Y and X correspond to stress and strength, respectively. If at any time the applied stress is greater than the strength, the system fails. A multicomponent system is a system having more than one component; such a system is composed of a common stress and k independent and identical strength components. When s (1 ≤ s ≤ k) or more of the components simultaneously survive, the system
1Sadeqi, N.: [email protected]
operates. [1] developed the multicomponent reliability as

Rs,k = P[at least s of (X1, . . . , Xk) exceed Y]
     = Σ_{p=s}^{k} (k choose p) ∫_{−∞}^{∞} [1 − FX(y)]^p [FX(y)]^(k−p) dFY(y),
where the common random stress Y, with cdf FY(·), is subjected to (X1, . . . , Xk), which are independent and identically distributed random variables with cdf FX(·). Some authors have considered this problem; see for example [3, 4].
The Lomax (Lo) distribution with parameters α and λ has probability density function f(x) = αλ(1+λx)^(−(α+1)), x > 0; α, λ > 0. In this paper, we obtain different point and interval estimations of Rs,k when the stress and strengths are independent random variables from Lomax distributions.
2 MLE of Rs,k
Suppose that X ∼ Lo(α, λ) and Y ∼ Lo(β, λ) are independent random variables with unknown parameters α and β and common parameter λ. The multicomponent stress-strength reliability is given by

Rs,k = Σ_{p=s}^{k} Σ_{q=0}^{k−p} (k choose p) (k−p choose q) (−1)^q β / (β + α(p+q)).
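The closed form above can be sanity-checked by Monte Carlo. The sketch below assumes λ = 1; note that for α = β the stress and the strengths are iid, so for s = 2, k = 3 the value must be exactly 0.5 (at least 2 of 4 iid continuous variables exceed a fixed one of them with probability 1/2).

```python
import math
import random

def r_sk(s, k, alpha, beta):
    # Closed-form multicomponent reliability R_{s,k}.
    return sum(math.comb(k, p) * math.comb(k - p, q) * (-1) ** q
               * beta / (beta + alpha * (p + q))
               for p in range(s, k + 1) for q in range(k - p + 1))

def rlomax(a, lam):
    # Inverse-CDF sampling from Lo(a, lam): F(x) = 1 - (1 + lam*x)^(-a).
    return ((1.0 - random.random()) ** (-1.0 / a) - 1.0) / lam

random.seed(7)
s, k, alpha, beta, lam = 2, 3, 1.0, 1.0, 1.0
trials = 20000
hits = 0
for _ in range(trials):
    y = rlomax(beta, lam)
    hits += sum(rlomax(alpha, lam) > y for _ in range(k)) >= s
print(r_sk(s, k, alpha, beta), hits / trials)  # both close to 0.5 here
```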
In this case, we need to compute the MLE of the vector of parameters θ = (α, β, λ) in order to compute the MLE of Rs,k. The likelihood function can be written as

L(α, β, λ) = Π_{i=1}^{n} ( Π_{j=1}^{k} f(x_ij) ) g(y_i)
           = Π_{i=1}^{n} ( Π_{j=1}^{k} αλ(1+λx_ij)^(−(α+1)) ) βλ(1+λy_i)^(−(β+1))
           = α^(nk) β^n λ^(n(k+1)) ( Π_{i=1}^{n} Π_{j=1}^{k} (1+λx_ij)^(−(α+1)) ) ( Π_{i=1}^{n} (1+λy_i)^(−(β+1)) ),
where {Xi1, . . . , Xik}, i = 1, . . . , n, is a sample from Lo(α, λ) and {Y1, . . . , Yn} is a sample from Lo(β, λ). The log-likelihood function is therefore

ℓ(α, β, λ) = nk log α + n log β + n(k+1) log λ − (α+1) Σ_{i=1}^{n} Σ_{j=1}^{k} log(1+λx_ij) − (β+1) Σ_{i=1}^{n} log(1+λy_i).
The MLEs of α and β, denoted by α̂ and β̂ respectively, can be obtained as solutions of the following equations:

∂ℓ/∂α = nk/α − Σ_{i=1}^{n} Σ_{j=1}^{k} log(1+λx_ij) = 0, (1)
∂ℓ/∂β = n/β − Σ_{i=1}^{n} log(1+λy_i) = 0. (2)

From (1) and (2), we derive

α̂(λ) = nk / ( Σ_{i=1}^{n} Σ_{j=1}^{k} log(1+λx_ij) ),
β̂(λ) = n / ( Σ_{i=1}^{n} log(1+λy_i) ).
The MLE of λ, say λ̂, is the solution of the nonlinear equation

n(k+1)/λ − (α̂+1) Σ_{i=1}^{n} Σ_{j=1}^{k} x_ij/(1+λx_ij) − (β̂+1) Σ_{i=1}^{n} y_i/(1+λy_i) = 0. (3)
Equation (3) is solved numerically using an iterative process such as the Newton-Raphson method to obtain λ̂. Now, using the invariance property, we can get the MLE of Rs,k as follows:
R̂s,k^MLE = Σ_{p=s}^{k} Σ_{q=0}^{k−p} (k choose p) (k−p choose q) (−1)^q β̂ / (β̂ + α̂(p+q)). (4)
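Substituting α̂(λ) and β̂(λ) into (3) leaves a single profile equation in λ, which can also be solved by simple bisection instead of Newton-Raphson. In the sketch below, the simulated data set, the bracketing strategy and the iteration count are assumptions made for illustration.

```python
import math
import random

random.seed(3)

def rlomax(a, lam):
    # Inverse-CDF sampling from Lo(a, lam).
    return ((1.0 - random.random()) ** (-1.0 / a) - 1.0) / lam

n, k = 50, 3
x = [[rlomax(1.0, 1.0) for _ in range(k)] for _ in range(n)]  # strengths
y = [rlomax(1.0, 1.0) for _ in range(n)]                      # common stresses

def alpha_hat(lam):
    return n * k / sum(math.log(1 + lam * xij) for row in x for xij in row)

def beta_hat(lam):
    return n / sum(math.log(1 + lam * yi) for yi in y)

def profile_score(lam):
    # Left-hand side of equation (3) with alpha and beta profiled out.
    return (n * (k + 1) / lam
            - (alpha_hat(lam) + 1) * sum(xij / (1 + lam * xij) for row in x for xij in row)
            - (beta_hat(lam) + 1) * sum(yi / (1 + lam * yi) for yi in y))

lo, hi = 1e-6, 1.0
while profile_score(hi) > 0:  # score is positive for small lambda, negative for large
    hi *= 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if profile_score(mid) > 0:
        lo = mid
    else:
        hi = mid
lam_mle = 0.5 * (lo + hi)
print(lam_mle, alpha_hat(lam_mle), beta_hat(lam_mle))
```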
3 Asymptotic confidence interval
In this section, by obtaining the asymptotic distribution of R̂s,k^MLE, its asymptotic confidence interval is derived. Using the observed Fisher information matrix I = [I_ij] = [−∂²ℓ/∂θ_i∂θ_j], where i, j = 1, 2, 3 and θ = (α, β, λ), the asymptotic distribution of (α̂, β̂, λ̂) can be obtained. The elements of the observed Fisher information matrix are the negative second partial derivatives of the log-likelihood function, which can be evaluated as follows:
I11 = nk/α², I12 = 0, I22 = n/β², I13 = Σ_{i=1}^{n} Σ_{j=1}^{k} x_ij/(1+λx_ij), I23 = Σ_{i=1}^{n} y_i/(1+λy_i),

I33 = n(k+1)/λ² − (α+1) Σ_{i=1}^{n} Σ_{j=1}^{k} ( x_ij/(1+λx_ij) )² − (β+1) Σ_{i=1}^{n} ( y_i/(1+λy_i) )².
Theorem 3.1. Suppose that α̂, β̂, λ̂ are the MLEs of α, β, λ, respectively. Then

[α̂−α, β̂−β, λ̂−λ]^T →D N3(0, I^(−1)(α,β,λ)),

where I(α,β,λ) and I^(−1)(α,β,λ) are the symmetric matrices

I(α,β,λ) = [ I11  0    I13 ;  0   I22  I23 ;  I13  I23  I33 ],

I^(−1)(α,β,λ) = (1/|I(α,β,λ)|) [ b11  b12  b13 ;  b12  b22  b23 ;  b13  b23  b33 ],

in which |I(α,β,λ)| = I11 I22 I33 − I11 I23² − I13² I22, and

b11 = I22 I33 − I23², b12 = I13 I23, b13 = −I13 I22,
b22 = I11 I33 − I13², b23 = −I11 I23, b33 = I11 I22.
Proof. The theorem is proved using the asymptotic normality of the MLEs.
Theorem 3.2. Suppose that R̂s,k^MLE is the MLE of Rs,k. Then

(R̂s,k^MLE − Rs,k) →D N(0, B),

where

B = (1/|I(α,β,λ)|) [ (∂Rs,k/∂α)² b11 + (∂Rs,k/∂β)² b22 + 2 (∂Rs,k/∂α)(∂Rs,k/∂β) b12 ], (5)

∂Rs,k/∂α = Σ_{p=s}^{k} Σ_{q=0}^{k−p} (k choose p)(k−p choose q) (−1)^(q+1) β(p+q) / (α(p+q)+β)², (6)

∂Rs,k/∂β = Σ_{p=s}^{k} Σ_{q=0}^{k−p} (k choose p)(k−p choose q) (−1)^q α(p+q) / (α(p+q)+β)². (7)

Proof. Using Theorem 3.1 and applying the delta method, we obtain the asymptotic distribution of R̂s,k^MLE as

(R̂s,k^MLE − Rs,k) →D N(0, B),

where B = b^T I^(−1)(α,β,λ) b and b = [∂Rs,k/∂α, ∂Rs,k/∂β, ∂Rs,k/∂λ]^T = [∂Rs,k/∂α, ∂Rs,k/∂β, 0]^T. Now, using equation (5), the theorem is proved.

By Theorem 3.2, we construct a 100(1−γ)% asymptotic confidence interval for Rs,k as

( R̂s,k^MLE − z_(1−γ/2) √B, R̂s,k^MLE + z_(1−γ/2) √B ), (8)

where zγ is the 100γ-th percentile of N(0,1).
4 Bayes estimation
In this section, we provide the Bayesian inference on Rs,k, where α, β and λ are gamma random variables. We consider the following priors for α, β and λ, respectively:
π1(α) ∝ αa1−1e−b1α , α > 0, a1,b1 > 0,
π2(β ) ∝ βa2−1e−b2β , β > 0, a2,b2 > 0,
π3(λ ) ∝ λa3−1e−b3λ , λ > 0, a3,b3 > 0.
The joint posterior density based on the observed sample is defined as

π(α,β,λ | data) = L(data | α,β,λ) π1(α) π2(β) π3(λ) / ∫₀^∞ ∫₀^∞ ∫₀^∞ L(data | α,β,λ) π1(α) π2(β) π3(λ) dα dβ dλ. (9)
It is impossible to obtain (9) analytically. Therefore, we approximate it using the two following methods:
• Lindley’s approximation,
• MCMC method.
4.1 Lindley’s approximation
One of the most popular numerical methods for evaluating the Bayes estimate is Lindley's method; see [5]. This approximate procedure computes the ratio of two integrals. If u(θ) is a function of the unknown parameters, then under the squared error loss function the Bayes estimate of u(θ) can be derived from the following integral representation:

E(u(θ) | data) = ∫ u(θ) e^(Q(θ)) dθ / ∫ e^(Q(θ)) dθ,

where Q(θ) = ℓ(θ) + ρ(θ), ℓ(θ) is the log-likelihood function and ρ(θ) is the logarithm of the prior density of θ. The Lindley approximation of E(u(θ) | data) is given by

E(u(θ) | data) ≈ [ u + (1/2) Σ_i Σ_j (u_ij + 2 u_i ρ_j) σ_ij + (1/2) Σ_i Σ_j Σ_k Σ_p ℓ_ijk σ_ij σ_kp u_p ] evaluated at θ = θ̂,

where θ = (θ1, . . . , θm), i, j, k, p = 1, . . . , m, θ̂ is the MLE of θ, u = u(θ), u_i = ∂u/∂θ_i, u_ij = ∂²u/∂θ_i∂θ_j, ℓ_ijk = ∂³ℓ/∂θ_i∂θ_j∂θ_k, ρ_j = ∂ρ/∂θ_j, and σ_ij is the (i, j)-th element of the inverse of the matrix [−ℓ_ij]. We note that all of these quantities should be evaluated at the MLEs of the parameters. For the three-parameter case θ = (θ1, θ2, θ3), Lindley's approximate result is
where θ = (θ1, . . . ,θm), i, j,k, p = 1, . . . ,m, θ is the MLE of θ , u = u(θ),ui = ∂u/∂θi, ui j = ∂ 2u/∂θi∂θ j, `i jk = ∂ 3`/∂θi∂θ j∂θk, ρ j = ∂ρ/∂θ j, andσi j = (i, j)-th element in the inverse of matrix [−`i j]. We noted that all ofthese values should be evaluated at the MLE of parameters.For the three parameters case θ = (θ1,θ2,θ3), Lindley’s approximate result is
E(u(θ)|data) = u+(u1d1 +u2d2 +u3d3 +d4 +d5)
+12[A(u1σ11 +u2σ12 +u3σ13)+B(u1σ21 +u2σ22 +u3σ23)
+C(u1σ31 +u2σ32 +u3σ33)],
evaluated at θ = (θ1, θ2, θ3), where
di = ρ1σi1 +ρ2σi2 +ρ3σi3, i = 1,2,3, d4 = u12σ12 +u13σ13 +u23σ23,
Sadeqi, N., and Kohansal, A. 252
d5 =12(u11σ11 +u22σ22 +u33σ33),
A = `111σ11 +2`121σ12 +2`131σ13 +2`231σ23 + `221σ22 + `331σ33,
B = `112σ11 +2`122σ12 +2`132σ13 +2`232σ23 + `222σ22 + `332σ33,
C = `113σ11 +2`123σ12 +2`133σ13 +2`233σ23 + `223σ22 + `333σ33.
Now, when (θ1, θ2, θ3) ≡ (α, β, λ) and u ≡ u(α,β,λ) = Rs,k, we have

ρ1 = (a1−1)/α − b1, ρ2 = (a2−1)/β − b2, ρ3 = (a3−1)/λ − b3,

ℓ11 = −nk/α², ℓ22 = −n/β², ℓ12 = 0,
ℓ13 = ∂²ℓ/∂α∂λ = −Σ_{i=1}^{n} Σ_{j=1}^{k} x_ij/(1+λx_ij),
ℓ23 = ∂²ℓ/∂β∂λ = −Σ_{i=1}^{n} y_i/(1+λy_i),
ℓ33 = ∂²ℓ/∂λ² = −n(k+1)/λ² + (α+1) Σ_{i=1}^{n} Σ_{j=1}^{k} ( x_ij/(1+λx_ij) )² + (β+1) Σ_{i=1}^{n} ( y_i/(1+λy_i) )².
The σ_ij, i, j = 1, 2, 3, are obtained from the ℓ_ij, i, j = 1, 2, 3, and

ℓ111 = 2nk/α³, ℓ222 = 2n/β³,
ℓ133 = Σ_{i=1}^{n} Σ_{j=1}^{k} ( x_ij/(1+λx_ij) )², ℓ233 = Σ_{i=1}^{n} ( y_i/(1+λy_i) )²,
ℓ333 = 2n(k+1)/λ³ − 2(α+1) Σ_{i=1}^{n} Σ_{j=1}^{k} ( x_ij/(1+λx_ij) )³ − 2(β+1) Σ_{i=1}^{n} ( y_i/(1+λy_i) )³,
and the other ℓ_ijk = 0. Furthermore, u3 = u_i3 = 0, i = 1, 2, 3, and u1, u2 are given in (6) and (7), respectively. Also,

u11 = Σ_{p=s}^{k} Σ_{q=0}^{k−p} (k choose p)(k−p choose q) (−1)^q 2β(p+q)² / (α(p+q)+β)³,
u12 = u21 = Σ_{p=s}^{k} Σ_{q=0}^{k−p} (k choose p)(k−p choose q) (−1)^q (p+q)(β − α(p+q)) / (α(p+q)+β)³,
u22 = Σ_{p=s}^{k} Σ_{q=0}^{k−p} (k choose p)(k−p choose q) (−1)^(q+1) 2α(p+q) / (α(p+q)+β)³.
The 6th Seminar on Reliability Theory and its Applications 253
Therefore,

d4 = u12 σ12, d5 = (1/2)(u11 σ11 + u22 σ22), A = ℓ111 σ11 + ℓ331 σ33,
B = ℓ222 σ22 + ℓ332 σ33, C = 2ℓ133 σ13 + 2ℓ233 σ23 + ℓ333 σ33.

So, the Bayes estimate of Rs,k can be derived as

R̂s,k^Lin = R + [u1 d1 + u2 d2 + d4 + d5] + (1/2) [ A(u1 σ11 + u2 σ12) + B(u1 σ21 + u2 σ22) + C(u1 σ31 + u2 σ32) ]. (10)

Notice that all quantities should be evaluated at (α̂, β̂, λ̂).
Because the Bayesian credible interval, applying the Lindley’s approxima-tion, is not available, we force to use MCMC method. Utilizing this method,Bayes estimate is approximated and associated HPD credible interval is con-structed.
4.2 MCMC method
From (9), the full conditional posterior densities of $\alpha$, $\beta$ and $\lambda$ are as follows:
\[
\alpha\mid\lambda,\mathrm{data}\sim\Gamma\Big(nk+a_1,\; b_1+\sum_{i=1}^{n}\sum_{j=1}^{k}\log(1+\lambda x_{ij})\Big),
\]
\[
\beta\mid\lambda,\mathrm{data}\sim\Gamma\Big(n+a_2,\; b_2+\sum_{i=1}^{n}\log(1+\lambda y_i)\Big),
\]
\[
\pi(\lambda\mid\alpha,\beta,\mathrm{data})\propto\lambda^{\,n(k+1)+a_3-1}e^{-b_3\lambda}\Big(\prod_{i=1}^{n}\prod_{j=1}^{k}(1+\lambda x_{ij})^{-(\alpha+1)}\Big)\Big(\prod_{i=1}^{n}(1+\lambda y_i)^{-(\beta+1)}\Big).
\]
Since the posterior pdf of $\lambda$ cannot be reduced analytically to a well-known distribution, we use the Metropolis--Hastings method to generate random samples from it. Therefore, we propose the following Gibbs sampling algorithm:
1. Start with an initial value $(\alpha_{(0)},\beta_{(0)},\lambda_{(0)})$.

2. Set $t=1$.

3. Generate $\lambda_{(t)}$ from $\pi(\lambda\mid\alpha_{(t-1)},\beta_{(t-1)},\mathrm{data})$, using the Metropolis--Hastings method.

4. Generate $\alpha_{(t)}$ from $\Gamma\big(nk+a_1,\; b_1+\sum_{i=1}^{n}\sum_{j=1}^{k}\log(1+\lambda_{(t)}x_{ij})\big)$.

5. Generate $\beta_{(t)}$ from $\Gamma\big(n+a_2,\; b_2+\sum_{i=1}^{n}\log(1+\lambda_{(t)}y_i)\big)$.

6. Compute $R^{(t)}_{s,k}=\sum_{p=s}^{k}\sum_{q=0}^{k-p}\binom{k}{p}\binom{k-p}{q}\dfrac{(-1)^{q}\beta_{(t)}}{\alpha_{(t)}(p+q)+\beta_{(t)}}$.

7. Set $t=t+1$.

8. Repeat steps 3--7, $T$ times.
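Steps 1--8 can be sketched in a few dozen lines of Python. This is illustrative only: the data are synthetic, the Lomax-type component model, hyper-parameter values and random-walk proposal for the Metropolis--Hastings step are all our own assumptions, not a definitive implementation.

```python
import math
import random

random.seed(1)

def rlomax(shape, lam):
    # inverse-CDF draw from F(x) = 1 - (1 + lam*x)^(-shape)
    u = random.random()
    return ((1.0 - u) ** (-1.0 / shape) - 1.0) / lam

# synthetic data: n systems with k strength components each, one stress per system
n, k, s = 20, 5, 3
x = [[rlomax(1.0, 1.0) for _ in range(k)] for _ in range(n)]
y = [rlomax(1.0, 1.0) for _ in range(n)]
a1 = a2 = a3 = b1 = b2 = b3 = 1.0   # assumed gamma hyper-parameters

def log_cond_lam(lam, alp, bet):
    # log of the full conditional of lambda (up to an additive constant)
    if lam <= 0:
        return -math.inf
    v = (n * (k + 1) + a3 - 1) * math.log(lam) - b3 * lam
    v -= (alp + 1) * sum(math.log(1 + lam * t) for row in x for t in row)
    v -= (bet + 1) * sum(math.log(1 + lam * t) for t in y)
    return v

def r_sk(alp, bet):
    # closed form of R_{s,k} (step 6)
    return sum(math.comb(k, p) * math.comb(k - p, q) * (-1) ** q
               * bet / (alp * (p + q) + bet)
               for p in range(s, k + 1) for q in range(0, k - p + 1))

alp = bet = lam = 1.0
draws = []
for t in range(2000):
    # step 3: random-walk Metropolis-Hastings update for lambda
    prop = lam + random.gauss(0.0, 0.2)
    diff = log_cond_lam(prop, alp, bet) - log_cond_lam(lam, alp, bet)
    if diff >= 0 or random.random() < math.exp(diff):
        lam = prop
    # steps 4-5: conjugate gamma updates (gammavariate takes shape and SCALE)
    alp = random.gammavariate(n * k + a1, 1.0 / (b1 + sum(math.log(1 + lam * t) for row in x for t in row)))
    bet = random.gammavariate(n + a2, 1.0 / (b2 + sum(math.log(1 + lam * t) for t in y)))
    if t >= 500:                      # discard burn-in
        draws.append(r_sk(alp, bet))

r_mc = sum(draws) / len(draws)        # Eq. (11)
print(round(r_mc, 3))
```

With $\alpha=\beta=\lambda=1$ and $(s,k)=(3,5)$, the true $R_{s,k}$ equals $0.5$, so the posterior-mean estimate should land in that neighborhood.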
The above algorithm is used to evaluate the Bayes estimate of $R_{s,k}$ under the squared error loss function. Therefore, the MCMC Bayes estimate is
\[
\hat{R}^{\mathrm{MC}}_{s,k}=\frac{1}{T}\sum_{t=1}^{T}R^{(t)}_{s,k}. \qquad (11)
\]
In addition, applying the method of Chen and Shao [2], we construct a $100(1-\gamma)\%$ HPD credible interval for $R_{s,k}$.
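The Chen and Shao [2] construction can be sketched as follows: sort the MCMC draws and, among all intervals containing a fraction $1-\gamma$ of them, return the shortest (the function name and the normal test sample are illustrative assumptions):

```python
import random

def hpd_interval(draws, gamma=0.05):
    # shortest interval containing a (1 - gamma) fraction of the sorted draws
    xs = sorted(draws)
    m = int((1.0 - gamma) * len(xs))
    i = min(range(len(xs) - m), key=lambda j: xs[j + m] - xs[j])
    return xs[i], xs[i + m]

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(5000)]
lo, hi = hpd_interval(sample)
print(round(lo, 2), round(hi, 2))
```

For a symmetric unimodal sample such as this standard normal one, the 95% HPD interval is close to $(-1.96, 1.96)$; for skewed posteriors it is shorter than the equal-tail interval.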
5 Simulation Study
We assess the performance of the different estimates using Monte Carlo simulations. The estimates are compared in terms of mean squared errors (MSEs), and the confidence intervals in terms of average lengths. All results are based on 1000 replications. The parameter values $(\alpha,\beta,\lambda)=(1,1,1)$ are used to obtain the simulation results. We derive the MLE of $R_{s,k}$ from (4) and its asymptotic confidence interval from (8). The Bayesian inference is considered under two priors, Prior 1: $a_j=0$, $b_j=0$, $j=1,2,3$, and Prior 2: $a_j=1$, $b_j=1$, $j=1,2,3$. Under these hypotheses, the MSEs of the Bayes estimates of $R_{s,k}$ via Lindley's approximation and the MCMC method are derived
by (10) and (11), respectively. We also derive the 95% HPD intervals for $R_{s,k}$. The simulation results are given in Table 1.

From Table 1, we observe that the best performance in terms of MSE belongs to the informative priors. Furthermore, the Bayes estimates obtained by the MCMC method generally perform better than those obtained by Lindley's approximation. We also observe that, among the different intervals, the HPD intervals based on informative priors perform best.
Table 1: Simulation results

                  MLE                 Prior 1                        Prior 2
  n   (s,k)    MSE      Length    Lindley   MCMC     HPD        Lindley   MCMC     HPD
                                  MSE       MSE      length     MSE       MSE      length
 10   (3,5)    0.0254   0.3912    0.0229    0.0039   0.3512     0.0237    0.0026   0.3365
      (2,4)    0.0317   0.4125    0.0276    0.0094   0.4098     0.0303    0.0064   0.3876
 20   (3,5)    0.0178   0.3542    0.0166    0.0019   0.3355     0.0177    0.0015   0.3021
      (2,4)    0.0248   0.4098    0.0245    0.0044   0.3987     0.0224    0.0036   0.3711
 30   (3,5)    0.0157   0.3365    0.0156    0.0013   0.3287     0.0148    0.0011   0.2912
      (2,4)    0.0234   0.3777    0.0231    0.0030   0.3542     0.0216    0.0027   0.3444
 40   (3,5)    0.0153   0.3023    0.0152    0.0009   0.2889     0.0146    0.0008   0.2768
      (2,4)    0.0225   0.3542    0.0221    0.0025   0.3324     0.0211    0.0022   0.3122
 50   (3,5)    0.0153   0.2877    0.0152    0.0008   0.2531     0.0147    0.0007   0.2436
      (2,4)    0.0222   0.3333    0.0219    0.0019   0.3165     0.0211    0.0018   0.3054
References
[1] Bhattacharyya, G. K. and Johnson, R. A. (1974), Estimation of reliability in multicomponent stress-strength model, Journal of the American Statistical Association, 69, 966-970.

[2] Chen, M. H. and Shao, Q. M. (1999), Monte Carlo estimation of Bayesian credible and HPD intervals, Journal of Computational and Graphical Statistics, 8, 69-92.

[3] Kohansal, A. (2019), On estimation of reliability in a multicomponent stress-strength model for a Kumaraswamy distribution based on progressively censored sample, Statistical Papers, 60, 2185-2224.

[4] Kohansal, A. and Shoaee, S. (2019), Bayesian and classical estimation of reliability in a multicomponent stress-strength model under adaptive hybrid progressive censored data, Statistical Papers, Accepted. DOI: 10.1007/s00362-019-01094-y.

[5] Lindley, D. V. (1980), Approximate Bayesian methods, Trabajos de Estadistica, 3, 281-288.
Optimal Progressive Type-II Censoring Random Schemes Based on Expected Total Test Time
Sharafi, M.1
1 Department of Statistics, Faculty of Science, Razi University, Kermanshah, Iran
Abstract: In this paper, the optimal progressive censoring scheme is examined from the expected test time point of view, where the number of units removed at each failure time follows one of three scenarios for choosing the censoring scheme and the lifetime distribution is exponential. The discrete probability distributions considered are the discrete uniform, the binomial, and a distribution that is introduced based on the time distance between consecutive failure times. Numerical results for the expected test times under this type of progressive censoring are presented. Finally, by comparing them, we suggest the use of the new approach as an instrument for obtaining an optimal design in terms of expected experiment time.
Keywords: Expected Test Time, Lifetime Data, Progressive Censoring, Random Removals.
1 Introduction
Right censoring arises in a life-testing experiment whenever exact lifetimes are known for only a portion of the test items and the remaining lifetimes are known only to exceed certain values under the experiment. There are several types of censored tests. One of the most common censoring plans is
1Sharafi, M.: [email protected]
Sharafi, M. 258
Type-II censoring. In Type-II censoring, a total of $n$ units is placed on the test but, instead of continuing until all $n$ items have failed, the test is terminated at the time of the $r$-th ($1\le r\le n$) item failure, where $r$ is pre-fixed. An extension of Type-II censoring is the progressive Type-II censoring scheme.

Under this censoring scheme, from a total of $n$ units placed simultaneously on a life test, only $r$ units are completely observed and $n-r$ units are withdrawn from the experiment at various time points. The procedure works as follows. After observing the first failure, $s_1$ units are randomly selected from the $n-1$ surviving units and removed. Immediately following the second failure, which is the smallest lifetime among the $n-s_1-1$ remaining units, $s_2$ units are randomly chosen from the $n-s_1-2$ remaining units and withdrawn from the experiment. This process continues until the $r$-th failure is observed, at which point all $s_r=n-r-\sum_{i=1}^{r-1}s_i$ remaining units are removed from the experiment. Furthermore, note that if $s_1=\cdots=s_r=0$ then $n=r$, which corresponds to the complete sample; and if $s_1=\cdots=s_{r-1}=0$, so that $s_r=n-r$, this corresponds to the conventional Type-II right censoring plan. Two comprehensive references in this context are [1] and [2]. Note that, in this scheme, $s_1,s_2,\dots,s_r$ are all pre-fixed. However, in some practical situations, these numbers may occur at random.
In the implementation of the progressive Type-II censoring scheme, one of the major challenges is to determine the removal vector. Some authors have attempted to select appropriate schemes for this type of censoring. In general, there exist two different strategies. The first strategy considers pre-specified, fixed censoring numbers, and the second chooses the censoring numbers according to a probability distribution on the set of possible censoring numbers, which leads to so-called random removals. Research on random removals dates back to 1996, when [15] introduced progressive Type-II censoring with random removals. They indicated that, for example, the number of patients who drop out of a clinical trial at each stage is random and cannot be pre-determined. In some industrial studies, an experimenter may find it inappropriate or too dangerous to continue the life test with some of the tested units; in these cases, the pattern of removal at each failure is random. They considered the estimation problem when lifetimes are Weibull distributed and random removals follow the discrete uniform distribution. They also compared numerically the expected test times under two types of censoring: Type-II censoring and Type-II progressive censoring with random removals (PCR). In this case, statistical inference can be carried out without any additional parameters in the model, but the model may not fit real data well, because it requires each removal number to occur with equal probability regardless of the number of units removed. Therefore, [11] discussed inference for the two-parameter Weibull distribution under the progressive Type-II censoring scheme in the situation where each experimental unit is dropped from the life test independently of the others but with the same removal probability $p$, so that the number of test units removed at each failure time follows a binomial distribution.

Inferential issues for progressively censored samples with random removals have been addressed in numerous papers, which consider different lifetime distributions as well as different probability mass functions for the removal vectors, such as the binomial or discrete uniform distributions. For further reading, we refer to, e.g., [12], [13], [14], [4], [10], and [6]. The Bayesian approach can also be used for this model, as well as classical inference (see [9] and [7]). For the Weibull lifetime distribution, [8] and [5] investigated the optimal designs of accelerated life tests under progressive Type-I and Type-II interval censoring schemes with binomial random removals, respectively.
In this study, we introduce a new plan for determining the removal vector and obtain the corresponding probability mass function. Moreover, the expected duration of an experiment is provided numerically as an optimality criterion under the three random removal patterns. Finally, discussion and comments are provided based on these numerical results.
2 Model
In this section, we consider the progressive Type-II censoring experiment with random removals in which the lifetimes of the units are assumed to follow an exponential distribution with probability density function
\[
f(x)=\theta\exp(-\theta x),\qquad x>0,\ \theta>0.
\]
Suppose $n$ independent units are placed on a test, with the corresponding lifetimes identically distributed with the above probability density function. For simplicity of notation, let $(X_{1:r:n},\dots,X_{r:r:n})$ denote a progressively Type-II censored sample. Technically, the constraints restrict the set of admissible schemes to
\[
\xi^{\ast}_{n,r}=\Big\{(s_1,\dots,s_r)\in\mathbb{N}_0^{r}\ \Big|\ \sum_{i=1}^{r}s_i=n-r\Big\},
\]
where $\mathbb{N}_0=\{0,1,2,\dots\}$. The purpose of the experimental design is to pick a censoring plan $S$ from $\xi^{\ast}_{n,r}$ which is best according to some optimality criterion. Criteria discussed in the literature include minimum expected test duration, minimum variance of estimators (of parameters and quantiles), maximum Fisher information, and minimum entropy.
Now, we propose a scenario to find the optimal censoring plan in terms of minimum expected test duration based on the time distance between consecutive failure times (spacings). The $i$-th censoring number is defined as
\[
S_i=\begin{cases}
\Big\lfloor \dfrac{\big(n-i+1-\sum_{j=1}^{i-1}S_j\big)\,(X_{i:r:n}-X_{i-1:r:n})}{n\,X_{1:r:n}}\Big\rfloor, & i=2,\dots,r-1,\\[2mm]
1, & i=1,
\end{cases}
\]
where $\lfloor\cdot\rfloor$ is the floor function and $S_1$ is degenerate at the point 1. This illustrates that we have $r-2$ free parameters, and here the set of admissible schemes is given by
\[
\xi_{n,r}=\Big\{(s_2,\dots,s_r)\in\mathbb{N}_0^{r-1}\ \Big|\ \sum_{i=2}^{r}s_i=n-r-1,\ s_i\in\Big\{0,1,\dots,n-r-1-\sum_{j=2}^{i-1}s_j\Big\}\Big\}.
\]
For the sake of brevity, we denote the random vector corresponding to this new method by $S^{\mathrm{New}}_{r-1}=(S_2,\dots,S_{r-1})$.
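A small simulation sketch of this removal rule (exponential lifetimes, so the $i$-th spacing can be drawn directly as an exponential whose rate is proportional to the number of units still on test; the clamping to the admissible range and all names below are our own illustrative choices):

```python
import math
import random

random.seed(7)

def spacing_removals(n, r, theta=1.0):
    # one run of the spacing-based plan: S_1 = 1 and, for i = 2,...,r-1,
    # S_i = floor((n - i + 1 - sum_{j<i} S_j)(X_i - X_{i-1}) / (n X_1)),
    # clamped to the admissible range; S_r absorbs whatever remains.
    s = [1]
    removed = 1
    x1 = random.expovariate(n * theta)   # X_{1:r:n}
    for i in range(2, r):
        alive = n - (i - 1) - removed    # units on test just before the i-th failure
        gap = random.expovariate(alive * theta)
        cand = math.floor(alive * gap / (n * x1))
        s.append(max(0, min(cand, n - r - removed)))
        removed += s[-1]
    s.append(n - r - removed)            # S_r: all remaining units withdrawn
    return s

scheme = spacing_removals(20, 8)
print(scheme, sum(scheme))
```

Every generated vector removes exactly $n-r$ units in total, so the run always terminates at the $r$-th failure.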
Theorem 2.1. The joint probability mass function of $S^{\mathrm{New}}_{r-1}$ is given by
\[
\Pr\big(S^{\mathrm{New}}_{r-1}=s^{\mathrm{New}}_{r-1}\big)=\frac{n}{n-r}\sum_{i=1}^{r-1}\frac{(-1)^{i-1}\binom{r-2}{i-1}}{i+\sum_{j=2}^{r-1}s_j}, \qquad (1)
\]
where $s_i$ satisfies $0\le s_i\le n-r-1-\sum_{j=2}^{i-1}s_j$, $i=2,\dots,r-1$, and $s^{\mathrm{New}}_{r-1}\in\xi_{n,r}$.
Theorem 2.2. Assume that each individual unit removed from the life test is removed independently of the others, with the same probability $p$. Suppose further that $S_i$ is independent of $X_i$. Then the number of units removed at each failure time follows a binomial distribution, such that
\[
\Pr\big(S^{\mathrm{Bin}}_{r-1}=s^{\mathrm{Bin}}_{r-1}\big)=\frac{(n-r)!}{\prod_{i=1}^{r-1}s_i!\,\big(n-r-\sum_{i=1}^{r-1}s_i\big)!}\;p^{\sum_{i=1}^{r-1}s_i}\,(1-p)^{(r-1)(n-r)-\sum_{i=1}^{r-1}(r-i)s_i}, \qquad (2)
\]
where $0\le s_i\le n-r-\sum_{j=1}^{i-1}s_j$, $i=1,\dots,r-1$, and $S^{\mathrm{Bin}}_{r-1}=(S_1,\dots,S_{r-1})$.
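As a sanity check on Theorem 2.2 (a sketch; the exponent of $1-p$ is read here as $(r-1)(n-r)-\sum_{i=1}^{r-1}(r-i)s_i$, which is what the chain of conditional binomials $S_i\mid s_1,\dots,s_{i-1}\sim\mathrm{Bin}\big(n-r-\sum_{j<i}s_j,\,p\big)$ yields), the joint pmf should sum to one over all admissible removal vectors:

```python
from math import factorial

def binom_removal_pmf(s, n, r, p):
    # Eq. (2): telescoped product of binomial coefficients times p, (1-p) powers
    tot = sum(s)
    coef = factorial(n - r)
    for si in s:
        coef //= factorial(si)
    coef //= factorial(n - r - tot)
    expo = (r - 1) * (n - r) - sum((r - i) * si for i, si in enumerate(s, start=1))
    return coef * p ** tot * (1 - p) ** expo

def schemes(budget, depth):
    # all (s_1,...,s_depth) with nonnegative entries and running sums <= budget
    if depth == 0:
        yield ()
        return
    for s0 in range(budget + 1):
        for rest in schemes(budget - s0, depth - 1):
            yield (s0,) + rest

n, r, p = 8, 4, 0.3
total = sum(binom_removal_pmf(s, n, r, p) for s in schemes(n - r, r - 1))
print(round(total, 10))
```

The total equals 1 (we checked several small $(n,r,p)$ combinations), confirming that the reconstructed exponent is the one implied by the sequential binomial removals.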
Theorem 2.3. Suppose that each $S_i$ follows a discrete uniform probability distribution and is independent of $X_i$. Then the number of units removed at each failure time follows
\[
\Pr\big(S^{\mathrm{DisU}}_{r-1}=s^{\mathrm{DisU}}_{r-1}\big)=\prod_{i=1}^{r-1}\frac{1}{n-r-\sum_{j=1}^{i-1}s_j+1}, \qquad (3)
\]
where $0\le s_i\le n-r-\sum_{j=1}^{i-1}s_j$, $i=1,\dots,r-1$, and $S^{\mathrm{DisU}}_{r-1}=(S_1,\dots,S_{r-1})$.
3 The Expected Test Time
Since the optimality criterion introduced is the reduction of the duration of a life test, we compute the expected test time required to complete a life test under the three approaches by calculating the expectation of the $r$-th order statistic $X_{r:r:n}$.
Theorem 3.1. Suppose $(X_{1:r:n},\dots,X_{r:r:n})$ are progressively Type-II censored order statistics with a fixed removal vector, conditioned on $S_{r-1}$, based on the exponential distribution. Then the expected value of $X_{r:r:n}$ is given by
\[
E\big(X_{r:r:n}\mid S_{r-1}=s_{r-1}\big)=\frac{1}{\theta}\sum_{i=1}^{r}\frac{1}{\beta_i}, \qquad (4)
\]
where $\beta_1=n$ and $\beta_i=n-\sum_{j=1}^{i-1}s_j-i+1$, $i=2,\dots,r$.
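Equation (4) can be verified by Monte Carlo, using the fact that under exponential lifetimes the successive spacings of a progressively censored sample are independent exponentials with rates $\beta_i\theta$ (the scheme and parameter values below are arbitrary illustrations):

```python
import random

random.seed(3)
theta, n, r = 2.0, 12, 5
s = (2, 0, 3, 1)                     # s_1,...,s_{r-1}; the rest leave at X_{r:r:n}

# beta_1 = n, beta_i = n - sum_{j<i} s_j - i + 1
betas, acc = [n], 0
for i in range(2, r + 1):
    acc += s[i - 2]
    betas.append(n - acc - i + 1)
exact = sum(1.0 / b for b in betas) / theta   # Eq. (4)

# Monte Carlo: X_{r:r:n} is a sum of independent Exp(beta_i * theta) spacings
m = 100000
mc = 0.0
for _ in range(m):
    mc += sum(random.expovariate(b * theta) for b in betas)
mc /= m
print(round(exact, 4), round(mc, 4))
```

The two printed values agree to roughly three decimal places, as expected from the Monte Carlo standard error.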
Corollary 3.2. The expected test time under progressive Type-II censoring with random removals can be computed by taking the expectation of both sides of (4) with respect to $S$. That is,
\[
E(X_{r:r:n})=E_{S_{r-1}}\big(E(X_{r:r:n}\mid S_{r-1})\big)
=\sum_{s_1=0}^{f(s_1)}\sum_{s_2=0}^{f(s_2)}\cdots\sum_{s_{r-1}=0}^{f(s_{r-1})}\Pr(S_{r-1}=s_{r-1})\,E\big(X_{r:r:n}\mid S_{r-1}=s_{r-1}\big), \qquad (5)
\]
where $\Pr(S_{r-1}=s_{r-1})$ is given in Theorems 2.1, 2.2 and 2.3. Furthermore, $f(s_1)=n-r$ and $f(s_i)=n-r-s_1-s_2-\cdots-s_{i-1}$, $i=2,\dots,r-1$; in the proposed model, however, $s_1=f(s_1)=1$.
Thus, this gives an expression to compute the expected test time for given values of $r$, $n$ and $p$ (in the binomial case).
Corollary 3.3. When $\{S_{r-1}=\vec{0}\}$ in equation (4), the expected time of a Type-II censoring without removals can also be obtained as follows:
\[
E(X^{\ast}_{r:r:n})=\frac{1}{\theta}\sum_{j=n-r+1}^{n}\frac{1}{j}
=\frac{1}{\theta}\Big[\sum_{j=1}^{n}\frac{1}{j}-\sum_{j=1}^{n-r}\frac{1}{j}\Big]
=\frac{1}{\theta}\Big[\log\frac{n}{n-r}+\varepsilon_n-\varepsilon_{n-r}\Big], \qquad (6)
\]
where $\varepsilon_n\sim\frac{1}{2n}$, which approaches 0 as $n$ goes to infinity.
However, we can simply obtain the above expression by computer, using $E(X^{\ast}_{r:r:n})=\frac{1}{\theta}\sum_{i=1}^{r}\frac{1}{n-i+1}$. On the other hand, the expected test time of a complete sampling plan can be found by substituting $r=n$ and $s_1=\cdots=s_r=0$.
It is given by
\[
E(X_{n:n})=\frac{1}{\theta}\sum_{i=1}^{n}\frac{1}{n-i+1}. \qquad (7)
\]
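A two-line numerical check of (6) and (7) (the values of $\theta$, $n$ and $r$ are illustrative):

```python
import math

theta, n, r = 1.0, 30, 10
exact = sum(1.0 / j for j in range(n - r + 1, n + 1)) / theta        # Eq. (6), exact sum
approx = math.log(n / (n - r)) / theta                               # leading term of Eq. (6)
complete = sum(1.0 / (n - i + 1) for i in range(1, n + 1)) / theta   # Eq. (7), r = n
print(round(exact, 4), round(approx, 4), round(complete, 4))
```

The logarithmic approximation is within about $\varepsilon_n-\varepsilon_{n-r}\approx 0.008$ of the exact sum here, and the Type-II expected time is of course far below the complete-sample expected time.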
To compare equations (5) and (6), and also (5) and (7), we compute two ratios of expected experiment time: one between a progressive Type-II and a Type-II censored sample, denoted $REET^{\ast}$, and one between a progressive Type-II and a complete sampling plan, denoted $REET$, i.e.,
\[
REET^{\ast}=\frac{E(X_{r:r:n})}{E(X^{\ast}_{r:r:n})}, \qquad (8)
\]
and
\[
REET=\frac{E(X_{r:r:n})}{E(X_{n:n})}. \qquad (9)
\]
It should be noted that $REET^{\ast}$ and $REET$ do not depend on the scale parameter $\theta$. Suppose that an experimenter wants to observe at least $r$ complete failures when the test is anticipated to be conducted under a progressive Type-II censoring design. Then $REET$ provides important information for determining whether the experiment time can be shortened significantly if a much larger sample of $n$ test units is used and the test is stopped once $r$ failures are observed.
3.1 A numerical result
Up to this point, we have derived the expected test times of progressively Type-II censored samples under three approaches. It is of interest to compare them in order to choose an optimal random removal plan. Since comparing them analytically is very difficult, an alternative is to calculate them numerically for various $n$, $r$ and $p$ (in the binomial case). First, the censoring schemes are generated under the three approaches, and then progressively Type-II censored samples from the exponential distribution are generated using the algorithm presented in [3]. We computed $REET^{\ast}$ and $REET$ from Eqs. (8) and (9). For $REET^{\ast}$, the simulations were carried out for sample sizes $n=10,30$, different choices of the effective sample size $r$, different $\theta=1/2,2$, and various values of the removal probability $p=0.2,0.5,0.8$ in the binomial case. Furthermore, Figure 1 shows the ratio of the expected test time under progressive Type-II censoring with the three removal plans to the expected test time under complete sampling, versus $n=10,15,20,25,30$, for $r=8$ and removal probabilities $p=0.2,0.5,0.8$ in the binomial case.
From Table 1, we find that the numerical values of $REET^{\ast}$ under fixed conditions (the same $r$ and $n$) do not depend on the value of the parameter $\theta$; the corresponding values are very close to each other for different $\theta$. Moreover, the values for the proposed plan (the new approach) are always smaller than the corresponding values based on the uniform and binomial distributions. From Figure 1, we see that $REET$ attains its smallest value under the proposed plan, especially for small effective sample sizes. Note that inference based on a small effective sample size is more attractive in censoring schemes and more consistent with the philosophy of censoring. The values for the binomial pattern depend on the parameter $p$, which has to be estimated from the data; however, for large effective sample sizes, these values are close to each other. In general, the parameter $p$ strongly affects the experiment time. A relatively small removal probability $p$, under which a large number of dropouts occur in the late stages of the experiment, can reduce the experiment time remarkably, whereas the reduction in the $REET$ values is not noticeable for moderate and large values of $p$. The uniform pattern also performs poorly, although it is often better than the binomial model with relatively large values of $p$.

Finally, from the table and the figure, we find that the new approach outperforms the existing models in the sense of reducing the expected total time on test. This result is worthwhile, because it provides important information for an experimenter choosing an appropriate sampling plan that can directly control the experiment time and cost.
[Figure 1 shows one curve per removal plan -- New, Bin(p = 0.2), Bin(p = 0.5), Bin(p = 0.8) and DisU -- plotted against n = 10, 15, 20, 25, 30.]

Figure 1: REET values versus different sample sizes
Table 1: The numerical results of REET* under three plans of random removals in the progressive Type-II right censoring with different sample size n, r, θ and p

                     n = 10                           n = 30
          r   Censoring plan   REET*        r   Censoring plan   REET*
θ = 1/2   4   New              1.358967    10   New              1.339331
          4   DisU             2.307899    10   DisU             6.390057
          4   Bin(p=0.2)       1.919200    10   Bin(p=0.2)       2.875154
          4   Bin(p=0.5)       2.676915    10   Bin(p=0.5)       6.545398
          4   Bin(p=0.8)       3.720867    10   Bin(p=0.8)       7.082093
          8   New              1.154605    20   New              1.435276
          8   DisU             1.806040    20   DisU             3.323707
          8   Bin(p=0.2)       1.315855    20   Bin(p=0.2)       2.958532
          8   Bin(p=0.5)       1.675690    20   Bin(p=0.5)       3.329358
          8   Bin(p=0.8)       1.849808    20   Bin(p=0.8)       3.370019
θ = 2     4   New              1.355256    10   New              1.332072
          4   DisU             2.306353    10   DisU             6.426749
          4   Bin(p=0.2)       1.905301    10   Bin(p=0.2)       2.872888
          4   Bin(p=0.5)       2.647976    10   Bin(p=0.5)       6.524829
          4   Bin(p=0.8)       3.728878    10   Bin(p=0.8)       7.075267
          8   New              1.150245    20   New              1.443749
          8   DisU             1.817310    20   DisU             3.333504
          8   Bin(p=0.2)       1.303077    20   Bin(p=0.2)       2.965582
          8   Bin(p=0.5)       1.684037    20   Bin(p=0.5)       3.326752
          8   Bin(p=0.8)       1.833688    20   Bin(p=0.8)       3.358363
References
[1] Balakrishnan, N. and Aggarwala, R. (2000), Progressive Censoring: Theory, Methods, and Applications, Birkhauser, Boston.

[2] Balakrishnan, N. and Cramer, E. (2014), The Art of Progressive Censoring, Springer, New York.

[3] Balakrishnan, N. and Sandhu, R. A. (1995), A simple simulational algorithm for generating progressive Type-II censored samples, The American Statistician, 49, 229-230.

[4] Dey, S. and Dey, T. (2014), Statistical inference for the Rayleigh distribution under progressively Type-II censoring with binomial removal, Applied Mathematical Modelling, 38, 974-982.

[5] Ding, C. and Tse, S. K. (2013), Design of accelerated life test plans under progressive Type-II interval censoring with random removals, Journal of Statistical Computation and Simulation, 83(7), 1330-1343.

[6] Gunasekera, S. (2018), Inference for the Burr XII reliability under progressive censoring with random removals, Mathematics and Computers in Simulation, 144, 182-195.

[7] Kaushik, A., Singh, U. and Singh, S. K. (2017), Bayesian inference for the parameters of Weibull distribution under progressive Type-I interval censored data with beta-binomial removals, Communications in Statistics - Simulation and Computation, 46(4), 3140-3158.

[8] Tse, S. K., Ding, C. and Yang, C. (2008), Optimal accelerated life tests under interval censoring with random removals: the case of Weibull failure distribution, Statistics, 42, 435-451.

[9] Singh, S. K., Singh, U. and Sharma, V. K. (2013), Expected total test time and Bayesian estimation for generalized Lindley distribution under progressively Type-II censored sample where removals follow the beta-binomial probability law, Applied Mathematics and Computation, 222, 402-419.

[10] Soliman, A. A., Abd Ellah, A. H., Abou-Elheggag, N. A. and El-Sagheer, R. M. (2015), Inferences using Type-II progressively censored data with binomial removals, Arabian Journal of Mathematics, 4(2), 127-139.

[11] Tse, S. K., Yang, C. and Yuen, H. K. (2000), Statistical analysis of Weibull distributed lifetime data under Type II progressive censoring with binomial removals, Journal of Applied Statistics, 27, 1033-1043.

[12] Tse, S. K. and Yang, C. (2003), Reliability sampling plans for the Weibull distribution under Type II progressive censoring with binomial removals, Journal of Applied Statistics, 30, 709-718.

[13] Wu, S. J. and Chang, C. T. (2003), Inference in the Pareto distribution based on progressive Type II censoring with random removals, Journal of Applied Statistics, 30, 163-172.

[14] Wu, C. C., Wu, S. F. and Chan, H. Y. (2006), MLE and the estimated expected test time for the two-parameter Gompertz distribution under progressive censoring with binomial removals, Applied Mathematics and Computation, 181, 1657-1670.

[15] Yuen, H. K. and Tse, S. K. (1996), Parameters estimation for Weibull distributed lifetimes under progressive censoring with random removals, Journal of Statistical Computation and Simulation, 55, 57-71.
Bayesian Analysis for the Parameters of Mortality Rate in the Models of Dependent Lives
Shoaee, S.1, and Kohansal, A.2
1 Department of Actuarial Science, Faculty of Mathematical Sciences, Shahid Beheshti University, Tehran, Iran

2 Department of Statistics, Imam Khomeini International University, Qazvin, Iran
Abstract: In this paper, Bayesian inference for a model of dependent lives is considered. We use the bivariate Gompertz (BGP) distribution. As is known, the maximum likelihood estimates do not always exist, and one remedy in this case is to estimate the parameters by the Bayesian method. Therefore, Bayesian estimation is considered using the squared error loss function and prior distributions that create a dependency between the hyper-parameters of this model of dependent lives; prior independence is a special case. Under these assumptions, explicit expressions cannot be obtained for the Bayesian estimates, so the importance sampling method is proposed to calculate the Bayes estimates and to construct the corresponding HPD credible intervals. Finally, we analyze one real data set for illustrative purposes.
Keywords: Bayesian Analysis, HPD Credible Interval, Dependent Lives, Mortality Rate, Posterior Distribution.
1Shoaee, S.: Sh [email protected]
1 Introduction
Modeling longevity data is an important topic of interest for researchers. In many fields of science, including statistics and life insurance, it is assumed that the remaining lifetimes of two persons or two components are independent. However, applying this assumption is not always correct, because there may be identical risk factors for a pair of people exposed to the same risks. For example, for twins these common risk factors may be genetic, and for couples they may come from the environment. Such models are used in a variety of fields, such as actuarial science and life insurance, survival analysis, and reliability theory. In this regard, various work has been done; readers can refer to [3], [4], [2] and [5].
One classical model of dependent lives that has captured our attention is the "common shock" model. This model assumes that the lifetimes of two persons, say $T_1$ and $T_2$, are independent unless a common shock causes the death of both. For example, a contagious deadly disease, a natural catastrophe or a car accident may affect the lives of two spouses. Thus, if $T_0$ denotes the time until the common disaster, the actual ages-at-death are modeled by $X_i=\min(T_i,T_0)$ for $i=1,2$. Then the joint survival function of the random vector $X=(X_1,X_2)$, for $x_1,x_2>0$ and $z=\max\{x_1,x_2\}$, can be computed as follows:
\[
S_X(x_1,x_2)=P(T_1>x_1,\,T_2>x_2,\,T_0>z)=S_{T_1}(x_1)\,S_{T_2}(x_2)\,S_{T_0}(z).
\]
This structure for dependent-lives models has been studied by many authors, for example [8], [11], [1], [9], [7] and [10]. However, while much work has been done to extend these models, little has been done on analyzing them. Recently, the parameters of these models have been estimated using maximum likelihood estimation and the EM algorithm, but estimation of the parameters by the Bayesian method has not yet been investigated. As we know, the EM algorithm performs well in parameter estimation when the MLEs exist, but the maximum likelihood estimates do not always exist. Another important issue is the convergence of
Shoaee, S., and Kohansal, A. 270
the EM algorithm, which is highly dependent on the selection of initial values. Finally, it should be noted that calculating an exact confidence interval for the MLEs is not easy; the confidence interval based on the maximum likelihood method is determined using the asymptotic properties of the MLEs.
In this paper, we use the Bayesian inference method to estimate the parameters of the dependent-lives model. In this regard, the estimates of the parameters and their corresponding HPD credible intervals are calculated.
For this purpose, we use the bivariate Gompertz distribution presented by [10] for the modeling of dependent lives. Also, we assume that the scale parameters have a Gamma-Dirichlet prior distribution. No specific prior distribution is considered for the shape parameter; it is only assumed that this prior is independent of the prior for the scale parameters and that its probability density function is log-concave on $(0,\infty)$. It can be seen that explicit expressions cannot be obtained for the Bayesian estimates of the parameters, so numerical methods must be used. Hence, an importance sampling procedure is proposed to generate samples from the posterior distribution, to calculate the Bayes estimates, and to construct the HPD credible intervals of the unknown parameters.
The present paper is organized as follows. A brief description of the bivariate Gompertz distribution is provided in Section 2. The required assumptions on the prior distributions and the bivariate data structure are explained in Section 3. The importance sampling structure and the Bayes estimates of the parameters, together with their corresponding HPD credible intervals in the different cases, are described in detail in Section 4. One real data set is analyzed in Section 5 to evaluate the performance of the proposed Bayesian estimation. Finally, the conclusions of this article are presented in Section 6.
2 Bivariate Gompertz Model
In this section, we introduce a classical model of dependent lives based on the Gompertz distribution. Suppose $T_i\sim GP(\alpha,\lambda_i)$ for $i=0,1,2$, and that they are independent. Define $X_i=\min\{T_0,T_i\}$ for $i=1,2$. Then the random vector $X=(X_1,X_2)$ has a bivariate Gompertz distribution. This new bivariate distribution has four parameters and is denoted by $BGP(\alpha,\lambda_0,\lambda_1,\lambda_2)$.
Theorem 2.1. Suppose $X\sim BGP(\alpha,\lambda_0,\lambda_1,\lambda_2)$. Then the joint survival function, with $x=\max\{x_1,x_2\}$, is
\[
S_X(x_1,x_2)=\begin{cases}
S_{GP}(x_1;\alpha,\lambda_1+\lambda_0)\,S_{GP}(x_2;\alpha,\lambda_2) & \text{if } x_2<x_1,\\
S_{GP}(x_1;\alpha,\lambda_1)\,S_{GP}(x_2;\alpha,\lambda_2+\lambda_0) & \text{if } x_1<x_2,\\
S_{GP}(x;\alpha,\lambda_0+\lambda_1+\lambda_2) & \text{if } x_1=x_2=x.
\end{cases}
\]
Theorem 2.2. Suppose $X\sim BGP(\alpha,\lambda_0,\lambda_1,\lambda_2)$. Then the joint probability density function of $X$ is
\[
f_X(x_1,x_2)=\begin{cases}
\alpha^2\lambda_2(\lambda_0+\lambda_1)\,e^{\alpha(x_1+x_2)}e^{-(\lambda_0+\lambda_1)(e^{\alpha x_1}-1)}e^{-\lambda_2(e^{\alpha x_2}-1)} & \text{if } x_2<x_1,\\
\alpha^2\lambda_1(\lambda_0+\lambda_2)\,e^{\alpha(x_1+x_2)}e^{-(\lambda_0+\lambda_2)(e^{\alpha x_2}-1)}e^{-\lambda_1(e^{\alpha x_1}-1)} & \text{if } x_1<x_2,\\
\alpha\lambda_0\,e^{\alpha x}e^{-(\lambda_0+\lambda_1+\lambda_2)(e^{\alpha x}-1)} & \text{if } x_1=x_2=x.
\end{cases} \qquad (1)
\]
One of the most important concepts in actuarial science is the force of mortality. The force of mortality (or hazard) at age $x$, $\mu(x)$, based on the Gompertz lifetime distribution is represented as $\mu(x)=\mu(x;a,b)=ae^{bx}$. By Theorem 2.1, the joint survival function can be represented as
\[
S_X(t_1,t_2)=S_{T_1}(t_1)\,S_{T_2}(t_2)\,S_{T_0}(\max\{t_1,t_2\}).
\]
Then, for the marginal survival functions, we have
\[
S_{X_1}(t)=S_X(t,0)=S_{T_1}(t)\,S_{T_2}(0)\,S_{T_0}(\max\{t,0\})=S_{T_1}(t)\,S_{T_0}(t).
\]
Similarly, $S_{X_2}(t)=S_{T_2}(t)S_{T_0}(t)$. Therefore, the force of mortality of $X_1$ is
\[
\mu_{X_1}(t)=-\frac{\partial}{\partial t}\ln\big(S_{T_1}(t)S_{T_0}(t)\big)=-\frac{\partial}{\partial t}\ln S_{T_1}(t)-\frac{\partial}{\partial t}\ln S_{T_0}(t)=\mu_{T_1}(t)+\mu_{T_0}(t),
\]
where $\mu_{T_1}(t)$ and $\mu_{T_0}(t)$ denote the forces of mortality of $T_1$ and $T_0$, respectively. Similarly, $\mu_{X_2}(t)=\mu_{T_2}(t)+\mu_{T_0}(t)$, where $\mu_{T_2}(t)$ denotes the force of mortality of $T_2$. So,
\[
\mu_{X_1}(t)=\alpha(\lambda_0+\lambda_1)e^{\alpha t},\qquad \mu_{X_2}(t)=\alpha(\lambda_0+\lambda_2)e^{\alpha t}.
\]
As shown, the force of mortality is characterized by the pair of parameters $\alpha$ and $\lambda_i$; therefore, we must estimate these parameters to analyze the behavior of the force of mortality.
3 Assumptions for the Prior Distribution and Bivariate Dataset
3.1 Prior Assumptions
In this subsection, we will describe some of the required prior assumptions.
(I): In the first step, we assume that $\lambda=\lambda_0+\lambda_1+\lambda_2$ has a $\mathrm{Gamma}(a,b)$ prior:
\[
\pi_0(\lambda\mid a,b)=\frac{b^{a}}{\Gamma(a)}\lambda^{a-1}e^{-b\lambda},\qquad a>0,\ b>0.
\]
Also, given $\lambda$, $(\lambda_1/\lambda,\lambda_2/\lambda)$ has a Dirichlet prior, which we denote by $\pi_1(\cdot\mid a_0,a_1,a_2)$; its probability density function, for $\lambda_0>0$, $\lambda_1>0$ and $\lambda_2>0$, is
\[
\pi_1\Big(\frac{\lambda_1}{\lambda},\frac{\lambda_2}{\lambda}\,\Big|\,\lambda,a_0,a_1,a_2\Big)=\frac{\Gamma(a_0+a_1+a_2)}{\Gamma(a_0)\Gamma(a_1)\Gamma(a_2)}\Big(\frac{\lambda_0}{\lambda}\Big)^{a_0-1}\Big(\frac{\lambda_1}{\lambda}\Big)^{a_1-1}\Big(\frac{\lambda_2}{\lambda}\Big)^{a_2-1}.
\]
Therefore, the joint prior of $\lambda_0$, $\lambda_1$ and $\lambda_2$ is
\[
\pi_1(\lambda_0,\lambda_1,\lambda_2\mid a,b,a_0,a_1,a_2)=\frac{\Gamma(\bar{a})}{\Gamma(a)}\,(b\lambda)^{a-\bar{a}}\times\prod_{i=0}^{2}\frac{b^{a_i}}{\Gamma(a_i)}\lambda_i^{a_i-1}e^{-b\lambda_i}, \qquad (2)
\]
where $\bar{a}=a_0+a_1+a_2$. Equation (2) is a Gamma-Dirichlet distribution, and we denote it by $GD(a,b,a_0,a_1,a_2)$.
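Sampling from this prior is straightforward (a sketch with illustrative hyper-parameter values): draw $\lambda\sim\mathrm{Gamma}(a,b)$, split it with a Dirichlet draw built from normalized gammas, and check the implied means $E(\lambda_i)=(a_i/\bar{a})\,(a/b)$:

```python
import random

random.seed(5)
a, b = 4.0, 2.0
a0, a1, a2 = 1.0, 2.0, 3.0
abar = a0 + a1 + a2

def rprior():
    lam = random.gammavariate(a, 1.0 / b)           # lambda ~ Gamma(a, b), rate b
    g = [random.gammavariate(ai, 1.0) for ai in (a0, a1, a2)]
    tot = sum(g)
    return [lam * gi / tot for gi in g]             # Dirichlet split of lambda

m = 50000
means = [0.0, 0.0, 0.0]
for _ in range(m):
    d = rprior()
    for i in range(3):
        means[i] += d[i] / m
print([round(v, 2) for v in means])
```

With these hyper-parameters the implied means are $a_i/\bar{a}\cdot a/b=a_i/3$, i.e. about 0.33, 0.67 and 1.00, and the empirical averages match.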
(II): In the second step, we state the required assumptions for the shape parameter. We denote the prior distribution on $\alpha$ by $\pi_2(\alpha)$. For this prior, it is only assumed that the support is $(0,\infty)$ and that the density is log-concave. It is worth mentioning that the assumption of a log-concave prior is quite common in Bayesian inference, and several distributions have log-concave density functions, for example the normal, log-normal, Weibull and gamma distributions. Also, the prior on $\alpha$ is independent of the joint prior on $\lambda_0$, $\lambda_1$ and $\lambda_2$. Therefore,
\[
\pi(\alpha,\lambda_0,\lambda_1,\lambda_2)=\pi_1(\lambda_0,\lambda_1,\lambda_2)\,\pi_2(\alpha). \qquad (3)
\]
3.2 Bivariate Data Set
In this subsection, we describe the data set required for our purposes. We assume that D1 = {(x11,x21), ..., (x1n,x2n)} is a random sample from the bivariate Gompertz distribution. Next, consider the following notation for estimating the model parameters: I0 = {i : x1i = x2i = xi}, I1 = {i : x1i < x2i} and I2 = {i : x1i > x2i}. Also, |I0| = n0, |I1| = n1, |I2| = n2 and n = n0 + n1 + n2. Now, the joint likelihood function can be obtained as follows:
$$\begin{aligned}
\ell(D_1\mid\alpha,\lambda_0,\lambda_1,\lambda_2) &= \alpha^{n_0+2n_1+2n_2}\,\lambda_1^{n_1}\lambda_2^{n_2}\lambda_0^{n_0}\sum_{j=0}^{n_1}\sum_{k=0}^{n_2}\binom{n_1}{j}\binom{n_2}{k}\lambda_0^{j+k}\lambda_1^{n_2-k}\lambda_2^{n_1-j}\\
&\quad\times e^{\alpha\sum_{i\in I_1}(x_{1i}+x_{2i})}\,e^{-\lambda_1\sum_{i\in I_1}(e^{\alpha x_{1i}}-1)}\,e^{-(\lambda_0+\lambda_2)\sum_{i\in I_1}(e^{\alpha x_{2i}}-1)}\\
&\quad\times e^{\alpha\sum_{i\in I_2}(x_{1i}+x_{2i})}\,e^{-(\lambda_0+\lambda_1)\sum_{i\in I_2}(e^{\alpha x_{1i}}-1)}\,e^{-\lambda_2\sum_{i\in I_2}(e^{\alpha x_{2i}}-1)}\\
&\quad\times e^{\alpha\sum_{i\in I_0}x_i}\,e^{-(\lambda_0+\lambda_1+\lambda_2)\sum_{i\in I_0}(e^{\alpha x_i}-1)}. \qquad (4)
\end{aligned}$$
4 Bayesian Inference
In this section, the Bayesian estimators of the parameters of the BGP distribution and their corresponding HPD credible intervals are obtained. Two cases of this model are considered: when the shape parameter α is known and when it is unknown.
4.1 Common Shape Parameter α is Known
According to this assumption and using the prior π1(·) in Equation (2), the posterior density function of (λ0,λ1,λ2) given D1 can be computed as follows:
`(λ0,λ1,λ2|α,D1) ∝ `(D1|λ0,λ1,λ2,α)π1(λ0,λ1,λ2|a,b,a0,a1,a2)
∝ λa−a(λ0 +λ2)
n1(λ0 +λ1)n2
×Gamma(λ0;a0 +n0,T0(α)+b)
×Gamma(λ1;a1 +n1,T1(α)+b)
×Gamma(λ2;a2 +n2,T2(α)+b),
where
$$\begin{aligned}
T_0(\alpha) &= \sum_{i\in I_1}(e^{\alpha x_{2i}}-1)+\sum_{i\in I_2}(e^{\alpha x_{1i}}-1)+\sum_{i\in I_0}(e^{\alpha x_i}-1),\\
T_1(\alpha) &= \sum_{i\in I_1}(e^{\alpha x_{1i}}-1)+\sum_{i\in I_2}(e^{\alpha x_{1i}}-1)+\sum_{i\in I_0}(e^{\alpha x_i}-1),\\
T_2(\alpha) &= \sum_{i\in I_1}(e^{\alpha x_{2i}}-1)+\sum_{i\in I_2}(e^{\alpha x_{2i}}-1)+\sum_{i\in I_0}(e^{\alpha x_i}-1).
\end{aligned}$$
The Bayesian estimator of any function of λ0, λ1 and λ2 under the squared error loss function is the mean of the posterior distribution. So,
$$\hat\theta_B = \frac{\int_0^\infty\!\int_0^\infty\!\int_0^\infty \theta(\lambda_0,\lambda_1,\lambda_2)\,\ell_P(\lambda_0,\lambda_1,\lambda_2\mid\alpha,D_1)\,d\lambda_0\,d\lambda_1\,d\lambda_2}{\int_0^\infty\!\int_0^\infty\!\int_0^\infty \ell_P(\lambda_0,\lambda_1,\lambda_2\mid\alpha,D_1)\,d\lambda_0\,d\lambda_1\,d\lambda_2}. \qquad (5)$$
Next, we consider the following two situations for calculating the Bayesian estimates under the squared error loss function.

(I) If ā = a, then λ0, λ1 and λ2 are independent. Therefore, the Bayes estimators of λ0, λ1 and λ2 are obtained explicitly as follows:
$$\hat\lambda_0 = \frac{1}{b+T_0(\alpha)}\sum_{j=0}^{n_1}\sum_{k=0}^{n_2} w_{jk}\,a_{0jk}, \qquad \hat\lambda_1 = \frac{1}{b+T_1(\alpha)}\sum_{j=0}^{n_1}\sum_{k=0}^{n_2} w_{jk}\,a_{1k}, \qquad \hat\lambda_2 = \frac{1}{b+T_2(\alpha)}\sum_{j=0}^{n_1}\sum_{k=0}^{n_2} w_{jk}\,a_{2j},$$
where
$$a_{0jk} = n_0+j+k+a_0-1, \qquad a_{1k} = n_1+n_2-k+a_1-1, \qquad a_{2j} = n_2+n_1-j+a_2-1,$$
$$C_{jk} = \binom{n_1}{j}\binom{n_2}{k}\frac{\Gamma(a_{0jk})}{[T_0(\alpha)+b]^{a_{0jk}}}\times\frac{\Gamma(a_{1k})}{[T_1(\alpha)+b]^{a_{1k}}}\times\frac{\Gamma(a_{2j})}{[T_2(\alpha)+b]^{a_{2j}}}, \qquad w_{jk} = \frac{C_{jk}}{\sum_{j=0}^{n_1}\sum_{k=0}^{n_2}C_{jk}}.$$
(II) If ā ≠ a, then the Bayes estimators of λ0, λ1 and λ2 cannot be computed in explicit form. In this case, we suggest using the importance sampling method to compute the Bayesian estimates of any function of λ0, λ1 and λ2 and also to construct the HPD credible interval.
4.1.1 Importance Sampling Method
As noted, the Bayes estimates cannot generally be computed explicitly in these models. The following algorithm is used to estimate the unknown parameters numerically.
Algorithm
Step 1: Generate λi ∼ Gamma(ai +ni,Ti(α)+b), for i = 0,1,2.
Step 2: Repeat step 1 to obtain {(λ0i,λ1i,λ2i); i = 1, . . . ,N}.
Step 3: The approximate Bayesian estimate of θ in Equation (5) is calculated as
$$\hat\theta_B = \frac{\sum_{i=1}^{N}\theta_i\,h(\lambda_{0i},\lambda_{1i},\lambda_{2i})}{\sum_{i=1}^{N} h(\lambda_{0i},\lambda_{1i},\lambda_{2i})},$$
where θi = θ(λ0i,λ1i,λ2i) and h(λ0,λ1,λ2) = λ^{a−ā}(λ0+λ2)^{n1}(λ0+λ1)^{n2}. This approximation is a consistent estimator.
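The three steps can be sketched as follows; the data summaries (n0, n1, n2), the Ti(α) values and the function θ are assumed to be precomputed, and all numeric inputs below are hypothetical:

```python
import random

def importance_sampling_estimate(theta, a, a_bar, hyper, counts, T, b, N=20000, rng=random):
    """Steps 1-3: draw lam_i ~ Gamma(a_i + n_i, rate T_i(alpha) + b),
    weight each draw by h(lam0, lam1, lam2) = lam^(a - a_bar)
    * (lam0 + lam2)^n1 * (lam0 + lam1)^n2, and return the weighted mean."""
    n0, n1, n2 = counts
    num = den = 0.0
    for _ in range(N):
        lam = [rng.gammavariate(ai + ni, 1.0 / (Ti + b))
               for ai, ni, Ti in zip(hyper, counts, T)]
        h = (sum(lam) ** (a - a_bar)
             * (lam[0] + lam[2]) ** n1
             * (lam[0] + lam[1]) ** n2)
        num += theta(*lam) * h
        den += h
    return num / den

# Sanity check: with a == a_bar and n1 == n2 == 0 the weights are constant,
# so the estimate of lam0 should approach (a0 + n0) / (T0 + b) = 6 / 3 = 2.
random.seed(3)
est = importance_sampling_estimate(lambda l0, l1, l2: l0,
                                   a=3.0, a_bar=3.0, hyper=(1.0, 1.0, 1.0),
                                   counts=(5, 0, 0), T=(2.0, 2.0, 2.0), b=1.0)
```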
4.1.2 Credible Intervals
The HPD credible interval of θ = θ(λ0,λ1,λ2) is constructed by a similar method. For this purpose, we follow the algorithm below.
Algorithm:
Step 1: Compute the weights $w_i = h(\lambda_{0i},\lambda_{1i},\lambda_{2i})\big/\sum_{j=1}^{N} h(\lambda_{0j},\lambda_{1j},\lambda_{2j})$.

Step 2: Rearrange {(θ1,w1), ..., (θN,wN)} as {(θ(1),w(1)), ..., (θ(N),w(N))}, where θ(1) < ... < θ(N); the w(i) are not ordered but are associated with the θ(i).

Step 3: Compute the consistent Bayes estimator of θp as θ̂p = θ(Np), where Np is the integer satisfying $\sum_{i=1}^{N_p} w_{(i)} \le p < \sum_{i=1}^{N_p+1} w_{(i)}$.

Step 4: Construct 100(1−γ)% credible intervals of θ as (θ̂δ, θ̂δ+1−γ) for δ = w(1), w(1)+w(2), ..., $\sum_{i=1}^{N_\gamma} w_{(i)}$. Then a 100(1−γ)% HPD credible interval of θ is (θ̂δ*, θ̂δ*+1−γ), where δ* satisfies θ̂δ*+1−γ − θ̂δ* ≤ θ̂δ+1−γ − θ̂δ for all δ.
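A brute-force rendering of the idea behind Steps 1-4 — among intervals accumulating posterior mass at least 1−γ, keep the shortest — might look like this (our own helper, O(N²) as written):

```python
def hpd_interval(thetas, weights, gamma=0.05):
    """Shortest interval (theta_(i), theta_(j)) whose accumulated
    importance weights reach at least 1 - gamma."""
    pairs = sorted(zip(thetas, weights))
    n = len(pairs)
    best = None
    for i in range(n):                 # candidate lower endpoint
        mass = 0.0
        for j in range(i, n):
            mass += pairs[j][1]
            if mass >= 1.0 - gamma:    # interval now covers enough mass
                lo, hi = pairs[i][0], pairs[j][0]
                if best is None or hi - lo < best[1] - best[0]:
                    best = (lo, hi)
                break
    return best

# Equally weighted draws 0..127: the 95% HPD interval needs 122 of the
# 128 points, so the shortest interval starts at 0 and has width 121.
interval = hpd_interval(list(range(128)), [1.0 / 128] * 128)
```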
4.2 Common Shape Parameter α is Unknown
In this subsection, the parameter α is assumed to be unknown. Now, we have to calculate the joint posterior density function of λi, i = 0,1,2, and α using the prior distribution presented in Equation (3), as follows:
$$\ell(\lambda_0,\lambda_1,\lambda_2,\alpha\mid D_1) \propto \ell(\lambda_0,\lambda_1,\lambda_2\mid\alpha,D_1)\,\ell(\alpha\mid D_1),$$
where,
$$\begin{aligned}
\ell(\lambda_0,\lambda_1,\lambda_2\mid\alpha,D_1) &\propto \lambda^{a-\bar a}(\lambda_0+\lambda_2)^{n_1}(\lambda_0+\lambda_1)^{n_2}\\
&\quad\times \mathrm{Gamma}(\lambda_0;\,a_0+n_0,\,T_0(\alpha)+b)\\
&\quad\times \mathrm{Gamma}(\lambda_1;\,a_1+n_1,\,T_1(\alpha)+b)\\
&\quad\times \mathrm{Gamma}(\lambda_2;\,a_2+n_2,\,T_2(\alpha)+b),
\end{aligned}$$
and
$$\ell(\alpha\mid D_1) \propto \alpha^{n_0+2n_1+2n_2}\exp\Big\{\alpha\Big[\sum_{i\in I_1}(x_{1i}+x_{2i})+\sum_{i\in I_2}(x_{1i}+x_{2i})+\sum_{i\in I_0}x_i\Big]\Big\}\times\frac{\pi_2(\alpha)}{[T_0(\alpha)+b]^{n_0+a_0}[T_1(\alpha)+b]^{n_1+a_1}[T_2(\alpha)+b]^{n_2+a_2}}. \qquad (6)$$
The Bayes estimator under the squared error loss function is obtained as follows:
$$\hat\theta_{Bayes} = \frac{\int_0^\infty\!\int_0^\infty\!\int_0^\infty\!\int_0^\infty \theta(\lambda_0,\lambda_1,\lambda_2,\alpha)\,\ell(\lambda_0,\lambda_1,\lambda_2,\alpha\mid D_1)\,d\lambda_0\,d\lambda_1\,d\lambda_2\,d\alpha}{\int_0^\infty\!\int_0^\infty\!\int_0^\infty\!\int_0^\infty \ell(\lambda_0,\lambda_1,\lambda_2,\alpha\mid D_1)\,d\lambda_0\,d\lambda_1\,d\lambda_2\,d\alpha}. \qquad (7)$$
Therefore, it can be seen that Expression (7) cannot be evaluated analytically and no closed-form expression can be obtained. The importance sampling method can be used to calculate the Bayesian estimates and the corresponding HPD credible intervals.
Algorithm:
Step 1: Use the method proposed by [6] to generate αi from the log-concave density ℓ(α|D1).
Step 2: Generate λji | αi, D1 ~ Gamma(aj + nj, Tj(αi) + b), for j = 0,1,2 and i = 1,...,N.

Step 3: The Bayes estimate is then obtained as
$$\hat\theta_{Bayes} = \frac{\sum_{i=1}^{N}\theta_i\,h(\lambda_{0i},\lambda_{1i},\lambda_{2i})}{\sum_{i=1}^{N} h(\lambda_{0i},\lambda_{1i},\lambda_{2i})},$$
where $h(\lambda_{0i},\lambda_{1i},\lambda_{2i}) = \lambda_i^{a-\bar a}(\lambda_{0i}+\lambda_{2i})^{n_1}(\lambda_{0i}+\lambda_{1i})^{n_2}$ with $\lambda_i = \lambda_{0i}+\lambda_{1i}+\lambda_{2i}$, and $\theta_i = \theta(\alpha_i,\lambda_{0i},\lambda_{1i},\lambda_{2i})$.
The method presented for the case of known α can also be used to calculate the HPD credible intervals.
5 Simulation Studies and Data Analysis
In this section, we analyze a real data set. These data comprise the remaining lifetime information of 100 persons from a population of couples in the age range of 35-70 years at an insurance company in Tehran. For the purposes of this article, we first draw the TTT plots for the marginal distributions of the real data set; they are shown in Figure 1. As can be seen, both diagrams are concave, so it can be concluded that the marginal hazard functions are increasing. Another important observation is that the correlation between the marginals is positive.
For this data set, we assume that the parameter α has a Gamma prior distribution. As we have no information about the values of the hyper-parameters, we use a non-informative prior for the Bayesian estimation of the parameters. We use the importance sampling method to calculate the Bayesian parameter estimates. First, we need to generate observations from ℓ(α|D1) using the method of [6]. Also, the
Figure 1: The TTT plot for the real data set, with curves for X1 and X2 and the reference line Y = X.
histogram of the generated samples, as well as the posterior density function of α, are presented in Figure 2.

Figure 2: The histogram of the generated samples and the posterior density function of α.
Finally, the Bayes estimates of the unknown parameters with respect to the squared error loss function, as well as the 95% HPD credible intervals, are presented in Table 1. A common way to evaluate the goodness of fit is to calculate the Kolmogorov-Smirnov statistic and its corresponding p-value for the marginal and minimum distributions. To calculate these values, we use Proposition 2.1 of [10]. The results are presented in Table 2. Based on these results, it can be seen that the Gompertz distribution is suitable for the marginal and minimum distributions.
Table 1: The Bayes estimates of the unknown parameters and the 95% HPD credible intervals for real data set.
Parameters α λ0 λ1 λ2
Estimate 0.0853 0.3720 0.0334 0.0551
HPD (0.00805, 0.0944) (0.2869, 0.4148) (0.0093, 0.0687) (0.0296, 0.0752)
Table 2: The Kolmogorov-Smirnov (K-S) and the associated p-values for the marginals and their minimum in the real data set.
α λ K-S P-value
X1 0.0853 0.4054 0.1127 0.2369
X2 0.0853 0.4271 0.0952 0.4292
min{X1,X2} 0.0853 0.4605 0.1067 0.2830
6 Conclusions
In this paper, Bayesian estimation in dependent lives models was investigated based on the bivariate Gompertz distribution. For this purpose, a dependent prior distribution for the scale parameters and a prior distribution for the shape parameter were considered. Also, we assumed that the prior distribution on the shape parameter is independent of the joint prior on the λi. As seen above, the Bayes estimates were not explicit in this case; therefore, the importance sampling method was recommended for estimating the parameters. We described in detail the structure of the importance sampling method for calculating these estimates and the corresponding HPD credible intervals. Finally, one real data set was used to evaluate the performance of this method.
References
[1] Al-Khedhairi, A. and El-Gohary, A. (2008), A new class of bivariate Gompertz distributions and its mixture, International Journal of Mathematical
Analysis, 2, 235-253.
[2] Carriere, J.F. (2000), Bivariate survival models for coupled lives, Scandi-
navian Actuarial Journal, 2000, 17-32.
[3] Iyer, S.K. and Manjunath, D. (2004), Correlated bivariate sequences forqueueing and reliability applications, Communications in Statistics-Theory
and Methods, 33, 331-350.
[4] Jagger, C. and Sutton, C.J. (1991), Death after marital bereavement: is the risk increased?, Statistics in Medicine, 10, 395-404.
[5] Kotz, S., Balakrishnan, N. and Johnson, N.L. (2000), Continuous multi-
variate distributions, John Wiley and Sons, New York.
[6] Kundu, D. (2008), Bayesian inference and life testing plan for the Weibull distribution in presence of progressive censoring, Technometrics, 50, 144-154.
[7] Kundu, D. and Gupta, R.D. (2010), Modified Sarhan-Balakrishnan singular bivariate distribution, Journal of Statistical Planning and Inference, 140, 526-538.
[8] Marshall, A.W., and Olkin, I. (1967), A multivariate exponential distribu-tion, Journal of the American Statistical Association, 62, 30-44.
[9] Sarhan, A.M. and Balakrishnan, N. (2007), A new class of bivariate distri-butions and its mixture, Journal of Multivariate Analysis, 98, 1508-1527.
[10] Shoaee, S. and Khorram, E. (2019), Survival analysis for a new compounded bivariate failure time distribution in shock and competing risk models via an EM algorithm, Communications in Statistics-Theory and
Methods, https://doi.org/10.1080/03610926.2019.1614193
[11] Veenus, P. and Nair, K.R.M. (1994), Characterization of a bivariate paretodistribution, Journal of Indian Statistical Association, 32, 15-20.
A Note on the Cumulative Residual Entropy of Reliability Systems
Toomaj, A.1
1 Department of Mathematics and Statistics, Faculty of Basic Sciences andEngineering, Gonbad Kavous University, Gonbad Kavous, Iran
Abstract: In this paper, we present some results on the cumulative residual entropy of coherent systems when the lifetimes of the components are independent and identically distributed. We also obtain bounds for this measure, as well as some comparison results.
Keywords: Coherent System; Cumulative Residual Entropy; Dispersive Or-der; System Signature.
1 Introduction
One of the most important measures of uncertainty is the Shannon entropy [8], which plays an important role in information theory and in various areas of science such as probability and statistics and financial analysis; see, e.g., Cover and Thomas [1] for greater detail. Let X be an absolutely continuous nonnegative random variable with cumulative distribution function (CDF) F and probability density function (PDF) f. The Shannon differential entropy of X is defined as
$$H(X) = H(f) = -\int_0^\infty f(x)\log f(x)\,dx, \qquad (1)$$
where "log" stands for the natural logarithm. The Shannon entropy (1) measures the uniformity of a density function, and large values of it represent more
1Toomaj, A.: [email protected]
uncertainty in the PDF f and, consequently, low ability to predict the future outcome of the random variable X. Several generalizations of Shannon entropy have been developed and introduced in various disciplines and contexts. Recently, Rao et al. [5] introduced an alternative measure of uncertainty called the cumulative residual entropy (CRE), which is based on the survival function F̄(x) = 1 − F(x) in place of the PDF f(x) in Shannon's entropy (1). For a nonnegative random variable X with survival function F̄(x), the CRE of X is defined by
$$\mathcal{E}(X) = -\int_0^\infty \bar F(x)\log\bar F(x)\,dx = \int_0^1 \frac{\psi(u)}{f(F^{-1}(u))}\,du, \qquad (2)$$
where ψ(u) = −(1−u) log(1−u), 0 ≤ u ≤ 1, with the convention ψ(0) = ψ(1) = 0. The function F^{−1}(u) = inf{x : F(x) ≥ u} is known as the quantile function of F. The CRE is particularly suitable for describing information in problems related to ageing properties in reliability theory based on the mean residual life function, and is also of interest to the audience of reliability theory and related disciplines.
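Definition (2) is straightforward to check numerically; the sketch below integrates −F̄(x) log F̄(x) with a midpoint rule and recovers the known value E(X) = 1/λ for the exponential distribution (the truncation point and grid size are our own choices):

```python
import math

def cre_numeric(survival, upper, n=100000):
    """Approximate E(X) = -int_0^upper S(x) log S(x) dx (midpoint rule)."""
    h = upper / n
    total = 0.0
    for i in range(n):
        s = survival((i + 0.5) * h)
        if 0.0 < s < 1.0:
            total -= s * math.log(s) * h
    return total

# Exponential with rate lam: bar F(x) = exp(-lam * x), so E(X) = 1/lam.
lam = 2.0
cre_exp = cre_numeric(lambda x: math.exp(-lam * x), upper=20.0)
```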
2 General properties of dispersion measures
It is evident that (2) can be used for both continuous and discrete distributions. Moreover, for a degenerate distribution function FX for which X = c (a.s.), we have E(X) = 0, which parallels the standard deviation, i.e. σ(c) = 0. Therefore, E(X) can be used to measure whether X is close to a degenerate distribution (i.e. it is a dispersion measure). For continuous distributions, by contrast, the Shannon entropy is a measure of the disparity of the PDF f(x) from the uniform distribution. Furthermore, the CRE has the property E(aX + b) = aE(X) for all a > 0 and b ≥ 0. This is similar to the well-known property σ(aX + b) = aσ(X) of the standard deviation, whereas for the differential entropy we have H(aX + b) = H(X) + log a. Some examples of these measures are provided in Table 1.
Table 1: The cumulative residual entropy, standard deviation and Shannon differential entropy for some models.

| Model   | F̄(x)          | σ(X)                                      | H(X)                                   | E(X)              |
|---------|----------------|-------------------------------------------|----------------------------------------|-------------------|
| Pareto  | (β/(β+x))^α    | (β/(α−1))√(α/(α−2)), α > 2                | log(β/α) + (α+1)/α                     | αβ/(α−1)², α > 1  |
| Weibull | e^{−(λx)^α}    | √( Γ(1+2/α)/λ² − (Γ(1+1/α)/λ)² )          | log(β/α) + γ(α−1)/α + 1, with β = 1/λ  | Γ(1+1/α)/(αλ)     |
Figures 1 and 2 show that, in these models, there is a close relationship between the standard deviation and the cumulative residual entropy, whereas neither agrees closely with the differential entropy. Here, we obtain an upper bound
Figure 1: Standard deviation, differential entropy and CRE of Pareto family over the parameter space.
Figure 2: Standard deviation, differential entropy and CRE of Weibull family over the parameter space.
for the CRE of X in terms of its standard deviation, thus providing a suitable relation between the two concepts of dispersion measure; this result is given in Toomaj and Di Crescenzo [10].
Theorem 2.1. If X denotes an absolutely continuous nonnegative random vari-
able with standard deviation σ(X) and CRE function E (X), then
E (X)≤ σ(X). (3)
From Theorem 2.1 one can obtain a close relationship between the standarddeviation and CRE, i.e.
eCH(X) ≤ E (X)≤ σ(X), (4)
where, for C = exp{∫₀¹ log(x|log x|)dx} ≈ 0.2065, the first inequality is obtained from Rao et al. [5]. It is worth pointing out that the inequalities given in (4) involve three uncertainty measures. Moreover, the related inequality between the differential entropy and the standard deviation is similar to relation (2) of Ebrahimi et al. [2]. As is well known, the entropy is a measure of the disparity of the density function f(x) from the uniform distribution. On the other hand, the variance measures an average of the distances of the outcomes of the probability distribution f(x) from the mean. The CRE acts like the standard deviation, i.e. it is a dispersion measure, in spite of its similarity in form to the Shannon entropy. Although all three measures, i.e. entropy, standard deviation and CRE, are measures of dispersion and uncertainty, the lack of a simple relationship between the orderings of a distribution by the three measures derives from their quite substantial and subtle differences. All such measures reflect 'concentration', but their respective metrics for concentration are different.
Another useful result is given in the following theorem. We recall that X is increasing failure rate in average (IFRA) if −log F̄(x)/x is increasing in x > 0.
Theorem 2.2. Let X be an absolutely continuous non-negative random variable with PDF f and CDF F. If X is IFRA, then
(i) σ(X)≤ E[X ];
(ii) E (X)≤ E[X ];
(iii) H(X)≤ E[X ]+ γ,
where γ = 0.5772 · · · is the Euler constant.
Proof. We just prove Part (iii); Part (i) is well known. Since X is IFRA, −log F̄(x)/x is increasing in x > 0, i.e.
$$-\frac{\bar F(x)\log\bar F(x)}{x} \le f(x), \qquad x > 0. \qquad (5)$$
From (2) and (5), Part (ii) immediately follows. Recalling (1) and using (5), we have
$$H(X) \le \int_0^\infty f(x)\log(x)\,dx - \int_0^\infty f(x)\log\bar F(x)\,dx - \int_0^\infty f(x)\log(-\log\bar F(x))\,dx.$$
Using the probability integral transformation U = F(X), it can be verified that
$$-\int_0^\infty f(x)\log\bar F(x)\,dx = 1 \qquad\text{and}\qquad \int_0^\infty f(x)\log(-\log\bar F(x))\,dx = -\gamma.$$
By noting that log x ≤ x − 1 for all x > 0, the proof is then completed.
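As a quick numerical illustration of Theorem 2.2, take the exponential distribution with rate λ, which is IFRA (constant failure rate); its mean, standard deviation and CRE all equal 1/λ, and H(X) = 1 − log λ:

```python
import math

lam = 1.5
mean = 1.0 / lam                 # E[X]
sigma = 1.0 / lam                # standard deviation of Exp(lam)
cre = 1.0 / lam                  # CRE of Exp(lam), from Eq. (2)
entropy = 1.0 - math.log(lam)    # differential entropy of Exp(lam)
euler_gamma = 0.57721566490153286

ok_i = sigma <= mean                      # part (i), with equality here
ok_ii = cre <= mean                       # part (ii), with equality here
ok_iii = entropy <= mean + euler_gamma    # part (iii)
```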
3 Dispersion measures of coherent systems
A system is said to be coherent if it has no irrelevant components and its structure function is monotone. For example, the k-out-of-n:F system is a special coherent system which fails upon the failure of the k-th component. Recently, Samaniego's signature [6] has been widely used for the comparison of coherent systems. This measure is distribution free and depends only on the structure of the system when the lifetimes of the components of the system are independent and identically distributed (i.i.d.). More precisely, consider a coherent system with n i.i.d. component lifetimes X1, ..., Xn and corresponding order statistics X1:n, ..., Xn:n, and assume the component lifetimes are absolutely continuous with common CDF F. Let T denote the lifetime of the coherent system. The signature of the system is defined as the vector s = (s1, ..., sn), where si = P(T = Xi:n), i = 1, ..., n, is the probability that the system fails with the failure of the i-th component. Notice that si ≥ 0, and
$\sum_{i=1}^{n} s_i = 1$. For a coherent system with lifetime T, Samaniego [6] (see also [7]) proved that
$$\bar F_T(t) = P(T > t) = \sum_{i=1}^{n} s_i\,\bar F_{i:n}(t), \qquad (6)$$
where
$$\bar F_{i:n}(t) = \sum_{j=0}^{i-1}\binom{n}{j}[F(t)]^{j}[\bar F(t)]^{n-j}, \qquad t>0,\ 1\le i\le n, \qquad (7)$$
is the survival function of Xi:n. The density function of T is
$$f_T(t) = \sum_{i=1}^{n} s_i\,f_{i:n}(t),$$
where for 1≤ i≤ n
$$f_{i:n}(t) = \frac{\Gamma(n+1)}{\Gamma(i)\Gamma(n-i+1)}[F(t)]^{i-1}[\bar F(t)]^{n-i}f(t), \qquad t>0, \qquad (8)$$
stands for the density function of Xi:n, and Γ(·) is the complete gamma function.
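Equations (6)-(8) translate directly into code. The following sketch evaluates the system survival function from a signature vector; for a two-component series system (signature (1,0)) with standard exponential components it reproduces F̄_T(t) = e^{−2t}:

```python
import math

def system_survival(t, signature, cdf):
    """bar F_T(t) = sum_i s_i * bar F_{i:n}(t), with bar F_{i:n} from Eq. (7)."""
    n = len(signature)
    p = cdf(t)                        # component CDF at t
    total = 0.0
    for i, s in enumerate(signature, start=1):
        # survival function of the i-th order statistic
        bar_Fin = sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
                      for j in range(i))
        total += s * bar_Fin
    return total

F = lambda t: 1.0 - math.exp(-t)      # standard exponential components
series_surv = system_survival(0.3, (1.0, 0.0), F)    # series system, n = 2
```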
Let s = (s1, ..., sn) be the signature of the coherent system. The corresponding transformations of the component lifetimes, Ui = F(Xi), are i.i.d. random variables uniformly distributed on the interval [0,1]. It is well known that, for 1 ≤ i ≤ n, Ui:n = F(Xi:n) has the beta distribution with parameters i and n−i+1. The density of V = F(T) is $g_V(v) = \sum_{i=1}^{n} s_i\,g_{i:n}(v)$ for 0 < v < 1,
where
$$g_{i:n}(v) = \frac{\Gamma(n+1)}{\Gamma(i)\Gamma(n-i+1)}v^{i-1}(1-v)^{n-i}, \qquad 0\le v\le 1,$$
is the density of Ui:n. Note that the Jacobian of the transformation T = F−1(V )
is 1/ f (F−1(v)). Using the probability integral transformation V = F(T ),Toomaj and Doostparast [11] showed that the system’s entropy is
$$H(T) = H(V) - E[\log f(T)] = H(V) - \sum_{i=1}^{n} s_i\,E\big[\log f(F^{-1}(U_{i:n}))\big]. \qquad (9)$$
It is worth pointing out that Equation (9) expresses the entropy of T as the sum of two terms, both depending on the system signature, whereas only the second term depends on the distribution of the component lifetimes. Recently, Park and Kim [4] obtained some recurrence relations for the CRE of order statistics. This paper deals with the information properties of coherent systems from the perspective of the CRE measure; see Toomaj et al. [14] for
details. It is known that the survival function of Ui:n is given by
$$\bar G_{i:n}(u) = \sum_{j=0}^{i-1}\binom{n}{j}u^{j}(1-u)^{n-j}, \qquad 0\le u\le 1, \qquad (10)$$
for all 1 ≤ i ≤ n. The transformation V = F(T) has the survival function $\bar G_V(v) = \sum_{i=1}^{n} s_i\,\bar G_{i:n}(v)$, 0 ≤ v ≤ 1. From (2) and the aforementioned transformations, we have
$$\mathcal{E}(T) = -\int_0^\infty \bar F_T(t)\log\bar F_T(t)\,dt = \int_0^1 \frac{\psi(G_V(v))}{f(F^{-1}(v))}\,dv = \mathcal{E}(V) + \int_0^1 \psi(G_V(v))\,d\{F^{-1}(v)-v\}. \qquad (11)$$
As applications of Eqs. (9) and (11), we have the following example.
Example 3.1. Let s = (0, 2/3, 1/3, 0) be the signature of a coherent system consisting of n = 4 i.i.d. components having the common exponential distribution
$$F(x) = 1 - \exp(-\lambda x), \qquad \lambda > 0,\ x > 0. \qquad (12)$$
It is easy to see that f(F^{-1}(v)) = λ(1−v), and hence E(T) = 0.5568/λ. One can see that the CRE is decreasing with respect to λ; that is, the system's uncertainty in terms of the CRE decreases as the scale parameter λ increases. Moreover, noting that H(V) = 0.6137, we easily obtain H(T) = 0.6137 − log λ.
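The value E(T) = 0.5568/λ can be reproduced by integrating the first expression in (11) numerically, using f(F^{-1}(v)) = λ(1−v) (a sketch; the grid size is our own choice):

```python
import math

def psi(u):
    """psi(u) = -(1 - u) log(1 - u), with psi(0) = psi(1) = 0."""
    return -(1.0 - u) * math.log(1.0 - u) if 0.0 < u < 1.0 else 0.0

def cre_system_exp(signature, lam, n_grid=50000):
    """E(T) = int_0^1 psi(G_V(v)) / f(F^{-1}(v)) dv for i.i.d. exponential
    components, with f(F^{-1}(v)) = lam * (1 - v) and bar G_V(v) built from
    the order-statistic survival functions of Eq. (10)."""
    n = len(signature)
    h = 1.0 / n_grid
    total = 0.0
    for k in range(n_grid):
        v = (k + 0.5) * h
        bar_G = sum(s * sum(math.comb(n, j) * v**j * (1 - v)**(n - j)
                            for j in range(i))
                    for i, s in enumerate(signature, start=1))
        total += psi(1.0 - bar_G) / (lam * (1.0 - v)) * h
    return total

cre_T = cre_system_exp((0.0, 2.0 / 3.0, 1.0 / 3.0, 0.0), lam=1.0)
```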
Figure 3 shows the three dispersion measures with respect to the parameter λ.
Figure 3: Standard deviation, differential entropy and CRE of Example 3.1.
From (11), we immediately obtain the following proposition.
Proposition 3.2. Let T denote the lifetime of a coherent system with signature s, where the components have common CDF F. If F^{-1}(v) − v is increasing (decreasing) in v for all 0 < v < 1, then
$$\mathcal{E}(T) \ge (\le)\ \mathcal{E}(V). \qquad (13)$$
Another interesting application of (11) is the comparison of the CRE of coherent systems when the two systems have the same signature but different i.i.d. component lifetimes. Equation (11) yields the following theorem. First we recall that the random variable X is smaller than Y in the dispersive order, denoted by X ≤_d Y, if

G^{-1}(x) − F^{-1}(x) is nondecreasing in x ∈ (0,1),
where F−1 and G−1 are right continuous inverses of F and G, respectively.
Theorem 3.3. Let T^X be the lifetime of a coherent system based on i.i.d. components with lifetimes X1, ..., Xn having a common CDF F, and let T^Y be the lifetime of a coherent system based on i.i.d. components with lifetimes Y1, ..., Yn having a common CDF G, such that both systems have the same signature. If X ≤_d Y, then
(i) E(T^X) ≤ E(T^Y);

(ii) σ(T^X) ≤ σ(T^Y);

(iii) H(T^X) ≤ H(T^Y).
Proof. (i) First notice that
$$\mathcal{E}(T^Y) - \mathcal{E}(T^X) = \int_0^1 \psi(G_V(v))\,d\big[G^{-1}(v) - F^{-1}(v)\big]. \qquad (14)$$
Since X ≤_d Y, G^{-1}(v) − F^{-1}(v) is increasing in v ∈ (0,1), which completes the proof.
(ii) The proof can be easily obtained from Theorem 2.9 of Navarro et al. [3].
(iii) The proof is given in Toomaj et al. [13].
Another interesting result is given in the following theorem.
Theorem 3.4. Under the conditions of Theorem 3.3, if X ≤d Y, and if
$$\mathcal{E}(T^X) = \mathcal{E}(T^Y), \qquad (15)$$
then X and Y have the same distribution up to a location parameter.
Proof. If (15) holds, then the integral in Eq. (14) equals zero. Since X ≤_d Y, we know that G^{-1}(v) − F^{-1}(v) is an increasing function of v. Now, we claim that G^{-1}(v) − F^{-1}(v) = k (constant) for all 0 ≤ v ≤ 1. Suppose, by contradiction, that there exists an interval (a,b) ⊂ [0,1] on which G^{-1}(v) − F^{-1}(v) is not constant. Then
$$0 = \int_0^1 \psi(G_V(v))\,d\big[G^{-1}(v)-F^{-1}(v)\big] \ge \int_a^b \psi(G_V(v))\,d\big[G^{-1}(v)-F^{-1}(v)\big] > 0,$$
a contradiction. Therefore G^{-1}(v) − F^{-1}(v) = k (constant) for all 0 ≤ v ≤ 1, and this means that X and Y have the same distribution up to a location parameter.
It is known that the IFRA class is closed under the formation of coherent systems, in the sense that the lifetime of the system is IFRA when the component lifetimes are IFRA. From Theorem 2.2, we immediately obtain the following result.
Theorem 3.5. Let T denote the lifetime of a coherent system with signature s having the common CDF F. If X is IFRA, then
(i) σ(T )≤ E[T ];
(ii) E (T )≤ E[T ];
(iii) H(T )≤ E[T ]+ γ,
where γ = 0.5772 · · · is the Euler constant.
References
[1] Cover, T.A. and Thomas, J.A. (2006), Elements of Information Theory.New Jersey: Wiley and Sons, Inc.
[2] Ebrahimi, N., Maasoumi, E. and Soofi, E.S. (1999), Ordering univariatedistributions by entropy and variance. Journal of Econometrics 90, 317–336.
[3] Navarro, J., del Aguila, Y., Sordo, M.A. and Suarez-Llorens, A. (2013),Stochastic ordering properties for systems with dependent identically dis-tributed components. Appl. Stochastic Models Bus. Ind. 29, 264–278.
[4] Park, S. and Kim, I. (2014), On cumulative residual entropy of order statis-tics. Statist. Probab. Lett. 94, 170–175.
[5] Rao, M., Chen, Y., Vemuri, B. and Fei, W. (2004), Cumulative residualentropy: a new measure of information, IEEE Trans. Inform. Theory. 50,1220–1228.
[6] Samaniego, F.J. (1985), On closure of the IFR class under formation of coherent systems. IEEE Trans. Reliab. 69-72.
[7] Samaniego, F.J. (2007), System Signatures and their Applications in Engi-
neering Reliability. Springer Science+Business Media, LLC, New York.
[8] Shannon, C.E. (1948), A mathematical theory of communication, Bell
System Tech. J. 27, 379-423 and 623-656.
[9] Taneja, H. and Kumar, V. (2012), On dynamic cumulative residual inac-curacy measure. Proceedings of the World Congress on Engineering. I,153–156.
[10] Toomaj, A. and Di Crescenzo, A. (2019), Generalized entropies, varianceand applications. Advances in Applied Probability. submitted.
[11] Toomaj, A. and Doostparast, M. (2014), A note on signature based ex-pressions for the entropy of mixed r-out-of-n systems. Naval Res. Logist.
61, 202-206.
[12] Toomaj, A. and Doostparast, M. (2014), On the Kullback-Leibler infor-mation for mixed systems. Internat. J. Systems Sci. 47(10), 2458–2465.
[13] Toomaj, A., Di Crescenzo, A. and Doostparast, M. (2018), Some resultson information properties of coherent systems. Appl. Stochastic Models
Bus. Ind. 34, 128–143.
[14] Toomaj, A., Sunoj, S.M., and Navarro, J. (2017), Some properties ofthe cumulative residual entropy of coherent and mixed systems. J. Appl.
Probab. 54, 379-393.
Efficient Estimation of Parameters of the Generalized ExponentiatedDistribution Under Randomly Right Censored Data
Torkaman, P.1
1 Department of Statistics, Faculty of Mathematics and Statistics, Universityof Malayer, Malayer, Iran
Abstract: The generalized exponential distribution, introduced as an alternative to the gamma and Weibull distributions, has useful applications in reliability and survival studies. In this paper, we compare the maximum likelihood estimator (MLE), the approximate maximum likelihood estimator (AMLE) and the approximate maximum likelihood jackknife estimator (AMLJE) of the parameters of the generalized exponential distribution in the case of randomly right censored data. The performance of the MLE, AMLE and AMLJE is compared by a simulation study, which shows that the AMLE and AMLJE behave better than the MLE when the proposed model is misspecified, and not otherwise.
Keywords: Generalized Exponentiated Distribution, Approximate MaximumLikelihood, Right Censored Data.
1 Introduction
The generalized exponential (GE) distribution was proposed by Gupta and Kundu (1999) as a special case of the Gompertz-Verhulst function. The GE distribution has a right-skewed unimodal density function, and the hazard rate of this distribution can be increasing, decreasing or constant depending on the
1Torkaman, P.: [email protected]
shape parameter α. It is observed that it can be used to analyze lifetime data in place of the gamma, Weibull and log-normal distributions. Gupta and Kundu (2001a) mentioned that the two-parameter GE distribution could be used quite effectively in analyzing many lifetime data. The GE distribution has been studied extensively by Gupta and Kundu (2001b), Raqab (2005), Raqab and Madi (2005), Alamm et al. (2007), Gupta and Kundu (2007), Kundu and Gupta (2008), Mitra and Kundu (2008), Madi and Raqab (2009), and Wong and Wu (2009). The two-parameter GE distribution has probability density, cumulative distribution and hazard functions as follows:
$$f(x;\alpha,\lambda) = \alpha\lambda(1-e^{-\lambda x})^{\alpha-1}e^{-\lambda x}, \qquad x>0,$$
$$F(x;\alpha,\lambda) = (1-e^{-\lambda x})^{\alpha}, \qquad x>0,$$
and
$$h(x;\alpha,\lambda) = \frac{\alpha\lambda(1-e^{-\lambda x})^{\alpha-1}e^{-\lambda x}}{1-(1-e^{-\lambda x})^{\alpha}}, \qquad x>0,$$
where α > 0 is the shape parameter and λ > 0 is the scale parameter. When α = 1, the GE distribution reduces to the exponential distribution. If α < 1 the density function is decreasing, and if α > 1 the density function is unimodal.
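A minimal sketch of these three functions (note that for α = 1 the hazard reduces to the constant λ):

```python
import math

def ge_pdf(x, alpha, lam):
    """GE density f(x; alpha, lam)."""
    return alpha * lam * (1.0 - math.exp(-lam * x)) ** (alpha - 1.0) * math.exp(-lam * x)

def ge_cdf(x, alpha, lam):
    """GE distribution function F(x; alpha, lam)."""
    return (1.0 - math.exp(-lam * x)) ** alpha

def ge_hazard(x, alpha, lam):
    """GE hazard h = f / (1 - F)."""
    return ge_pdf(x, alpha, lam) / (1.0 - ge_cdf(x, alpha, lam))
```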
Since we do not always face complete data, as in industrial life testing or medical survival analysis, it is also important to study parameter estimation for incomplete data. Among incomplete data, we can mention randomly right censored data, which are of high standing because they save time and money and have vast applications in life tests, survival analysis and reliability theory. Chen and Lio (2010) compared the performance of methods for estimating the parameters of the GE distribution based on the mean squared error (MSE) under progressive type-I interval censoring. Sarhan (2007) developed an inference process for a competing risk model based on an incomplete sample from the GE distribution, and Pradhan and Kundu (2008) obtained a statistical inference for
a progressively censored sample. Although a considerable number of studies have been made on parametric estimation for censored data, little attention has been given to misspecification of the parametric model. Our main concern is the parametric estimation of the GE distribution under misspecification. This idea of parametric estimation based on censored data was first proposed by Oakes (1986) and is referred to as the approximate maximum likelihood procedure. In parametric estimation, the Kullback-Leibler information is used as a measure of the divergence of a true distribution relative to the proposed parametric model.
2 Main results
Suppose that X1, X2, ..., Xn are i.i.d. random variables from an unknown distribution H(x) with probability density h(x). Parametric inference is carried out within an assumed parametric family of densities A = {f(x,λ), λ ∈ Λ}. If A contains h, there exists λ0 ∈ Λ such that h(x) = f(x,λ0); λ0 is called the true parameter value and the proposed model is well specified. Otherwise, the proposed model is misspecified. When h(x) is not contained in A, we can obtain the f(x,λ) nearest to the true density h(x) via the Kullback-Leibler information. This means that a purpose of the MLE is to find a parameter λ
which minimizes the Kullback-Leibler information
$$KL\big(h(\cdot), f(\cdot,\lambda)\big) = \int h(x)\log\frac{h(x)}{f(x,\lambda)}\,dx, \qquad (1)$$
which is a measure of the divergence of h(x) relative to f(x,λ). Under suitable regularity conditions, the maximum likelihood estimator (MLE), defined as a value of λ ∈ Λ obtained by differentiating the logarithm of the likelihood function, converges to λ0, the true parameter from which the data are generated, which is the parameter value minimizing (1). In the analysis of lifetime data, an important problem is the censoring of observations. For i = 1, ..., n, suppose that Xi and Yi are random variables representing the lifetime and the censoring time of the i-th individual, respectively. In
lifetime data analysis, Xi and Yi are not both observed. We can only observe
$$(Z_i,\delta_i) = \big(\min(X_i,Y_i),\, I(X_i \le Y_i)\big),$$
where I(B) denotes the indicator function of the set B. The set of observations (Zi,δi), i = 1,...,n, is called randomly right censored data in survival and reliability theory. Note that the Xi's are independent of the Yi's, and Y1, Y2, ..., Yn are i.i.d. from an unknown distribution G(y) with probability density function g(y). The Kaplan-Meier estimator of the lifetime distribution is
$$\hat F_n(x) = 1 - \prod_{i=1}^{n}\Big[1 - \frac{\delta_i}{n-i+1}\Big]^{I(Z_{(i)}\le x)},$$
where Z(1) ≤ Z(2) ≤ ... ≤ Z(n) are the ordered values of the Zi and δi denotes the concomitant associated with Z(i). In the uncensored case the Kaplan-Meier estimator F̂n(x) coincides with the empirical distribution. When the parametric model A is assumed for the distribution of Xi, the log-likelihood function is given by
Lfn(λ) = ∑_{i=1}^{n} {δi log f(Zi, λ) + (1 − δi) log F̄(Zi, λ)},   (2)

where F̄(z, λ) = ∫ I(u > z) f(u, λ) du is the survival function of the model. The maximum likelihood estimator is an element λn ∈ Λ which attains the maximum of Lfn(λ) in Λ. When data are complete, the MLE is a consistent estimator of the parameter value minimizing (1). Under random censorship, however, λn is not a suitable estimator when A does not contain h. Oakes (1986) introduced the approximate maximum likelihood estimator (AMLE) for parametric estimation based on censored data. Therefore, we consider another estimator λ*n, defined as an element of Λ which maximizes
Lf*n(λ) = n ∫ log f(x, λ) dF̂n(x).   (3)

When all Xi's are observable, the log-likelihood function can be expressed as

∑_{i=1}^{n} log f(Xi, λ) = n ∫ log f(x, λ) dFn(x),

where Fn is the empirical distribution function.
Thus Lf*n(λ) is a natural extension to censored data, in the sense that the empirical distribution Fn is replaced by the Kaplan-Meier estimator F̂n. In the case of complete data, Lf*n(λ) = Lfn(λ) and therefore λ*n = λn.
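As a concrete illustration, the jump weights of the Kaplan-Meier estimator, against which Lf*n(λ) integrates, can be computed as in the following Python sketch (the function name and implementation details are ours, given for illustration only):

```python
import numpy as np

def km_weights(z, delta):
    """Kaplan-Meier jump weights W_i at the ordered observations
    Z_(1) <= ... <= Z_(n), with delta_(i) the concomitant censoring
    indicators: W_i = delta_(i)/(n-i+1) * prod_{j<i} ((n-j)/(n-j+1))^delta_(j)."""
    z = np.asarray(z, dtype=float)
    delta = np.asarray(delta, dtype=int)
    order = np.argsort(z)              # sort, carrying the censoring flags along
    d = delta[order]
    n = len(z)
    w = np.empty(n)
    prod = 1.0                         # running product over j < i
    for i in range(1, n + 1):
        w[i - 1] = d[i - 1] / (n - i + 1) * prod
        prod *= ((n - i) / (n - i + 1)) ** d[i - 1]
    return z[order], w

# Without censoring, every weight is the empirical mass 1/n:
zs, w = km_weights([2.0, 0.5, 1.3, 3.1], [1, 1, 1, 1])
```

With no censoring the weights reduce to 1/n, which is exactly the sense in which (3) extends the complete-data log-likelihood; when the largest observation is censored, the weights sum to less than one, reflecting the Kaplan-Meier mass remaining beyond Z(n).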
Torkaman, P. 296
Stute and Wang (1993) proved the strong law of large numbers for Kaplan-Meier integrals. The following theorem is shown using the assumptions of Suzukawa et al. (2001). We begin with the following assumptions:

(A1) The parameter space Λ is an open interval in R.

(A2) f(x, λ) is continuous in λ for almost every x.

(A3) All probability density functions in the model A have the same support.

(A4) ∫ h(x) log f(x, λ) dx, as a function of λ, has a maximum at λ*0.

(A5) For any λ ≠ λ*0, there exist d(λ) > 0 and a function hλ(x) with ∫ h(x)|hλ(x)| dx < ∞, such that

sup_{λ′: |λ′ − λ| < d(λ)} log [f(X, λ′)/f(X, λ*0)] < hλ(X).

(A6) For a sufficiently large K > 0, there exists a function h0(x) with ∫ h(x) h0(x) dx < 0 such that

sup_{λ′: |λ′ − λ*0| > K} log [f(X, λ′)/f(X, λ*0)] < h0(X).

All of the above assumptions are independent of G, the distribution of the censoring variable.

(A7) τH ≤ τG, for τH = inf{x : H(x) = 1} and τG = inf{y : G(y) = 1}.
Theorem 2.1. Under conditions (A1)-(A7), the AMLE λ*n converges to λ*0 in probability as n → ∞.
Here λ*0 gives the density function in A nearest to the true density function. Suzukawa et al. (2001) established consistency and asymptotic normality of the AMLE under misspecification of the proposed model: the AMLE converges in probability to λ*0, which is the parameter value minimizing (1). Hence, if the proposed model is well-specified and the data are uncensored, λ0 = λ*0.

Mauro (1985) and Stute (1994) pointed out that, for every integrable ϕ,

∫ ϕ(x) dF̂n(x)

has a non-negligible bias as an estimator of ∫ ϕ(x) h(x) dx. To reduce this bias, they proposed

∫ ϕ(x) dF̂n(x) + Kn ϕ(Z(n))

as an estimator of ∫ ϕ(x) h(x) dx, where
Kn = ((n − 1)/n) δ(n−1) (1 − δ(n)) ∏_{j=1}^{n−2} ((n − 1 − j)/(n − j))^{δ(j)}.
They showed that this estimator has smaller bias than ∫ ϕ(x) dF̂n(x) for ϕ(x) = x. We therefore consider the estimator λ*JK_n which attains the maximum of Lf*JK_n(λ) in Λ, where

Lf*JK_n(λ) = Lf*n(λ) + n Kn log f(Z(n), λ).   (4)
We now compare the mentioned estimators through a simulation study for the GE distribution.
Example 2.2. We assume that the Xi's follow the GE (generalized exponential) distribution with density

h(x) = αλ (1 − e^{−λx})^{α−1} e^{−λx},   x > 0,

and that the censoring times Yi, drawn independently of the Xi's, have density

g(y) = 2λ (1 − e^{−λy}) e^{−λy}.

The proposed model is A = {f(x, λ) = λ e^{−λx}, λ > 0}. The MLE, AMLE and AMLJE, obtained by maximizing (2), (3) and (4), respectively, are
λn = ∑_{i=1}^{n} δi / ∑_{i=1}^{n} Zi,

λ*n = 1 / ∑_{i=1}^{n} Wi Z(i),

λ*JK_n = (1 + Kn) / (Kn Z(n) + ∑_{i=1}^{n} Wi Z(i)),

where Wi = [δ(i)/(n − i + 1)] ∏_{j=1}^{i−1} ((n − j)/(n − j + 1))^{δ(j)}.
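Given these closed forms, all three estimates can be computed from a censored sample in a few lines. The Python sketch below is ours; in particular, the middle factor of Kn is our reading of the jackknife constant (a factor δ(n)(1 − δ(n)) would vanish identically for a 0-1 indicator, so we take it as δ(n−1)(1 − δ(n)) and flag this as an assumption):

```python
import numpy as np

def censored_exp_estimators(z, delta):
    """MLE, AMLE and AMLJE of the exponential rate lambda from randomly
    right-censored data (z_i = min(x_i, y_i), delta_i = I(x_i <= y_i))."""
    z = np.asarray(z, dtype=float)
    delta = np.asarray(delta, dtype=int)
    n = len(z)
    order = np.argsort(z)
    zs, d = z[order], delta[order]

    # Kaplan-Meier jump weights W_i at the ordered observations
    w = np.empty(n)
    prod = 1.0
    for i in range(1, n + 1):
        w[i - 1] = d[i - 1] / (n - i + 1) * prod
        prod *= ((n - i) / (n - i + 1)) ** d[i - 1]

    # Jackknife constant K_n: nonzero only if the largest observation is censored.
    # Assumption: middle factor read as delta_(n-1) * (1 - delta_(n)).
    j = np.arange(1, n - 1)
    kn = (n - 1) / n * d[-2] * (1 - d[-1]) * np.prod(((n - 1 - j) / (n - j)) ** d[:n - 2])

    mle = delta.sum() / z.sum()                        # lambda_n
    amle = 1.0 / np.sum(w * zs)                        # lambda*_n
    amlje = (1 + kn) / (kn * zs[-1] + np.sum(w * zs))  # lambda*JK_n
    return mle, amle, amlje
```

On complete data the weights are all 1/n and Kn = 0, so the three estimators coincide with n/∑Zi, matching the remark above that λ*n = λn in the uncensored case.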
We compare these estimators for the GE distribution by means of a Monte Carlo simulation, deriving the mean squared errors (MSEs) of the three estimators under right censorship. The MSEs are computed from one thousand replications of samples of size n = 50, 100, 150, 200 from the GE distribution with parameters α = 0.4, 1, 1.6 and 2.7 and fixed λ = 0.33; the results appear in Table 1. It is worth mentioning that the inversion method is used to generate samples from the GE distribution, i.e. each sample value is obtained by solving the equation

(1 − e^{−λx})^α − u = 0,

where u ∼ U(0, 1), using the "uniroot" function in R. The smallest MSE in each case is written in bold. From Table 1 we see that for α = 1 (where the proposed model is well-specified) the MLE is better than the AMLE and AMLJE, whereas when α is far from one the proposed model is misspecified and the AMLE and AMLJE attain the smallest MSE values, so they are better than the MLE. Moreover, the AMLJE is better than the AMLE under heavy censorship; in other words, when the censoring probability is large, the AMLJE is best.
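The inversion step above can also be coded directly: solving (1 − e^{−λx})^α = u gives x = −log(1 − u^{1/α})/λ in closed form, so a numerical root-finder such as R's uniroot is convenient but not strictly required. A minimal Python sketch (function name ours):

```python
import numpy as np

def sample_ge(n, alpha, lam, rng):
    """Draw n variates from the GE distribution with cdf (1 - exp(-lam*x))**alpha
    by inversion of a uniform sample."""
    u = rng.uniform(size=n)
    # invert (1 - exp(-lam*x))**alpha = u for x; log1p keeps precision for small u
    return -np.log1p(-u ** (1.0 / alpha)) / lam

rng = np.random.default_rng(0)
x = sample_ge(1000, alpha=2.7, lam=0.33, rng=rng)   # alpha = 1 gives Exp(lam)
```

For α = 1 this reduces to sampling the exponential distribution with rate λ, i.e. the well-specified case of Table 1.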
Table 1: Simulation results for MSEs of the MLE, the AMLE and the AMLJE, λ = 0.33

          α         0.4      1        1.6      2.7
  n=50    λn        0.800    0.697    0.797    0.712
          λ*n       0.602    0.808    0.674    0.799
          λ*JK_n    0.529    0.791    0.642    0.588
  n=100   λn        0.589    0.499    0.605    0.696
          λ*n       0.623    0.722    0.554    0.503
          λ*JK_n    0.559    0.763    0.644    0.655
  n=150   λn        0.503    0.367    0.601    0.633
          λ*n       0.351    0.477    0.387    0.308
          λ*JK_n    0.236    0.709    0.402    0.347
  n=200   λn        0.569    0.333    0.598    4.052
          λ*n       0.811    0.856    0.566    0.502
          λ*JK_n    0.364    0.507    0.368    0.315
3 Conclusion
In this paper we argued that the proposed model must be checked carefully in the analysis of censored data. The results of this study show that if the values of the MLE and the AMLE differ significantly for large n, there is strong evidence of misspecification.
References
[1] Chen, D.G. and Lio, Y.L. (2010), Generalized exponential distributions: Different methods of estimations, Computational Statistics and Data Analysis, 54, 1581-1591.

[2] Gupta, R.D. and Kundu, D. (1999), Generalized exponential distributions, Australian and New Zealand Journal of Statistics, 41, 173-188.

[3] Gupta, R.D. and Kundu, D. (2001a), Exponentiated exponential family: An alternative to gamma and Weibull distributions, Biometrical Journal, 43, 117-130.

[4] Gupta, R.D. and Kundu, D. (2001b), Generalized exponential distributions: Different methods of estimations, Journal of Statistical Computation and Simulation, 69, 315-338.

[5] Gupta, R.D. and Kundu, D. (2007), Generalized exponential distributions: Existing result and some recent developments, Journal of Statistical Planning and Inference, 137, 3537-3547.

[6] Mauro, D. (1985), A combinatoric approach to the Kaplan-Meier estimation, Annals of Statistics, 13, 142-149.

[7] Oakes, D. (1986), An approximate likelihood procedure for censored data, Journal of Statistical Planning and Inference, 134, 350-372.
[8] Pradhan, B. and Kundu, D. (2008), On progressively censored generalized exponential distribution, Test, doi:10.1007/s11749-008-0110-1.

[9] Raqab, M.Z. and Madi, M.T. (2001), Estimation of the location and scale parameters of generalized exponential distribution based on order statistics, Journal of Statistical Computation and Simulation, 69, 109-124.

[10] Raqab, M.Z. and Madi, M.T. (2005), Bayesian inference for the generalized exponential distribution, Journal of Statistical Computation and Simulation, 75, 841-852.

[11] Sarhan, A.M. (2007), Analysis of incomplete, censored data in competing risks models with generalized exponential distributions, IEEE Transactions on Reliability, 56(1), 132-138.

[12] Stute, W. and Wang, J.L. (1993), The strong law under random censorship, Annals of Statistics, 21, 1591-1607.

[13] Suzukawa, A., Imai, H. and Sato, Y. (2001), Kullback-Leibler information consistent estimation for censored data, Annals of the Institute of Statistical Mathematics, 53, 262-276.

[14] Zheng, G. (2002), Fisher information matrix in type-II censored data from exponentiated exponential family, Biometrical Journal, 44, 353-357.