Transcript of GWDGwebdoc.sub.gwdg.de/ebook/e/2000/mathe-goe/Aomar3.pdf

PREDICTION INTERVALS FOR FUTURE ORDERED OBSERVATIONS IN DOUBLY TYPE-II CENSORED

SAMPLES FROM A TWO-PARAMETER EXPONENTIAL DISTRIBUTION

Dissertation

for the attainment of the doctoral degree

of the Faculty of Mathematics

of the Georg-August-Universität in Göttingen

by

Adel Mohamed Mahmoud Omar

from

Kena, Egypt

Göttingen 2000


D 7
Referee: Prof. Dr. Krengel
Co-referee: Prof. Dr. Denker
Date of the oral examination: 27 April 2000


CONTENTS

Acknowledgements
Introduction 1

Chapter one

Methods and previous results on prediction

1-1 Definitions 3
1-2 Methods of construction of prediction intervals 4
(i) Using a pivotal quantity 4
(ii) Using a maximum likelihood estimator 4
(iii) Using a conditioning sufficient statistic 4
(iv) Using the Bayesian approach to prediction 5
1-3 Prediction intervals for some distributions 6
(1) Normal distribution 6
(2) Weibull (type-I extreme value) 9
(3) Gamma distribution 10
(4) Inverse Gaussian distribution 13
(5) Binomial distribution 14
(6) Exponential distribution 15

Chapter two

Prediction intervals using a classical approach

2-1 Censored samples 17
2-1-1 Type-I censoring 17
2-1-2 Type-II censoring 17
2-2 Prediction intervals 17
2-3 Special cases 26
2-4 Applications of the results 28
2-5 Program description for calculating the factor 29
2-6 Tables of the values of the vectors 31


Chapter three

Estimation for the two-parameter exponential distribution under doubly type-II censoring

3-1 Estimation for the two-parameter exponential distribution 58
(i) The lower confidence bound for the parameter σ 58
(ii) The lower confidence bound for the parameter μ 61

Chapter four

Prediction intervals for future observations using a maximum likelihood estimator

4-1 Maximum likelihood estimate 63
4-2 Prediction using a maximum likelihood estimator 65
4-3 Prediction intervals using a maximum likelihood estimator 68
4-4 Special cases 70

Chapter five

The Bayesian prediction

5-1 Prediction intervals for a future observation 74
5-2 Special cases 78
References 80


ACKNOWLEDGEMENTS

I would like to express my sincere appreciation and gratitude to my supervisor, Prof. Dr. Krengel, for his valuable help during the preparation of this thesis. My sincere thanks go also to Prof. Dr. Denker for his kind help, and to my colleagues at the Institut für Mathematische Stochastik for their useful help during the course of this project.

Adel M.M.Omar


INTRODUCTION

This project deals with the following prediction problem:
Let X_1, X_2, ..., X_n be independent random variables with a two-parameter exponential distribution. In other words, the X_i have the density function

f(x | μ, σ) = (1/σ) exp(−(x − μ)/σ),    x ≥ μ,

where μ ∈ R and σ > 0 are unknown.
Let X(1), X(2), ..., X(n) be the corresponding order statistics, i.e. the values of the random variables arranged in increasing order. We assume that, for some k and r with 1 ≤ k < r < n, the random variables X(i) with k ≤ i ≤ r are observed. On the basis of the observations X(k), X(k+1), ..., X(r) we have to determine an interval which contains the value of X(s), for some s > r, with a specified probability. X(i) might be the time of the i-th failure of some device, or the time when the i-th animal in a life-time test dies. We speak of type-I censoring if only those values of X(i) are observed which are ≤ some fixed T > 0. We speak of type-II censoring if the experiment is continued until r animals have died. If, in addition, the life times X(1), X(2), ..., X(k−1) are unknown, we speak of doubly type-II censoring.
This thesis consists of five chapters.
In the first chapter we begin by describing some methods that have been used to derive prediction intervals (pivotal method, maximum likelihood estimation, sufficient statistics, the Bayesian approach). We also report known results for some important distributions (normal, Weibull/extreme value, gamma, inverse Gaussian, binomial, exponential).
In the second chapter we use a classical approach based on the statistic

T = Σ_{i=k+1}^{r} X(i) + (n − r) X(r) − (n − k) X(k).

We determine a prediction interval for X(s) of the form [X(r), X(r) + uT] by computing the relevant distributions. The percentage points of the distribution are tabulated for various values of k, r, s and n.
In the third chapter we estimate the parameters μ and σ for a doubly type-II censored sample.
In the fourth chapter we derive maximum likelihood estimates based


on doubly type-II censored samples. With these estimated parameter values we determine a maximum likelihood predictor.
In the last chapter we study the Bayesian approach when the prior distribution for the parameter is given by a two-parameter gamma density. We obtain a predictive density for X(s) and also a prediction interval for X(s).


CHAPTER ONE

METHODS AND PREVIOUS RESULTS ON PREDICTION

A prediction interval (PI) is an interval which uses the results of a past sample and contains the results of a future sample from the same population with a specified probability. A prediction interval can take several forms. These include intervals to predict a future observation, to predict the mean of a future sample from the same population, or to simultaneously predict the ranges of k future samples from the same population. Prediction intervals can be very useful in solving statistical problems arising in quality control and business. One of the earliest papers on prediction intervals is that by Baker (1935). Baker considered the following problem: if, in an experimental science, a series of observations is made, how much could a similar series of future observations be expected to differ from the set of observations now available? Baker derived the probability function of the deviation of the mean of a future sample from the mean of a first sample, measured in terms of the standard deviation of the first sample. A large number of papers on prediction intervals have appeared in the literature: Epstein and Sobel (1953, 1954), Chew (1968), Hahn and Nelson (1973), Faulkenberry (1973), Lingappaiah (1974), Engelhardt (1975), Fertig and Mann (1977), Lawless (1971, 1977), Engelhardt and Bain (1982), Chhikara and Guttman (1982), Patel (1989) and Patel and Bain (1991).

1-1 Definitions
Let a random vector X be observed. We want to make a prediction about the value of a random variable Y which is unobserved. The joint distribution of X and Y depends on a parameter θ which, in general, will be unknown. If we assign to each value x of X a measurable subset B(x) of the range of Y such that

(1.1.1)    P_θ(Y ∈ B(x) | X = x) ≥ 1 − α

holds for all θ and all x, then B(·) is called a prediction set for Y at the level 1 − α. In particular, if Y is real valued and each B(x) is an interval, then B(·) is called a prediction interval.
Our main concern is with the following situation:
Let X_1, X_2, ..., X_n be independent real valued random variables with a distribution function F(·; θ).


Let X(1), X(2), ..., X(n) be the order statistics for X_1, X_2, ..., X_n. Let X be the random vector X = (X(k), X(k+1), ..., X(r)) and let Y be the random variable X(s), where 1 ≤ k < r < s ≤ n are integers.

1-2 Methods of construction of prediction intervals
(i) Using a pivotal quantity
This method is very commonly used in practice. Let Q(X, Y) be a function of statistics based on the past and future samples which does not depend on θ functionally and whose distribution also does not depend on θ. Let q_{α1} and q_{1−α2} be the lower percentage points of the distribution of the pivotal quantity Q(X, Y), where α = 1 − α1 − α2. Let U1(X) and U2(X) be two statistics based on the observed sample, and let U3(Y) be some statistic based on the unobserved (future) sample. If the event q_{α1} < Q(X, Y) < q_{1−α2} can be inverted to U1(X) < U3(Y) < U2(X), then we have an exact 100α% prediction interval for U3(Y).

(ii) Using a maximum likelihood estimator
Our principal application is the prediction of higher order statistics from lower ones in type-II censored random samples. Let X_1, X_2, ..., X_n be independent, identically distributed real valued random variables with density function f(x; θ), θ ∈ Θ, and let X(1), X(2), ..., X(n) be the order statistics of these variables. Let the random vector X = (X(k), X(k+1), ..., X(r)) be observed, and let Y be the unobserved (future) random variable X(s), where 1 ≤ k < r < s ≤ n. We proceed as follows: we find θ*, the maximum likelihood estimate of θ. For this value θ* we take the maximum likelihood predictor x*(s); in other words, x*(s) maximizes the likelihood function L(x(s) | X(k) = x(k), ..., X(r) = x(r)), i.e. the conditional density of X(s) given X(k) = x(k), ..., X(r) = x(r), evaluated at the parameter value θ*.

(iii) Using conditioning on a sufficient statistic
Faulkenberry (1973) has given the following general method for deriving prediction intervals, based on the idea of conditioning on a sufficient statistic. Let X be an observed random vector, and let Y be an unobserved future random variable. Let the joint distribution F(x, y | θ) depend on a parameter θ. Let T be a complete sufficient statistic for the parameter θ in the joint distribution of (X, Y). Faulkenberry suggested the following steps:
1. Determine the conditional c.d.f. F(y | t) of Y given T = t.


2. Determine a region R'(t) such that

(1.2.3)    ∫_{R'(t)} dF(y | t) = α    see [14].

Let R(x) be a function of x whose range is a region of the real line, and such that y ∈ R(x) iff y ∈ R'(t). Then R(x) is a prediction set for Y.

(iv) Using the Bayesian approach to prediction
The idea behind prediction is to provide an estimate, either point or interval, for future observations of an experiment F based on the results obtained from an informative experiment E. Let X_1, X_2, ..., X_n be independent real valued random variables with density function f(x | θ), x ∈ X, θ ∈ Θ. Let X(1), X(2), ..., X(n) be the order statistics of these variables. Let the random vector X = (X(k), X(k+1), ..., X(r)) be observed, and let Y be the random variable (X(s) − X(r)), where 1 ≤ k < r < s ≤ n. We assume that the distributions of X and Y are indexed by the same parameter θ; our uncertainty about the true value of θ is measured by a prior density function g(θ). Then the posterior density for the parameter θ is given by

(1.2.4)    h(θ | X(k) = x(k), X(k+1) = x(k+1), ..., X(r) = x(r)) = f(x(k), x(k+1), ..., x(r) | θ) g(θ) / ∫_Θ f(x(k), x(k+1), ..., x(r) | θ) g(θ) dθ.

If the information in E can be summarized by a sufficient statistic T(X(k), X(k+1), ..., X(r)), then we can write the posterior density for the parameter θ as

(1.2.5)    h(θ | T = t) = f_T(t | θ) g(θ) / ∫_Θ f_T(t | θ) g(θ) dθ    see [32].

Suppose we wish to predict a future statistic Y. If T and Y are independent, and Y has a density function f(y | θ), then the predictive density function for Y is given by

(1.2.6)    p(y | T = t) = ∫_Θ f(y | θ) h(θ | T = t) dθ    see [7].


A 100(1 − α)% Bayesian prediction interval for a future observation Y is then defined as an interval A = (a, b) such that

(1.2.7)    P(Y ∈ A | T = t) = ∫_A p(y | T = t) dy = 1 − α    see [7].

As we want a short prediction interval, it is most natural to take A = {y : p(y | T = t) > δ}, where δ is determined by (1.2.7).

1-3 Prediction intervals for some distributions
(1) Normal distribution
Let X_1, X_2, ..., X_n be an observed (past) sample from a normal distribution N(μ, σ²) with probability density function

(1.3.1.1)    f(x; μ, σ²) = (1/(σ√(2π))) e^{−(1/2)((x−μ)/σ)²},    −∞ < x < ∞, −∞ < μ < ∞, σ > 0.

Let Y_1, Y_2, ..., Y_m be m independent future observations from the same distribution N(μ, σ²), and let Y_{k,m} be the k-th order statistic of Y_1, Y_2, ..., Y_m. The assertion Y_{k,m} > L is equivalent to the assertion that at least m − (k − 1) of the observations Y_i exceed L. Thus a prediction interval (L, ∞) at confidence level α contains at least m − (k − 1) of the unobserved random variables with probability α. Let

X̄ = (1/n) Σ_{i=1}^{n} X_i,

(1.3.1.2)    S² = (1/(n−1)) Σ_{i=1}^{n} (X_i − X̄)²,    S1² = (1/n) Σ_{i=1}^{n} (X_i − μ)²,

and

R = (X̄ − Y_{k,m}) / S.

The distribution of R does not depend on μ and σ². Thus there exists r such that

(1.3.1.3)    P(R < r) = α


holds for all μ and σ², and the value of r depends only on k, n, m and α. A 100α% one-sided prediction interval for a future observation Y_{k,m} is given by

(1.3.1.4)    P(Y_{k,m} > X̄ − rS) = α    see [16].
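The factor r in (1.3.1.3) has no simple closed form, but since R is free of μ and σ it can be approximated by simulation. A minimal sketch in Python; the function name and parameter values are illustrative, not from the thesis:

```python
import random
import statistics

def simulate_r(n, m, k, alpha, reps=20000, seed=1):
    """Approximate the factor r in (1.3.1.3): the alpha-quantile of
    R = (Xbar - Y_{k,m}) / S.  R does not depend on mu and sigma,
    so N(0, 1) samples suffice."""
    rng = random.Random(seed)
    rs = []
    for _ in range(reps):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        ys = sorted(rng.gauss(0.0, 1.0) for _ in range(m))
        xbar = statistics.fmean(xs)
        s = statistics.stdev(xs)           # divisor n-1, as in (1.3.1.2)
        rs.append((xbar - ys[k - 1]) / s)  # y_{k,m}: k-th order statistic
    rs.sort()
    return rs[int(alpha * reps)]           # empirical alpha-quantile

r = simulate_r(n=10, m=5, k=2, alpha=0.95)
```

The one-sided interval (X̄ − rS, ∞) then covers Y_{k,m} with probability close to α, for any μ and σ.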

The distribution of R is determined by looking at the conditional distribution of R given Y_{k,m} = y_{k,m}. Let Z = √n (X̄ − μ)/σ and Z_{k,m} = (Y_{k,m} − μ)/σ. Then

(1.3.1.5)    √n R = (Z − √n Z_{k,m}) / (S²/σ²)^{1/2}.

Z, Z_{k,m} and S² are independent; Z has a standard normal distribution N(0, 1), and Z_{k,m} is the k-th order statistic in a sample of m independent N(0, 1) distributed random variables. If T(· | δ, ν) denotes the distribution function of the non-central t-distribution with noncentrality parameter δ and ν degrees of freedom, and z_{k,m} = (y_{k,m} − μ)/σ, we obtain

(1.3.1.6)    P(R < r | Y_{k,m} = y_{k,m}) = T(√n r | −√n z_{k,m}, n − 1)    see [16].

Multiplying equation (1.3.1.6) by the density of Z_{k,m} and integrating over z, we obtain

(1.3.1.7)    P(R < r) = ∫_{−∞}^{∞} [m! / ((k−1)!(m−k)!)] (1/√(2π)) e^{−z²/2} (Φ(z))^{k−1} (1 − Φ(z))^{m−k} T(√n r | −√n z, n − 1) dz,

where Φ(z) is the cumulative standard normal distribution function. For all values of m, n and k with 1 ≤ k ≤ m this distribution function can be computed.

(i) Suppose both μ = μ0 and σ = σ0 are known
Since both μ and σ are known, the sample values will not be used. Let m = k. Chew (1968) has given a prediction interval for the next k future observations. A 100α% two-sided prediction interval for the next k future observations based on μ0 and σ0 has the limits

(1.3.1.10)    μ0 ± σ0 r_{(k,α)}    see [4],


where r_{(k,α)} = z_{((1+α^{1/k})/2)} and z_{(·)} is the percentile point of the standard normal distribution.

(ii) Suppose μ = μ0 is known and σ is unknown
Let k = m. Chew (1968) has given a prediction interval for the next k future observations. A 100α% two-sided prediction interval for the next k future observations based on μ0 and S1 has the limits

(1.3.1.11)    μ0 ± S1 r_{(k,n,α)}    see [4],

where r_{(k,n,α)} = √(k f_{(k,n,α)}) and f_{(k,n,α)} is the 100α% percentile point of the F-distribution with (k, n) degrees of freedom.

(iii) Suppose σ = σ0 is known and μ is unknown
Let k = m. Chew (1968) has given the following prediction interval for the next k future observations. A 100α% two-sided prediction interval for the next k future observations based on X̄ and σ0 has the limits

(1.3.1.12)    X̄ ± σ0 r_{(k,n,α)}    see [4],

where r_{(k,n,α)} = √((1 + 1/n) χ²_{(k,α)}) and χ²_{(k,α)} is the 100α% percentile point of the chi-square distribution with k degrees of freedom.

(iv) Suppose both μ and σ are unknown
Let k = m. Chew (1968) has given the following prediction interval for the next k future observations. A 100α% two-sided prediction interval for the next k future observations based on X̄ and S has the limits

(1.3.1.13)    X̄ ± S r_{(k,n,α)}    see [4],

where r_{(k,n,α)} = √((1 + 1/n) k f_{(k,n−1,α)}).
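Case (iv), as reconstructed in (1.3.1.13), needs only an F percentile. A small Python sketch, with the percentile itself approximated by simulating ratios of scaled chi-square variables; all names and sample values here are illustrative:

```python
import random
import statistics

def f_percentile(d1, d2, alpha, reps=200000, seed=7):
    """Approximate the 100*alpha% point of the F(d1, d2) distribution by
    simulating ratios of scaled chi-square variables (gamma(d/2, 2))."""
    rng = random.Random(seed)
    draws = sorted(
        (rng.gammavariate(d1 / 2, 2) / d1) / (rng.gammavariate(d2 / 2, 2) / d2)
        for _ in range(reps)
    )
    return draws[int(alpha * reps)]

def chew_limits(xs, k, alpha):
    """Two-sided 100*alpha% prediction limits for the next k observations
    when both mu and sigma are unknown, following (1.3.1.13)."""
    n = len(xs)
    xbar = statistics.fmean(xs)
    s = statistics.stdev(xs)
    r = ((1 + 1 / n) * k * f_percentile(k, n - 1, alpha)) ** 0.5
    return xbar - r * s, xbar + r * s
```

For production use one would replace the simulated percentile by an exact F quantile (e.g. from a statistics library); the sketch keeps to the standard library only.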

(2) Weibull (type-I extreme value)
We use the notation T ~ W(β, δ) to denote the following distribution function of a Weibull random variable T:

(1.3.2.1)    F_T(t) = 1 − e^{−(t/β)^δ},    t > 0, β > 0, δ > 0.


If we put X = ln T, then the variable X has a distribution X ~ EV(u, b) with distribution function

(1.3.2.2)    F_X(x) = 1 − e^{−e^{(x−u)/b}},    u = ln β, b = 1/δ, −∞ < x < ∞.

Here most of the available prediction intervals are for a type-I extreme value distribution EV(u, b). These can be used for the W(β, δ) distribution after transforming X = ln T.

(i) Prediction intervals for the smallest observation of a future sample from the EV(u, b) distribution
Let X(1), X(2), ..., X(n) be the ordered observations in a sample of size n from a type-I extreme value distribution EV(u, b). Suppose that only the first r of these order statistics are observed. Let Y(1) be the smallest order statistic of a future independent sample of size m from the EV(u, b) distribution. Let ũ and b̃ be the best linear invariant estimators of u and b. Fertig, Meyer and Mann (1980) have given the following prediction intervals for Y(1).
A 100α% one-sided lower prediction bound for Y(1) based on ũ and b̃ is

(1.3.2.3)    ũ − S_α b̃    see [15].

For a one-sided upper prediction bound, replace S_α by S_{1−α}.

(ii) Prediction interval for the largest observation of a future sample from the EV(u, b) distribution
Let X(1), X(2), ..., X(n) be the ordered observations in a sample of size n from a type-I extreme value distribution EV(u, b). Let Y(m) be the largest observation of a future independent sample of size m (1 ≤ m ≤ n) from the EV(u, b) distribution. Let û and b̂ be the maximum likelihood estimators (MLE) of u and b, respectively. Antle and Rademaker (1972) have given the following prediction interval for Y(m). One can find a factor e(n, α) such that a 100α% one-sided upper prediction interval for Y(m) based on û and b̂ is

(1.3.2.4)    (−∞, û + e(n, α) b̂ ln m)    see [1].

Antle and Rademaker used computer simulation to tabulate the factor e(n, α) for n = 10(10)70, 100 and α = 0.90, 0.95, 0.975, 0.98, 0.99, 0.995.


(3) Gamma distribution
Let X(1), X(2), ..., X(n) be the ordered observations in a sample of size n from the gamma distribution having density function

(1.3.3.1)    f(x; β) = (1/Γ(β)) x^{β−1} e^{−x},    x > 0,

where β is a positive integer. Let G_r denote the distribution function of the variable X(r), and let G(x) denote the gamma distribution function, which can be written as

(1.3.3.2)    G(x) = Σ_{k=β}^{∞} e^{−x} x^k / k!    see [30].

The joint density function of X(r) and X(s), s > r, is given by

(1.3.3.3)    g_{rs}(x(r), x(s)) dx(r) dx(s) = C (G_r)^{r−1} (G_s − G_r)^{s−r−1} (1 − G_s)^{n−s} dG_r dG_s,

where C = n! / ((r−1)!(s−r−1)!(n−s)!).

Put X(r) = X and X(s) = Y; then the joint density function of the variables X and Y is

(1.3.3.4)    g_{rs}(x, y) = C_1 Σ_{i=0}^{r−1} Σ_{j=0}^{s−r−1} (−1)^{s−r+i+j−1} binom(r−1, i) binom(s−r−1, j) (Σ_x)^{i+j} (Σ_y)^{n−r−j−1} e^{−(x+y)} (xy)^{β−1}    see [29],

where C_1 = C / ((β−1)!)², Σ_x = Σ_{k=0}^{β−1} e^{−x} x^k / k! and Σ_y = Σ_{k=0}^{β−1} e^{−y} y^k / k!. Written out,

(1.3.3.5)    g_{rs}(x, y) = C_1 Σ_{i=0}^{r−1} Σ_{j=0}^{s−r−1} (−1)^{s−r+i+j−1} binom(r−1, i) binom(s−r−1, j) (Σ_{k=0}^{β−1} e^{−x} x^k / k!)^{i+j} (Σ_{k=0}^{β−1} e^{−y} y^k / k!)^{n−r−j−1} e^{−(x+y)} (xy)^{β−1}.


Let X(s) − X(r) = U and X(r) = X; then the joint probability density function of the variables U and X is given by

(1.3.3.6)    g(u, x) = C_1 Σ_{i=0}^{r−1} Σ_{j=0}^{s−r−1} (−1)^{s−r+i+j−1} binom(r−1, i) binom(s−r−1, j) (Σ_{k=0}^{β−1} e^{−x} x^k / k!)^{i+j} (Σ_{k=0}^{β−1} e^{−(x+u)} (x+u)^k / k!)^{n−r−j−1} e^{−(2x+u)} (x(x+u))^{β−1}.

Let a_t(β, q) be the coefficient of x^t in the expansion of (Σ_{k=0}^{β−1} x^k / k!)^q, and similarly let b_p(β, q) be the coefficient of (x+u)^p in the expansion of (Σ_{k=0}^{β−1} (x+u)^k / k!)^q. Then we can rewrite (1.3.3.6) in the form

(1.3.3.7)    g(u, x) = C_1 Σ_{i=0}^{r−1} Σ_{j=0}^{s−r−1} (−1)^{s−r+i+j−1} binom(r−1, i) binom(s−r−1, j) Σ_{t=0}^{(β−1)(i+j)} a_t(β, i+j) Σ_{p=0}^{(β−1)(n−r−j−1)} b_p(β, n−r−j−1) Σ_{m=0}^{p+β−1} binom(p+β−1, m) e^{−(n−r−j)u} u^{p+β−m−1} e^{−(n−r+i+1)x} x^{t+m+β−1}.

Integrating (1.3.3.7) over x, we obtain the density function of U:

(1.3.3.8)    f(u) = C_1 Σ_{i,j,t,p,m} (−1)^{s−r+i+j−1} binom(r−1, i) binom(s−r−1, j) a_t b_p binom(p+β−1, m) e^{−(n−r−j)u} u^{p+β−m−1} Γ(t+m+β) / (n−r+i+1)^{t+m+β}.


For a specified probability p_0 we can find u_0 such that P(U ≤ u_0) = p_0. Then the prediction interval for a future observation X(s) is given by

(1.3.3.9)    P(X(s) ≤ X(r) + u_0) = p_0    see [30].
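The factor u_0 can also be approximated without the series expansion above, by simulating the order statistics directly. A hedged Python sketch; the function name and parameter values are illustrative:

```python
import random

def gamma_u_factor(n, r, s, beta, p0, reps=20000, seed=3):
    """Approximate u0 with P(X(s) - X(r) <= u0) = p0, where X(r), X(s)
    are order statistics of n standard gamma(beta) observations."""
    rng = random.Random(seed)
    us = []
    for _ in range(reps):
        xs = sorted(rng.gammavariate(beta, 1.0) for _ in range(n))
        us.append(xs[s - 1] - xs[r - 1])   # one draw of U = X(s) - X(r)
    us.sort()
    return us[int(p0 * reps)]              # empirical p0-quantile
```

For integer β this should agree, up to Monte Carlo error, with the quantile obtained from the density (1.3.3.8).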

Prediction intervals for the mean of a future sample from the gamma distribution
Let X(1), X(2), ..., X(n) be the ordered observations in a sample of size n from the gamma distribution Γ(k). Let Y_1, Y_2, ..., Y_m be m unobserved future observations from the same gamma distribution. We define

X̄ = (1/n) Σ_{i=1}^{n} X_i    and    Ȳ = (1/m) Σ_{i=1}^{m} Y_i.

(i) Suppose k is known
Shiue and Bain (1986) have given the following approximate prediction intervals for Ȳ. A 100α% two-sided prediction interval for Ȳ based on X̄ is given by

(1.3.3.10)    ( X̄ / f_{(2nk, 2mk, 1−α2)} , X̄ / f_{(2nk, 2mk, α1)} )    see [40],

where α = 1 − α1 − α2 and f_{(·)} are the corresponding percentile points of the F-distribution.

(ii) Suppose k is unknown
Shiue and Bain (1986) have given the following approximate prediction intervals for Ȳ. Let k* be the maximum likelihood estimator of k. Then a 100α% two-sided prediction interval for Ȳ based on X̄ and k* is given by

(1.3.3.11)    ( X̄ / f_{(2nk*, 2mk*, 1−γ2)} , X̄ / f_{(2nk*, 2mk*, γ1)} )    see [40].

The constants γ1 and γ2 are determined as follows:
1. If k* ≥ 1, Shiue and Bain recommend using their Table 2, which gives, for


i = 1, 2, the γi-values for αi = 0.005, 0.025, 0.050, 0.075, 0.1 and n = 5, 10, 20, ....
2. If k* < 1, find γi for a given αi using the following relations:

(1.3.3.12)    γ1 = (1 − d) exp{ n [1 − (α1/(1−d))^{1/(1−n)}] }    if α1 < 1 − d,
              γ1 = (1 − d) exp{ −n [1 − ((1−α1)/d)^{1/(1−n)}] }   if α1 > 1 − d;

(1.3.3.13)    γ2 = d exp{ n [1 − (α2/d)^{1/(1−n)}] }              if α2 ≤ d,
              γ2 = 1 − (1 − d) exp{ −n [1 − ((1−α2)/(1−d))^{1/(1−n)}] }    if α2 > d,

where d = n/(n+m).
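For the known-shape case (i) above, the interval involves only F percentiles. A Python sketch under that assumption; the F percentile is approximated by simulation, and all names and data are illustrative:

```python
import random
import statistics

def f_percentile(d1, d2, alpha, reps=200000, seed=11):
    """Approximate the 100*alpha% point of F(d1, d2) by simulating
    ratios of scaled chi-square variables."""
    rng = random.Random(seed)
    draws = sorted(
        (rng.gammavariate(d1 / 2, 2) / d1) / (rng.gammavariate(d2 / 2, 2) / d2)
        for _ in range(reps)
    )
    return draws[int(alpha * reps)]

def shiue_bain_interval(xs, m, k, alpha1, alpha2):
    """Approximate two-sided prediction interval for the mean of m future
    gamma(k) observations when the shape k is known, following the
    known-k interval above."""
    n = len(xs)
    xbar = statistics.fmean(xs)
    lower = xbar / f_percentile(2 * n * k, 2 * m * k, 1 - alpha2)
    upper = xbar / f_percentile(2 * n * k, 2 * m * k, alpha1)
    return lower, upper
```

Note that dividing X̄ by an upper F percentile gives the lower limit, and by a lower percentile the upper limit.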

(4) Inverse Gaussian distribution
We use the notation X ~ I(μ, λ) to denote the following probability density function of an inverse Gaussian random variable X:

(1.3.4.1)    f(x; μ, λ) = (λ/(2πx³))^{1/2} exp{ −λ(x−μ)²/(2μ²x) },    x > 0, μ > 0, λ > 0.

Prediction intervals for a future observation based on an observed sample from the inverse Gaussian distribution
Let X_1, X_2, ..., X_n be an observed sample from an inverse Gaussian distribution, and let Y be a future independent observation from the same inverse Gaussian distribution. Define

X̄ = (1/n) Σ_{i=1}^{n} X_i,    V = (1/n) Σ_{i=1}^{n} (1/X_i − 1/X̄),    Q = Σ_{i=1}^{n} (X_i − μ0)² / X_i.

(i) Suppose λ = λ0 is known
Chhikara and Guttman (1982) have given the following prediction intervals


for Y. A 100α% two-sided prediction interval for Y based on X̄ and λ0 has the limits

(1.3.4.2)    [ 1/X̄ + χ²_{(1,α)}/(2λ0) ± ( ((n+1)/(n λ0 X̄)) χ²_{(1,α)} + (χ²_{(1,α)})²/(4λ0²) )^{1/2} ]^{−1}    see [5].
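Since χ²_{(1,α)} is just the square of a standard normal percentile, the limits in (1.3.4.2), as reconstructed here, can be evaluated directly. A minimal Python sketch; the function name and the data in the test are hypothetical:

```python
from statistics import NormalDist, fmean

def ig_prediction_limits(xs, lam, alpha):
    """Two-sided 100*alpha% prediction limits for a future inverse
    Gaussian observation when lambda = lam is known, following the
    limits (1.3.4.2); chi-square(1) percentile = (normal percentile)^2."""
    n = len(xs)
    xbar = fmean(xs)
    chi2 = NormalDist().inv_cdf((1 + alpha) / 2) ** 2
    centre = 1 / xbar + chi2 / (2 * lam)
    half = ((n + 1) * chi2 / (n * lam * xbar) + chi2 ** 2 / (4 * lam ** 2)) ** 0.5
    # The limits are the reciprocals of centre +/- half (centre - half
    # must be positive for the upper limit to be finite).
    return 1 / (centre + half), 1 / (centre - half)
```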

(ii) Suppose μ = μ0 is known
Chhikara and Guttman (1982) have given the following prediction interval for Y. A 100α% two-sided prediction interval for Y based on Q and μ0 has the limits

(1.3.4.3)    μ0 + (Q/(2n)) f_{(1,n,α)} ± (Q/(2n)) ( (4nμ0/Q) f_{(1,n,α)} + f²_{(1,n,α)} )^{1/2}    see [5].

(iii) Both μ and λ are unknown
Chhikara and Guttman (1982) have given the following prediction interval for Y. A 100α% two-sided prediction interval for Y based on X̄ and V has the limits

(1.3.4.4)    [ 1/X̄ + (nV/(2(n−1))) f_{(1,n−1,α)} ± ( ((n+1)V/((n−1)X̄)) f_{(1,n−1,α)} + (n²V²/(4(n−1)²)) f²_{(1,n−1,α)} )^{1/2} ]^{−1}    see [5].

(5) Binomial distribution
We use the notation X ~ B(n, p) to denote the following probability function of a binomial random variable X:

(1.3.5.1)    f(x; p) = binom(n, x) p^x (1−p)^{n−x},    x = 0, 1, 2, ..., n, 0 < p < 1.

Suppose that k defective items have been obtained in n binomial trials. A two-sided prediction interval containing, with approximately 100α% probability, the proportion of successes in m future trials from the same binomial population is given by

(1.3.5.2)    p̂ ± z_{((1+α)/2)} ( p̂(1−p̂)(m+n)/(mn) )^{1/2}    see [18],

where p̂ = k/n and z_{(·)} is the percentile point of the standard normal distribution.
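Equation (1.3.5.2) is straightforward to evaluate. A minimal Python sketch; the function name is illustrative:

```python
from statistics import NormalDist

def binomial_prediction_interval(k, n, m, alpha):
    """Approximate two-sided 100*alpha% prediction interval (1.3.5.2)
    for the proportion of successes in m future trials, given k
    successes observed in n past trials."""
    p = k / n
    z = NormalDist().inv_cdf((1 + alpha) / 2)
    half = z * (p * (1 - p) * (m + n) / (m * n)) ** 0.5
    return p - half, p + half
```

For example, with k = 20 successes in n = 100 trials and m = 50 future trials at α = 0.95, the interval is roughly (0.064, 0.336).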


(6) Exponential distribution
We use the notation X ~ exp(μ, σ) to denote the following probability density function of a two-parameter exponential random variable X:

(1.3.6.1)    f(x; μ, σ) = (1/σ) e^{−(x−μ)/σ},    x ≥ μ, σ > 0.

If μ = 0, the notation X ~ exp(σ) is used for the one-parameter exponential distribution.

Prediction intervals for the one-parameter exponential distribution
Let X(1), X(2), ..., X(n) be an ordered sample from an exponential exp(σ) distribution, and suppose only the first k of these order statistics are observed. We define

(1.3.6.2)    S_k = Σ_{i=1}^{k} X(i) + (n−k) X(k).

For example, in life testing problems which are terminated after the k-th failure, the prediction interval given here allows us to predict, on the basis of the first k observed failure times, the r-th failure time and hence the additional time required to complete the test. Lawless (1971) has given the following prediction interval for the future observation X(r).
A 100α% one-sided upper prediction interval for X(r) based on X(k) and S_k is

(1.3.6.3)    ( X(k), X(k) + S_k u_{(k,r,n,α)} )    see [25],

where the factor u_{(k,r,n,α)} is the 100α% lower percentage point of the distribution of the random variable U = (X(r) − X(k)) / S_k. It can be obtained from the following distribution of U:

(1.3.6.4)    P(U ≥ u) = C Σ_{i=0}^{r−k−1} binom(r−k−1, i) (−1)^i (1 + (n−r+i+1)u)^{−k} / (n−r+i+1)    see [25],

where C = 1/β(r−k, n−r+1) and β(a, b) is the beta function.
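Alternatively, the factor u_{(k,r,n,α)} can be approximated by simulating U directly, since its distribution does not depend on the scale σ. A Python sketch with illustrative names and values:

```python
import random

def lawless_u_factor(k, r, n, alpha, reps=20000, seed=5):
    """Approximate u_(k,r,n,alpha), the 100*alpha% lower percentage
    point of U = (X(r) - X(k)) / S_k, by simulating one-parameter
    exponential samples (U is scale free)."""
    rng = random.Random(seed)
    us = []
    for _ in range(reps):
        xs = sorted(rng.expovariate(1.0) for _ in range(n))
        sk = sum(xs[:k]) + (n - k) * xs[k - 1]   # S_k from (1.3.6.2)
        us.append((xs[r - 1] - xs[k - 1]) / sk)
    us.sort()
    return us[int(alpha * reps)]
```

The interval (X(k), X(k) + S_k u) then has coverage close to α for any σ.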

Lingappaiah (1973) has given another prediction interval for the future observation X(r).
A 100α% one-sided upper prediction interval for X(r) based only on X(k) is given by

(1.3.6.5)    ( X(k), X(k)(1 + W_{(k,r,n,α)}) ),

where the factor W_{(k,r,n,α)} is the 100α% lower percentage point of the distribution of the random variable W = (X(r) − X(k)) / X(k). It can be obtained from the following function of W:

(1.3.6.6)    g(w) = 1 − C Σ_{i=0}^{r−k−1} binom(r−k−1, i) ((b+i)w + n)^{−k} ((b+i)k)^{−1},

where b = n−r+1 and C = n! / ((k−1)!(r−k−1)!(n−r)!).


CHAPTER TWO

PREDICTION INTERVALS USING A CLASSICAL APPROACH

2-1 Censored samples
Animal studies usually start with a fixed number of animals to which the treatments are given; the researcher often cannot wait for the death of all animals. One option is to observe for a fixed period of time t_0, after which the surviving animals are sacrificed. Survival times recorded for the animals that died during the study period are the times from the start of the experiment to their death. These are called exact or uncensored observations. The survival times of the sacrificed animals are not exactly known, but are recorded as at least the length of the study period. These are called censored observations.

2-1-1 Type-I censoring
Type-I censored sampling occurs when the experiment is suspended after a pre-determined time. For a type-I censored sample the length of the experiment is fixed, but the number of observations obtained before time t_0 is a random variable.

2-1-2 Type-II censoring
Type-II censored sampling occurs when the experiment is stopped after a pre-determined number of items has failed. For a type-II censored sample the number of observations is fixed, but the duration of the experiment is a random variable.

2-2 Prediction intervals
Let X(1), X(2), ..., X(n) be the order statistics of a sample of independent random variables with exponential distribution having density function

(2.2.1)    f_X(x; μ, σ) = (1/σ) e^{−(x−μ)/σ},    x ≥ μ, σ > 0,

and the cumulative distribution function

(2.2.2)    F_X(x) = 1 − e^{−(x−μ)/σ},    x ≥ μ, σ > 0.

Suppose that only the (r−k+1) ordered observations X(k), X(k+1), ..., X(r) are observed, 1 ≤ k < r < s ≤ n. The joint probability density function of the order statistics X(k), X(r), X(s)

is given by


(2.2.3)    g_{krs}(x(k), x(r), x(s)) = C_{n,k,r,s} [F_X(x(k))]^{k−1} [F_X(x(r)) − F_X(x(k))]^{r−k−1} [F_X(x(s)) − F_X(x(r))]^{s−r−1} [1 − F_X(x(s))]^{n−s} f_X(x(k)) f_X(x(r)) f_X(x(s))    see [40],

where

(2.2.4)    C_{n,k,r,s} = n! / ((k−1)!(r−k−1)!(s−r−1)!(n−s)!) = 1 / (β(k, r−k) β(r, s−r) β(s, n−s+1))

and

(2.2.5)    β(a, b) = (a−1)!(b−1)! / (a+b−1)!

is the beta function. Then

(2.2.6)    g_{krs}(x(k), x(r), x(s)) = { [F_X(x(k))]^{k−1} [F_X(x(r)) − F_X(x(k))]^{r−k−1} / β(k, r−k) } { [F_X(x(s)) − F_X(x(r))]^{s−r−1} [1 − F_X(x(s))]^{n−s} / (β(r, s−r) β(s, n−s+1)) } f_X(x(k)) f_X(x(r)) f_X(x(s)).

Substituting (2.2.1) and (2.2.2), the joint probability density function of the order statistics X(k), X(r), X(s) is given by


(2.2.7)    g_{krs}(x(k), x(r), x(s)) = (1/(σ³ β(k, r−k) β(r, s−r) β(s, n−s+1))) (1 − e^{−(x(k)−μ)/σ})^{k−1} (e^{−(x(k)−μ)/σ} − e^{−(x(r)−μ)/σ})^{r−k−1} (e^{−(x(r)−μ)/σ} − e^{−(x(s)−μ)/σ})^{s−r−1} (e^{−(x(s)−μ)/σ})^{n−s} e^{−(x(k)−μ)/σ} e^{−(x(r)−μ)/σ} e^{−(x(s)−μ)/σ}.

Hence

(2.2.8)    g_{krs}(x(k), x(r), x(s)) = (1/(σ³ β(k, r−k) β(r, s−r) β(s, n−s+1))) (e^{−(x(k)−μ)/σ})^{r−k−1} (1 − e^{−(x(k)−μ)/σ})^{k−1} (e^{−(x(r)−μ)/σ})^{s−r−1} (1 − e^{−(x(r)−x(k))/σ})^{r−k−1} (e^{−(x(s)−μ)/σ})^{n−s} (1 − e^{−(x(s)−x(r))/σ})^{s−r−1} e^{−(x(k)−μ)/σ} e^{−(x(r)−μ)/σ} e^{−(x(s)−μ)/σ}.

Put X(k) − μ = Z, X(r) − X(k) = V and X(s) − X(r) = W.

Then the joint probability density function of the variables Z, V and W is given by

(2.2.9)   g_{ZVW}(z, v, w) = \frac{\left(e^{-z/\sigma}\right)^{r-k-1} \left(1 - e^{-z/\sigma}\right)^{k-1}}{\sigma^3\,\beta(k, r-k)} \cdot \frac{\left(e^{-(v+z)/\sigma}\right)^{s-r-1} \left(1 - e^{-v/\sigma}\right)^{r-k-1}}{\beta(r, s-r)} \cdot \frac{\left(e^{-(z+v+w)/\sigma}\right)^{n-s} \left(1 - e^{-w/\sigma}\right)^{s-r-1}}{\beta(s, n-s+1)} \cdot e^{-z/\sigma}\, e^{-(z+v)/\sigma}\, e^{-(z+v+w)/\sigma}.

Integration over z and v yields the density function of W:

(2.2.10)   g_W(w) = \int_0^\infty \int_0^\infty g_{ZVW}(z, v, w)\, dz\, dv.

We obtain

(2.2.11)   g_W(w) = \frac{\left(e^{-w/\sigma}\right)^{n-s+1} \left(1 - e^{-w/\sigma}\right)^{s-r-1}}{\sigma^3\,\beta(k, r-k)\,\beta(r, s-r)\,\beta(s, n-s+1)} \cdot \int_0^\infty \left(e^{-z/\sigma}\right)^{n-k+1} \left(1 - e^{-z/\sigma}\right)^{k-1} dz \cdot \int_0^\infty \left(e^{-v/\sigma}\right)^{n-r+1} \left(1 - e^{-v/\sigma}\right)^{r-k-1} dv.

Put t = e^{-z/\sigma} and u = e^{-v/\sigma}; the integrals are then transformed, and we obtain

(2.2.12)   \sigma \int_0^1 t^{n-k} (1-t)^{k-1}\, dt = \sigma\,\beta(k, n-k+1)

and

(2.2.13)   \sigma \int_0^1 u^{n-r} (1-u)^{r-k-1}\, du = \sigma\,\beta(r-k, n-r+1).

Theorem 1:
In the situation of doubly type-II censored samples described above, if the random vector X = (X(k), X(k+1), ..., X(r)) is observed, the density of the random variable W = X(s) - X(r) is given by

(2.2.14)   g_W(w) = \frac{\left(e^{-w/\sigma}\right)^{n-s+1} \left(1 - e^{-w/\sigma}\right)^{s-r-1}}{\sigma\,\beta(s-r, n-s+1)}.
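As a quick numerical check on Theorem 1, the density (2.2.14) can be integrated and compared with the mean implied by the spacings representation of W (Lemma 1 below). This is a sketch in Python rather than the author's FORTRAN; the parameter values n = 10, r = 7, s = 9 are illustrative only.

```python
import math

def g_W(w, n, r, s, sigma=1.0):
    # Density (2.2.14): (e^{-w/sigma})^{n-s+1} (1 - e^{-w/sigma})^{s-r-1}
    # divided by sigma * beta(s-r, n-s+1); note it does not involve k.
    beta = math.gamma(s - r) * math.gamma(n - s + 1) / math.gamma(n - r + 1)
    t = math.exp(-w / sigma)
    return t ** (n - s + 1) * (1 - t) ** (s - r - 1) / (sigma * beta)

def trapezoid(f, a, b, m):
    # plain composite trapezoid rule
    h = (b - a) / m
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, m)) + 0.5 * f(b))

n, r, s, sigma = 10, 7, 9, 1.0
mass = trapezoid(lambda w: g_W(w, n, r, s, sigma), 0.0, 60.0, 40000)
mean = trapezoid(lambda w: w * g_W(w, n, r, s, sigma), 0.0, 60.0, 40000)
# W = X(s) - X(r) is a sum of spacings Z_j / (n-j+1), j = r+1, ..., s,
# so E[W] = sigma * sum_{j=r+1}^{s} 1/(n-j+1).
expected_mean = sigma * sum(1.0 / (n - j + 1) for j in range(r + 1, s + 1))
print(mass, mean, expected_mean)
```

The total mass should come out near 1 and the two means should agree, confirming the normalizing constant in (2.2.14).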

Lemma 1:
Let X(1), X(2), ..., X(r) be the first r ordered observations in a random sample of size n from the exponential distribution

(2.2.15)   f(x; \mu, \sigma) = \frac{1}{\sigma} e^{-(x-\mu)/\sigma}, \qquad \sigma > 0, \ x \ge \mu.

We define the variables

(2.2.16)   Z_i = (n-i+1)\left(X_{(i)} - X_{(i-1)}\right), \qquad i = 1, 2, \ldots, r,

where X_{(0)} := \mu. Then the variables Z_i are independent and identically distributed with exponential distribution exp(0, σ).

Proof:
The joint probability density function of X(1), X(2), ..., X(r) is given by

(2.2.17)   f_{X_{(1)},\ldots,X_{(r)}}(x_{(1)}, \ldots, x_{(r)} \mid \mu, \sigma) = \frac{n!}{(n-r)!} \left(\prod_{i=1}^{r} f_X(x_{(i)})\right) \left[1 - F_X(x_{(r)})\right]^{n-r},

(2.2.18)   f_{X_{(1)},\ldots,X_{(r)}}(x_{(1)}, \ldots, x_{(r)} \mid \mu, \sigma) = \frac{n!}{(n-r)!} \left(\prod_{i=1}^{r} \frac{1}{\sigma} e^{-(x_{(i)}-\mu)/\sigma}\right) \left(e^{-(x_{(r)}-\mu)/\sigma}\right)^{n-r},

(2.2.19)   f_{X_{(1)},\ldots,X_{(r)}}(x_{(1)}, \ldots, x_{(r)} \mid \mu, \sigma) = \frac{n!}{(n-r)!\,\sigma^r} \exp\!\left(-\frac{1}{\sigma}\left[\sum_{i=1}^{r}\left(x_{(i)}-\mu\right) + (n-r)\left(x_{(r)}-\mu\right)\right]\right).

Utilizing the fact that for r = 1, 2, ..., n

(2.2.20)   \sum_{i=1}^{r}\left(X_{(i)}-\mu\right) + (n-r)\left(X_{(r)}-\mu\right) = \sum_{i=1}^{r} Z_i \qquad (see [11]),

we get

(2.2.21)   f_{X_{(1)},\ldots,X_{(r)}}(x_{(1)}, \ldots, x_{(r)} \mid \mu, \sigma) = \frac{n!}{(n-r)!\,\sigma^r} \exp\!\left(-\frac{1}{\sigma}\sum_{i=1}^{r} z_i\right).

From (2.2.16) we find that the Jacobian determinant is

(2.2.22)   \left|\frac{\partial\left(X_{(1)}, X_{(2)}, \ldots, X_{(r)}\right)}{\partial\left(Z_1, Z_2, \ldots, Z_r\right)}\right| = \frac{(n-r)!}{n!}.

Then the joint probability density function of the variables Z_1, Z_2, ..., Z_r is given by

(2.2.23)   f_{Z_1,\ldots,Z_r}(z_1, z_2, \ldots, z_r) = \frac{1}{\sigma^r} \exp\!\left(-\frac{1}{\sigma}\sum_{i=1}^{r} z_i\right),

that is,

(2.2.24)   f_{Z_1,\ldots,Z_r}(z_1, z_2, \ldots, z_r) = \prod_{i=1}^{r} \left(\frac{1}{\sigma} e^{-z_i/\sigma}\right).

Therefore the variables Z_1, Z_2, ..., Z_r are independent and identically distributed with exponential distribution exp(0, σ).
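Lemma 1 is easy to confirm by simulation. The following sketch (the sample size n = 8 and the parameter values μ = 5, σ = 2 are hypothetical, not from the text) generates exponential order statistics, forms the spacings Z_i of (2.2.16) with X(0) = μ, and checks that each Z_i has mean σ.

```python
import random
import statistics

random.seed(1)
n, mu, sigma = 8, 5.0, 2.0
reps = 20000
z_samples = [[] for _ in range(n)]   # one list of draws per Z_i, i = 1..n
for _ in range(reps):
    xs = sorted(mu + random.expovariate(1.0 / sigma) for _ in range(n))
    prev = mu                        # X(0) = mu in (2.2.16)
    for i, x in enumerate(xs, start=1):
        z_samples[i - 1].append((n - i + 1) * (x - prev))  # Z_i
        prev = x
means = [statistics.fmean(z) for z in z_samples]
print(means)  # every entry should be close to sigma = 2.0
```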

We now define the statistic

(2.2.25)   T = \sum_{i=k+1}^{r} X_{(i)} + (n-r)X_{(r)} - (n-k)X_{(k)}.

Then

(2.2.26)   T = \sum_{i=k+1}^{r} Z_i.

The distribution of the sum of (r-k) independent random variables with identical exponential distribution is a gamma distribution. Then the probability density function of the variable T is given by

(2.2.27)   f_T(t) = \frac{1}{\Gamma(r-k)\,\sigma^{r-k}}\, t^{r-k-1} e^{-t/\sigma}.

Applying Lemma 1 with r replaced by s, we see that W and T are independent. Hence the joint probability density function of W and T is given by f_{TW}(t, w) = f_T(t)\, g_W(w). We obtain

(2.2.28)   f_{TW}(t, w) = \frac{1}{\Gamma(r-k)\,\sigma^{r-k}}\, t^{r-k-1} e^{-t/\sigma} \cdot \frac{\left(e^{-w/\sigma}\right)^{n-s+1} \left(1 - e^{-w/\sigma}\right)^{s-r-1}}{\sigma\,\beta(s-r, n-s+1)}.

Let U = W/T.

Then the joint probability density function of the variables U and T is given by

(2.2.29)   f_{UT}(u, t) = \frac{1}{\Gamma(r-k)\,\beta(s-r, n-s+1)\,\sigma} \left(e^{-ut/\sigma}\right)^{n-s+1} \left(1 - e^{-ut/\sigma}\right)^{s-r-1} \left(\frac{t}{\sigma}\right)^{r-k} e^{-t/\sigma}.

Then

(2.2.30)   f_{UT}(u, t) = \frac{1}{\Gamma(r-k)\,\beta(s-r, n-s+1)\,\sigma} \left(\frac{t}{\sigma}\right)^{r-k} e^{-[(n-s+1)u+1]\,t/\sigma} \sum_{i=0}^{s-r-1} (-1)^i \binom{s-r-1}{i} e^{-iut/\sigma},

and hence

(2.2.31)   f_{UT}(u, t) = \frac{1}{\Gamma(r-k)\,\beta(s-r, n-s+1)\,\sigma} \left(\frac{t}{\sigma}\right)^{r-k} \sum_{i=0}^{s-r-1} (-1)^i \binom{s-r-1}{i} e^{-[(n-s+i+1)u+1]\,t/\sigma}.

Put

(2.2.32)   \frac{[(n-s+i+1)u+1]\,t}{\sigma} = \lambda.

Then

(2.2.33)   dt = \frac{\sigma\, d\lambda}{(n-s+i+1)u+1}.

The probability density function of U is given by

(2.2.34)   f_U(u) = \int_0^\infty f_{UT}(u, t)\, dt.

Hence

(2.2.35)   f_U(u) = \frac{1}{\Gamma(r-k)\,\beta(s-r, n-s+1)} \int_0^\infty \lambda^{r-k} e^{-\lambda}\, d\lambda \cdot \sum_{i=0}^{s-r-1} (-1)^i \binom{s-r-1}{i} \frac{1}{[(n-s+i+1)u+1]^{r-k+1}}.

We obtain

(2.2.36)   f_U(u) = \frac{\Gamma(r-k+1)}{\Gamma(r-k)\,\beta(s-r, n-s+1)} \sum_{i=0}^{s-r-1} (-1)^i \binom{s-r-1}{i} \frac{1}{[(n-s+i+1)u+1]^{r-k+1}}.

Integrating the density (2.2.36) from u to infinity, we obtain

(2.2.37)   P(U \ge u) = \frac{1}{\beta(s-r, n-s+1)} \sum_{i=0}^{s-r-1} (-1)^i \binom{s-r-1}{i} \frac{1}{(n-s+i+1)\,[(n-s+i+1)u+1]^{r-k}}.

We can write

(2.2.38)   P(U \ge u) = P(u; k, r, s, n).

Hence the distribution function of the variable U is given by

(2.2.39)   F_U(u) = P(U \le u) = 1 - P(U \ge u).

Then probability statements about U give prediction statements about X(s) on the basis of the observed X(k), X(k+1), ..., X(r) and T. If P(U \ge u) = \alpha, then

(2.2.40)   P\left(X_{(r)} < X_{(s)} < X_{(r)} + uT\right) = 1 - \alpha

gives a two-sided 100(1-α)% prediction interval for X(s). For given values of n, k, r, s and u the probabilities P(u; k, r, s, n) can be evaluated with the help of a computer program; this is described below. We summarize our result as:

Theorem 2:
Let u be determined by (2.2.37) and by the condition P(U \ge u) = \alpha. Then the interval [X(r), X(r) + uT] is a prediction interval for X(s) at the level 1 - α.

2-3 Special cases

(i) Prediction interval for the smallest future observation

In this case s = r+1. We consider the random variable

(2.3.1)   U_1 = \frac{X_{(s)} - X_{(s-1)}}{T}.

We shall need

Lemma 2:
The statistic 2T/\sigma has a chi-square distribution with 2(r-k) degrees of freedom, and the statistic 2(n-s+1)\left(X_{(s)} - X_{(s-1)}\right)/\sigma has a chi-square distribution with 2 degrees of freedom.

Proof:
From Lemma 1 we find that the statistic T has a gamma distribution. The probability density function of T is given by

(2.3.2)   f_T(t) = \frac{1}{\Gamma(r-k)\,\sigma^{r-k}}\, t^{r-k-1} e^{-t/\sigma}.

Then if we define the statistic T_1 = 2T/\sigma, the probability density function of T_1 is given by

(2.3.3)   f_{T_1}(t_1) = \frac{1}{\Gamma(r-k)\,2^{r-k}}\, t_1^{r-k-1} e^{-t_1/2}.

Then the statistic T_1 = 2T/\sigma has a chi-square distribution with 2(r-k) degrees of freedom. From Lemma 1 the statistic Z_s = (n-s+1)\left(X_{(s)} - X_{(s-1)}\right) is distributed with exponential distribution exp(0, σ). The probability density function of Z_s is given by

(2.3.4)   f(z_s) = \frac{1}{\sigma} e^{-z_s/\sigma}.

If we define the statistic Z = 2Z_s/\sigma, then the probability density function of Z is given by

(2.3.5)   f(z) = \frac{1}{2} e^{-z/2}.

Then the statistic Z = 2Z_s/\sigma has a chi-square distribution with 2 degrees of freedom. This completes the proof of the lemma.

Now the statistic

(2.3.6)   \frac{Z/2}{T_1/(2(s-k-1))} = \frac{(n-s+1)(s-k-1)\left(X_{(s)} - X_{(s-1)}\right)}{T}

has an F-distribution with (2, 2(s-k-1)) degrees of freedom (note that r = s-1, so r-k = s-k-1). Then

(2.3.7)   P\left(U_1 \le \frac{f_{(2,\,2(s-k-1);\,1-\alpha)}}{(s-k-1)(n-s+1)}\right) = 1 - \alpha,

where

(2.3.8)   u_{1(k,\,s-1,\,s,\,n,\,1-\alpha)} = \frac{f_{(2,\,2(s-k-1);\,1-\alpha)}}{(s-k-1)(n-s+1)}

and f_{(2,\,2(s-k-1);\,1-\alpha)} is the percentile point of the F-distribution with (2, 2(s-k-1)) degrees of freedom.

(ii) Prediction interval for the largest future observation

In this case s = n. We consider the random variable

(2.3.9)   U_2 = \frac{X_{(n)} - X_{(r)}}{T}.

Then we can write

(2.3.10)   P(U_2 \ge u) = P(u; k, r, n, n).

We have

(2.3.11)   P(U_2 \ge u) = \frac{1}{\beta(n-r, 1)} \sum_{i=0}^{n-r-1} (-1)^i \binom{n-r-1}{i} \frac{1}{(1+i)\,[(1+i)u+1]^{r-k}}.

This can be conveniently rewritten as

(2.3.12)   P(U_2 \ge u) = 1 - \sum_{i=0}^{n-r} (-1)^i \binom{n-r}{i} \frac{1}{(1+iu)^{r-k}},

whence the distribution function of U_2 is given by

(2.3.13)   P(U_2 \le u) = \sum_{i=0}^{n-r} (-1)^i \binom{n-r}{i} \frac{1}{(1+iu)^{r-k}}.
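Both special cases can be checked numerically against the general formula (2.2.37). The sketch below (with illustrative parameter values) verifies that the closed form (2.3.13) is the complement of the general tail probability (2.3.11), and that for s = r+1 the tail reduces to the single term ((n-s+1)u + 1)^{-(r-k)} underlying the F-distribution statement (2.3.7).

```python
import math

def p_upper(u, k, r, s, n):
    # General tail probability (2.2.37): P(U >= u).
    beta = math.gamma(s - r) * math.gamma(n - s + 1) / math.gamma(n - r + 1)
    return sum((-1) ** i * math.comb(s - r - 1, i)
               / ((n - s + i + 1) * ((n - s + i + 1) * u + 1) ** (r - k))
               for i in range(s - r)) / beta

def cdf_u2(u, k, r, n):
    # Closed form (2.3.13) for the distribution function of U2 when s = n.
    return sum((-1) ** i * math.comb(n - r, i) / (1 + i * u) ** (r - k)
               for i in range(n - r + 1))

# Case (ii), s = n: (2.3.13) must be the complement of (2.3.11).
diff_ii = max(abs(p_upper(u, 2, 5, 9, 9) - (1 - cdf_u2(u, 2, 5, 9)))
              for u in (0.3, 1.0, 4.0))

# Case (i), s = r + 1: (2.2.37) collapses to ((n-s+1)u + 1)^(-(r-k)),
# since beta(1, n-s+1) = 1/(n-s+1).
k, r, n = 3, 6, 10
s = r + 1
diff_i = max(abs(p_upper(u, k, r, s, n)
                 - ((n - s + 1) * u + 1) ** (-(r - k)))
             for u in (0.2, 0.9))
print(diff_ii, diff_i)
```

Both differences should be at the level of floating-point round-off.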

2-4 Applications of the results

A life test where all units are observed until failure:
Consider a life test where n units, whose lifetimes follow the same exponential distribution, are put under test simultaneously and where some units are observed until failure. We can use the results derived above to give a prediction interval for the largest lifetime X(n) on the basis of the (r-k+1) lifetimes X(k), X(k+1), ..., X(r). X(n) in this case is the total elapsed time required to complete the test.

Example:
Suppose that in a laboratory experiment n = 10 mice are exposed to carcinogens. The experimenter decides to start the study after three of the mice are dead and to observe the others from this time on; the observed survival times for four mice are 30, 90, 120 and 170 hours. Let the lifetimes of all n mice be distributed according to the same two-parameter exponential distribution with unknown parameters μ and σ. We want to find a 95% prediction interval for X(8), X(9) and X(10).

Solution: In this case k = 4, r = 7 and, by (2.2.25),

T = \sum_{i=5}^{7} X_{(i)} + 3X_{(7)} - 6X_{(4)} = 380 + 510 - 180 = 710.

1. A 95% prediction interval for X(8):
In this case n = 10, s = 8, r = 7 and k = 4. We can find from

P\left(U_1 \le \frac{f_{(2,\,2(s-k-1);\,1-\alpha)}}{(s-k-1)(n-s+1)}\right) = 1 - \alpha

that

P\left(U_1 \le \frac{f_{(2,\,6;\,0.95)}}{9}\right) = 0.95.

Then P(U_1 \le 0.57) is very nearly 0.95. Hence

P(X_{(8)} < 170 + (0.57)(710)) = 0.95,
P(X_{(8)} < 575) = 0.95.

Then we can be approximately 95% confident that X(8) will not exceed 575 hours.

2. A 95% prediction interval for X(9):
In this case n = 10, s = 9, r = 7 and k = 4. We can find from the tables of u_{(n,s,r,k,1-\alpha)} that P(U \le 1.26) is very nearly 0.95. This yields

P(X_{(9)} < 170 + (1.26)(710)) = 0.95,
P(X_{(9)} < 1065) = 0.95.

Then we can be approximately 95% confident that X(9) will not exceed 1065 hours.

3. A 95% prediction interval for X(10):
In this case n = 10, s = 10, r = 7 and k = 4. We can find from the tables of u_{(n,n,r,k,1-\alpha)} that P(U_2 \le 2.67) is very nearly 0.95. Hence

P(X_{(10)} < 170 + (2.67)(710)) = 0.95,
P(X_{(10)} < 2066) = 0.95.

Then we can be approximately 95% confident that X(10) will not exceed 2066 hours.

2-5 Program description for calculating the factor u_{(k,r,s,n,1-\alpha)}

The percentile points were found by numerically solving the equation

(2.5.1)   P(U \le u) - (1 - \alpha) = 0

for each α value of interest. The numerical technique used was the Newton-Raphson method. We have

(2.5.2)   P(U \le u) = 1 - P(U \ge u)

and, therefore,

(2.5.3)   P(U \ge u) = \frac{1}{\beta(s-r, n-s+1)} \sum_{i=0}^{s-r-1} (-1)^i \binom{s-r-1}{i} \frac{1}{(n-s+i+1)\,[(n-s+i+1)u+1]^{r-k}}.

To implement the above procedure a FORTRAN program consisting of a main routine and a subroutine was employed; the main routine performed the iterations and called on the subroutine to provide the value of P(U \ge u) at each step.
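The FORTRAN source itself is not reproduced here; the following Python sketch implements the same Newton-Raphson iteration, using (2.5.3) for P(U >= u) and its derivative in u. The two sample calls reproduce table entries: for n = 3, s = 3, r = 2, k = 1 the factor at 1-α = 0.95 is exactly 19 (since P(U >= u) = 1/(u+1)), and for n = 10, s = 9, r = 7, k = 4 the tabulated value is about 1.26.

```python
import math

def p_upper(u, k, r, s, n):
    # P(U >= u) from (2.5.3)
    beta = math.gamma(s - r) * math.gamma(n - s + 1) / math.gamma(n - r + 1)
    return sum((-1) ** i * math.comb(s - r - 1, i)
               / ((n - s + i + 1) * ((n - s + i + 1) * u + 1) ** (r - k))
               for i in range(s - r)) / beta

def dp_upper(u, k, r, s, n):
    # derivative of P(U >= u) with respect to u
    beta = math.gamma(s - r) * math.gamma(n - s + 1) / math.gamma(n - r + 1)
    return sum((-1) ** i * math.comb(s - r - 1, i) * (-(r - k))
               / (((n - s + i + 1) * u + 1) ** (r - k + 1))
               for i in range(s - r)) / beta

def solve_u(alpha, k, r, s, n, u0=1.0, tol=1e-12):
    # Newton-Raphson on P(U >= u) - alpha = 0; alpha is the upper-tail
    # probability, so this yields the factor for confidence level 1 - alpha.
    u = u0
    for _ in range(500):
        u_new = u - (p_upper(u, k, r, s, n) - alpha) / dp_upper(u, k, r, s, n)
        if u_new <= 0:
            u_new = u / 2            # keep the iterate in the valid range u > 0
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

u1 = solve_u(0.05, 1, 2, 3, 3)       # exact root of 1/(u+1) = 0.05, i.e. 19
u2 = solve_u(0.05, 4, 7, 9, 10)      # tabulated factor 1.26
print(u1, u2)
```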


values of the factor u(n,s,r,k,1-α)

n s r k  0.99  0.95  0.90  0.10  0.05
3 3 2 1 99 19 9 0.11 0.05

4 3 2 1 49.50 9.50 4.50 0.06 0.03

4 2 1 148.83 28.83 13.83 0.33 0.21

3 1 9 3.47 2.16 0.05 0.03

2 99 19 9 0.11 0.05

5 3 2 1 33 6.33 3 0.04 0.02

4 2 1 82.69 16.03 7.69 0.19 0.12

3 1 4.50 1.74 1.08 0.03 0.01

2 49.50 9.50 4.50 0.06 0.03

5 2 1 182.04 35.37 17.03 0.50 0.33

3 1 12.15 4.83 3.09 0.17 0.11

2 148.83 28.83 13.83 0.33 0.21

4 1 3.64 1.71 1.15 0.04 0.02

2 9 3.47 2.16 0.05 0.03

3 99 19 9 0.11 0.05

6 3 2 1 24.75 4.75 2.25 0.03 0.01

4 2 1 57.69 11.22 5.39 0.13 0.08

6 4 3 1 3 1.16 0.72 0.02 0.01

2 33 6.33 3 0.04 0.02

5 2 1 107.59 20.92 10.08 0.31 0.21

3 1 6.69 2.67 1.71 0.09 0.06

2 82.69 16.03 7.69 0.19 0.12

4 1 1.82 0.86 0.58 0.02 0.01

2 4.50 1.74 1.08 0.03 0.01

3 49.50 9.50 4.50 0.06 0.03

6 2 1 206.94 40.27 19.43 0.63 0.43

3 1 14.23 5.72 3.70 0.27 0.19

2 182.04 35.37 17.03 0.50 0.33

4 1 4.68 2.29 1.60 0.12 0.08

2 12.15 4.83 3.09 0.17 0.11

3 148.83 28.83 13.83 0.33 0.21

5 1 2.16 1.11 0.78 0.03 0.01

2 3.64 1.71 1.51 0.04 0.01

3 9 3.47 2.16 0.05 0.03

6 6 5 4 99 19 9 0.11 0.05

7 3 2 1 19.80 3.80 1.80 0.02 0.01

4 2 1 44.66 8.65 4.15 0.10 0.06

3 1 2.25 0.87 0.54 0.02 0.01

2 24.75 4.75 2.25 0.03 0.01

5 2 1 77.80 15.13 7.29 0.22 0.15

3 1 4.67 1.86 1.19 0.07 0.05

2 57.89 11.22 5.38 0.13 0.08

4 1 1.21 0.57 0.38 0.01 0.01

2 3 1.15 0.72 0.02 0.01

3 33 6.33 3 0.04 0.02

6 2 1 127.51 24.83 12 0.40 0.28

3 1 8.30 3.35 2.17 0.17 0.11

2 107.59 20.92 10.08 0.31 0.21

4 1 2.56 1.26 0.89 0.07 0.04

2 6.69 2.66 1.71 0.09 0.06

3 82.69 16.03 7.69 0.19 0.12

7 6 5 1 1.08 0.56 0.39 0.01 0.01

2 1.82 0.86 0.56 0.02 0.01

3 4.50 1.73 1.08 0.03 0.01

4 49.50 9.50 4.50 0.06 0.03

7 2 1 226.8 44.19 21.35 0.73 0.51

3 1 15.79 6.39 4.15 0.34 0.25

2 206.94 40.27 19.43 0.63 0.43

4 1 5.35 2.67 1.89 0.19 0.13

2 14.23 5.72 3.70 0.27 0.19

3 182.04 35.37 17.03 0.50 0.33

5 1 2.70 1.46 1.06 0.09 0.06

2 4.68 2.29 1.60 0.12 0.08

3 12.15 4.83 3.09 0.17 0.11

4 148.83 28.83 13.83 0.33 0.21

6 1 1.51 0.82 0.58 0.02 0.01

2 2.16 1.11 0.78 0.03 0.01

3 3.64 1.71 1.15 0.04 0.02

7 7 6 4 9 3.47 2.16 0.05 0.03

5 99 19 9 0.11 0.05

8 3 2 1 16.50 3.16 1.50 0.02 0.01

4 2 1 36.39 7.05 3.38 0.08 0.05

3 1 1.80 0.69 0.43 0.01 0.01

2 19.80 3.80 1.80 0.02 0.01

5 2 1 61.25 11.91 5.74 0.18 0.12

3 1 3.60 1.43 0.92 0.05 0.04

2 44.66 8.65 4.15 0.10 0.06

4 1 0.91 0.43 0.23 0.01 0.01

2 2.25 0.87 0.54 0.01 0.01

3 24.75 4.75 2.25 0.03 0.01

6 2 1 94.39 18.39 8.89 0.30 0.21

3 1 5.98 2.42 1.57 0.12 0.08

2 77.80 15.13 7.29 0.22 0.15

4 1 1.78 0.88 0.62 0.05 0.03

2 4.67 1.86 1.19 0.07 0.05

8 6 4 3 57.89 11.22 5.38 0.13 0.08

5 1 0.72 0.37 0.26 0.01 0.01

2 1.21 0.57 0.38 0.01 0.01

3 3 1.15 0.72 0.02 0.01

4 33 6.33 3 0.04 0.02

7 2 1 144.10 28.09 13.59 0.48 0.34

3 1 9.59 3.90 2.54 0.22 0.16

2 127.51 24.83 12 0.40 0.28

4 1 3.10 1.56 1.10 0.12 0.08

2 8.30 3.35 2.17 0.17 0.12

3 107.59 20.92 10.08 0.31 0.12

5 1 1.47 0.80 0.59 0.05 0.03

2 2.56 1.26 0.89 0.07 0.04

3 6.69 2.66 1.71 0.10 0.06

4 82.69 16.03 7.69 0.19 0.12

6 1 0.76 0.41 0.29 0.01 0.01

2 1.08 0.56 0.39 0.01 0.01

8 7 6 3 1.82 0.86 0.58 0.02 0.01

4 4.50 1.73 1.08 0.03 0.01

5 49.50 9.50 4.50 0.06 0.03

8 2 1 243.46 47.46 22.95 0.81 0.58

3 1 17.04 6.93 4.52 0.40 0.30

2 226.8 44.19 21.35 0.73 0.51

4 1 5.86 2.95 2.10 0.24 0.18

2 15.79 6.39 4.15 0.34 0.25

3 206.94 40.27 19.43 0.63 0.43

5 1 3.50 1.68 1.24 0.14 0.10

2 5.35 2.67 1.89 0.19 0.13

3 14.23 5.72 3.70 0.27 0.19

4 182.04 35.37 17.03 0.50 0.33

6 1 1.86 1.06 0.80 0.07 0.05

2 2.7 1.46 1.06 0.09 0.06

3 4.68 2.29 1.60 0.12 0.08

4 12.15 4.83 3.09 0.17 0.11

8 8 6 5 148.83 28.82 13.82 0.33 0.21

7 1 1.15 0.65 0.47 0.02 0.01

2 1.51 0.82 0.58 0.02 0.01

3 2.16 1.11 0.78 0.03 0.01

4 3.64 1.71 1.15 0.04 0.02

5 9 3.47 2.16 0.05 0.03

6 99 19 9 0.11 0.05

9 3 2 1 14.14 2.71 1.28 0.02 0.01

4 2 1 30.71 5.95 2.86 0.07 0.04

3 1 1.50 0.58 0.36 0.01 0.01

2 16.50 3.16 1.50 0.02 0.01

5 2 1 50.61 9.48 4.75 0.15 0.10

3 1 2.93 1.17 0.75 0.04 0.03

2 36.39 7.05 3.38 0.08 0.05

4 1 0.73 0.34 0.23 0.01 0.01

2 1.80 0.69 0.43 0.01 0.01

3 19.80 3.80 1.80 0.02 0.01

9 6 2 1 75.47 14.70 7.11 0.24 0.17

3 1 4.70 1.90 1.23 0.10 0.07

2 61.25 11.91 5.74 0.18 0.13

4 1 1.37 0.68 0.48 0.04 0.02

2 3.60 1.43 0.92 0.05 0.04

3 44.66 8.65 4.15 0.10 0.06

5 1 0.54 0.28 0.19 0.01 0.01

2 0.91 0.43 0.29 0.01 0.01

3 2.25 0.87 0.54 0.01 0.01

4 24.75 4.75 2.25 0.03 0.01

7 2 1 108.61 21.18 10.25 0.37 0.26

3 1 7.06 2.88 1.88 0.17 0.12

2 94.39 18.39 8.89 0.30 0.21

4 1 2.22 1.12 0.80 0.08 0.06

2 5.98 2.42 1.57 0.12 0.09

3 77.80 15.13 7.29 0.22 0.15

5 1 1.02 0.56 0.41 0.04 0.02

9 7 5 2 1.78 0.88 0.62 0.05 0.03

3 4.67 1.83 1.19 0.07 0.05

4 57.89 11.22 5.38 0.13 0.08

6 1 0.50 0.27 0.19 0.01 0.01

2 0.72 0.37 0.26 0.01 0.01

3 1.21 0.57 0.38 0.01 0.01

4 3 1.15 0.72 0.02 0.01

5 33 6.33 3 0.04 0.02

8 2 1 158.32 30.89 14.95 0.55 0.39

3 1 10.66 4.35 2.85 0.27 0.20

2 144.10 28.09 13.59 0.48 0.34

4 1 3.52 1.79 1.28 0.16 0.12

2 9.59 3.90 2.54 0.22 0.16

3 127.51 24.83 12 0.40 0.28

5 1 1.75 0.98 0.73 0.09 0.06

2 3.10 1.56 1.10 0.12 0.08

3 8.30 3.35 2.17 0.17 0.12

9 8 5 4 107.59 20.92 10.08 0.31 0.21

6 1 1.01 0.58 0.44 0.04 0.03

2 1.47 0.80 0.59 0.05 0.03

3 2.56 1.26 0.89 0.07 0.04

4 6.69 2.66 1.71 0.10 0.06

5 82.69 16.03 7.69 0.19 0.12

7 1 0.57 0.32 0.23 0.01 0.01

2 0.78 0.41 0.29 0.01 0.01

3 1.08 0.56 0.39 0.01 0.01

4 1.82 0.86 0.58 0.02 0.01

5 4.50 1.73 1.08 0.03 0.01

6 49.50 9.50 4.50 0.06 0.03

9 2 1 257.69 50.25 24.32 0.88 0.63

3 1 18.09 7.37 4.82 0.45 0.34

2 243.46 47.46 22.95 0.81 0.58

4 1 6.27 3.18 2.27 0.28 0.21

2 17.04 6.93 4.52 0.40 0.30

9 9 4 3 226.87 44.19 21.35 0.73 0.51

5 1 3.31 1.85 1.37 0.19 0.14

2 5.86 2.95 2.10 0.24 0.18

3 15.79 6.39 4.15 0.34 0.25

4 206.94 40.27 19.43 0.63 0.43

6 1 2.08 1.21 0.92 0.12 0.08

2 3.05 1.68 1.24 0.14 0.10

3 5.35 2.67 1.89 0.19 0.13

4 14.23 5.72 3.70 0.27 0.19

5 182.04 35.37 17.03 0.50 0.33

7 1 1.40 0.83 0.63 0.06 0.04

2 1.86 1.06 0.79 0.07 0.05

3 2.70 1.46 1.06 0.09 0.06

4 4.68 2.29 1.60 0.12 0.08

5 12.15 4.87 3.09 0.17 0.11

6 148.83 28.83 13.83 0.33 0.21

8 1 0.93 0.53 0.39 0.02 0.01

9 9 8 2 1.15 0.65 0.47 0.02 0.01

3 1.51 0.82 0.58 0.02 0.01

4 2.16 1.11 0.79 0.03 0.01

5 3.64 1.71 1.15 0.04 0.02

6 9 3.47 2.16 0.05 0.03

7 99 19 9 0.11 0.05

10 3 2 1 12.37 2.37 1.12 0.01 0.01

4 2 1 26.65 5.15 2.47 0.06 0.04

3 1 1.28 0.47 0.31 0.01 0.01

2 14.14 2.71 1.28 0.02 0.02

5 2 1 43.16 8.39 4.05 0.13 0.08

3 1 2.47 0.99 0.64 0.04 0.02

2 30.71 5.95 2.86 0.07 0.04

4 1 0.61 0.29 0.19 0.01 0.01

2 1.50 0.58 0.36 0.01 0.01

3 16.50 3.16 1.50 0.02 0.01

6 2 1 63.05 12.28 5.94 0.20 0.14

10 6 3 1 3.88 1.57 1.43 0.08 0.06

2 50.61 9.84 4.75 0.15 0.10

4 1 1.12 0.55 0.39 0.03 0.02

2 2.93 1.17 0.75 0.04 0.03

3 36.39 7.05 3.38 0.08 0.05

5 1 0.43 0.22 0.16 0.01 0.01

2 0.73 0.34 0.23 0.01 0.01

3 1.80 0.69 0.43 0.01 0.01

4 19.80 3.80 1.80 0.02 0.01

7 2 1 87.91 17.15 8.30 0.30 0.21

3 1 5.64 2.29 1.50 0.13 0.10

2 75.47 14.70 7.11 0.24 0.17

4 1 1.74 0.88 0.63 0.07 0.05

2 4.70 1.90 1.23 0.10 0.07

3 61.25 11.91 5.74 0.18 0.12

5 1 0.80 0.43 0.32 0.03 0.02

2 1.37 0.68 0.48 0.04 0.02

10 7 5 3 3.60 1.43 0.92 0.05 0.04

4 44.66 8.65 4.15 0.10 0.06

6 1 0.38 0.21 0.15 0.01 0.01

2 0.54 0.28 0.19 0.01 0.01

3 0.91 0.43 0.29 0.01 0.01

4 2.25 0.87 0.54 0.01 0.01

5 24.75 4.75 2.25 0.03 0.01

8 2 1 121.06 23.62 11.44 0.43 0.30

3 1 7.99 3.27 2.14 0.21 0.15

2 108.61 21.18 10.25 0.37 0.26

4 1 2.59 1.32 0.95 0.12 0.09

2 7.06 2.88 1.88 0.17 0.12

3 94.39 18.39 8.89 0.30 0.21

5 1 1.25 0.71 0.52 0.06 0.05

2 2.22 1.12 0.79 0.08 0.06

3 5.98 2.42 1.57 0.12 0.09

4 77.80 15.13 7.29 0.22 0.15

10 8 6 1 0.70 0.41 0.30 0.03 0.02

2 1.02 0.56 0.41 0.04 0.02

3 1.78 0.88 0.62 0.05 0.03

4 4.67 1.86 1.19 0.07 0.05

5 57.89 11.22 5.38 0.13 0.08

7 1 0.38 0.22 0.16 0.01 0.01

2 0.50 0.27 0.19 0.01 0.01

3 0.72 0.37 0.26 0.01 0.01

4 1.21 0.57 0.38 0.01 0.01

5 3 1.15 0.72 0.02 0.01

6 33 6.33 3 0.04 0.02

9 2 1 170.77 33.33 16.15 0.61 0.44

3 1 11.58 4.74 3.11 0.31 0.23

2 158.32 30.89 14.95 0.55 0.39

4 1 3.88 1.98 1.43 0.19 0.14

2 10.66 4.35 2.85 0.27 0.20

3 144.10 28.09 13.59 0.48 0.34

10 9 5 1 1.97 1.11 0.84 0.12 0.09

2 3.52 1.79 1.28 0.16 0.12

3 9.59 3.90 2.54 0.22 0.16

4 127.51 24.83 12 0.40 0.28

6 1 1.19 0.71 0.54 0.07 0.05

2 1.75 0.98 0.73 0.09 0.06

3 3.10 1.56 1.10 0.11 0.08

4 8.30 3.35 2.17 0.17 0.12

5 107.59 20.92 10.08 0.31 0.12

7 1 0.76 0.46 0.35 0.04 0.02

2 1.01 0.58 0.44 0.04 0.03

3 1.47 0.81 0.59 0.05 0.03

4 2.56 1.26 0.89 0.07 0.04

5 6.69 2.66 1.71 0.10 0.07

6 82.69 16.03 7.69 0.19 0.12

8 1 0.47 0.27 0.19 0.01 0.01

2 0.58 0.32 0.23 0.01 0.01

10 9 8 3 0.76 0.41 0.29 0.01 0.01

4 1.08 0.56 0.39 0.01 0.01

5 1.82 0.86 0.58 0.02 0.01

6 4.50 1.73 1.08 0.03 0.01

7 49.50 9.50 4.50 0.06 0.03

10 2 1 270.14 52.70 25.21 0.95 0.68

3 1 18.99 7.76 5.08 0.50 0.38

2 257.69 50.25 24.32 0.88 0.63

4 1 6.61 3.37 2.42 0.32 0.24

2 18.09 7.37 4.82 0.45 0.34

3 243.46 47.46 22.95 0.81 0.58

5 1 3.52 1.98 1.48 0.22 0.17

2 6.27 3.18 2.27 0.28 0.21

3 17.04 6.93 4.52 0.40 0.30

4 226.87 44.19 21.35 0.73 0.51

6 1 2.24 1.33 1.01 0.15 0.11

2 3.31 1.85 1.37 0.19 0.14

10 10 6 3 5.86 2.95 2.10 0.24 0.18

4 15.79 6.39 4.15 0.34 0.25

5 206.94 40.27 19.43 0.63 0.43

7 1 1.56 0.95 0.73 0.10 0.07

2 2.08 1.21 0.92 0.12 0.08

3 3.05 1.68 1.24 0.14 0.10

4 5.35 2.67 1.89 0.19 0.13

5 14.23 5.72 3.70 0.27 0.19

6 182.04 35.37 17.03 0.50 0.33

8 1 1.12 0.68 0.52 0.05 0.03

2 1.40 0.83 0.63 0.06 0.04

3 1.86 1.06 0.79 0.07 0.05

4 2.70 1.46 1.06 0.09 0.06

5 4.68 2.29 1.60 0.12 0.08

6 12.15 4.83 3.09 0.17 0.11

7 148.83 28.82 13.82 0.33 0.21

9 1 0.78 0.45 0.33 0.01 0.01

10 10 9 2 0.93 0.53 0.39 0.02 0.01

3 1.15 0.65 0.47 0.02 0.01

4 1.51 0.82 0.58 0.02 0.01

5 2.16 1.11 0.78 0.03 0.01

6 3.64 1.71 1.15 0.04 0.02

7 9 3.47 2.16 0.05 0.03

8 99 19 9 0.11 0.05

values of the factor u(n,n,r,k,1-α)

n n r k  0.99  0.95  0.90  0.10  0.05
3 3 2 1 99 19 9 0.11 0.05

4 4 2 1 148.83 28.83 13.83 0.33 0.21

3 1 9 3.47 2.16 0.05 0.03

2 99 19 9 0.11 0.05

5 5 2 1 182.04 35.37 17.03 0.50 0.33

3 1 12.15 4.83 3.09 0.17 0.11

2 148.83 28.83 13.83 0.33 0.21

4 1 3.64 1.71 1.15 0.04 0.02

2 9 3.47 2.16 0.05 0.03

3 99 19 9 0.11 0.05

6 6 2 1 206.94 40.27 19.43 0.63 0.43

3 1 14.23 5.72 3.70 0.27 0.19

2 182.04 35.37 17.03 0.50 0.33

4 1 4.68 2.29 1.60 0.12 0.08

2 12.15 4.83 3.09 0.17 0.11

3 148.83 28.83 13.83 0.33 0.21

5 1 2.16 1.11 0.78 0.03 0.01

6 6 5 2 3.64 1.71 1.15 0.04 0.02

3 9 3.47 2.16 0.05 0.03

4 99 19 9 0.11 0.05

7 7 2 1 226.8 44.19 21.35 0.73 0.51

3 1 15.79 6.39 4.15 0.34 0.25

2 206.94 40.27 19.43 0.63 0.43

4 1 5.35 2.67 1.89 0.19 0.13

2 14.23 5.72 3.70 0.27 0.19

3 182.04 35.37 17.03 0.50 0.33

5 1 2.70 1.46 1.06 0.09 0.06

2 4.68 2.29 1.60 0.12 0.08

3 12.15 4.83 3.09 0.17 0.11

4 148.83 28.83 13.83 0.33 0.21

6 1 1.51 0.82 0.58 0.02 0.01

2 2.16 1.11 0.78 0.03 0.01

3 3.64 1.71 1.15 0.04 0.02

4 9 3.47 2.16 0.05 0.03

7 7 6 5 99 19 9 0.11 0.05

8 8 2 1 243.46 47.46 22.95 0.81 0.58

3 1 17.04 6.93 4.52 0.40 0.30

2 226.8 44.19 21.35 0.73 0.51

4 1 5.86 2.95 2.10 0.24 0.18

2 15.79 6.39 4.15 0.34 0.25

3 206.94 40.27 19.43 0.63 0.43

5 1 3.50 1.68 1.24 0.14 0.10

2 5.35 2.67 1.89 0.19 0.13

3 14.23 5.72 3.70 0.27 0.19

4 182.04 35.37 17.03 0.50 0.33

6 1 1.86 1.06 0.80 0.07 0.05

2 2.7 1.46 1.06 0.09 0.06

3 4.68 2.29 1.60 0.12 0.08

4 12.15 4.83 3.09 0.17 0.11

5 148.83 28.82 13.82 0.33 0.21

7 1 1.15 0.65 0.47 0.02 0.01

8 8 7 2 1.51 0.82 0.58 0.02 0.01

3 2.16 1.11 0.78 0.03 0.01

4 3.64 1.71 1.15 0.04 0.02

5 9 3.47 2.16 0.05 0.03

6 99 19 9 0.11 0.05

9 9 2 1 257.69 50.25 24.32 0.88 0.63

3 1 18.09 7.37 4.82 0.45 0.34

2 243.46 47.46 22.95 0.81 0.58

4 1 6.27 3.18 2.27 0.28 0.21

2 17.04 6.93 4.52 0.40 0.30

3 226.87 44.29 21.35 0.73 0.51

5 1 3.31 1.85 1.37 0.19 0.14

2 5.86 2.95 2.10 0.24 0.18

3 15.79 6.39 4.15 0.34 0.25

4 206.94 40.27 19.43 0.63 0.43

6 1 2.08 1.21 0.92 0.12 0.08

2 3.05 1.68 1.24 0.14 0.10

9 9 6 3 5.35 2.67 1.89 0.19 0.13

4 14.23 5.72 3.70 0.27 0.19

5 182.04 35.37 17.03 0.50 0.33

7 1 1.40 0.83 0.63 0.06 0.04

2 1.86 1.06 0.79 0.07 0.05

3 2.70 1.46 1.06 0.09 0.06

4 4.68 2.29 1.60 0.12 0.08

5 12.15 4.87 3.09 0.17 0.11

6 148.83 28.83 13.83 0.33 0.21

8 1 0.93 0.53 0.39 0.02 0.01

2 1.15 0.65 0.47 0.02 0.01

3 1.51 0.82 0.58 0.02 0.01

4 2.16 1.11 0.79 0.03 0.01

5 3.64 1.71 1.15 0.04 0.02

6 9 3.47 2.16 0.05 0.03

7 99 19 9 0.11 0.05

10 10 2 1 270.14 52.70 25.21 0.95 0.68

10 10 3 1 18.99 7.76 5.08 0.50 0.38

2 257.69 50.25 24.32 0.88 0.63

4 1 6.61 3.37 2.42 0.32 0.24

2 18.09 7.37 4.82 0.45 0.34

3 243.46 47.46 22.95 0.81 0.58

5 1 3.52 1.98 1.48 0.22 0.17

2 6.27 3.18 2.27 0.28 0.21

3 17.04 6.93 4.52 0.40 0.30

4 226.87 44.19 21.35 0.73 0.51

6 1 2.24 1.33 1.01 0.15 0.11

2 3.31 1.85 1.37 0.19 0.14

3 5.86 2.95 2.10 0.24 0.18

4 15.79 6.39 4.15 0.34 0.25

5 206.94 40.27 19.43 0.63 0.43

7 1 1.56 0.95 0.73 0.10 0.07

2 2.08 1.21 0.92 0.12 0.08

3 3.05 1.68 1.24 0.14 0.10

56

Page 62: GWDGwebdoc.sub.gwdg.de/ebook/e/2000/mathe-goe/Aomar3.pdf · CONTENTS P age Ac kno wledgemen t In tro duction 1 Chapter one Metho ds and previous results on prediction 1-1 De nitions

values of the factor u(n; n; r; k; 1 − α)

  n   r   k |    0.99     0.95     0.90     0.10     0.05
 10   7   4 |    5.35     2.67     1.89     0.19     0.13
          5 |   14.23     5.72     3.70     0.27     0.19
          6 |  182.04    35.37    17.03     0.50     0.33
      8   1 |    1.12     0.68     0.52     0.05     0.03
          2 |    1.40     0.83     0.63     0.06     0.04
          3 |    1.86     1.06     0.79     0.07     0.05
          4 |    2.70     1.46     1.06     0.09     0.06
          5 |    4.68     2.29     1.60     0.12     0.08
          6 |   12.15     4.87     3.09     0.17     0.11
          7 |  148.83    28.82    13.82     0.33     0.21
      9   1 |    0.78     0.45     0.33     0.01     0.01
          2 |    0.93     0.53     0.39     0.02     0.01
          3 |    1.15     0.65     0.47     0.02     0.01
          4 |    1.51     0.82     0.58     0.02     0.01
          5 |    2.16     1.11     0.78     0.03     0.01
          6 |    3.64     1.71     1.15     0.04     0.02
          7 |    9        3.47     2.16     0.05     0.03
          8 |   99       19        9        0.11     0.05


CHAPTER THREE

ESTIMATION FOR THE TWO-PARAMETER EXPONENTIAL DISTRIBUTION UNDER DOUBLY TYPE-II CENSORING

The exponential distribution has an important position among lifetime models. It is the first lifetime model for which statistical methods were extensively developed, and many authors have contributed to its study. Important works include Epstein and Sobel (1954), Epstein (1960), Mann, Schafer and Singpurwalla (1974), Bain (1978), Lawless (1982), Balakrishnan and Cohen (1991), Balasubramanian and Balakrishnan (1992), and Kong and Fei (1994).

3-1 Estimation for the two-parameter exponential distribution under doubly type-II censoring

We assume that the survival times of certain items have a two-parameter exponential distribution with density function

(3.1.1)  f(x; θ, σ) = (1/σ) e^{−(x−θ)/σ},  x ≥ θ, σ > 0.

This model is employed in situations where it is believed that death or failure cannot occur before a certain time θ. When θ is known, the data can be analyzed as for the one-parameter exponential distribution, because the variable X − θ has a one-parameter exponential distribution. When the model includes the parameter θ, the data usually do not provide much information about θ, and including θ can cause rather special statistical problems, especially for models such as the Weibull, lognormal, and gamma distributions. Fortunately, for the exponential distribution these problems have been satisfactorily solved.

Suppose that X_(k), X_(k+1), ..., X_(r) are the order statistics in a doubly type-II censored sample from a sample of size n from a two-parameter exponential distribution exp(θ, σ). We shall obtain lower confidence bounds and significance tests for the parameters σ and θ.

(i) The lower confidence bound for the parameter σ

Let X_(k), X_(k+1), ..., X_(r) be the (r−k+1) order statistics from a sample of size n from a two-parameter exponential distribution exp(θ, σ). The following results have been obtained in the special case k = 1 by lemma


1, and our arguments are similar. The random variables

(3.1.2)  Z_i = (n − i + 1)(X_(i) − X_(i−1))/σ,  i = k+1, k+2, ..., r,

are independent and identically distributed with exponential distribution exp(0, 1). Define

(3.1.3)  Q_k = (X_(k) − θ)/σ

and

(3.1.4)  Q_{r,k} = Σ_{i=k+1}^{r} (X_(i) − X_(i−1))/σ.

Then

(3.1.5)  Q_{r,k} = Σ_{i=k+1}^{r} Z_i/(n − i + 1),

where the Z_i are independent and identically distributed exp(0, 1) random variables. The pivotal quantity Q_{r,k} does not contain the parameter θ; therefore a confidence bound for the parameter σ can be constructed through Q_{r,k}.

Q_{r,k} is a sum of (r−k) independent exponentially distributed random variables. Following [38], its distribution is approximated by a gamma distribution G(α, β) with matching moments, where μ_{r,k} = E(Q_{r,k}) = αβ and V_{r,k} = Var(Q_{r,k}) = αβ². Then β = V_{r,k}/μ_{r,k}, α = μ²_{r,k}/V_{r,k}, and the random variable 2Q_{r,k}/β has a χ²(2α)-distribution. Observe that

(3.1.6)  2μ_{r,k} Q_{r,k}/V_{r,k} ~ χ²(2μ²_{r,k}/V_{r,k}),  see [38].

The mean and the variance of Q_{r,k} are given by

(3.1.7)  μ_{r,k} = Σ_{i=k+1}^{r} 1/(n − i + 1)

and

(3.1.8)  V_{r,k} = Σ_{i=k+1}^{r} 1/(n − i + 1)².

Then the (1−α)% lower confidence bound for the parameter σ is

(3.1.9)  σ_l = 2μ_{r,k} Σ_{i=k+1}^{r} (X_(i) − X_(i−1)) / [ V_{r,k} χ²_{1−α}(2μ²_{r,k}/V_{r,k}) ].

If 2μ²_{r,k}/V_{r,k} is an integer, then the distribution values and the quantiles of χ²(2μ²_{r,k}/V_{r,k}) can be found directly in a regular χ² table.

In general, if χ²(ν) is a χ² random variable with ν degrees of freedom, then approximately, for large ν,

(3.1.10)  [ (χ²(ν)/ν)^{1/3} + 2/(9ν) − 1 ] (9ν/2)^{1/2} ~ N(0, 1),  see [20].

If z_{1−α} is the (1−α)% quantile of N(0, 1), then correspondingly

(3.1.11)  [ (χ²_{1−α}(ν)/ν)^{1/3} + 2/(9ν) − 1 ] (9ν/2)^{1/2} ≈ z_{1−α},

that is,

(3.1.12)  χ²_{1−α}(ν) ≈ ν [ 1 − 2/(9ν) + z_{1−α} (2/(9ν))^{1/2} ]³.

Substituting ν = 2μ²_{r,k}/V_{r,k} into (3.1.9) gives

(3.1.14)  σ_l = 2μ_{r,k} Σ_{i=k+1}^{r} (X_(i) − X_(i−1)) / { V_{r,k} (2μ²_{r,k}/V_{r,k}) [ 1 − V_{r,k}/(3μ_{r,k})² + (V_{r,k}^{1/2}/(3μ_{r,k})) z_{1−α} ]³ }.
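The approximation (3.1.10)–(3.1.12) is the Wilson–Hilferty cube-root transformation from [20]. A minimal Python sketch (the function name is ours, not part of the thesis's own program) shows how a χ² quantile is obtained from a standard-normal quantile:

```python
import math

def chi2_quantile_wh(nu, z):
    """Wilson-Hilferty approximation to the chi-square quantile
    chi^2_p(nu), given the standard-normal quantile z = z_p:
    chi^2_p(nu) ~ nu * (1 - 2/(9 nu) + z * sqrt(2/(9 nu)))**3."""
    return nu * (1.0 - 2.0 / (9.0 * nu) + z * math.sqrt(2.0 / (9.0 * nu))) ** 3

# nu = 6, p = 0.95 (z_0.95 = 1.6449); the exact quantile is 12.592.
approx = chi2_quantile_wh(6, 1.6449)
```

The approximation is accurate already for moderate ν, which is what makes it usable when 2μ²_{r,k}/V_{r,k} is not an integer.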

Then we have proved:

Theorem 3:
The lower confidence bound for the parameter σ in doubly type-II censored samples from a two-parameter exponential distribution is given by

(3.1.15)  σ_l = 2μ_{r,k} Σ_{i=k+1}^{r} (X_(i) − X_(i−1)) / { V_{r,k} (2μ²_{r,k}/V_{r,k}) [ 1 − V_{r,k}/(3μ_{r,k})² + (V_{r,k}^{1/2}/(3μ_{r,k})) z_{1−α} ]³ }.
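Theorem 3 is straightforward to evaluate numerically. A minimal Python sketch (the function name is ours; z_conf is the normal quantile z_{1−α}, e.g. about 1.645 for a 95% bound), with the observed order statistics passed as x = [x_(k), ..., x_(r)]:

```python
import math

def sigma_lower_bound(x, n, k, r, z_conf):
    """Lower confidence bound for sigma, (3.1.15), from the doubly
    type-II censored order statistics x = [x_(k), ..., x_(r)]."""
    mu = sum(1.0 / (n - i + 1) for i in range(k + 1, r + 1))      # mu_{r,k}, (3.1.7)
    v = sum(1.0 / (n - i + 1) ** 2 for i in range(k + 1, r + 1))  # V_{r,k}, (3.1.8)
    spacings = x[-1] - x[0]          # telescoping sum of the X_(i) - X_(i-1)
    nu = 2.0 * mu * mu / v           # chi-square degrees of freedom
    chi2 = nu * (1.0 - v / (9.0 * mu * mu)
                 + math.sqrt(v) / (3.0 * mu) * z_conf) ** 3
    return 2.0 * mu * spacings / (v * chi2)
```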


(ii) The lower confidence bound for the parameter θ

The confidence bound for the parameter θ can be constructed through the pivotal quantity

(3.1.16)  Z = 2Q_k / (2μ_{r,k} Q_{r,k}/V_{r,k}) = [ 2(X_(k) − θ)/σ ] / [ (2μ_{r,k}/V_{r,k}) Σ_{i=k+1}^{r} (X_(i) − X_(i−1))/σ ].

Z has the form of a ratio of two chi-square random variables. The distributions of Q_k and Q_{r,k} have been given above; therefore the distribution of Z is easy to derive. Note that

(3.1.17)  2Q_k ~ χ²(m₁)

and

(3.1.18)  2μ_{r,k} Q_{r,k}/V_{r,k} ~ χ²(m₂),

where m₁ = 2 and m₂ = 2μ²_{r,k}/V_{r,k}. Then

(3.1.19)  Z = χ²(m₁)/χ²(m₂),

so that the ratio of mean-corrected chi-squares

(3.1.20)  Q_k / [ 2μ_{r,k} Q_{r,k}/(V_{r,k} m₂) ]

follows the F-distribution with 2 and m₂ degrees of freedom. If f(2, m₂, 1−α) is the (1−α)% percentile point of this F-distribution, then the (1−α)% lower confidence bound for θ is

(3.1.21)  θ_l = X_(k) − (1/μ_{r,k}) f(2, m₂, 1−α) Σ_{i=k+1}^{r} (X_(i) − X_(i−1)).

We have proved:

Theorem 4:
The lower confidence bound for the parameter θ in doubly type-II censored samples from a two-parameter exponential distribution is given by

(3.1.22)  θ_l = X_(k) − (1/μ_{r,k}) f(2, m₂, 1−α) Σ_{i=k+1}^{r} (X_(i) − X_(i−1)).

When m₁ and m₂ are not integers, interpolation in F-tables for integral degrees of freedom has to be used to determine f(m₁, m₂, 1−α). Observe that in lifetime models negative values of θ do not make sense. Therefore, if the estimation of θ gives a negative value, we can simply replace it by zero.
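Theorem 4 can be sketched in the same way (illustrative names; the F-quantile f(2, m₂, 1−α) must be supplied from tables or a library). Following the closing remark, a negative bound is truncated to zero:

```python
def theta_lower_bound(x, n, k, r, f_quantile):
    """Lower confidence bound for theta, (3.1.22), from
    x = [x_(k), ..., x_(r)]; f_quantile is f(2, m2, 1-alpha)."""
    mu = sum(1.0 / (n - i + 1) for i in range(k + 1, r + 1))   # mu_{r,k}
    theta_l = x[0] - f_quantile / mu * (x[-1] - x[0])
    # Negative values of theta make no sense in lifetime models.
    return max(theta_l, 0.0)
```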


CHAPTER FOUR

PREDICTION INTERVAL FOR FUTURE OBSERVATIONS USING A MAXIMUM LIKELIHOOD ESTIMATOR

Our principal application is the prediction of higher order statistics from lower ones in type-II censored random samples. Let X₁, X₂, ..., X_n be independent real-valued random variables. Assume that the distribution of the random variables is identical and that it has a density function f(x; η), η = (θ, σ). Let X_(1), X_(2), ..., X_(n) be the order statistics of these variables. Let the random vector X = (X_(k), X_(k+1), ..., X_(r)) be observed, and let Y be the unobserved (future) random variable X_(s), where 1 ≤ k < r < s ≤ n.

We proceed as follows. We find σ*, the maximum likelihood estimate of σ. For this value σ* we take the maximum likelihood predictor x*_(s), which is the value maximizing the conditional likelihood function of X_(s) given X_(k) = x_(k), ..., X_(r) = x_(r), that is, L(x_(s) | X_(k) = x_(k), ..., X_(r) = x_(r)).

4-1 Maximum likelihood estimate

Let X_(k), X_(k+1), ..., X_(r) denote the (r−k+1) order statistics of a random sample of size n from a two-parameter exponential distribution having density function

(4.1.1)  f_X(x; θ, σ) = (1/σ) e^{−(x−θ)/σ},  x ≥ θ, σ > 0,

and distribution function

(4.1.2)  F_X(x) = 1 − e^{−(x−θ)/σ},  x ≥ θ, σ > 0.

To derive the maximum likelihood estimate, we consider the likelihood function of the variables Z_{k+1}, Z_{k+2}, ..., Z_r and σ, where, from lemma 1, the variables Z_i = (n − i + 1)(X_(i) − X_(i−1)), i = k+1, ..., r, are independent and identically distributed random variables with exponential distribution. Then the maximum likelihood estimate σ*(X_(k), X_(k+1), ..., X_(r)) can be obtained from the likelihood function of Z_{k+1}, Z_{k+2}, ..., Z_r and σ:

(4.1.3)  L(z_{k+1}, ..., z_r; σ) = Π_{i=k+1}^{r} (1/σ) e^{−z_i/σ}


(4.1.4)  L(z_{k+1}, ..., z_r; σ) = σ^{−(r−k)} e^{−Σ_{i=k+1}^{r} z_i/σ}.

Then the value σ* which maximizes L satisfies

(4.1.5)  ∂ log L / ∂σ = 0,

where

(4.1.6)  log L = −(r−k) log σ − Σ_{i=k+1}^{r} z_i/σ.

From (2.2.25) and (2.2.26) we obtain

T = Σ_{i=k+1}^{r} X_(i) + (n−r) X_(r) − (n−k) X_(k) = Σ_{i=k+1}^{r} Z_i,

so that

∂ log L / ∂σ = −(r−k)/σ + T/σ² = 0.

Then

(4.1.7)  σ* = T/(r−k).

We have proved:

Theorem 5:
The maximum likelihood estimate for the parameter σ from the observation of X_(k), X_(k+1), ..., X_(r) is given by

(4.1.8)  σ* = [ Σ_{i=k+1}^{r} X_(i) + (n−r) X_(r) − (n−k) X_(k) ] / (r−k).
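Theorem 5 reduces to a one-line computation. A minimal Python sketch (names ours), with x = [x_(k), ..., x_(r)]:

```python
def sigma_mle(x, n, k, r):
    """MLE of sigma, (4.1.8): sigma* = T/(r - k), with
    T = sum_{i=k+1}^{r} X_(i) + (n-r) X_(r) - (n-k) X_(k)."""
    t = sum(x[1:]) + (n - r) * x[-1] - (n - k) * x[0]
    return t / (r - k)
```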


4-2 Prediction using a maximum likelihood estimator

Let the random vector X = (X_(k), X_(k+1), ..., X_(r)) be observed. For the value σ* we take the maximum likelihood predictor x*_(s), which is the value maximizing the conditional likelihood function of X_(s) given X_(k) = x_(k), ..., X_(r) = x_(r):

(4.2.1)  L(x_(s) | X_(k) = x_(k), ..., X_(r) = x_(r)) = L(x_(k), x_(k+1), ..., x_(r), x_(s)) / L(x_(k), x_(k+1), ..., x_(r)),

where

(4.2.2)  L(x_(k), x_(k+1), ..., x_(r), x_(s)) = n! / [(k−1)! (s−r−1)! (n−s)!] Π_{i=k}^{r} f_X(x_(i)) [F_X(x_(k))]^{k−1} [F_X(x_(s)) − F_X(x_(r))]^{s−r−1} [1 − F_X(x_(s))]^{n−s} f_X(x_(s))

and

(4.2.3)  L(x_(k), x_(k+1), ..., x_(r)) = n! / [(k−1)! (n−r)!] Π_{i=k}^{r} f_X(x_(i)) [F_X(x_(k))]^{k−1} [1 − F_X(x_(r))]^{n−r}.

Then

(4.2.4)  L(x_(s) | X_(k) = x_(k), ..., X_(r) = x_(r)) = (n−r)! / [(s−r−1)! (n−s)!] [F_X(x_(s)) − F_X(x_(r))]^{s−r−1} [1 − F_X(x_(s))]^{n−s} f_X(x_(s)) [1 − F_X(x_(r))]^{−(n−r)}.

For the exponential distribution (4.1.1) this becomes

(4.2.5)  L = (n−r)! / [(s−r−1)! (n−s)! σ] [ e^{−(x_(r)−θ)/σ} − e^{−(x_(s)−θ)/σ} ]^{s−r−1} [ e^{−(x_(s)−θ)/σ} ]^{n−s+1} [ e^{−(x_(r)−θ)/σ} ]^{−(n−r)},


(4.2.6)  L = (n−r)! / [(s−r−1)! (n−s)! σ] [ 1 − e^{−(x_(s)−x_(r))/σ} ]^{s−r−1} [ e^{−(x_(s)−θ)/σ} ]^{n−s+1} [ e^{−(x_(r)−θ)/σ} ]^{−(n−s+1)}.

Put Z = X_(s) − X_(r). The predictive likelihood function (PLF) of Z and σ* is given by

(4.2.7)  L(σ*, z) = (n−r)! / [(s−r−1)! (n−s)! σ*] [ e^{−z/σ*} ]^{n−s+1} [ 1 − e^{−z/σ*} ]^{s−r−1}

(4.2.8)  L(σ*, z) = 1 / [σ* B(s−r, n−s+1)] [ e^{−z/σ*} ]^{n−s+1} [ 1 − e^{−z/σ*} ]^{s−r−1},

where B(·,·) denotes the beta function. The value z* which maximizes the likelihood function L(σ*, z) satisfies

(4.2.9)  ∂ log L(σ*, z) / ∂z = 0,

where

(4.2.10)  log L(σ*, z) = log C − log σ* − (n−s+1) z/σ* + (s−r−1) log(1 − e^{−z/σ*}),

with C = 1/B(s−r, n−s+1). Then the maximum likelihood predictor z*, which is the value maximizing L(σ*, z), is given by

(4.2.11)  ∂ log L(σ*, z) / ∂z = (s−r−1) e^{−z/σ*} / [σ* (1 − e^{−z/σ*})] − (n−s+1)/σ* = 0

(4.2.12)  e^{−z/σ*} / (1 − e^{−z/σ*}) = (n−s+1)/(s−r−1)

(4.2.13)  (n−r) e^{−z/σ*} = n−s+1

(4.2.14)  z* = σ* log[ (n−r)/(n−s+1) ].

We have proved:

Theorem 6:
The maximum likelihood predictor for the future order statistic X_(s) under the parameter value σ* is given by

(4.2.15)  X*_(s) = X_(r) + [T/(r−k)] log[ (n−r)/(n−s+1) ].
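Theorem 6 combines the MLE of σ with the logarithmic factor. A minimal sketch (names ours), with x = [x_(k), ..., x_(r)]:

```python
import math

def mlp_order_statistic(x, n, k, r, s):
    """Maximum likelihood predictor of X_(s), (4.2.15)."""
    t = sum(x[1:]) + (n - r) * x[-1] - (n - k) * x[0]
    sigma_star = t / (r - k)
    return x[-1] + sigma_star * math.log((n - r) / (n - s + 1))
```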

From (2.3.3) it can be shown that

(4.2.16)  2T/σ ~ χ²(2(r−k)).

Thus an approximate 100(1−α)% confidence interval for the mean lifetime σ is

(4.2.17)  2T / χ²(2(r−k), 1−α/2) < σ < 2T / χ²(2(r−k), α/2),

where χ²(2(r−k), α) denotes the quantile of the χ²-distribution. To find a confidence interval for the parameter θ, we define, as in (3.1.3),

(4.2.18)  Q_k = (X_(k) − θ)/σ.

From lemma 1, (3.1.17) and (2.3.3), 2Q_k and 2T/σ are independent and have chi-square distributions with 2 and 2(r−k) degrees of freedom respectively. Thus the ratio

(4.2.19)  Y = (2Q_k/2) / [ (2T/σ) / (2(r−k)) ] = (r−k)(X_(k) − θ)/T

follows the F-distribution with 2 and 2(r−k) degrees of freedom. Let f(2, 2(r−k), 1−α) be the 100(1−α)% percentage point of the F-distribution, that is,

(4.2.20)  P(Y ≤ f(2, 2(r−k), 1−α)) = 1−α.

We have proved:

Theorem 7:
The 100(1−α)% confidence interval for θ from the observation of X_(k), X_(k+1), ..., X_(r) and T is given by

(4.2.21)  X_(k) − [T/(r−k)] f(2, 2(r−k), 1−α) < θ < X_(k).

4-3 Prediction interval using a maximum likelihood estimator

Let σ* be a maximum likelihood estimate of σ, let f(y|σ) be the probability density function of a random variable Y which is an unobserved (future) observation, and let f(y|σ*) be the estimated probability density function of Y. If σ is known, a 100(1−α)% prediction interval for Y is [U₁(X; σ), U₂(X; σ)]; if σ is unknown, a reasonable approximate 100(1−α)% prediction interval for Y is [U₁(X; σ*), U₂(X; σ*)], where U₁(X; σ*) and U₂(X; σ*) are the solutions of the integral equations

(4.3.1)  ∫_{−∞}^{U₁} f(y|σ*) dy = α/2,  see [2],

and

(4.3.2)  ∫_{−∞}^{U₂} f(y|σ*) dy = 1 − α/2.

Let a random vector X = (X_(k), X_(k+1), ..., X_(r)) be observed from an exponential distribution with density function (4.1.1), and let σ*(X_(k), X_(k+1), ..., X_(r)) be a maximum likelihood estimator of σ. Using σ* we want to find an approximate 100(1−α)% prediction interval for a future observation Y = X_(s) − X_(r). From (4.2.8), the density function of the random variable Y is

(4.3.3)  f(y|σ) = 1 / [σ B(s−r, n−s+1)] [ e^{−y/σ} ]^{n−s+1} [ 1 − e^{−y/σ} ]^{s−r−1}

and

(4.3.4)  f(y|σ*) = 1 / [σ* B(s−r, n−s+1)] [ e^{−y/σ*} ]^{n−s+1} [ 1 − e^{−y/σ*} ]^{s−r−1}.


From (4.3.1) and (4.3.2), an approximate 100(1−α)% prediction interval for a future observation Y is obtained from

(4.3.5)  ∫_{0}^{U₁} 1 / [σ* B(s−r, n−s+1)] [ e^{−y/σ*} ]^{n−s+1} [ 1 − e^{−y/σ*} ]^{s−r−1} dy = α/2

and

(4.3.6)  ∫_{0}^{U₂} 1 / [σ* B(s−r, n−s+1)] [ e^{−y/σ*} ]^{n−s+1} [ 1 − e^{−y/σ*} ]^{s−r−1} dy = 1 − α/2.

Expanding [1 − e^{−y/σ*}]^{s−r−1} by the binomial theorem,

(4.3.7)  ∫_{0}^{U₁} 1 / [σ* B(s−r, n−s+1)] Σ_{i=0}^{s−r−1} (−1)^i C(s−r−1, i) e^{−(n−s+i+1) y/σ*} dy = α/2.

Then

(4.3.8)  1 / B(s−r, n−s+1) Σ_{i=0}^{s−r−1} (−1)^i C(s−r−1, i) [1/(n−s+i+1)] ( 1 − e^{−(n−s+i+1) U₁/σ*} ) = α/2

and

(4.3.9)  1 / B(s−r, n−s+1) Σ_{i=0}^{s−r−1} (−1)^i C(s−r−1, i) [1/(n−s+i+1)] ( 1 − e^{−(n−s+i+1) U₂/σ*} ) = 1 − α/2.

Then for given values of k, r, s, n and α we can compute the values of U₁(X; σ*) and U₂(X; σ*).

We have proved:

Theorem 8:
An approximate prediction interval for a future observation Y = X_(s) − X_(r) under the parameter value σ* is given by (U₁(X; σ*), U₂(X; σ*)), where U₁ and U₂ are given by (4.3.8) and (4.3.9).
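Equations (4.3.8) and (4.3.9) have no closed-form solution in general, but their left-hand side is a monotone distribution function, so U₁ and U₂ can be found by bisection. A Python sketch (names ours; a Newton iteration would serve equally well):

```python
import math

def predictive_cdf(u, sigma_star, n, r, s):
    """Left-hand side of (4.3.8)/(4.3.9): the distribution function of
    Y = X_(s) - X_(r) evaluated under sigma = sigma*."""
    m = s - r - 1
    # Beta function B(s-r, n-s+1) = Gamma(s-r) Gamma(n-s+1) / Gamma(n-r+1)
    b = math.gamma(s - r) * math.gamma(n - s + 1) / math.gamma(n - r + 1)
    total = 0.0
    for i in range(m + 1):
        c = n - s + i + 1
        total += ((-1) ** i * math.comb(m, i) / c
                  * (1.0 - math.exp(-c * u / sigma_star)))
    return total / b

def prediction_limit(p, sigma_star, n, r, s, hi=1e6):
    """Solve predictive_cdf(u) = p for u by bisection."""
    lo, up = 0.0, hi
    for _ in range(200):
        mid = 0.5 * (lo + up)
        if predictive_cdf(mid, sigma_star, n, r, s) < p:
            lo = mid
        else:
            up = mid
    return 0.5 * (lo + up)
```

For s = r+1 the routine reproduces the closed forms (4.4.4) and (4.4.8) of the special case below.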


4-4 Special cases

(i) Prediction interval for the smallest future observation

In this case s = r+1, and from (4.3.8) and (4.3.9) we get

(4.4.1)  1 / [(n−r) B(1, n−r)] ( 1 − e^{−(n−r) U₁/σ*} ) = α/2.

Since B(1, n−r) = 1/(n−r), this is

(4.4.2)  1 − e^{−(n−r) U₁/σ*} = α/2

(4.4.3)  e^{−(n−r) U₁/σ*} = 1 − α/2.

Then

(4.4.4)  U₁ = [σ*/(n−r)] log[ 1/(1 − α/2) ].

Similarly,

(4.4.5)  1 / [(n−r) B(1, n−r)] ( 1 − e^{−(n−r) U₂/σ*} ) = 1 − α/2

(4.4.6)  1 − e^{−(n−r) U₂/σ*} = 1 − α/2

(4.4.7)  e^{−(n−r) U₂/σ*} = α/2.

Then

(4.4.8)  U₂ = [σ*/(n−r)] log(2/α).

Hence an approximate 100(1−α)% prediction interval for the smallest future observation is (U₁(X; σ*), U₂(X; σ*)).
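The closed forms (4.4.4) and (4.4.8) in Python (names ours):

```python
import math

def smallest_future_limits(sigma_star, n, r, alpha):
    """Prediction limits (4.4.4) and (4.4.8) for Y = X_(r+1) - X_(r)."""
    u1 = sigma_star / (n - r) * math.log(1.0 / (1.0 - alpha / 2.0))
    u2 = sigma_star / (n - r) * math.log(2.0 / alpha)
    return u1, u2
```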


(ii) Prediction interval for the largest future observation

In this case s = n, and from (4.3.8) and (4.3.9) we get

(4.4.9)  1 / B(n−r, 1) Σ_{i=0}^{n−r−1} (−1)^i C(n−r−1, i) [1/(i+1)] ( 1 − e^{−(i+1) U₁/σ*} ) = α/2.

Integrating the density directly, we can rewrite (4.4.9) as

(4.4.10)  ( 1 − e^{−U₁/σ*} )^{n−r} = α/2,

that is,

(4.4.11)  Σ_{i=0}^{n−r} (−1)^i C(n−r, i) e^{−i U₁/σ*} = α/2,

and

(4.4.12)  1 / B(n−r, 1) Σ_{i=0}^{n−r−1} (−1)^i C(n−r−1, i) [1/(i+1)] ( 1 − e^{−(i+1) U₂/σ*} ) = 1 − α/2,

that is,

(4.4.13)  ( 1 − e^{−U₂/σ*} )^{n−r} = 1 − α/2,

(4.4.14)  Σ_{i=0}^{n−r} (−1)^i C(n−r, i) e^{−i U₂/σ*} = 1 − α/2.

Example
Suppose that in a laboratory experiment n = 10 mice are exposed to carcinogens. The experimenter decides to start the study after two of the mice are dead and to observe the others until the sixth death; at that time the survival times of the expired mice are 62, 84, 106 and 144 hours.

Let the lifetimes of all n items be distributed according to the two-parameter exponential distribution with unknown parameters θ and σ. We wish to find a 95% prediction interval for X_(7), a confidence interval for the mean lifetime, and a maximum likelihood predictor (MLP) for X_(8).

Solution
We first look for the maximum likelihood predictor of X_(8). In this case n = 10, r = 6, k = 3 and s = 8, and

T = Σ_{i=k+1}^{r} X_(i) + (n−r) X_(r) − (n−k) X_(k)
  = Σ_{i=4}^{6} X_(i) + 4 X_(6) − 7 X_(3) = 476,

so that σ* = T/(r−k) = 476/3 = 158.7. The maximum likelihood predictor of X_(8) is

X*_(8) = 144 + 158.7 log(4/3) = 144 + 158.7 (0.288) = 189.7 hours.

A 95% prediction interval for X_(7) is obtained from (4.4.4) and (4.4.8):

U₁ = (476/12) log(1/0.975) = 39.67 (0.025) = 1.0,
U₂ = (476/12) log(2/0.05) = 39.67 (3.689) = 146.3.

Hence a 95% prediction interval for X_(7) is (144 + 1.0, 144 + 146.3) = (145.0, 290.3).

A 95% confidence interval for the mean lifetime σ is

2T / χ²(2(r−k), 1−α/2) < σ < 2T / χ²(2(r−k), α/2).

For α = 0.05 the confidence interval is

952 / χ²(6, 0.975) < σ < 952 / χ²(6, 0.025).

Using the tables of χ² we find the interval

952/14.45 < σ < 952/1.24,  i.e. (65.9, 767.7).
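The whole example can be reproduced with a few lines of Python (natural logarithms throughout, as in the derivations; variable names are ours):

```python
import math

# Observed order statistics X_(3), ..., X_(6); n = 10, k = 3, r = 6.
x = [62.0, 84.0, 106.0, 144.0]
n, k, r = 10, 3, 6

t = sum(x[1:]) + (n - r) * x[-1] - (n - k) * x[0]   # T = 476
sigma_star = t / (r - k)                            # about 158.7

# Maximum likelihood predictor of X_(8)  (s = 8)
x8 = x[-1] + sigma_star * math.log((n - r) / (n - 8 + 1))

# 95% prediction interval for X_(7)  (s = r + 1, alpha = 0.05)
u1 = sigma_star / (n - r) * math.log(1.0 / 0.975)
u2 = sigma_star / (n - r) * math.log(2.0 / 0.05)
interval = (x[-1] + u1, x[-1] + u2)
```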


CHAPTER FIVE

THE BAYESIAN PREDICTION

The idea behind prediction is to provide some estimate, either point or interval, for future observations of an experiment F based on the results obtained from an informative experiment E. Let X₁, X₂, ..., X_n be independent real-valued random variables with density function f(x|σ), x ∈ X, σ ∈ Θ. Let X_(1), X_(2), ..., X_(n) be the order statistics of these variables. Let the random vector X = (X_(k), X_(k+1), ..., X_(r)) be observed, and let Y be the random variable X_(s) − X_(r), where 1 ≤ k < r < s ≤ n.

We assume that the distributions of X and Y are indexed by the same parameter σ; our uncertainty about the true value of σ is measured by a prior density function g(σ). Then the posterior density for the parameter σ is given by

(5.1)  h(σ | X_(k) = x_(k), ..., X_(r) = x_(r)) = f(x_(k), ..., x_(r) | σ) g(σ) / ∫_Θ f(x_(k), ..., x_(r) | σ) g(σ) dσ.

If the information in E can be summarized by a sufficient statistic T(X_(k), X_(k+1), ..., X_(r)), then we can write the posterior density for the parameter σ as

(5.2)  h(σ | T = t) = f_T(t|σ) g(σ) / ∫_Θ f_T(t|σ) g(σ) dσ,  see [32].

Suppose we wish to predict a future statistic Y. If T and Y are independent and Y has density function f(y|σ), then the predictive density function for Y is given by

(5.3)  p(y | T = t) = ∫_Θ f(y|σ) h(σ | T = t) dσ,  see [7].

A (1−γ)% Bayesian prediction interval for a future observation Y is then defined as an interval A such that

(5.4)  P(Y ∈ A | T = t) = ∫_A p(y | T = t) dy = 1 − γ,  see [7].


5-1 Prediction intervals for a future observation

Let X₁, X₂, ..., X_n be an independent random sample from a two-parameter exponential distribution having density function

(5.1.1)  f_X(x | θ, σ) = (1/σ) e^{−(x−θ)/σ},  x ≥ θ, σ > 0,

and cumulative distribution function

(5.1.2)  F_X(x) = 1 − e^{−(x−θ)/σ},  x ≥ θ, σ > 0.

Suppose that the informative experiment E yields a doubly type-II censored sample, which can be summarized by the sufficient statistic T(X_(k), X_(k+1), ..., X_(r)):

(5.1.3)  T = Σ_{i=k+1}^{r} X_(i) + (n−r) X_(r) − (n−k) X_(k)

is a sufficient statistic for the parameter σ, and the distribution of T, from (2.2.27), is a gamma distribution:

(5.1.4)  f_T(t|σ) = 1 / [Γ(r−k) σ^{r−k}] t^{r−k−1} e^{−t/σ}.

We assume a two-parameter gamma prior density function for σ:

(5.1.5)  g(σ | α, β) = [β^α / Γ(α)] (1/σ)^{α+1} e^{−β/σ},  σ > 0, α > 0, β > 0.

Then the posterior density for σ is given by

(5.1.6)  h(σ | T = t) = f_T(t|σ) g(σ | α, β) / ∫_0^∞ f_T(t|σ) g(σ | α, β) dσ

(5.1.7)  h(σ | T = t) = { 1/[Γ(r−k) σ^{r−k}] t^{r−k−1} e^{−t/σ} [β^α/Γ(α)] (1/σ)^{α+1} e^{−β/σ} } / ∫_0^∞ 1/[Γ(r−k) σ^{r−k}] t^{r−k−1} e^{−t/σ} [β^α/Γ(α)] (1/σ)^{α+1} e^{−β/σ} dσ.


To evaluate the normalizing integral

(5.1.8)  ∫_0^∞ [β^α / (Γ(r−k) Γ(α))] (1/σ)^{α+r−k+1} t^{r−k−1} e^{−(t+β)/σ} dσ,

put u = (t+β)/σ. Then the integral becomes

(5.1.9)  [β^α t^{r−k−1} / (Γ(r−k) Γ(α))] ∫_0^∞ [u/(t+β)]^{α+r−k+1} e^{−u} (t+β)/u² du

(5.1.10)  = [β^α t^{r−k−1} / (Γ(r−k) Γ(α) (t+β)^{α+r−k})] ∫_0^∞ u^{α+r−k−1} e^{−u} du
        = β^α t^{r−k−1} Γ(α+r−k) / [Γ(r−k) Γ(α) (t+β)^{α+r−k}].

Then the posterior density function is given by

(5.1.11)  h(σ | T = t) = [(t+β)^{α+r−k} / Γ(α+r−k)] (1/σ)^{α+r−k+1} e^{−(t+β)/σ},

that is,

(5.1.12)  h(σ | T = t) = [K^R / Γ(R)] (1/σ)^{R+1} e^{−K/σ},

where, conditional on T = t,

(5.1.13)  R = α + r − k,  K = t + β.
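The posterior update (5.1.12)–(5.1.13) is conjugate: observing T = t simply shifts the prior parameters. A Python sketch (names ours; the prior parameters α, β are hypothetical inputs):

```python
import math

def posterior_parameters(t, alpha, beta, r, k):
    """Posterior parameters (5.1.13): R = alpha + r - k, K = t + beta."""
    return alpha + r - k, t + beta

def posterior_density(sigma, t, alpha, beta, r, k):
    """Posterior density h(sigma | T = t) of (5.1.12)."""
    R, K = posterior_parameters(t, alpha, beta, r, k)
    return K ** R / math.gamma(R) * sigma ** (-(R + 1)) * math.exp(-K / sigma)
```

The density is unimodal with mode K/(R+1).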

Let Y = X_(s) − X_(r). Then Y and T are independent, and the predictive density function for Y is given by

(5.1.14)  p(y | T = t) = ∫_0^∞ f(y|σ) h(σ | T = t) dσ.

The density function of the future observation Y, given by (2.2.14), is

(5.1.15)  f(y|σ) = 1 / [σ B(s−r, n−s+1)] [ e^{−y/σ} ]^{n−s+1} [ 1 − e^{−y/σ} ]^{s−r−1}.


Then the predictive density function of the future observation Y is given by

(5.1.16)  p(y | T = t) = K^R / [Γ(R) B(s−r, n−s+1)] ∫_0^∞ (1/σ)^{R+1} e^{−K/σ} (1/σ) [ e^{−y/σ} ]^{n−s+1} [ 1 − e^{−y/σ} ]^{s−r−1} dσ

(5.1.17)  p(y | T = t) = K^R / [Γ(R) B(s−r, n−s+1)] ∫_0^∞ Σ_{i=0}^{s−r−1} (−1)^i C(s−r−1, i) (1/σ)^{R+2} e^{−[(n−s+i+1)y + K]/σ} dσ

(5.1.18)  p(y | T = t) = R K^R / B(s−r, n−s+1) Σ_{i=0}^{s−r−1} (−1)^i C(s−r−1, i) 1 / [ (n−s+i+1)y + K ]^{R+1}.

Then a (1−γ)% Bayesian prediction interval for a future observation Y is defined as an interval A such that

(5.1.19)  P(Y ∈ A | T = t) = ∫_A p(y | T = t) dy = 1 − γ,

where A is an interval (a, b). If we consider equal tail limits, then

(5.1.20)  P(Y ≤ a | T = t) = γ/2

and

(5.1.21)  P(Y ≥ b | T = t) = γ/2.


Then the prediction interval for Y is the interval (a, b), where a is determined by

(5.1.22)  P(Y ≤ a | T = t) = ∫_0^a R K^R / B(s−r, n−s+1) Σ_{i=0}^{s−r−1} (−1)^i C(s−r−1, i) 1 / [ (n−s+i+1)y + K ]^{R+1} dy = γ/2,

that is,

(5.1.23)  P(Y ≤ a | T = t) = K^R / B(s−r, n−s+1) Σ_{i=0}^{s−r−1} (−1)^i C(s−r−1, i) ( 1 / [ (n−s+i+1) K^R ] − 1 / [ (n−s+i+1) ((n−s+i+1)a + K)^R ] ) = γ/2,

and b is determined by

(5.1.24)  P(Y ≥ b | T = t) = ∫_b^∞ R K^R / B(s−r, n−s+1) Σ_{i=0}^{s−r−1} (−1)^i C(s−r−1, i) 1 / [ (n−s+i+1)y + K ]^{R+1} dy = γ/2,

that is,

(5.1.25)  P(Y ≥ b | T = t) = K^R / B(s−r, n−s+1) Σ_{i=0}^{s−r−1} (−1)^i C(s−r−1, i) 1 / [ (n−s+i+1) ((n−s+i+1)b + K)^R ] = γ/2.

Then for given values of k, r, s, n, α, β, the limits a and b satisfying the probabilities

P(Y ≤ a | T = t) = γ/2  and  P(Y ≥ b | T = t) = γ/2


can be evaluated using a Newton-Raphson method.

We have proved:

Theorem 9:
Given the complete sufficient statistic T of the observed observations and a two-parameter gamma prior density for the parameter, the Bayes prediction interval for a future observation Y = X_(s) − X_(r) is the interval A = (a, b), where a and b are determined from (5.1.23) and (5.1.25).

5-2 Special cases

(i) Prediction intervals for the smallest future observation

In this case s = r+1. From (5.1.23) and (5.1.25) we obtain

(5.2.1)  P(Y ≤ a | T = t) = K^R / B(1, n−r) ( 1 / [(n−r) K^R] − 1 / [(n−r) ((n−r)a + K)^R] ) = γ/2

(5.2.2)  P(Y ≤ a | T = t) = 1 − K^R / ((n−r)a + K)^R = γ/2

(5.2.3)  [ K / ((n−r)a + K) ]^R = 1 − γ/2

(5.2.4)  K / ((n−r)a + K) = (1 − γ/2)^{1/R}

(5.2.5)  1 + (n−r)a/K = (1 − γ/2)^{−1/R}.

Then

(5.2.6)  a = [K/(n−r)] [ (1 − γ/2)^{−1/R} − 1 ].

And

(5.2.7)  P(Y ≥ b | T = t) = K^R / [ B(1, n−r) (n−r) ((n−r)b + K)^R ] = γ/2


(5.2.8)  K^R / ((n−r)b + K)^R = γ/2

(5.2.9)  [ K / ((n−r)b + K) ]^R = γ/2

(5.2.10)  K / ((n−r)b + K) = (γ/2)^{1/R}.

Then

(5.2.11)  b = [K/(n−r)] [ (γ/2)^{−1/R} − 1 ].

(ii) Prediction intervals for the largest future observation

In this case s = n. We wish to give a prediction interval for the largest future observation Y; from (5.1.23) and (5.1.25) we find

(5.2.12)  P(Y ≤ a | T = t) = K^R / B(n−r, 1) Σ_{i=0}^{n−r−1} (−1)^i C(n−r−1, i) ( 1 / [(i+1) K^R] − 1 / [(i+1) ((i+1)a + K)^R] ) = γ/2

and

(5.2.13)  P(Y ≥ b | T = t) = K^R / B(n−r, 1) Σ_{i=0}^{n−r−1} (−1)^i C(n−r−1, i) 1 / [ (i+1) ((i+1)b + K)^R ] = γ/2.
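The closed-form Bayes limits (5.2.6) and (5.2.11) in Python (names ours; K and R come from the posterior parameters (5.1.13)):

```python
def bayes_limits_smallest(K, R, n, r, gamma):
    """Bayes prediction limits (5.2.6) and (5.2.11) for Y = X_(r+1) - X_(r)."""
    a = K / (n - r) * ((1.0 - gamma / 2.0) ** (-1.0 / R) - 1.0)
    b = K / (n - r) * ((gamma / 2.0) ** (-1.0 / R) - 1.0)
    return a, b
```

Since P(Y ≤ y | T = t) = 1 − [K/((n−r)y + K)]^R for s = r+1, both limits can be checked directly against their defining tail probabilities.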


REFERENCES

[1] Antle, C.E. and Rademaker, F. (1972). An upper confidence limit on the maximum of m future observations from a type-I extreme value distribution. Biometrika, 59, 475-477.
[2] Awad, A.M. and Shayib, A.M. (1985). Large sample prediction for a future geometric mean: a comparative study. Commun. Statist. Simula. Comput., 14, 983-1006.
[3] Baker, G.A. (1935). The probability that the mean of a second sample will differ from the mean of a first sample by less than a certain multiple of the standard deviation of the first sample. Ann. Math. Statist., 6, 197-201.
[4] Chew, V. (1968). Simultaneous prediction intervals. Technometrics, 10, 323-330.
[5] Chhikara, R.S. and Guttman, I. (1982). Prediction limits for the inverse Gaussian distribution. Technometrics, 24, 319-324.
[6] David, H.A. (1981). Order Statistics. John Wiley, New York.
[7] Dunsmore, I.R. (1974). The Bayesian predictive distribution in life testing models. Technometrics, 16, 455-460.
[8] Engelhardt, M. (1975). On simple estimation of the parameters of the Weibull or extreme-value distribution. Technometrics, 17, 369-374.
[9] Engelhardt, M. and Bain, L.J. (1982). On prediction limits for samples from a Weibull or extreme-value distribution. Technometrics, 24, 147-150.
[10] Epstein, B. and Sobel, M. (1953). Life testing. J. Amer. Statist. Assoc., 48, 486-502.
[11] Epstein, B. and Sobel, M. (1954). Some theorems relevant to life testing from an exponential distribution. Ann. Math. Statist., 25, 373-381.
[12] Epstein, B. (1954). Tables for the distribution of the number of exceedances. Ann. Math. Statist., 25, 762-768.
[13] Evans, G.I. and Nigm, A.H. (1980). Bayesian prediction for the left truncated exponential distribution. Technometrics, 22, 201-204.
[14] Faulkenberry, G.D. (1973). A method of obtaining prediction intervals. J. Amer. Statist. Assoc., 68, 433-435.
[15] Fertig, K.W., Meyer, M.E. and Mann, N.R. (1980). On constructing prediction intervals for samples from a Weibull or extreme-value distribution. Technometrics, 22, 567-573.
[16] Fertig, K.W. and Mann, N.R. (1977a). One-sided prediction intervals for at least p out of m future observations from a normal population. Technometrics, 19, 167-177.
[17] Gupta, A.K. (1952). Estimation of the mean and standard deviation of a normal population from a censored sample. Biometrika, 39, 260-273.
[18] Hahn, G.J. and Nelson, W. (1973). A survey of prediction intervals and their applications. J. Quality Technology, 5, 178-188.
[19] Howlader, A.H. and Hossain, A. (1995). On Bayesian estimation and prediction from Rayleigh based on type-II censored data. Commun. Statist.-Theory Meth., 24(9), 2249-2259.
[20] Johnson, N.L., Kotz, S. and Balakrishnan, N. (1994). Continuous Univariate Distributions. John Wiley, New York.
[21] Kaminsky, K.S. and Rhodin, L.S. (1985a). Maximum likelihood prediction. Ann. Inst. Statist. Math., 37, 507-517.
[22] Kaminsky, K.S. (1974). Prediction intervals for the exponential distribution using subsets of the data. Technometrics, 16, 57-59.
[23] Kaminsky, K.S. (1977). Comparison of prediction intervals for failure times when life is exponential. Technometrics, 19, 83-86.
[24] Kong, F. and Fei, H. (1994). Interval estimations for one- and two-parameter exponential distributions under multiple type-II censoring. Commun. Statist.-Theory Meth., 23(6), 1717-1733.
[25] Lawless, J.F. (1971). A prediction problem concerning samples from the exponential distribution with application in life testing. Technometrics, 13, 725-730.
[26] Lawless, J.F. (1972). On prediction intervals for samples from the exponential distribution and prediction limits for system survival. Sankhya B, 34, 1-14.
[27] Lawless, J.F. (1977). Prediction intervals for the two-parameter exponential distribution. Technometrics, 19, 469-472.
[28] Lee, E.T. (1992). Statistical Methods for Survival Data Analysis. John Wiley, New York.
[29] Likes, J. (1974). Prediction of s-th ordered observation for the two-parameter exponential distribution. Technometrics, 16, 241-244.
[30] Lingappaiah, G.S. (1974). Prediction in samples from the gamma distribution as applied to life testing. Austral. J. Statist., 16, 30-32.
[31] Lingappaiah, G.S. (1979). Bayesian approach to the prediction problem in complete and censored samples from the gamma and exponential distributions. Commun. Statist.-Theor. Meth., 8, 1403-1423.
[32] Mann, N.R., Schafer, R.E. and Singpurwalla, N.D. (1974). Methods for Statistical Analysis of Reliability and Life Data. John Wiley, New York.
[33] Fisz, M. (1963). Probability Theory and Mathematical Statistics. John Wiley, New York.
[34] Nelson, W. (1982). Applied Life Data Analysis. John Wiley, New York.
[35] Owen, D.B. and Chou, Youn-Min (1986). One-sided simultaneous lower prediction intervals for future samples from a normal distribution. Technometrics, 28, 247-251.
[36] Patel, J.K. (1989). Prediction intervals - a review. Commun. Statist.-Theor. Meth., 18(7), 2393-2465.
[37] Patel, J.K. and Bain, P.T. (1991). Factors for calculating prediction intervals for samples from a one-parameter exponential distribution. J. Quality Technology, 23, 48-52.
[38] Patnaik, P.B. (1949). The non-central chi-square and F-distributions and their applications. Biometrika, 36, 202-232.
[39] Rajesh Singh, S.K. (1995). Bayes estimators for the two-parameter exponential distribution. Commun. Statist.-Theor. Meth., 24(1), 227-240.
[40] Shiue, Wei-Kei and Bain, L.J. (1986). Prediction intervals for the gamma distribution. Computer Science and Statistics: Proceedings of the 18th Symposium on the Interface, 405-410.
[41] Tanis, E.A. (1964). Linear forms in the order statistics from an exponential distribution. Ann. Math. Statist., 35, 270-276.
[42] Rohatgi, V.K. (1984). Statistical Inference. John Wiley, New York.


Curriculum Vitae

I was born on 31 December 1963 in Al-Shaghab, Kena, Egypt. From October 1970 I attended the Al-Shaghab primary school, and from October 1980 the Luxor secondary school. In October 1982 I began the study of mathematics in the Faculty of Science, Department of Mathematics, South Valley University in Kena. In May 1986 I obtained the Bachelor of Science in Mathematics. Since March 1988 I have been a research assistant in the Department of Mathematics of the Faculty of Science in Kena. In May 1994 I obtained the Master of Science in Mathematics. Since October 1996 I have been working on my doctoral thesis under the supervision of Prof. Krengel at the Institut fuer Mathematische Stochastik of the University of Goettingen. My nationality is Egyptian.

Adel M. M. Omar