
Objective and subjective prior distributions for the Gompertz distribution

Abstract

This paper considers the estimation of the unknown parameters of the Gompertz distribution from both the frequentist and Bayesian viewpoints, using objective as well as subjective prior distributions. We first derive non-informative priors using formal rules, namely the Jeffreys prior and the maximal data information prior (MDIP), based on the Fisher information and on entropy, respectively. We also propose a prior distribution that incorporates the expert's knowledge about the issue under study. In this regard, we assume two independent gamma distributions for the parameters of the Gompertz distribution, and they are employed in an elicitation process based on the prior predictive distribution, using the Laplace approximation for integrals. We suppose that an expert can summarize his/her knowledge about the reliability of an item through statements of percentiles. We also present a set of priors proposed by Singpurwalla, assuming a truncated normal prior distribution for the median of the distribution and a gamma prior for the scale parameter. Next, we investigate the effects of these priors on the posterior estimates of the parameters of the Gompertz distribution. The Bayes estimates are computed using a Markov Chain Monte Carlo (MCMC) algorithm. An extensive numerical simulation is carried out to evaluate the performance of the maximum likelihood estimates and Bayes estimates based on bias, mean-squared error and coverage probabilities. Finally, a real data set is analyzed for illustrative purposes.

Key words
Gompertz distribution; objective prior; Jeffreys prior; subjective prior; maximal data information prior; elicitation

INTRODUCTION

The Gompertz distribution was introduced in connection with human mortality and actuarial science by Benjamin Gompertz (1825). Right from the time of its introduction, this distribution has received great attention from demographers and actuaries. It is a generalization of the exponential distribution and is applied in various fields, especially in reliability and life testing studies, actuarial science, and epidemiological and biomedical studies. The Gompertz distribution has some interesting relations with well-known distributions such as the exponential, double exponential, Weibull, extreme value (Gumbel) and generalized logistic distributions (Willekens 2002). An important characteristic of the Gompertz distribution is that it has an exponentially increasing failure rate for the life of the systems, and it is often used to model highly negatively skewed data in survival analysis (Elandt-Johnson and Johnson 1979). In the recent past, many authors have contributed to the studies of statistical methodology and characterization of this distribution; for example, Garg et al. (1970), Read (1983), Makany (1991), Rao and Damaraju (1992), Franses (1994), Chen (1997) and Wu and Lee (1999). Jaheen (2003a, b) studied this distribution based on progressive type-II censoring and record values using a Bayesian approach. Wu et al. (2003) derived point and interval estimators for the parameters of the Gompertz distribution based on progressive type-II censored samples. Wu et al. (2004) used the least squares method to estimate the parameters of the Gompertz distribution. Wu et al. (2006) also studied this distribution under progressive censoring with binomial removals. Ismail (2010) obtained Bayes estimators under partially accelerated life tests with type-I censoring. Ismail (2011) also discussed the point and interval estimation of a two-parameter Gompertz distribution under partially accelerated life tests with type-II censoring. Asgharzadeh and Abdi (2011) studied different types of exact confidence intervals and exact joint confidence regions for the parameters of the two-parameter Gompertz distribution based on record values. Kiani et al. (2012) studied the performance of the Gompertz model with a time-dependent covariate in the presence of right censored data. Moreover, they compared the performance of the model under different censoring proportions (CP) and sample sizes. Shanubhogue and Jain (2013) studied uniformly minimum variance unbiased estimation for the parameter of the Gompertz distribution based on progressively type-II censored data with binomial removals. Lenart (2014) obtained the moments of the Gompertz distribution and the maximum likelihood estimators of its parameters. Lenart and Missov (2016) studied goodness-of-fit tests for the Gompertz distribution. Recently, Singh et al. (2016) studied different methods of estimation for the parameters of the Gompertz distribution when the available data are in the form of fuzzy numbers. They also obtained Bayes estimators of the parameters under different symmetric and asymmetric loss functions.

In this paper, we present a Bayesian analysis for the case in which there is limited prior knowledge about the parameter of interest. In this regard, it is important to use noninformative priors; however, it can be difficult to choose a prior distribution that represents this situation, because there is hardly any precise definition of the concept of a noninformative prior. Nevertheless, many noninformative priors are available, for instance the Jeffreys prior (Jeffreys 1967), the MDIP prior (Zellner 1977, 1984, 1990), the Tibshirani prior (Tibshirani 1989), the reference prior (Bernardo 1979) and many others which seem appropriate for a number of inference problems. It is to be noted that lack of enough information on the part of analysts often forces them to choose noninformative priors, and this consideration ensures that the inferences are mostly data driven. In Bayesian analysis, many authors consider independent gamma priors for the estimation of the parameters of the model, representing weak information, as the a priori independence assumption simplifies the computations. Our main interest in the Bayesian analysis is to select a prior distribution that better represents the dependence structure of the parameters when the information regarding the parameters is not substantial compared with the information from the data. The focus is on the comparison of the independent gamma prior, Jeffreys prior, maximal data information prior (MDIP), Singpurwalla's prior and elicited prior. Jeffreys (1967) proposed a noninformative prior resulting from an argument based on the Fisher information measure, and Zellner (1977, 1984) proposed an alternative prior, named the maximal data information prior (MDIP), based on the entropy measure. The prior proposed by Singpurwalla (1988) for the estimation of the parameters of the Weibull distribution is also considered in this paper to estimate the parameters of the Gompertz distribution.

There are many methods for eliciting the parameters of prior distributions. In this paper, we also consider an elicitation method to specify the values of the hyperparameters of the two gamma priors assigned to the parameters of the Gompertz distribution. The method requires the derivation of the prior predictive distribution, and it is assumed that the expert is able to provide some percentile values. Thus, the main aim of this paper is to propose noninformative and informative prior distributions for the parameters c and λ of the Gompertz distribution and to study the effects of these different priors on the resulting posterior distributions, especially in situations of small sample sizes, a common situation in applications.

The paper is organized as follows. Some probability properties of the Gompertz distribution, such as quantiles, moments and the moment generating function, are reviewed in Section 2. Section 3 describes the maximum likelihood estimation method. The Bayesian approach with the proposed informative and noninformative priors is presented in Section 4. In Section 5, a simulation study is carried out to evaluate the performance of the several estimation procedures, along with their coverage probabilities. The methodology developed in this paper and the usefulness of the Gompertz distribution are illustrated by a real data example in Section 6. Finally, concluding remarks are provided in Section 7.

MODEL AND ITS BASIC PROPERTIES

A random variable X has the Gompertz distribution with parameters c and λ, say GM(c,λ), if its density function is

f(x) = \lambda e^{cx}\, e^{-\frac{\lambda}{c}\left(e^{cx}-1\right)}; \qquad x > 0,\ c, \lambda > 0, \qquad (1)

and the corresponding c.d.f is given by

F(x) = 1 - e^{-\frac{\lambda}{c}\left(e^{cx}-1\right)}; \qquad x > 0,\ c, \lambda > 0. \qquad (2)

A basic tool for studying the ageing and reliability characteristics of a system is the hazard rate h(x). The hazard function gives the rate of failure of the system immediately after time x. The hazard rate function of the Gompertz distribution is given by

h(x) = \frac{f(x)}{1-F(x)} = \lambda e^{cx}. \qquad (3)

Note that the hazard rate function is an increasing function if c>0 and constant if c=0. Figure 1b shows the shapes of the hazard function for different selected values of the parameters c and λ. From the plot, it is quite evident that the Gompertz distribution has an increasing hazard rate function.

Figure 1a shows the shapes of the pdf of the Gompertz distribution for different values of the parameters c and λ, and from the plot it is quite evident that the Gompertz distribution is a positively skewed distribution.

Figure 1
Pdf function (a) and hazard function (b).

The quantile function x_p = Q(p) = F^{-1}(p), for 0<p<1, of the Gompertz distribution is obtained from (2); thus the quantile function x_p is

x_p = \frac{1}{c}\ln\left(1 - \frac{c}{\lambda}\ln(1-p)\right). \qquad (4)

In particular, the median of the Gompertz distribution can be written as

M_d(X) = M_d = \frac{1}{c}\ln\left(1 - \frac{c}{\lambda}\ln(1-0.5)\right). \qquad (5)
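For illustration, the closed-form inverse (4) also gives a direct way to simulate from GM(c,λ) by the inverse-transform method. The following R sketch is our own illustration; the function names are not from the paper.

```r
# Quantile function of GM(c, lambda), equation (4)
qgompertz <- function(p, c, lambda) {
  (1 / c) * log(1 - (c / lambda) * log(1 - p))
}

# Random generation by inverse-transform sampling: apply (4) to uniform draws
rgompertz <- function(n, c, lambda) {
  qgompertz(runif(n), c, lambda)
}

# quick check: the 50% quantile reproduces the median formula (5)
qgompertz(0.5, c = 2, lambda = 1)           # approximately 0.435
median(rgompertz(1e5, c = 2, lambda = 1))   # close to the value above
```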

If the random variable X is distributed GM(c,λ), then its nth moment around zero can be expressed as

E(X^{n}) = \frac{\lambda\, e^{\lambda/c}}{c}\int_{1}^{\infty}\frac{1}{c^{n}}\, e^{-\frac{\lambda}{c}x}\,[\ln(x)]^{n}\,dx. \qquad (6)

On simplification, we get

E(X^{n}) = \frac{n!}{c^{n}}\, e^{\lambda/c}\, E_{1}^{\,n-1}\!\left(\frac{\lambda}{c}\right), \qquad (7)

where

E_{s}^{\,n}(z) = \frac{1}{n!}\int_{1}^{\infty}(\ln(x))^{n}\, x^{-s}\, e^{-zx}\,dx,
E_{n}(x) = \int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\,dt

and

E_{s}^{\,0}(z) = E_{s}(z),

is the generalized integro-exponential function (Milgram 1985).

The mean and variance of the random variable X of the Gompertz distribution are respectively, given by

E(X) = \frac{1}{c}\, e^{\lambda/c}\, E_{1}\!\left(\frac{\lambda}{c}\right) \qquad (8)

and

var(X) = \frac{2}{c^{2}}\, e^{\lambda/c}\, E_{1}^{\,1}\!\left(\frac{\lambda}{c}\right) - \left(\frac{1}{c}\, e^{\lambda/c}\, E_{1}\!\left(\frac{\lambda}{c}\right)\right)^{2} \qquad (9)

Many of the interesting characteristics and features of a distribution can be obtained via its moment generating function and moments. Let X denote a random variable with the probability density function (1). By definition of moment generating function of X and using (1), we have

M_{x}(t) = E(e^{tx}) = \int_{0}^{\infty} e^{tx} f(x)\,dx = \frac{\lambda}{c}\sum_{p=0}^{\infty}(-1)^{p}\binom{t/\lambda}{p}\frac{\Gamma(p+1)}{\left(\lambda/c\right)^{p+1}}. \qquad (10)

MAXIMUM LIKELIHOOD ESTIMATION

The method of maximum likelihood is the most frequently used method of parameter estimation (Casella and Berger 2001). The success of the method stems no doubt from its many desirable properties, including consistency, asymptotic efficiency and the invariance property, as well as its intuitive appeal. Let x_1, \ldots, x_n be a random sample of size n from (1); then the log-likelihood function of (1), without constant terms, is given by

\ell(c,\lambda; x) = \log L(c,\lambda; x) = n\log\lambda + c\sum_{i=1}^{n} x_i - \frac{\lambda}{c}\sum_{i=1}^{n}\left(e^{cx_i}-1\right).

For ease of notation, we denote the first partial derivatives of any function f(x,y) by f_x and f_y. Now setting

\ell_c = 0 \quad \text{and} \quad \ell_\lambda = 0,

we have

\ell_\lambda = \frac{n}{\lambda} - \frac{1}{c}\sum_{i=1}^{n}\left(e^{cx_i}-1\right) = 0, \qquad (11)

and

\ell_c = \sum_{i=1}^{n} x_i - \frac{\lambda}{c}\sum_{i=1}^{n} x_i e^{cx_i} + \frac{\lambda}{c^{2}}\sum_{i=1}^{n} e^{cx_i} - \frac{n\lambda}{c^{2}} = 0. \qquad (12)

From (11) and (12), we find the MLE for λ given by,

\hat{\lambda} = \frac{n\hat{c}}{\sum_{i=1}^{n}\left(e^{\hat{c}x_i}-1\right)}.

The MLE for "c" is obtained by solving the non-linear equation,

\sum_{i=1}^{n} x_i - \frac{n\sum_{i=1}^{n} x_i e^{cx_i}}{\sum_{i=1}^{n}\left(e^{cx_i}-1\right)} + \frac{n}{c} = 0.

The asymptotic distribution of the MLE \hat{\theta} is

(\hat{\theta} - \theta) \to N_{2}\left(0, I^{-1}(\theta)\right)

(Lawless 2003), where I^{-1}(\theta) is the inverse of the observed information matrix of the unknown parameters \theta = (c, \lambda):

I^{-1}(\theta) = \begin{pmatrix} -\frac{\partial^{2}\log L}{\partial c^{2}} & -\frac{\partial^{2}\log L}{\partial c\,\partial\lambda} \\ -\frac{\partial^{2}\log L}{\partial\lambda\,\partial c} & -\frac{\partial^{2}\log L}{\partial\lambda^{2}} \end{pmatrix}^{-1}_{(c,\lambda)=(\hat{c},\hat{\lambda})} \qquad (13)
= \begin{pmatrix} var(\hat{c}_{MLE}) & cov(\hat{c}_{MLE},\hat{\lambda}_{MLE}) \\ cov(\hat{\lambda}_{MLE},\hat{c}_{MLE}) & var(\hat{\lambda}_{MLE}) \end{pmatrix} = \begin{pmatrix} \sigma_{cc} & \sigma_{c\lambda} \\ \sigma_{\lambda c} & \sigma_{\lambda\lambda} \end{pmatrix}

The derivatives in I(θ) are given as follows

\frac{\partial^{2}\log L}{\partial c^{2}}\Big|_{c=\hat{c}_{MLE}} = -\frac{\lambda}{c^{2}}\left[c\sum_{i=1}^{n} x_i^{2} e^{cx_i} - 2\sum_{i=1}^{n} x_i e^{cx_i} + \frac{2}{c}\sum_{i=1}^{n} e^{cx_i} - \frac{2n}{c}\right], \qquad (14)
\frac{\partial^{2}\log L}{\partial\lambda^{2}}\Big|_{\lambda=\hat{\lambda}_{MLE}} = -\frac{n}{\lambda^{2}}, \qquad (15)
\frac{\partial^{2}\log L}{\partial c\,\partial\lambda}\Big|_{c=\hat{c}_{MLE},\,\lambda=\hat{\lambda}_{MLE}} = \left[\frac{1}{c^{2}}\sum_{i=1}^{n}\left(e^{cx_i}-1\right) - \frac{1}{c}\sum_{i=1}^{n} x_i e^{cx_i}\right]. \qquad (16)

Therefore, the above approach is used to derive approximate 100(1-\tau)\% confidence intervals for the parameters \theta=(c,\lambda) in the following forms:

\hat{c}_{MLE} \pm z_{\tau/2}\sqrt{Var(\hat{c}_{MLE})}, \qquad \hat{\lambda}_{MLE} \pm z_{\tau/2}\sqrt{Var(\hat{\lambda}_{MLE})}.

Here, z_{\tau/2} is the upper (\tau/2)th percentile of the standard normal distribution.
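As an illustration of this procedure, the R sketch below (our own code, not the authors') solves the profile equation for c with uniroot(), obtains λ̂ from its closed form, and builds Wald intervals from the observed information entries (14)-(16). The bracketing interval for c is an assumption and must be chosen so that it contains the root.

```r
gompertz_mle <- function(x, interval, level = 0.95) {
  n <- length(x)
  # profile equation for c (the nonlinear equation following (12))
  score_c <- function(cc) {
    sum(x) - n * sum(x * exp(cc * x)) / sum(exp(cc * x) - 1) + n / cc
  }
  chat <- uniroot(score_c, interval)$root
  lhat <- n * chat / sum(exp(chat * x) - 1)

  # observed information built from the second derivatives (14), (15), (16)
  d2c <- -(lhat / chat^2) * (chat * sum(x^2 * exp(chat * x)) -
           2 * sum(x * exp(chat * x)) +
           (2 / chat) * sum(exp(chat * x)) - 2 * n / chat)
  d2l <- -n / lhat^2
  dcl <- (1 / chat^2) * sum(exp(chat * x) - 1) - (1 / chat) * sum(x * exp(chat * x))
  V   <- solve(-matrix(c(d2c, dcl, dcl, d2l), 2, 2))    # asymptotic covariance

  z  <- qnorm(1 - (1 - level) / 2)
  se <- sqrt(diag(V))
  rbind(c      = c(est = chat, lower = chat - z * se[1], upper = chat + z * se[1]),
        lambda = c(est = lhat, lower = lhat - z * se[2], upper = lhat + z * se[2]))
}

# usage with the tumor-day data analysed later in the paper; the bracketing
# interval for c is an illustrative choice that contains the root of score_c
x <- c(112, 68, 84, 109, 153, 143, 60, 70, 98, 164, 63, 63, 77, 91, 91,
       66, 70, 77, 63, 66, 66, 94, 101, 105, 108, 112, 115, 126, 161, 178)
gompertz_mle(x, interval = c(1e-6, 0.2))
```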

BAYESIAN ANALYSIS

In this section, we consider Bayesian inference for the unknown parameters of the GM(c,λ). First, we assume that c and λ have independent gamma prior distributions with probability density functions

\pi(c) \propto c^{a_1-1}\, e^{-b_1 c}, \qquad c > 0, \qquad (17)

and

\pi(\lambda) \propto \lambda^{a_2-1}\, e^{-b_2\lambda}, \qquad \lambda > 0. \qquad (18)

The hyperparameters a_1, a_2, b_1 and b_2 are known and positive. If both parameters c and λ are unknown, there is no easy way to work with conjugacy, since the expression of the likelihood function does not suggest any known form for the joint density of (c,λ). It is not unreasonable to assume independent gamma priors on the shape and scale parameters of the two-parameter GM(c,λ), because gamma distributions are very flexible. The joint prior distribution for both parameters in this case is given by

\pi(c,\lambda) \propto c^{a_1-1}\exp(-b_1 c)\,\lambda^{a_2-1}\exp(-b_2\lambda). \qquad (19)

Thus, the joint posterior distribution is given by

p(c,\lambda\,|\,\mathbf{x}) \propto \lambda^{n+a_2-1}\, c^{a_1-1}\, e^{c\left(\sum_{i=1}^{n}x_i - b_1\right)}\, e^{-\lambda\left[\frac{1}{c}\sum_{i=1}^{n}\left(e^{cx_i}-1\right) + b_2\right]}. \qquad (20)

The conditional distribution of c given λ and data is given by

p(c\,|\,\lambda,\mathbf{x}) \propto c^{a_1-1}\, e^{c\left(\sum_{i=1}^{n}x_i - b_1\right)}\, e^{-\frac{\lambda}{c}\sum_{i=1}^{n}\left(e^{cx_i}-1\right)}. \qquad (21)

Similarly, the conditional distribution of λ given c and data is given by

p(\lambda\,|\,c,\mathbf{x}) \propto \lambda^{n+a_2-1}\, e^{-\lambda\left[\frac{1}{c}\sum_{i=1}^{n}\left(e^{cx_i}-1\right) + b_2\right]}. \qquad (22)

Note that although the conditional p(λ|c,x) is a gamma distribution, the conditional distribution p(c|λ,x) is not identified as a known distribution that is easy to simulate. In this way, Bayesian inference for the parameters c and λ can be performed by a Metropolis-Hastings (MH) algorithm, considering the gamma distribution as the target density for λ, while c can be generated from the conditional p(c|λ,x) by using the rejection method, as sketched below.
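A minimal sketch of this hybrid scheme under the independent gamma priors is given below (our own illustration, not the authors' code): λ is drawn exactly from its gamma full conditional (22), while c is updated with a random-walk Metropolis step on log c targeting (21), used here in place of the rejection method mentioned above. The prior values, proposal scale and starting points are illustrative assumptions.

```r
gibbs_mh_gompertz <- function(x, n_iter = 20000, a1 = 0.01, b1 = 0.01,
                              a2 = 0.01, b2 = 0.01, prop_sd = 0.3,
                              c0 = 0.01, lambda0 = 0.01) {
  n <- length(x)
  log_cond_c <- function(cc, lambda) {           # log of (21), up to a constant
    (a1 - 1) * log(cc) + cc * (sum(x) - b1) -
      (lambda / cc) * sum(exp(cc * x) - 1)
  }
  out <- matrix(NA_real_, n_iter, 2, dimnames = list(NULL, c("c", "lambda")))
  cc <- c0; lambda <- lambda0
  for (t in seq_len(n_iter)) {
    # Gibbs step: lambda | c, x ~ Gamma(n + a2, rate = sum(exp(c*x) - 1)/c + b2)
    lambda <- rgamma(1, shape = n + a2,
                     rate = sum(exp(cc * x) - 1) / cc + b2)
    # Metropolis step for c on the log scale (Jacobian term log(c_prop/c))
    c_prop  <- cc * exp(rnorm(1, 0, prop_sd))
    log_acc <- log_cond_c(c_prop, lambda) - log_cond_c(cc, lambda) +
               log(c_prop) - log(cc)
    if (log(runif(1)) < log_acc) cc <- c_prop
    out[t, ] <- c(cc, lambda)
  }
  out
}
```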

JEFFREYS PRIOR

A well-known non-informative prior, which represents a situation with little a priori information on the parameters, was introduced by Jeffreys (1967) and is also known as the Jeffreys rule. The Jeffreys prior has been widely used due to its invariance property under one-to-one transformations of the parameters. Since then, the Jeffreys prior has played an important role in Bayesian inference. This prior is derived from the Fisher information matrix I(c, λ) as

\pi(c,\lambda) \propto \sqrt{\det\left(I(c,\lambda)\right)}. \qquad (23)

However, I(c, λ) cannot be obtained analytically for the parameters of the Gompertz distribution. A possible simplification is to consider a noninformative prior given by π(c,λ)=π(λ|c)π(c). Using the Jeffreys rule, we have

\pi(c,\lambda) \propto \left[-E\left(\frac{\partial^{2}}{\partial\lambda^{2}}\log L(c,\lambda)\right)\right]^{\frac{1}{2}}\pi(c), \qquad (24)

where \partial^{2}\log L(c,\lambda)/\partial\lambda^{2} is given by (15) and π(c) is a noninformative prior, for instance a gamma distribution with hyperparameters equal to 0.01.

In this way, from (15) and (24), the non-informative prior for the parameters (c, λ) is given by:

\pi(c,\lambda) \propto \frac{1}{\lambda}\,\pi(c). \qquad (25)

Let us denote the prior (25) as "Jeffreys prior".

Thus, the corresponding posterior distribution is given by

p(\lambda, c\,|\,\mathbf{x}) \propto \lambda^{n-1}\exp\left(c\sum_{i=1}^{n}x_i - \frac{\lambda}{c}\sum_{i=1}^{n}e^{cx_i} + \frac{n\lambda}{c}\right)\pi(c). \qquad (26)

Proposition 1: For the parameters of the Gompertz distribution, the posterior distribution given in (26) under Jeffreys prior π(λ, c) given in (25) is proper.

We need to prove that \int_0^{\infty}\int_0^{\infty} p(\lambda, c\,|\,\mathbf{x})\,d\lambda\,dc is finite.

Indeed,

\int_0^{\infty}\!\!\int_0^{\infty} p(\lambda, c\,|\,\mathbf{x})\,d\lambda\,dc = \Gamma(n)\int_0^{\infty}\frac{c^{n}\exp\left(c\sum_{i=1}^{n}x_i\right)}{\left(\sum_{i=1}^{n}\left(e^{cx_i}-1\right)\right)^{n}}\,\pi(c)\,dc. \qquad (27)

The function h(c) = c^{n}\exp\left(c\sum_{i=1}^{n}x_i\right)\left(\sum_{i=1}^{n}\left(e^{cx_i}-1\right)\right)^{-n} is unimodal with maximum point \hat{c} given as the solution of the nonlinear equation

\frac{c\sum_{i=1}^{n} x_i e^{cx_i} - \sum_{i=1}^{n} e^{cx_i} + n}{c\sum_{i=1}^{n}\left(e^{cx_i}-1\right)} = \bar{X}.

Therefore, from (27) we have

\int_0^{\infty}\!\!\int_0^{\infty} p(\lambda, c\,|\,\mathbf{x})\,d\lambda\,dc \leq \Gamma(n)\,h(\hat{c})\int_0^{\infty}\pi(c)\,dc < \infty,

where \int_0^{\infty}\pi(c)\,dc = 1. This completes the proof.

MAXIMAL DATA INFORMATION PRIOR (MDIP)

It is expected that the data give more information about the parameter than the prior density; otherwise, there would be no justification for carrying out the experiment. Let X be a random variable with density f(x|\phi), x \in R_X, and parameter \phi \in (a, b). We wish to find a prior distribution \pi(\phi) that maximizes the gain in information supplied by the data relative to the prior information about the parameter, that is, that maximizes the information from the data. With this idea, Zellner (1977, 1984, 1990) and Zellner and Min (1993) derived a prior distribution which maximizes the information from the data in relation to the prior information on the parameters. Let

H(\phi) = \int_{R_X} f(x\,|\,\phi)\,\ln f(x\,|\,\phi)\,dx \qquad (28)

be the negative entropy of f(x|\phi), the measure of the information in f(x|\phi). Thus, the following functional is employed in the MDIP approach:

G[\pi(\phi)] = \int_a^b H(\phi)\,\pi(\phi)\,d\phi - \int_a^b \pi(\phi)\ln\pi(\phi)\,d\phi, \qquad (29)

which is the prior average information in the data density minus the information in the prior density. G[\pi(\phi)] is maximized by selection of \pi(\phi) subject to \int_a^b\pi(\phi)\,d\phi = 1.

The following theorem proposed by Zellner provides the formula for the MDIP prior.

Theorem: The MDIP prior is given by:

\pi(\phi) = k\exp\left(H(\phi)\right), \qquad a \leq \phi \leq b, \qquad (30)

where k^{-1} = \int_a^b \exp\left(H(\phi)\right)d\phi is the normalizing constant.

Proof. We have to maximize the functional U = G[\pi(\phi)] - \lambda\left(\int_a^b\pi(\phi)\,d\phi - 1\right), where \lambda is the Lagrange multiplier.

Thus, \partial U/\partial\pi = 0 \Rightarrow H(\phi) - \ln(\pi(\phi)) - 1 - \lambda = 0, and the solution is given by \pi(\phi) = k\exp\left(H(\phi)\right).

Therefore, the MDIP is a prior that leads to an emphasis on the information in the data density or likelihood function, that is, its information is weak in comparison with data information.

Zellner (1984) shows several interesting properties of the MDIP and additional conditions that can also be imposed on the approach to reflect given initial information.

Suppose that we do not have much prior information available about c and λ. Under this condition, the MDIP prior distribution for the parameters (c, λ) of the Gompertz distribution (1) is obtained as follows. First, we have to evaluate the information measure H(c,\lambda) = E(\log f(X)), that is,

E(\log f(X)) = \log(\lambda) + c\,E(X) - \frac{\lambda}{c}\left(E\left(e^{cX}\right) - 1\right), \qquad (31)

where E(X) is obtained from (8) as E(X) = \frac{1}{c}e^{\lambda/c}E_{1}(\lambda/c) and E_{1}(x) = \int_1^{\infty}\frac{e^{-xu}}{u}\,du is the exponential integral. After some algebra, we also obtain E(e^{cX}) = \frac{c}{\lambda} + 1. Therefore,

H(c,\lambda) = \log(\lambda) + e^{\lambda/c}E_{1}\!\left(\frac{\lambda}{c}\right) - \frac{\lambda}{c}\left(\frac{c}{\lambda}+1\right) + \frac{\lambda}{c}. \qquad (32)

Hence the MDIP prior is given by

\pi_{Z}(c,\lambda) \propto \lambda\exp\left(e^{\lambda/c}E_{1}\!\left(\frac{\lambda}{c}\right)\right). \qquad (33)

Now combining the likelihood function given by

L(\lambda, c\,|\,\mathbf{x}) = \lambda^{n}\exp\left(c\sum_{i=1}^{n}x_i - \frac{\lambda}{c}\sum_{i=1}^{n}e^{cx_i} + \frac{n\lambda}{c}\right) \qquad (34)

and the MDIP prior in (33), the posterior density for the parameters c and λ is given by

p(\lambda, c\,|\,\mathbf{x}) \propto \lambda^{n+1}\exp\left(c\sum_{i=1}^{n}x_i + e^{\lambda/c}E_{1}\!\left(\frac{\lambda}{c}\right) - \frac{\lambda}{c}\sum_{i=1}^{n}\left(e^{cx_i}-1\right)\right). \qquad (35)

Proposition 2: For the parameters of the Gompertz distribution, the posterior distribution given in (35) under the corresponding MDIP prior π(λ, c) given in (33) is proper.

Proof. Indeed,

\int_0^{\infty}\!\!\int_0^{\infty} p(\lambda, c\,|\,\mathbf{x})\,d\lambda\,dc = \int_0^{\infty}\!\!\int_0^{\infty}\lambda^{n+1}\exp\left(c\sum_{i=1}^{n}x_i + e^{\lambda/c}E_{1}\!\left(\frac{\lambda}{c}\right) - \frac{\lambda}{c}\sum_{i=1}^{n}\left(e^{cx_i}-1\right)\right)d\lambda\,dc.

Now, we consider a change of variables in the integral above given by

w = \frac{\lambda}{c}, \qquad u = c, \qquad J = \left|\det\begin{pmatrix} w & u \\ 1 & 0 \end{pmatrix}\right| = u,

resulting in

\int_0^{\infty}\!\!\int_0^{\infty} p(\lambda, c\,|\,\mathbf{x})\,d\lambda\,dc = \int_0^{\infty}\left[\int_0^{\infty} u^{n+2}\exp\left(u\sum_{i=1}^{n}x_i - w\sum_{i=1}^{n}e^{ux_i}\right)du\right]h(w)\,dw,

where h(w) = w^{n+1}\exp\left(nw + e^{w}E_{1}(w)\right).

Let us denote x_{(n)} = \max(x_1, x_2, \ldots, x_n); then \sum_{i=1}^{n}e^{ux_i} < n\,e^{ux_{(n)}}. Hence,

\int_0^{\infty}\!\!\int_0^{\infty} p(\lambda, c\,|\,\mathbf{x})\,d\lambda\,dc < \int_0^{\infty}\left[\int_0^{\infty} u^{n+2}\exp\left(n\bar{x}u - nw\,e^{ux_{(n)}}\right)du\right]h(w)\,dw,

where \bar{x} = \frac{1}{n}\sum_{i=1}^{n}x_i. As \bar{x} < x_{(n)}, we have

\int_0^{\infty} u^{n+2}\exp\left(n\bar{x}u - nw\,e^{ux_{(n)}}\right)du < \int_0^{\infty} u^{n+2}\exp\left(nx_{(n)}u - nw\,e^{ux_{(n)}}\right)du
< \int_0^{\infty} u^{n+2}\exp\left(nx_{(n)}u - w\,e^{ux_{(n)}}\right)du.

Now consider e^{ux_{(n)}} = z; then, by substitution, the integral above becomes

\int_0^{\infty} u^{n+2}\exp\left(nx_{(n)}u - w\,e^{ux_{(n)}}\right)du = \frac{1}{x_{(n)}^{n+3}}\int_1^{\infty}(\log z)^{n+2}\,z^{n-1}\exp(-wz)\,dz = \frac{1}{x_{(n)}^{n+3}}\,\Gamma(n+2)\, G_{n+3,\,n+4}^{\,0,\,0}\!\left(\left.\begin{matrix}-(n-1),\ldots,-(n-1)\\ -n,\ldots,-n,\,0\end{matrix}\,\right|\, w\right), \qquad (36)

where G_{p,q}^{m,n}\!\left(\left.\begin{matrix}a_1,\ldots,a_p\\ b_1,\ldots,b_q\end{matrix}\,\right|\,w\right) is the Meijer G-function introduced by Meijer (1936) and given by

G_{p,q}^{m,n}\!\left(\left.\begin{matrix}a_1,\ldots,a_p\\ b_1,\ldots,b_q\end{matrix}\,\right|\, w\right) = \frac{1}{2\pi i}\int_{L}\frac{\prod_{j=1}^{m}\Gamma(b_j - s)\,\prod_{j=1}^{n}\Gamma(1 - a_j + s)}{\prod_{j=m+1}^{q}\Gamma(1 - b_j + s)\,\prod_{j=n+1}^{p}\Gamma(a_j - s)}\, w^{s}\,ds,

for 0 \leq m \leq q and 0 \leq n \leq p, where m, n, p and q are integer numbers.

From (36) we have

\int_0^{\infty}\!\!\int_0^{\infty} p(\lambda, c\,|\,\mathbf{x})\,d\lambda\,dc < \frac{1}{x_{(n)}^{n+3}}\,\Gamma(n+2)\int_0^{\infty} G_{n+3,\,n+4}^{\,0,\,0}\!\left(\left.\begin{matrix}-(n-1),\ldots,-(n-1)\\ -n,\ldots,-n,\,0\end{matrix}\,\right|\, w\right) h(w)\,dw,

for which it is not possible to obtain an analytical expression. However, the software Mathematica shows that this integral converges.
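As an illustration (a sketch under our own choices, not the authors' code), the MDIP posterior (35) can be explored numerically: the exponential integral E_1 is evaluated by numerical quadrature so that only base R is needed, and a joint random-walk MH on (log c, log λ) draws from (35). The proposal scale and starting values are illustrative assumptions.

```r
E1 <- function(z) {
  # exponential integral E_1(z) by numerical quadrature (base R only)
  integrate(function(t) exp(-z * t) / t, lower = 1, upper = Inf)$value
}

log_post_mdip <- function(cc, lambda, x) {    # log of (35), up to a constant
  n <- length(x)
  (n + 1) * log(lambda) + cc * sum(x) +
    exp(lambda / cc) * E1(lambda / cc) -
    (lambda / cc) * sum(exp(cc * x) - 1)
}

mh_mdip <- function(x, n_iter = 20000, start = c(0.01, 0.01), prop_sd = 0.2) {
  th  <- log(start)                           # random walk on (log c, log lambda)
  out <- matrix(NA_real_, n_iter, 2, dimnames = list(NULL, c("c", "lambda")))
  lp  <- log_post_mdip(exp(th[1]), exp(th[2]), x)
  for (t in seq_len(n_iter)) {
    th_new <- th + rnorm(2, 0, prop_sd)
    lp_new <- log_post_mdip(exp(th_new[1]), exp(th_new[2]), x)
    if (!is.finite(lp_new)) lp_new <- -Inf    # reject numerically unstable proposals
    # acceptance ratio includes the Jacobian of the log transformation
    if (log(runif(1)) < lp_new - lp + sum(th_new) - sum(th)) {
      th <- th_new
      lp <- lp_new
    }
    out[t, ] <- exp(th)
  }
  out
}
```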

PRIORS PROPOSED BY SINGPURWALLA

Singpurwalla (1988) presented a procedure for the construction of the prior distribution with the use of expert opinion in order to estimate the parameters α and β of the Weibull distribution. An expert's opinion about measures of central tendency, such as the median, can be easily obtained, since most people are accustomed to this term. Singpurwalla introduces the median life M and elicits expert opinion on M and β through the priors π(M) and π(β). He focuses attention on β and on M = α exp(k/β), where k = ln(ln 2). Since M is restricted to being nonnegative, a Gaussian distribution truncated at 0 with parameters μ and σ is assumed for it. A gamma prior distribution with parameters a and b is chosen to model the uncertainty about β. After this reparametrization, the prior π(α,β) is derived by transformation of variables.

Our aim is to derive the prior π(c, λ) by applying a procedure similar to the one proposed by Singpurwalla in order to estimate the parameters c and λ of the Gompertz distribution. Differently from Singpurwalla, who considered elicitation from an expert for the parameters, we assume absence of information; hence the hyperparameters of the priors are chosen to provide a noninformative prior, and we use information from the data for the parameter μ through the median of the data.

The median of X is given by M = \frac{1}{c}\log\left(1 + 0.6931\frac{c}{\lambda}\right). Thus, a Gaussian distribution truncated at 0 is assumed for the parameter M, with density

\pi(M\,|\,\mu,\sigma) \propto \exp\left(-\frac{1}{2}\left(\frac{M-\mu}{\sigma}\right)^{2}\right), \qquad (37)

where 0 \leq M < \infty, with the parameter μ equal to the median of the data and standard deviation σ = 100.

A gamma prior distribution is chosen to model the uncertainty about λ with density

\pi(\lambda\,|\,a, b) \propto \lambda^{a-1}\exp(-b\lambda), \qquad (38)

with the parameters a and b specified as 0.01 representing a noninformative prior for λ.

Thus, we can determine the conditional prior distribution π(c|λ,μ,σ) for the parameter c given λ through the reparametrization M = \frac{1}{c}\log\left(1 + 0.6931\frac{c}{\lambda}\right), with the Jacobian given by \left|\frac{dM}{dc}\right| = \left|\frac{0.6931}{c\lambda + 0.6931c^{2}} - \frac{1}{c^{2}}\log\left(1 + 0.6931\frac{c}{\lambda}\right)\right|. Therefore, the conditional prior π(c|λ,μ,σ) is given by

\pi(c\,|\,\lambda,\mu,\sigma) \propto \exp\left(-\frac{1}{2}\left(\frac{\frac{1}{c}\log\left(1+0.6931\frac{c}{\lambda}\right)-\mu}{\sigma}\right)^{2}\right)\left|\frac{0.6931}{c\lambda + 0.6931c^{2}} - \frac{1}{c^{2}}\log\left(1+0.6931\frac{c}{\lambda}\right)\right|. \qquad (39)

Finally, the joint prior for c and λ, obtained as π(c, λ)=π(c|λ)π(λ), that is, by the product of (38) and (39), is given by

\pi(c,\lambda\,|\,\Theta) \propto \lambda^{a-1}\exp\left(-\frac{1}{2}\left(\frac{\frac{1}{c}\log\left(1+0.6931\frac{c}{\lambda}\right)-\mu}{\sigma}\right)^{2} - b\lambda\right)\left|\frac{0.6931}{c\lambda + 0.6931c^{2}} - \frac{1}{c^{2}}\log\left(1+0.6931\frac{c}{\lambda}\right)\right|, \qquad (40)

where the vector of parameters Θ=(a,b,μ,σ) is known.
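For computation, the logarithm of (40) is straightforward to evaluate; the sketch below (our own illustration, not the authors' code) can be combined with the Gompertz log-likelihood and passed to a Metropolis-Hastings sampler such as those sketched earlier. The hyperparameter choices follow the text (a = b = 0.01, μ equal to the sample median, σ = 100).

```r
log_prior_singpurwalla <- function(cc, lambda, mu, sigma = 100,
                                   a = 0.01, b = 0.01) {
  M   <- (1 / cc) * log(1 + 0.6931 * cc / lambda)           # median reparametrization
  jac <- abs(0.6931 / (cc * lambda + 0.6931 * cc^2) -
             (1 / cc^2) * log(1 + 0.6931 * cc / lambda))    # |dM/dc| appearing in (40)
  (a - 1) * log(lambda) - b * lambda -
    0.5 * ((M - mu) / sigma)^2 + log(jac)
}

# usage: mu is taken as the sample median, as described in the text, e.g.
# log_prior_singpurwalla(cc = 0.02, lambda = 0.002, mu = median(x))
```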

ELICITED PRIOR

In this section, we provide a methodology that allows experts to express their knowledge about the reliability of an item through statements of percentiles. This method requires the derivation of the prior predictive distribution for the elicitation. Suppose that the joint prior π(c,λ) is given; then the reliability based on the prior predictive distribution is given by

R(t_p) = P\{T \geq t_p\} = \int_0^{\infty}\!\!\int_0^{\infty}\int_{t_p}^{\infty} f(t\,|\,c,\lambda)\,\pi(c,\lambda)\,dt\,dc\,d\lambda, \qquad (41)

for a fixed mission time tp.

In order to elicit the four hyperparameters (a_1,a_2,b_1,b_2) of the prior π(c,λ|a_1,a_2,b_1,b_2), the following integral is considered:

p = \int_0^{\infty}\!\!\int_0^{\infty} R(t_p\,|\,c,\lambda)\,\pi(c,\lambda\,|\,a_1,a_2,b_1,b_2)\,dc\,d\lambda, \qquad (42)

for a given p-th percentile elicited from the expert where p=R(tp).

By considering the Gompertz distribution, the reliability function R(t_p|c,λ) is given by

R(t_p\,|\,c,\lambda) = \exp\left(-\frac{\lambda}{c}\left(e^{ct_p}-1\right)\right) \qquad (43)

and assuming a joint prior π(c, λ| a1,a2,b1,b2) given by the product of gamma priors, we have

\pi(c,\lambda\,|\,a_1,a_2,b_1,b_2) = k\, c^{a_1-1}\lambda^{a_2-1}\exp\left(-(b_1 c + b_2\lambda)\right), \qquad (44)

where k = \frac{b_1^{a_1} b_2^{a_2}}{\Gamma(a_1)\Gamma(a_2)}.

Using (43) and (44), the probability in (42) becomes

p = k\int_0^{\infty}\!\!\int_0^{\infty} c^{a_1-1}\lambda^{a_2-1}\exp\left(-\frac{\lambda}{c}\left(e^{ct_p}-1\right) - (b_1 c + b_2\lambda)\right)dc\,d\lambda. \qquad (45)

Let d = \frac{1}{c}\left(e^{ct_p}-1\right); then the integral in λ in equation (45) takes a gamma form, resulting in

p = k\int_0^{\infty}\left[\int_0^{\infty}\lambda^{a_2-1}\exp\left(-\lambda(b_2 + d)\right)d\lambda\right]c^{a_1-1}\, e^{-b_1 c}\,dc, \qquad (46)

that is,

p = \frac{b_1^{a_1}\, b_2^{a_2}}{\Gamma(a_1)}\int_0^{\infty} c^{a_1-1}\, e^{-b_1 c}\left(b_2 + \frac{1}{c}\left(e^{ct_p}-1\right)\right)^{-a_2} dc. \qquad (47)

Since it is not possible to obtain a closed form for the integral (47), one possibility to work around this problem is to use the Laplace approximation.

Assuming h is a smooth function of a one-dimensional parameter ϕ, with h having a maximum at \hat{\phi}, the Laplace approach asymptotically approximates an integral of the form

I = \int_{-\infty}^{+\infty}\exp\left(n\,h(\phi)\right)d\phi \qquad (48)

by expanding h in a Taylor series about \hat{\phi} (Tierney and Kadane 1986). Laplace's method gives the approximation

\int_{-\infty}^{+\infty}\exp\left(n\,h(\phi)\right)d\phi \approx \sqrt{\frac{2\pi\hat{\sigma}^{2}}{n}}\,\exp\left(n\,h(\hat{\phi})\right)\left(1 + O(n^{-1})\right), \qquad (49)

where \hat{\phi} is the root of the equation h^{(1)}(\phi) = 0 and \hat{\sigma} = 1/\sqrt{-h^{(2)}(\hat{\phi})}.

We can write (47) as

\int_0^{\infty} c^{a_1-1}\, e^{-b_1 c}\left(b_2 + \frac{1}{c}\left(e^{ct_p}-1\right)\right)^{-a_2} dc = \int_0^{\infty}\exp\left(-a_2\log\left(b_2 + \frac{1}{c}\left(e^{ct_p}-1\right)\right) + (a_1-1)\log(c) - b_1 c\right)dc. \qquad (50)

Thus, the function h(c) in (50) is given by

h(c) = -a_2\log\left(b_2 + \frac{1}{c}\left(e^{ct_p}-1\right)\right) + (a_1-1)\log(c) - b_1 c. \qquad (51)

By applying Laplace approximation to the integral in (47) we have

p = \frac{b_1^{a_1}\, b_2^{a_2}}{\Gamma(a_1)}\,\hat{\sigma}\,\sqrt{2\pi}\,\exp\left(h(\hat{c})\right), \qquad (52)

where \hat{c} is the root of the equation h^{(1)}(c) = 0 and \hat{\sigma} = 1/\sqrt{-h^{(2)}(\hat{c})}.

We suppose that an expert can summarize his/her knowledge about the reliability of an item through statements of percentiles. Thus, we ask that the expert's information be provided in the form of four distinct percentiles t_p, for given p, which generate four equations of the form (52). In particular, the expert needs to specify t_p for p = 0.25, 0.50, 0.75, 0.90.

The nonlinear system composed of equation (52) evaluated at the four pairs of values (t_p, p) is solved numerically to obtain the required values of the hyperparameters a_1, a_2, b_1 and b_2 of the joint prior π(c,λ|a_1,a_2,b_1,b_2). A program has been developed in R to solve the system; a sketch of this step is given below.
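A minimal sketch of this elicitation step follows (our own illustration, not the authors' program): for trial hyperparameters, the prior probability in (52) is computed via the Laplace approximation, and the hyperparameters are then adjusted by least squares so that the four elicited pairs (t_p, p) are matched. The search interval for c, the starting values and the least-squares formulation are illustrative assumptions; the usage line interprets t_p as the p-th lifetime percentile, so that R(t_p) = 1 - p, which is also our assumption.

```r
# h(c) of equation (51) for a fixed elicited time tp
h_fun <- function(cc, a1, a2, b1, b2, tp) {
  -a2 * log(b2 + (exp(cc * tp) - 1) / cc) + (a1 - 1) * log(cc) - b1 * cc
}

# Laplace approximation (52) to the prior probability of surviving tp
laplace_p <- function(a1, a2, b1, b2, tp) {
  opt  <- optimize(h_fun, interval = c(1e-6, 20), maximum = TRUE,
                   a1 = a1, a2 = a2, b1 = b1, b2 = b2, tp = tp)
  chat <- opt$maximum
  eps  <- 1e-4                                  # finite-difference second derivative
  h2   <- (h_fun(chat + eps, a1, a2, b1, b2, tp) - 2 * opt$objective +
           h_fun(chat - eps, a1, a2, b1, b2, tp)) / eps^2
  sigma <- 1 / sqrt(-h2)
  exp(a1 * log(b1) + a2 * log(b2) - lgamma(a1) + opt$objective) *
    sigma * sqrt(2 * pi)
}

# solve the 4 x 4 system (52) by least squares on the log-hyperparameters
elicit_hyperparameters <- function(tp, prob, start = c(10, 1, 10, 10)) {
  obj <- function(theta) {
    hp <- exp(theta)                            # enforces positivity
    ph <- mapply(function(t) laplace_p(hp[1], hp[2], hp[3], hp[4], t), tp)
    sum((ph - prob)^2)
  }
  fit <- optim(log(start), obj, method = "Nelder-Mead",
               control = list(maxit = 5000))
  setNames(exp(fit$par), c("a1", "a2", "b1", "b2"))
}

# usage with the percentiles reported for Table I, interpreting t_p as the
# p-th lifetime percentile so that R(t_p) = 1 - p (our assumption)
# elicit_hyperparameters(tp   = c(1.2625, 1.5512, 1.7806, 1.9491),
#                        prob = c(0.75, 0.50, 0.25, 0.10))
```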

SIMULATIONS

In this section, we perform a simulation study to examine the behavior of the proposed methods under different conditions. We considered three different sample sizes, n = 10, 50, 100, and used several values of (c,λ). We computed the MLEs of the unknown parameters of the Gompertz distribution along with the confidence intervals using the method described in Section 3. All results of the simulation study are based on 1,000 samples. The performance of the estimates is compared with respect to the average biases and the mean squared errors (MSE). To obtain the Bayes estimates and credible intervals, we need to appeal to an MCMC algorithm in order to obtain a sample of values of c and λ from the joint posterior distribution. To conduct the MCMC procedure, Markov chains of size 20,000 are generated from the conditional distributions p(c|λ,x) and p(λ|c,x) corresponding to the joint posteriors obtained under each prior distribution proposed in this paper, using the MH algorithm, and the first 5,000 observations are removed to eliminate the effect of the starting distribution. Then, in order to reduce the dependence among the generated samples, we take every 5th sampled value, which results in final chains of size 10,000, from which the Bayes estimates (based on the mean of the chain) and credible intervals are subsequently obtained. The rejection rates for c and λ are around 43 and 41 over 5,000 iterations, which ensures that the choice of proposal distribution works reasonably well in sampling from the posterior.

To investigate the convergence of the MCMC sampling via the MH algorithm, we have used the Gelman-Rubin multiple-sequence diagnostic. For computation, we have used the R package coda. For each case of c and λ, we ran two different chains with two distinct starting values of the Monte Carlo samples. Then we obtain two potential scale reduction factor (psrf) values, one for c and one for λ. If the psrf values are close to 1, we say that the samples converge to the stationary distribution. For both parameters, using 10,000 samples, we obtained psrf values equal to 1, which indicates convergence of the MCMC sampling procedure. A sketch of this check is given below.
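The sketch below assumes that two chains of posterior draws, for instance two runs of the samplers sketched earlier, are stored as matrices draws_chain1 and draws_chain2 with columns c and lambda; these object names are our own.

```r
library(coda)

# draws_chain1 and draws_chain2: matrices of posterior draws (columns c, lambda)
chains <- mcmc.list(mcmc(draws_chain1), mcmc(draws_chain2))
gelman.diag(chains)   # potential scale reduction factors; values near 1 indicate convergence
gelman.plot(chains)   # evolution of the psrf along the iterations
```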

For the informative gamma priors, the elicited percentiles provided by the expert and the corresponding elicited values of the hyperparameters are as follows: for Table I, we have t_{25}=1.2625, t_{50}=1.5512, t_{75}=1.7806 and t_{90}=1.9491, resulting in (a_1,a_2,b_1,b_2)=(38.5645, 1.615, 11.6416, 78.2737); for Table II, t_{25}=0.1083, t_{50}=0.2010, t_{75}=0.2992 and t_{90}=0.3820, resulting in (a_1,a_2,b_1,b_2)=(23.3800, 4.2410, 23.3795, 11.9681); and for Table III, t_{25}=0.2272, t_{50}=0.4348, t_{75}=0.6638 and t_{90}=0.8618, giving (a_1,a_2,b_1,b_2)=(38.7505, 17.2753, 13.2237, 13.5219). The frequentist coverage probabilities for the parameters c and λ have also been obtained to compare the Bayes estimators under the different priors and the MLE. Tables IV, V and VI summarize the simulated coverage probabilities of the 95% confidence/credible intervals.

From the simulation results, we reach the following conclusions:

1. With an increase in sample size, the biases and MSEs of the estimators decrease for given values of c and λ.
2. The performance of the MLEs is quite satisfactory. The Bayes estimates using noninformative priors work quite well and, in most cases, perform better than the MLE in terms of MSE when the value of λ is very small.
3. Bayes estimates based on the elicited prior produce much smaller bias and MSE than those using the other assumed priors.
4. For the three sample sizes considered here, the elicited prior produces over-coverage for small sample sizes, while the MLE and the independent gamma priors show under-coverage in some cases. The coverage probabilities are very close to the nominal value as n increases.

TABLE I
Average bias of the estimates of c and λ and their associated MSEs (in parenthesis) for the different methods with 𝐜=3 and 𝛌=0.02.
TABLE II
Average bias of the estimates of c and λ and their associated MSEs (in parenthesis) for the different methods with 𝐜=5 and 𝛌=2.
TABLE III
Average bias of the estimates of c and λ and their associated MSEs (in parenthesis) for the different methods with 𝐜=2 and 𝛌=1.
TABLE IV
Coverage probabilities for the parameters 𝐜 and 𝛌.
TABLE V
Coverage probabilities for the parameters 𝐜 and 𝛌.
TABLE VI
Coverage probabilities for the parameters 𝐜 and 𝛌.

AN EXAMPLE WITH LITERATURE DATA

In this section, we use a real data set to illustrate the proposed estimation methods discussed in the previous sections.

Let us consider the following data set provided in King et al. (1979):

112, 68, 84, 109, 153, 143, 60, 70, 98, 164, 63, 63, 77, 91, 91, 66, 70, 77, 63, 66, 66, 94, 101, 105, 108, 112, 115, 126, 161, 178.

These data represent the numbers of tumor-days of 30 rats fed an unsaturated diet. Chen (1997) and Asgharzadeh and Abdi (2011) used the Gompertz distribution for this data set in order to obtain exact confidence intervals and joint confidence regions for the parameters based on two different statistical analyses. We also fit the Gompertz distribution with density (1) to these data in order to compare the performance of the methods discussed in this paper.

For the Bayesian analysis, we assume independent gamma prior distributions for the parameters c and λ, with hyperparameter values a_1=b_1=a_2=b_2=0.01. The Bayes estimates cannot be obtained in closed form; therefore, we use the MCMC procedure to compute the Bayes estimates and also to construct credible intervals. Using the software R, we simulated 50,000 MCMC samples (with 5,000 burn-in samples) from the joint posterior distribution. The convergence of the chains was monitored from trace plots of the simulated samples. The estimates and 95% confidence intervals under the classical method, and the Bayesian estimates with 95% credible intervals, for the parameters c and λ of the Gompertz distribution are given in Table VII. The results show that, among the Bayes estimators, the Bayes estimate based on Singpurwalla's prior performs best in terms of credible intervals for both parameters.

The marginal posterior distributions for the parameters c and λ considering the proposed prior distributions are shown in Figures 2 and 3, respectively.

TABLE VII
Estimators and 95% confidence/credible intervals of 𝐜 and 𝛌 of the Gompertz distribution for different estimation methods.
Figure 2
The posterior densities for the parameter c of the Gompertz distribution fitted by the data.
Figure 3
The posterior densities for the parameter λ of the Gompertz distribution fitted by the data.

CONCLUSIONS

In this paper, we have considered the estimation of the parameters of the Gompertz distribution using frequentist and Bayesian methods. In the Bayesian approach, we have considered objective priors (Jeffreys and MDIP), independent gamma priors, Singpurwalla's prior and the elicited prior. We have performed an extensive simulation study to compare these methods. From the simulation study regarding bias, MSE and coverage probability, we observe that in general the MDIP provides the best results for both parameters and that, in some cases, the results under the MDIP and Jeffreys priors are quite similar. The real data application shows the same behavior. It is worth remembering that both forms result from formal procedures for representing absence of information, that is, they are noninformative. The commonly used assumption of independent gamma priors and the priors proposed by Singpurwalla do not present results as good as those of the objective priors. Independent gamma priors are generally used in situations where objective priors cannot be obtained or yield improper posterior distributions, and mainly because of computational ease. The elicited prior produces much smaller bias and MSE than the other assumed priors and also provides over-coverage compared with its counterparts. Hence, we can conclude that, in the absence of information, the MDIP prior is the most indicated for Bayesian estimation of the two-parameter Gompertz distribution. On the other hand, when expert information is available, the elicited prior provides the best estimators.

REFERENCES

  • 1
    ASGHARZADEH A AND ABDI M. 2011. Exact Confidence Intervals and Joint Confidence Regions for the Parameters of the Gompertz Distribution based on Records. Pak J Statist 27(1): 55-64.
  • 2
    BERNARDO JM. 1979. Reference Posterior Distributions for Bayesian Inference. J R Stat Soc Series B Stat Methodol 41: 113-147.
  • 3
    CASELLA G AND BERGER RL. 2001. Statistical Inference. Cengage Learning, 660 p.
  • 4
    CHEN Z. 1997. Parameter estimation of the Gompertz population. Biom J 39: 117-124.
  • 5
    ELANDT-JOHNSON RC AND JOHNSON NL. 1979. Survival Models and Data Analysis. J Wiley & Sons: NY, 457 p.
  • 6
    FRANSES PH. 1994. Fitting a Gompertz curve. J Oper Res Soc 45: 109-113.
  • 7
    GARG M, RAO B AND REDMOND C. 1970. Maximum-likelihood estimation of the parameters of the Gompertz survival function. J R Stat Soc Ser C Appl Stat 19: 152-159.
  • 8
    GOMPERTZ B. 1825. On the nature of the function expressive of the law of human mortality and on a new mode of determining the value of life contingencies. Philos Trans R Soc Lond 115: 513-583.
  • 9
    ISMAIL AA. 2010. Bayes estimation of Gompertz distribution parameters and acceleration factor under partially accelerated life tests with type-I censoring. J Stat Comput Simul 80: 1253-1264.
  • 10
    ISMAIL AA. 2011. Planning step-stress life tests with type-II censored Data. Sci Res Essays 6: 4021-4028.
  • 11
    JAHEEN ZF. 2003a. Prediction of Progressive Censored Data from the Gompertz Model. Commun Stat Simul Comput 32: 663-676.
  • 12
    JAHEEN ZF. 2003b. A Bayesian analysis of record statistics from the Gompertz model. Appl Math Comput 145: 307-320.
  • 13
    JEFFREYS SIR HAROLD. 1967. Theory of probability. Oxford U Press: London, 470 p.
  • 14
    KIANI K, ARASAN J AND MIDI H. 2012. Interval estimations for parameters of Gompertz model with time-dependent covariate and right censored data. Sains Malays 41(4): 471-480.
  • 15
    KING M, BAILEY DM, GIBSON DG, PITHA JV AND MCCAY PB. 1979. Incidence and growth of mammary tumors induced by 7, l2-Dimethylbenz antheacene as related to the dietary content of fat and antioxidant. J Natl Cancer Inst 63: 656-664.
  • 16
    LAWLESS JF. 2003. Statistical Models and Methods for Lifetime Data. J Wiley & Sons: New Jersey, 664 p.
  • 17
    LENART A. 2014. The moments of the Gompertz distribution and maximum likelihood estimation of its parameters. Scand Actuar J 3: 255-277.
  • 18
    LENART A AND MISSOV TI. 2016. Goodness-of-fit tests for the Gompertz distribution. Commun Stat Theory Methods 45: 2920-2937.
  • 19
    MAKANY RA. 1991. Theoretical basis of Gompertz’s curve. Biom J 33: 121-128.
  • 20
    MEIJER CS. 1936. Über Whittakersche bzw. Besselsche Funktionen und deren Produkte. Nieuw Archief voor Wiskunde 18: 10-39.
  • 21
    MILGRAM M. 1985. The generalized integro-exponential function. Math Comp 44: 443-458.
  • 22
    RAO BR AND DAMARAJU CV. 1992. New better than used and other concepts for a class of life distribution. Biom J 34: 919-935.
  • 23
    READ CB. 1983. Gompertz Distribution. Encyclopedia of Statistical Sciences. J Wiley & Sons, NY.
  • 24
    SHANUBHOGUE A AND JAIN NR. 2013. Minimum Variance Unbiased Estimation in the Gompertz Distribution under Progressive Type II Censored Data with Binomial Removals. Int Sch Res Notices 2013: 1-7.
  • 25
    SINGH N, YADAV KK AND RAJASEKHARAN R. 2016. ZAP1-mediated modulation of triacylglycerol levels in yeast by transcriptional control of mitochondrial fatty acid biosynthesis. Mol Microbiol 100(1): 55-75.
  • 26
    SINGPURWALLA ND. 1988. An interactive PC-based procedure for reliability assessment incorporating expert opinion and survival data. J Am Stat Assoc 83: 43-51.
  • 27
    TIBSHIRANI R. 1989. Noninformative Priors for One Parameters of Many. Biometrika 76: 604-608.
  • 28
    WILLEKENS F. 2002. Gompertz in context: the Gompertz and related distributions. In: Tabeau E, Van den Berg JA and Heathcote C (Eds), Forecasting mortality in developed countries - insights from a statistical demographic and epidemiological perspective European studies of population. Dordrecht: Kluwer Academic Publishers 9: 105-126.
  • 29
    WU JW, HUNG WL AND TSAI CH. 2004. Estimation of parameters of the Gompertz distribution using the least squares method. Appl Math Comput 158: 133-147.
  • 30
    WU JW AND LEE WC. 1999. Characterization of the mixtures of Gompertz distributions by conditional expectation of order statistics. Biom J 41: 371-381.
  • 31
    WU JW AND TSENG HC. 2006. Statistical inference about the shape parameter of the Weibull distribution by upper record values. Stat Pap 48: 95-129.
  • 32
    WU SJ, CHANG CT AND TSAI TR. 2003. Point and interval estimations for the Gompertz distribution under progressive type-II censoring. Metron LXI: 403-418.
  • 33
    ZELLNER A. 1977. Maximal Data Information Prior Distributions. In: Aykac A and Brumat C (Eds), New Methods in the applications of Bayesian Methods. North-Holland Amsterdam, p. 211-232.
  • 34
    ZELLNER A. 1984. Maximal Data Information Prior Distributions. Basic Issues in Econometrics, U. of Chicago Press.
  • 35
    ZELLNER A. 1990. Bayesian Methods and Entropy in Economics and Econometrics. In: Grandy Junior WT and Schick LH (Eds), Maximum Entropy and Bayesian Methods Dordrecht Netherlands: Kluwer Academic Publishers, p. 17-31.
  • 36
    ZELLNER A AND MIN C. 1993. Bayesian analysis model selection and prediction. In: Grandy Junior WT and Milonni PW (Eds), Physics and Probability: Essays in honour of Edwin T Jaynes, Cambridge University Press, Cambridge, UK.

Publication Dates

  • Publication in this collection
    Sept 2018

History

  • Received
    29 Dec 2017
  • Accepted
    26 Mar 2018