
Anais da Academia Brasileira de Ciências

Print version ISSN 0001-3765  On-line version ISSN 1678-2690

An. Acad. Bras. Ciênc. vol.90 no.3 Rio de Janeiro July/Sept. 2018

Mathematical Sciences

Objective and subjective prior distributions for the Gompertz distribution



1Departamento de Estatística, Faculdade de Ciências e Tecnologia, Universidade Estadual Paulista/UNESP, Rua Roberto Simonsen, 305, Centro Educacional, 19060-900 Presidente Prudente, SP, Brazil

2Department of Statistics, St. Anthony’s College, Bomfyle road, East Khasi Hills, 793001 Shillong, Meghalaya, India


This paper considers estimation of the unknown parameters of the Gompertz distribution from both the frequentist and Bayesian viewpoints, using objective as well as subjective prior distributions. We first derive non-informative priors using formal rules, namely the Jeffreys prior and the maximal data information prior (MDIP), based on the Fisher information and on entropy, respectively. We also propose a prior distribution that incorporates the expert's knowledge about the issue under study. In this regard, we assume two independent gamma distributions for the parameters of the Gompertz distribution, and they are employed in an elicitation process based on the predictive prior distribution, using the Laplace approximation for integrals. We suppose that an expert can summarize his/her knowledge about the reliability of an item through statements of percentiles. We also present a set of priors proposed by Singpurwalla, assuming a truncated normal prior distribution for the median of the distribution and a gamma prior for the scale parameter. Next, we investigate the effects of these priors on the posterior estimates of the parameters of the Gompertz distribution. The Bayes estimates are computed using a Markov Chain Monte Carlo (MCMC) algorithm. An extensive numerical simulation is carried out to evaluate the performance of the maximum likelihood and Bayes estimates in terms of bias, mean-squared error and coverage probabilities. Finally, a real data set is analyzed for illustrative purposes.

Key words Gompertz distribution; objective prior; Jeffreys prior; subjective prior; maximal data information prior; elicitation


The Gompertz distribution was introduced in connection with human mortality and actuarial science by Benjamin Gompertz (1825). Right from the time of its introduction, this distribution has been receiving great attention from demographers and actuaries. It is a generalization of the exponential distribution and is applied in various fields, especially in reliability and life-testing studies, actuarial science, and epidemiological and biomedical studies. The Gompertz distribution has some interesting relations with well-known distributions such as the exponential, double exponential, Weibull, extreme value (Gumbel) and generalized logistic distributions (Willekens 2002). An important characteristic of the Gompertz distribution is that it has an exponentially increasing failure rate for the life of systems and is often used to model highly negatively skewed data in survival analysis (Elandt-Johnson and Johnson 1979). In the recent past, many authors have contributed to studies of the statistical methodology and characterization of this distribution; for example, Garg et al. (1970), Read (1983), Makany (1991), Rao and Damaraju (1992), Franses (1994), Chen (1997) and Wu and Lee (1999). Jaheen (2003a, b) studied this distribution based on progressive type-II censoring and record values using the Bayesian approach. Wu et al. (2003) derived point and interval estimators for the parameters of the Gompertz distribution based on progressive type-II censored samples. Wu et al. (2004) used the least-squares method to estimate the parameters of the Gompertz distribution. Wu et al. (2006) also studied this distribution under progressive censoring with binomial removals. Ismail (2010) obtained Bayes estimators under partially accelerated life tests with type-I censoring. Ismail (2011) also discussed the point and interval estimation of a two-parameter Gompertz distribution under partially accelerated life tests with type-II censoring.
Asgharzadeh and Abdi (2011) studied different types of exact confidence intervals and exact joint confidence regions for the parameters of the two-parameter Gompertz distribution based on record values. Kiani et al. (2012) studied the performance of the Gompertz model with a time-dependent covariate in the presence of right-censored data; moreover, they compared the performance of the model under different censoring proportions (CP) and sample sizes. Shanubhogue and Jain (2013) studied uniformly minimum variance unbiased estimation for the parameter of the Gompertz distribution based on progressively type-II censored data with binomial removals. Lenart (2014) obtained moments of the Gompertz distribution and maximum likelihood estimators of its parameters. Lenart and Missov (2016) studied goodness-of-fit tests for the Gompertz distribution. Recently, Singh et al. (2016) studied different methods of estimation for the parameters of the Gompertz distribution when the available data are in the form of fuzzy numbers; they also obtained Bayes estimators of the parameters under different symmetric and asymmetric loss functions.

In this paper, we present a Bayesian analysis for the situation in which there is limited prior knowledge about the parameters of interest. In this regard, it is important to use noninformative priors; however, it can be difficult to choose a prior distribution that represents this situation, because there is hardly any precise definition of the concept of a noninformative prior. Nevertheless, many noninformative priors are available, for instance, the Jeffreys prior (Jeffreys 1967), the MDIP prior (Zellner 1977, 1984, 1990), the Tibshirani prior (Tibshirani 1989), the reference prior (Bernardo 1979) and many others which seem appropriate for a number of inference problems. It is to be noted that a lack of information often forces analysts to choose noninformative priors, and this consideration ensures that the inferences are mostly data driven. In Bayesian analysis, many authors consider independent gamma priors for the parameters of the model, representing weak information, as the a priori independence assumption simplifies the computations. Our main interest in the Bayesian analysis is to select a prior distribution that better represents the dependence structure of the parameters when the information regarding the parameters is not substantial compared with the information from the data. The focus is on the comparison of the independent gamma prior, the Jeffreys prior, the maximal data information prior (MDIP), Singpurwalla's prior and the elicited prior. Jeffreys (1967) proposed a noninformative prior resulting from an argument based on the Fisher information measure, and Zellner (1977, 1984) proposed an alternative prior, named the maximal data information prior (MDIP), based on the entropy measure. The prior proposed by Singpurwalla (1988) for estimating the parameters of the Weibull distribution is also considered in this paper to estimate the parameters of the Gompertz distribution.

There are many methods for eliciting the parameters of prior distributions. In this paper, we also consider an elicitation method to specify the values of the hyperparameters of the two gamma priors assigned to the parameters of the Gompertz distribution. The method requires the derivation of the predictive prior distribution, and it is assumed that the expert is able to provide some percentile values. Thus, the main aim of this paper is to propose noninformative and informative prior distributions for the parameters c and λ of the Gompertz distribution and to study the effects of these different priors on the resulting posterior distributions, especially in situations of small sample sizes, a common situation in applications.

The paper is organized as follows. Some probability properties of the Gompertz distribution, such as quantiles, moments and the moment generating function, are reviewed in Section 2. Section 3 describes the maximum likelihood estimation method. The Bayesian approach with the proposed informative and noninformative priors is presented in Section 4. In Section 5, a simulation study is carried out to evaluate the performance of the several estimation procedures, along with their coverage probabilities. The methodology developed in this paper and the usefulness of the Gompertz distribution are illustrated by a real data example in Section 6. Finally, concluding remarks are provided in Section 7.


A random variable X has the Gompertz distribution with parameters c and λ , say GM(c,λ) , if its density function is

f(x) = λ e^{cx} exp(−(λ/c)(e^{cx} − 1)),  x > 0, c, λ > 0, (1)

and the corresponding c.d.f is given by

F(x) = 1 − exp(−(λ/c)(e^{cx} − 1)),  x > 0, c, λ > 0. (2)

A basic tool for studying the ageing and reliability characteristics of a system is the hazard rate h(x), which gives the rate of failure of the system immediately after time x. The hazard rate function of the Gompertz distribution is given by

h(x) = f(x)/(1 − F(x)) = λ e^{cx}. (3)

Note that the hazard rate function is an increasing function if c > 0, or constant if c = 0. Figure 1b shows the shapes of the hazard function for different selected values of the parameters c and λ. From the plot, it is quite evident that the Gompertz distribution has an increasing hazard rate function.

Figure 1a shows the shapes of the pdf of the Gompertz distribution for different values of the parameters c and λ, and from the plot it is quite evident that the Gompertz distribution is a positively skewed distribution.

Figure 1 Pdf function (a) and hazard function (b). 

The quantile function x_p = Q(p) = F^{−1}(p), for 0 < p < 1, of the Gompertz distribution is obtained by inverting (2); thus the quantile function x_p is

x_p = (1/c) ln(1 − (c/λ) ln(1 − p)). (4)

In particular, the median of the Gompertz distribution can be written as

Md(X) = Md = (1/c) ln(1 − (c/λ) ln(0.5)). (5)
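Because the c.d.f. (2) inverts in closed form, the quantile function (4) yields a direct inverse-transform sampler. The following Python sketch (our own illustration; the function names are not from the paper) implements (1), (2) and (4):

```python
import math
import random

def gompertz_pdf(x, c, lam):
    """Density (1): f(x) = lam * e^{c x} * exp(-(lam/c) * (e^{c x} - 1))."""
    return lam * math.exp(c * x - (lam / c) * (math.exp(c * x) - 1.0))

def gompertz_cdf(x, c, lam):
    """C.d.f. (2): F(x) = 1 - exp(-(lam/c) * (e^{c x} - 1))."""
    return 1.0 - math.exp(-(lam / c) * (math.exp(c * x) - 1.0))

def gompertz_quantile(p, c, lam):
    """Quantile (4): x_p = (1/c) * ln(1 - (c/lam) * ln(1 - p))."""
    return math.log(1.0 - (c / lam) * math.log(1.0 - p)) / c

def gompertz_sample(n, c, lam, rng=random):
    """Inverse-transform sampling: X = Q(U) with U ~ Uniform(0, 1)."""
    return [gompertz_quantile(rng.random(), c, lam) for _ in range(n)]
```

By construction, gompertz_quantile(0.5, c, λ) reproduces the median (5), since ln(1 − 0.5) = −0.6931….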

If the random variable X is distributed GM(c,λ) , then its n th moment around zero can be expressed as

E(X^n) = (λ/c^{n+1}) e^{λ/c} ∫_1^∞ e^{−(λ/c)x} [ln(x)]^n dx. (6)

On simplification, we get

E(X^n) = (n!/c^n) e^{λ/c} E_1^{n−1}(λ/c), (7)

where

E_s^j(z) = (1/Γ(j+1)) ∫_1^∞ (ln x)^j x^{−s} e^{−zx} dx

is the generalized integro-exponential function (Milgram 1985).

The mean and variance of the random variable X of the Gompertz distribution are respectively, given by

E(X) = (1/c) e^{λ/c} E_1(λ/c), (8)


var(X) = (2/c²) e^{λ/c} E_1^1(λ/c) − [(1/c) e^{λ/c} E_1(λ/c)]². (9)
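Formulas (7)-(9) can be verified numerically: the classical exponential integral E_1 is available as scipy.special.exp1, and E_1^1 can be computed by quadrature from Milgram's integral representation E_1^1(z) = ∫_1^∞ (ln t) t^{−1} e^{−zt} dt. A sketch with arbitrary illustrative values c = 2, λ = 1 (our own check, not from the paper):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

c, lam = 2.0, 1.0
z = lam / c

def pdf(x):
    """Gompertz density (1)."""
    return lam * np.exp(c * x - z * (np.exp(c * x) - 1.0))

# Mean from (8): E(X) = (1/c) e^{lam/c} E_1(lam/c)
mean_formula = np.exp(z) * exp1(z) / c
mean_numeric = quad(lambda x: x * pdf(x), 0, np.inf)[0]

# E_1^1(z) = int_1^inf (ln t) t^{-1} e^{-z t} dt  (Milgram 1985, j = 1)
E11 = quad(lambda t: np.log(t) * np.exp(-z * t) / t, 1, np.inf)[0]

# Variance from (9): var(X) = (2/c^2) e^{lam/c} E_1^1(lam/c) - E(X)^2
var_formula = 2.0 * np.exp(z) * E11 / c**2 - mean_formula**2
var_numeric = quad(lambda x: x**2 * pdf(x), 0, np.inf)[0] - mean_numeric**2
```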

Many of the interesting characteristics and features of a distribution can be obtained via its moment generating function and moments. Let X denote a random variable with the probability density function (1). By definition of moment generating function of X and using (1), we have

M_X(t) = E(e^{tX}) = ∫_0^∞ e^{tx} f(x) dx = e^{λ/c} [ (c/λ)^{t/c} Γ(1 + t/c) − Σ_{p=0}^∞ (−1)^p (λ/c)^{p+1} / (Γ(p+1)(t/c + p + 1)) ]. (10)


The method of maximum likelihood is the most frequently used method of parameter estimation (Casella and Berger 2001). The success of the method stems from its many desirable properties, including consistency, asymptotic efficiency and the invariance property, as well as its intuitive appeal. Let x_1, …, x_n be a random sample of size n from (1); then the log-likelihood function of (1), without constant terms, is given by

log L(c,λ) = n log(λ) + c Σ_{i=1}^n x_i − (λ/c) Σ_{i=1}^n (e^{cx_i} − 1).
For ease of notation, we denote the first partial derivatives of any function f(x,y) by f_x and f_y. Now, setting the partial derivatives of the log-likelihood with respect to λ and c equal to zero, we have

∂ log L/∂λ = n/λ − (1/c) Σ_{i=1}^n (e^{cx_i} − 1) = 0, (11)


∂ log L/∂c = Σ_{i=1}^n x_i − (λ/c) Σ_{i=1}^n x_i e^{cx_i} + (λ/c²) Σ_{i=1}^n e^{cx_i} − nλ/c² = 0. (12)

From (11) and (12), we find the MLE for λ, given c, as

λ̂ = nc / Σ_{i=1}^n (e^{cx_i} − 1).
The MLE for c is obtained by substituting λ̂ into (12) and solving the resulting non-linear equation,

Σ_{i=1}^n x_i + n/c − n Σ_{i=1}^n x_i e^{cx_i} / Σ_{i=1}^n (e^{cx_i} − 1) = 0.
The asymptotic distribution of the MLE θ̂ = (ĉ, λ̂) is bivariate normal,

θ̂ ∼ N₂(θ, I^{−1}(θ))
(Lawless 2003), where I1(θ) is the inverse of the observed information matrix of the unknown parameters θ=(c,λ) .

I^{−1}(θ) = ( −∂² log L/∂c²    −∂² log L/∂c∂λ ; −∂² log L/∂λ∂c    −∂² log L/∂λ² )^{−1} |_{(c,λ)=(ĉ,λ̂)} (13)
= ( var(ĉ_MLE)    cov(ĉ_MLE, λ̂_MLE) ; cov(λ̂_MLE, ĉ_MLE)    var(λ̂_MLE) ) = ( σ_cc  σ_cλ ; σ_λc  σ_λλ ).

The derivatives in I(θ) are given as follows

∂² log L/∂c² |_{c=ĉ_MLE} = (λ/c²) [ −c Σ_{i=1}^n x_i² e^{cx_i} + 2 Σ_{i=1}^n x_i e^{cx_i} − (2/c) Σ_{i=1}^n e^{cx_i} + 2n/c ], (14)
∂² log L/∂λ² |_{λ=λ̂_MLE} = −n/λ², (15)
∂² log L/∂c∂λ |_{c=ĉ_MLE, λ=λ̂_MLE} = (1/c²) Σ_{i=1}^n (e^{cx_i} − 1) − (1/c) Σ_{i=1}^n x_i e^{cx_i}. (16)

Therefore, the above approach is used to derive approximate 100(1 − τ)% confidence intervals for the parameters θ = (c,λ) in the following forms:

ĉ ± z_{τ/2} √σ_cc  and  λ̂ ± z_{τ/2} √σ_λλ.

Here, z_{τ/2} is the upper (τ/2)-th percentile of the standard normal distribution.
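In practice the MLEs can be computed by profiling out λ: substituting λ̂ = nc/Σ(e^{cx_i} − 1) into (12) leaves a single score equation in c that can be solved by bracketing. The sketch below (our own illustrative code; the search bracket [c_lo, c_hi] is an assumption and may need widening for other data) also builds the Wald intervals from (13)-(16):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def gompertz_mle(x, c_lo=1e-6, c_hi=50.0, conf=0.95):
    """MLE of (c, lambda) via the profile score in c, plus Wald intervals."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def S(c):                       # sum of (e^{c x_i} - 1)
        return np.sum(np.expm1(c * x))

    def score(c):                   # (12) with lambda_hat(c) = n c / S(c)
        return np.sum(x) + n / c - n * np.sum(x * np.exp(c * x)) / S(c)

    c_hat = brentq(score, c_lo, c_hi)
    lam_hat = n * c_hat / S(c_hat)

    # Observed information from the second derivatives (14)-(16)
    e = np.exp(c_hat * x)
    d2c = (lam_hat / c_hat**2) * (-c_hat * np.sum(x**2 * e)
                                  + 2.0 * np.sum(x * e)
                                  - (2.0 / c_hat) * np.sum(e)
                                  + 2.0 * n / c_hat)
    d2l = -n / lam_hat**2
    d2cl = S(c_hat) / c_hat**2 - np.sum(x * e) / c_hat
    cov = np.linalg.inv(-np.array([[d2c, d2cl], [d2cl, d2l]]))

    z = norm.ppf(0.5 + conf / 2.0)
    se = np.sqrt(np.diag(cov))
    return (c_hat, lam_hat,
            (c_hat - z * se[0], c_hat + z * se[0]),
            (lam_hat - z * se[1], lam_hat + z * se[1]))
```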


In this section, we consider Bayesian inference for the unknown parameters of the GM(c,λ) distribution. First, we assume that c and λ have independent gamma prior distributions with probability density functions

π(c) ∝ c^{a1−1} e^{−b1 c},  c > 0, (17)


π(λ) ∝ λ^{a2−1} e^{−b2 λ},  λ > 0. (18)

The hyperparameters a1, a2, b1 and b2 are known and positive. If both parameters c and λ are unknown, there is no easy way to work with conjugacy, since the expression of the likelihood function does not suggest any known form for the joint density of (c,λ). It is not unreasonable to assume independent gamma priors on the shape and scale parameters of a two-parameter GM(c,λ), because gamma distributions are very flexible. The joint prior distribution for both parameters in this case is given by

π(c,λ) ∝ c^{a1−1} exp(−b1 c) λ^{a2−1} exp(−b2 λ). (19)

Thus, the joint posterior distribution is given by

p(c,λ|𝐱) ∝ λ^{n+a2−1} c^{a1−1} exp(c(Σ_{i=1}^n x_i − b1)) exp(−λ[(1/c) Σ_{i=1}^n (e^{cx_i} − 1) + b2]). (20)

The conditional distribution of c given λ and data is given by

p(c|λ,𝐱) ∝ c^{a1−1} exp(c(Σ_{i=1}^n x_i − b1)) exp(−(λ/c) Σ_{i=1}^n (e^{cx_i} − 1)). (21)

Similarly, the conditional distribution of λ given c and data is given by

p(λ|c,𝐱) ∝ λ^{n+a2−1} exp(−λ[(1/c) Σ_{i=1}^n (e^{cx_i} − 1) + b2]). (22)

Note that although the conditional p(λ|c,𝐱) is a gamma distribution, the conditional distribution p(c|λ,𝐱) is not a known distribution that is easy to simulate from. Bayesian inference for the parameters c and λ can therefore be performed by a Metropolis-Hastings (MH) algorithm, considering the gamma distribution as the target density for λ, while c can be generated from the conditional p(c|λ,𝐱) by using the rejection method.
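A minimal MH-within-Gibbs sketch of this scheme is given below (our own illustrative code, not the authors'); λ is drawn exactly from its gamma conditional (22), while for c we use a random-walk Metropolis step on the log of (21) instead of the rejection method:

```python
import numpy as np

def gibbs_gompertz(x, n_iter=20000, burn=5000, a1=0.01, b1=0.01,
                   a2=0.01, b2=0.01, c_init=1.0, step=0.2, seed=0):
    """MH-within-Gibbs sampler for the joint posterior (20)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    sum_x = x.sum()
    c, lam = c_init, 1.0

    def log_cond_c(cc):
        # log of the conditional (21), up to an additive constant
        if cc <= 0:
            return -np.inf
        return ((a1 - 1.0) * np.log(cc) + cc * (sum_x - b1)
                - lam / cc * np.sum(np.expm1(cc * x)))

    draws = []
    for it in range(n_iter):
        # lambda | c, x ~ Gamma(n + a2, rate = (1/c) sum(e^{c x_i} - 1) + b2)
        rate = np.sum(np.expm1(c * x)) / c + b2
        lam = rng.gamma(n + a2, 1.0 / rate)
        # c | lambda, x: random-walk Metropolis step on (21)
        prop = c + step * rng.normal()
        if np.log(rng.uniform()) < log_cond_c(prop) - log_cond_c(c):
            c = prop
        if it >= burn:
            draws.append((c, lam))
    return np.array(draws)
```

The step size `step` controls the acceptance rate of the c-update and would normally be tuned.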


A well-known non-informative prior, representing a situation with little a priori information on the parameters, was introduced by Jeffreys (1967) and is also known as the Jeffreys rule. The Jeffreys prior has been widely used due to its invariance property under one-to-one transformations of the parameters, and since then it has played an important role in Bayesian inference. This prior is derived from the Fisher information matrix I(c,λ) as

π(c,λ) ∝ √(det I(c,λ)). (23)

However, I(c,λ) cannot be obtained analytically for the parameters of the Gompertz distribution. A possible simplification is to consider a noninformative prior given by π(c,λ) = π(λ|c)π(c). Using the Jeffreys rule, we have

π(c,λ) ∝ [−E(∂² log L(c,λ)/∂λ²)]^{1/2} π(c), (24)

where E(∂² log L(c,λ)/∂λ²) is given by (15) and π(c) is a noninformative prior, for instance, a gamma distribution with hyperparameters equal to 0.01.

In this way, from (15) and (24) the non-informative prior for ( c , λ ) parameters is given by:

π(c,λ) ∝ (1/λ) π(c). (25)

Let us denote the prior (25) as "Jeffreys prior".

Thus, the corresponding posterior distribution is given by

p(λ,c|𝐱) ∝ λ^{n−1} exp(c Σ_{i=1}^n x_i − (λ/c) Σ_{i=1}^n e^{cx_i} + nλ/c) π(c). (26)

Proposition 1: For the parameters of the Gompertz distribution, the posterior distribution given in (26) under Jeffreys prior π ( λ , c ) given in (25) is proper.

We need to prove that ∫_0^∞ ∫_0^∞ p(λ,c|𝐱) dλ dc is finite.


∫_0^∞ ∫_0^∞ p(λ,c|𝐱) dλ dc = Γ(n) ∫_0^∞ [c^n exp(c Σ_{i=1}^n x_i) / (Σ_{i=1}^n (e^{cx_i} − 1))^n] π(c) dc. (27)

The function h(c) = c^n exp(c Σ_{i=1}^n x_i) / (Σ_{i=1}^n (e^{cx_i} − 1))^n is unimodal with maximum point ĉ given by the solution of the nonlinear equation

n/c + Σ_{i=1}^n x_i − n Σ_{i=1}^n x_i e^{cx_i} / Σ_{i=1}^n (e^{cx_i} − 1) = 0.
Therefore, from (27) we have

∫_0^∞ ∫_0^∞ p(λ,c|𝐱) dλ dc ≤ Γ(n) h(ĉ) ∫_0^∞ π(c) dc = Γ(n) h(ĉ) < ∞,

where ∫_0^∞ π(c) dc = 1. This completes the proof.


It is interesting to note that the data should give more information about the parameter than the prior density; otherwise, there would be no justification for carrying out the experiment. Let X be a random variable with density f(x|ϕ), x ∈ R_X, and parameter ϕ ∈ (a, b). We thus wish to find a prior distribution π(ϕ) that maximizes the gain in information supplied by the data relative to the prior information on the parameter. With this idea, Zellner (1977, 1984, 1990) and Zellner and Min (1993) derived a prior distribution which maximizes the information from the data in relation to the prior information on the parameters. Let

H(ϕ) = ∫_{R_X} f(x|ϕ) ln f(x|ϕ) dx (28)

be the negative entropy of f(x|ϕ), a measure of the information in f(x|ϕ). Thus, the following functional is employed in the MDIP approach:

G[π(ϕ)] = ∫_a^b H(ϕ) π(ϕ) dϕ − ∫_a^b π(ϕ) ln π(ϕ) dϕ, (29)

which is the prior average information in the data density minus the information in the prior density. G[π(ϕ)] is maximized by the selection of π(ϕ) subject to ∫_a^b π(ϕ) dϕ = 1.

The following theorem proposed by Zellner provides the formula for the MDIP prior.

Theorem: The MDIP prior is given by:

π(ϕ) = k exp(H(ϕ)),  a ≤ ϕ ≤ b, (30)

where k^{−1} = ∫_a^b exp(H(ϕ)) dϕ is the normalizing constant.

Proof. We have to maximize the functional U = G[π(ϕ)] − η(∫_a^b π(ϕ) dϕ − 1), where η is the Lagrange multiplier.

Thus, ∂U/∂π = 0 gives H(ϕ) − ln π(ϕ) − 1 − η = 0, and the solution is π(ϕ) = k exp(H(ϕ)).

Therefore, the MDIP is a prior that leads to an emphasis on the information in the data density or likelihood function, that is, its information is weak in comparison with data information.

Zellner (1984) shows several interesting properties of the MDIP and additional conditions that can be imposed on the approach, reflecting given initial information.

Suppose that we do not have much prior information available about c and λ. Under this condition, the MDIP prior distribution for the parameters (c,λ) of the Gompertz distribution (1) is obtained as follows. First, we have to evaluate the information measure H(c,λ) = E(log f(X)), that is,

E(log f(X)) = log(λ) + c E(X) − (λ/c)(E(e^{cX}) − 1), (31)

where E(X) is obtained from (8) as E(X) = (1/c) e^{λ/c} E_1(λ/c), and E_1(x) = ∫_1^∞ (e^{−xu}/u) du is the exponential integral. After some algebra, we also obtain E(e^{cX}) = c/λ + 1. Therefore,

H(c,λ) = log(λ) + e^{λ/c} E_1(λ/c) − (λ/c)(c/λ + 1) + λ/c. (32)

Hence the MDIP prior is given by

π_Z(c,λ) ∝ λ exp(e^{λ/c} E_1(λ/c)). (33)

Now combining the likelihood function given by

L(λ,c|𝐱) = λ^n exp(c Σ_{i=1}^n x_i − (λ/c) Σ_{i=1}^n e^{cx_i} + nλ/c), (34)

and the MDIP prior in (33), the posterior density for the parameters c and λ is given by

p(λ,c|𝐱) ∝ λ^{n+1} exp(c Σ_{i=1}^n x_i + e^{λ/c} E_1(λ/c) − (λ/c) Σ_{i=1}^n (e^{cx_i} − 1)). (35)

Proposition 2: For the parameters of the Gompertz distribution, the posterior distribution given in (35) under the corresponding MDIP prior π ( λ , c ) given in (33) is proper.

Proof. Indeed,

∫_0^∞ ∫_0^∞ p(λ,c|𝐱) dλ dc = ∫_0^∞ ∫_0^∞ λ^{n+1} exp(c Σ_{i=1}^n x_i + e^{λ/c} E_1(λ/c) − (λ/c) Σ_{i=1}^n (e^{cx_i} − 1)) dλ dc.

Now, we consider the substitution of variables u = c and w = λ/c, with Jacobian |∂(c,λ)/∂(u,w)| = u, resulting in

∫_0^∞ ∫_0^∞ p(λ,c|𝐱) dλ dc = ∫_0^∞ [ ∫_0^∞ u^{n+2} exp(u Σ_{i=1}^n x_i − w Σ_{i=1}^n e^{ux_i}) du ] h(w) dw,

where h(w) = w^{n+1} exp(nw + e^w E_1(w)).

Let us denote x_{(n)} = max(x_1, x_2, ..., x_n); then Σ_{i=1}^n e^{ux_i} > e^{ux_{(n)}}. Hence,

∫_0^∞ ∫_0^∞ p(λ,c|𝐱) dλ dc < ∫_0^∞ [ ∫_0^∞ u^{n+2} exp(n x̄ u − w e^{ux_{(n)}}) du ] h(w) dw,

where x̄ = (1/n) Σ_{i=1}^n x_i. As x̄ ≤ x_{(n)}, we have

∫_0^∞ u^{n+2} exp(n x̄ u − w e^{ux_{(n)}}) du ≤ ∫_0^∞ u^{n+2} exp(n x_{(n)} u − w e^{ux_{(n)}}) du < ∞.

Now consider e^{ux_{(n)}} = z; then, by substitution, the integral above becomes

∫_0^∞ u^{n+2} exp(n x_{(n)} u − w e^{ux_{(n)}}) du = (1/x_{(n)}^{n+3}) ∫_1^∞ (log z)^{n+2} z^{n−1} exp(−wz) dz = (Γ(n+2)/x_{(n)}^{n+3}) G_{n+3,n+4}^{0,0}( −(n−1), ..., −(n−1) ; −n, ..., −n, 0 | w ), (36)

where G_{p,q}^{m,n}( a_1, ..., a_p ; b_1, ..., b_q | w ) is the Meijer G-function introduced by Meijer (1936), given by

G_{p,q}^{m,n}( a_1, ..., a_p ; b_1, ..., b_q | w ) = (1/(2πi)) ∫_L [ ∏_{j=1}^m Γ(b_j − s) ∏_{j=1}^n Γ(1 − a_j + s) ] / [ ∏_{j=m+1}^q Γ(1 − b_j + s) ∏_{j=n+1}^p Γ(a_j − s) ] w^s ds,

for 0 ≤ m ≤ q and 0 ≤ n ≤ p, where m, n, p and q are integer numbers.

From (36) we have

∫_0^∞ ∫_0^∞ p(λ,c|𝐱) dλ dc < (Γ(n+2)/x_{(n)}^{n+3}) ∫_0^∞ G_{n+3,n+4}^{0,0}( −(n−1), ..., −(n−1) ; −n, ..., −n, 0 | w ) h(w) dw,

for which it is not possible to obtain an analytical expression. However, the software Mathematica confirms that the integral converges.


Singpurwalla (1988) presented a procedure for constructing a prior distribution with the use of expert opinion in order to estimate the parameters α and β of the Weibull distribution. Expert opinion about measures of central tendency, such as the median, can be easily elicited, since most people are accustomed to this term. Singpurwalla introduces the median life M and elicits expert opinion on M and β through the priors π(M) and π(β). He focuses attention on β and on M = α exp(k/β), where k = ln(ln 2). Since M is restricted to being nonnegative, a Gaussian distribution truncated at 0, with parameters μ and σ, is assumed for it. A gamma prior distribution with parameters a and b is chosen to model the uncertainty about β. After this reparametrization, the prior π(α,β) is derived by a transformation of variables.

Our aim is to derive the prior π(c,λ) by applying a procedure similar to that proposed by Singpurwalla in order to estimate the parameters c and λ of the Gompertz distribution. Unlike Singpurwalla, who considered elicitation from an expert for the parameters, we assume absence of information; hence the hyperparameters of the priors are chosen to provide noninformative priors, and we use information from the data for the parameter μ through the median of the data.

Consider the median of X, given by M = (1/c) log(1 + 0.6931 c/λ). A Gaussian distribution truncated at 0 is assumed for the parameter M, with density

π(M|μ,σ) ∝ exp(−(1/2)((M − μ)/σ)²), (37)

where 0 ≤ M < ∞, with parameter μ equal to the median of the data and standard deviation σ = 100.

A gamma prior distribution is chosen to model the uncertainty about λ with density

π(λ|a,b) ∝ λ^{a−1} exp(−bλ), (38)

with the parameters a and b specified as 0.01 representing a noninformative prior for λ .

Thus, we can determine the conditional prior distribution π(c|λ,μ,σ) for the parameter c given λ through the reparametrization M = (1/c) log(1 + 0.6931 c/λ), with Jacobian |dM/dc| = |0.6931/(cλ + 0.6931c²) − (1/c²) log(1 + 0.6931 c/λ)|. Therefore, the conditional prior π(c|λ,μ,σ) is given by

π(c|λ,μ,σ) ∝ exp(−(1/2)[((1/c) log(1 + 0.6931 c/λ) − μ)/σ]²) |0.6931/(cλ + 0.6931c²) − (1/c²) log(1 + 0.6931 c/λ)|. (39)

Finally, the joint prior for c and λ , obtained as π ( c , λ ) =π ( c|λ ) π ( λ ), that is, by the product of (38) and (39), is given by

π(c,λ|Θ) ∝ λ^{a−1} exp(−(1/2)[((1/c) log(1 + 0.6931 c/λ) − μ)/σ]² − bλ) |0.6931/(cλ + 0.6931c²) − (1/c²) log(1 + 0.6931 c/λ)|, (40)

where the vector of parameters Θ=( a ,b,μ,σ) is known.
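For MCMC under this prior, one only needs (40) up to a normalizing constant, so a log-density evaluator suffices. A hypothetical Python helper (names and defaults are ours; σ = 100 and a = b = 0.01 follow the noninformative choices above):

```python
import numpy as np

def log_singpurwalla_prior(c, lam, mu, sigma=100.0, a=0.01, b=0.01):
    """Log of the joint prior (40), up to an additive constant."""
    if c <= 0 or lam <= 0:
        return -np.inf
    m = np.log1p(0.6931 * c / lam) / c                 # median M(c, lam)
    jac = abs(0.6931 / (c * lam + 0.6931 * c**2)       # Jacobian |dM/dc|
              - np.log1p(0.6931 * c / lam) / c**2)
    return float((a - 1.0) * np.log(lam) - b * lam
                 - 0.5 * ((m - mu) / sigma) ** 2 + np.log(jac))
```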


In this section, we provide a methodology that permits experts to use their knowledge about the reliability of an item through statements of percentiles. This method requires the derivation of the prior predictive distribution for the elicitation. Suppose that the joint prior π(c,λ) is given; then the reliability based on the predictive prior distribution is given by

R(t_p) = P(T ≥ t_p) = ∫_0^∞ ∫_0^∞ ∫_{t_p}^∞ f(t|c,λ) π(c,λ) dt dc dλ, (41)

for a fixed mission time tp .

In order to elicit the four hyperparameters (a1, a2, b1, b2) of the prior π(c,λ|a1,a2,b1,b2), the following integral is considered:

p = ∫_0^∞ ∫_0^∞ R(t_p|c,λ) π(c,λ|a1,a2,b1,b2) dc dλ, (42)

for a given p-th percentile t_p elicited from the expert, where p = R(t_p).

By considering Gompertz distribution, the reliability function R(tp|c , λ) is given by

R(t_p|c,λ) = exp(−(λ/c)(e^{ct_p} − 1)), (43)

and assuming a joint prior π(c,λ|a1,a2,b1,b2) given by the product of the gamma priors, we have

π(c,λ|a1,a2,b1,b2) = k c^{a1−1} λ^{a2−1} exp(−(b1 c + b2 λ)), (44)

where k = b1^{a1} b2^{a2} / (Γ(a1) Γ(a2)).

Using (42), (43) and (44), the probability in (42) becomes

p = k ∫_0^∞ ∫_0^∞ c^{a1−1} λ^{a2−1} exp(−(λ/c)(e^{ct_p} − 1) − (b1 c + b2 λ)) dc dλ. (45)

Let d = (1/c)(e^{ct_p} − 1); then the integral over λ in equation (45) takes the gamma form, resulting in

p = k ∫_0^∞ [ ∫_0^∞ λ^{a2−1} exp(−λ(b2 + d)) dλ ] c^{a1−1} e^{−b1 c} dc, (46)

that is,

p = (b1^{a1} b2^{a2} / Γ(a1)) ∫_0^∞ c^{a1−1} e^{−b1 c} (b2 + (1/c)(e^{ct_p} − 1))^{−a2} dc. (47)

Since it is not possible to obtain a closed form for the integral (47), one possibility to work around this problem is to use the Laplace approximation.

Assuming h is a smooth function of a one-dimensional parameter ϕ, with a maximum at ϕ̂, the Laplace approach asymptotically approximates an integral of the form

I = ∫_{−∞}^{+∞} exp(n h(ϕ)) dϕ (48)

by expanding h in a Taylor series about ϕ̂ (Tierney and Kadane 1986). The Laplace’s method gives the approximation

∫_{−∞}^{+∞} exp(n h(ϕ)) dϕ ≈ √(2πσ̂²/n) exp(n h(ϕ̂)) (1 + O(n^{−1})), (49)

where ϕ̂ is the root of the equation h^{(1)}(ϕ) = 0 and σ̂ = [−h^{(2)}(ϕ̂)]^{−1/2}.
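As a sanity check of (48)-(49) with n = 1, the sketch below (our own illustration; the bisection root-finder and the test case are assumptions, not from the paper) applies the approximation to ∫_0^∞ c^{k−1} e^{−c} dc = Γ(k), for which h(c) = (k − 1) log c − c has its maximum at ĉ = k − 1:

```python
import math

def laplace_1d(h, h1, h2, c_lo, c_hi):
    """One-dimensional Laplace approximation (49) with n = 1:
    int exp(h(c)) dc ~ sigma * sqrt(2*pi) * exp(h(c_hat)),
    where c_hat solves h'(c) = 0 and sigma = (-h''(c_hat))**-0.5."""
    lo, hi = c_lo, c_hi            # simple bisection for the root of h'
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if h1(lo) * h1(mid) <= 0:
            hi = mid
        else:
            lo = mid
    c_hat = 0.5 * (lo + hi)
    sigma = (-h2(c_hat)) ** -0.5
    return sigma * math.sqrt(2.0 * math.pi) * math.exp(h(c_hat))

# Known-answer check: Gamma(k) = int_0^inf c^{k-1} e^{-c} dc
k = 20
approx = laplace_1d(lambda c: (k - 1) * math.log(c) - c,
                    lambda c: (k - 1) / c - 1.0,
                    lambda c: -(k - 1) / c**2,
                    1.0, 100.0)
```

For k = 20 the approximation agrees with Γ(20) to within about half a percent, consistent with the O(n^{−1}) error term in (49).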

We can write (47) as

∫_0^∞ c^{a1−1} e^{−b1 c} (b2 + (1/c)(e^{ct_p} − 1))^{−a2} dc = ∫_0^∞ exp(−a2 log(b2 + (1/c)(e^{ct_p} − 1)) + (a1 − 1) log(c) − b1 c) dc. (50)

Thus, the function h(c) in (50) is given by

h(c) = −a2 log(b2 + (1/c)(e^{ct_p} − 1)) + (a1 − 1) log(c) − b1 c. (51)

By applying Laplace approximation to the integral in (47) we have

p = (b1^{a1} b2^{a2} / Γ(a1)) σ̂ √(2π) exp(h(ĉ)), (52)

where ĉ is the root of the equation h^{(1)}(c) = 0 and σ̂ = [−h^{(2)}(ĉ)]^{−1/2}.

We suppose that an expert can summarize his/her knowledge about the reliability of an item through statements of percentiles. Thus, we ask that the expert's information, in the form of four distinct percentiles t_p for given p, be provided to generate four equations from (52). In particular, the expert needs to specify t_p for p = 0.25, 0.50, 0.75, 0.90.

The nonlinear system composed of equation (52) evaluated at the four pairs of values (t_p, p) is solved numerically to obtain the required values of the hyperparameters a1, a2, b1 and b2 of the joint prior π(c,λ|a1,a2,b1,b2). A program has been developed in R to solve the system.
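The same system can be set up outside R; the sketch below is our own illustrative Python version (with a numerical second derivative in place of an analytic h″, and a log-reparametrization to keep the hyperparameters positive), where `ps` holds the four elicited probabilities p = R(t_p):

```python
import numpy as np
from scipy.optimize import minimize_scalar, fsolve
from scipy.special import gammaln

def p_laplace(tp, a1, a2, b1, b2):
    """Laplace approximation (52) to the elicitation probability (47)."""
    def h(c):  # the exponent h(c) of equation (51)
        return (-a2 * np.log(b2 + np.expm1(c * tp) / c)
                + (a1 - 1.0) * np.log(c) - b1 * c)
    c_hat = minimize_scalar(lambda c: -h(c), bounds=(1e-6, 100.0),
                            method="bounded").x
    eps = 1e-5 * max(c_hat, 1.0)          # numerical h''(c_hat)
    h2 = (h(c_hat + eps) - 2.0 * h(c_hat) + h(c_hat - eps)) / eps**2
    sigma = (-h2) ** -0.5
    log_k = a1 * np.log(b1) + a2 * np.log(b2) - gammaln(a1)
    return float(np.exp(log_k + h(c_hat)) * sigma * np.sqrt(2.0 * np.pi))

def elicit_hyperparameters(tps, ps, start=(1.0, 1.0, 1.0, 1.0)):
    """Solve the four equations p_i = p_laplace(t_i; a1, a2, b1, b2)."""
    def eqs(theta):
        a1, a2, b1, b2 = np.exp(theta)    # exp keeps hyperparameters > 0
        return [p_laplace(t, a1, a2, b1, b2) - p for t, p in zip(tps, ps)]
    return np.exp(fsolve(eqs, np.log(np.asarray(start))))
```

As with any root-finding on a flat four-dimensional surface, the starting point `start` matters and may need adjusting for a given set of elicited percentiles.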


In this section, we perform a simulation study to examine the behavior of the proposed methods under different conditions. We considered three different sample sizes, n = 10, 50, 100, and used several values of (c,λ). We computed MLEs of the unknown parameters of the Gompertz distribution along with the confidence intervals using the method described in Section 3. All results of the simulation study are based on 1,000 samples. The performance of the estimates is compared with respect to the average biases and the mean squared errors (MSE). To obtain the Bayes estimates and credible intervals, we appeal to an MCMC algorithm in order to obtain a sample of values of c and λ from the joint posterior distribution. To conduct the MCMC procedure, Markov chains of size 20,000 are generated from the conditional distributions p(c|λ,𝐱) and p(λ|c,𝐱) corresponding to the joint posterior obtained under each prior distribution proposed in this paper, using the MH algorithm, and the first 5,000 observations are removed to eliminate the effect of the starting distribution. Then, in order to reduce the dependence among the generated samples, we take every 5th sampled value, which results in final chains of size 10,000, from which we subsequently obtain the Bayes estimates, based on the mean of the chain, and the credible intervals. The rejection rates are around 43% and 41% over 5,000 iterations, which ensures that the choice of proposal distribution works reasonably well in sampling from the posterior.

To investigate the convergence of the MCMC sampling via the MH algorithm, we have used the Gelman-Rubin multiple-sequence diagnostic. For computation, we have used the R package coda. For each case of c and λ, we ran two different chains with two distinct starting values for the Monte Carlo samples. We then get two potential scale reduction factor (psrf) values, for c and for λ. If the psrf values are close to 1, the samples have converged to the stationary distribution. In both cases, using 10,000 samples, we get psrf values equal to 1, which confirms the convergence of the MCMC sampling procedure.

For the informative gamma priors, the elicited percentiles provided by the expert and the corresponding elicited values of the hyperparameters have been found to be: for Table I, t25=1.2625, t50=1.5512, t75=1.7806 and t90=1.9491, resulting in (a1,a2,b1,b2)=(38.5645, 1.615, 11.6416, 78.2737); for Table II, t25=0.1083, t50=0.2010, t75=0.2992 and t90=0.3820, resulting in (a1,a2,b1,b2)=(23.3800, 4.2410, 23.3795, 11.9681); and for Table III, t25=0.2272, t50=0.4348, t75=0.6638 and t90=0.8618, yielding (a1,a2,b1,b2)=(38.7505, 17.2753, 13.2237, 13.5219). The frequentist property of coverage probabilities for the parameters c and λ has also been obtained to compare the Bayes estimators under the different priors with the MLE. Tables IV, V and VI summarize the simulated coverage probabilities of the 95% confidence/credible intervals.

From the simulation results, we reach the following conclusions:

1. With an increase in sample size, the biases and MSEs of the estimators decrease for given values of n, c and λ.

2. The performance of the MLEs is quite satisfactory. The Bayes estimates using noninformative priors work quite well and, in most cases, perform better than the MLE in terms of MSE when the value of λ is very small.

3. Bayes estimates based on the elicited prior produce much smaller bias and MSE than those based on the other assumed priors.

4. For the three sample sizes considered here, the elicited prior produces over-coverage for small sample sizes, while the MLE and the independent gamma priors show under-coverage in some cases. Coverage probabilities are very close to the nominal value as n increases.

TABLE I Average bias of the estimates of c and λ and their associated MSEs (in parenthesis) for the different methods with 𝐜=3 and 𝛌=0.02

Method c=3 λ=0.02
n=10 n=50 n=100 n=10 n=50 n=100
MLE 0.8476 0.3080 0.2077 0.0196 0.0088 0.0060
(1.4217) (0.1556) (0.0685) (0.0009) (0.0001) (6.0e-05)
Gamma prior 0.6607 0.2935 0.1997 0.0363 0.0097 0.0062
(0.6696) (0.1378) (0.0649) (0.0042) (0.0002) (6.9e-05)
Jeffrey’s prior 0.5204 0.2907 0.1978 0.0360 0.0096 0.0061
(0.4204) (0.1341) (0.0613) (0.0037) (0.0001) (6.8e-05)
MDIP 0.5639 0.2749 0.1913 0.0573 0.0114 0.0068
(0.4732) (0.1178) (0.0570) (0.0063) (0.0002) (8.4e-05)
Singpurwalla’s prior 0.4687 0.2898 0.1780 0.0248 0.0091 0.0058
(0.3387) (0.1327) (0.0494) (0.0015) (0.0001) (6.2e-05)
Elicited prior 0.2146 0.1822 0.1528 0.0040 0.0045 0.0040
(0.0680) (0.0538) (0.0367) (2.5e-05) (3.2e-05) (2.5e-05)

TABLE II Average bias of the estimates of c and λ and their associated MSEs (in parenthesis) for the different methods with 𝐜=5 and 𝛌=2

Method c=5 λ=2
n=10 n=50 n=100 n=10 n=50 n=100
MLE 2.8437 0.9839 0.6354 1.0200 0.4439 0.3116
(15.2918) (1.5513) (0.6406) (1.8551) (0.3017) (0.1536)
Gamma prior 2.5298 1.0271 0.6519 1.2956 0.5039 0.3304
(10.2579) (1.6401) (0.6689) (2.8684) (0.4233) (0.1806)
Jeffrey’s prior 2.2830 1.0219 0.6508 1.1710 0.5007 0.3308
(8.5106) (1.6199) (0.6672) (2.4183) (0.4134) (0.1805)
MDIP 1.7570 0.8304 0.6015 0.6181 0.4049 0.3135
(6.6596) (1.1011) (0.5643) (0.6339) (0.2464) (0.1575)
Singpurwalla’s prior 2.5279 0.9786 0.6323 0.7457 0.4248 0.3042
(12.1530) (1.5391) (0.6358) (0.8266) (0.2741) (0.1458)
Elicited prior 0.5518 0.4103 0.3436 0.1301 0.1467 0.1445
(0.4509) (0.2713) (0.1920) (0.0255) (0.0326) (0.0324)

TABLE III Average bias of the estimates of c and λ and their associated MSEs (in parenthesis) for the different methods with 𝐜=2 and 𝛌=1

Method c=2 λ=1
n=10 n=50 n=100 n=10 n=50 n=100
MLE 1.2171 0.4331 0.2967 0.4908 0.2139 0.1575
(2.9378) (0.3166) (0.1414) (0.4166) (0.0720) (0.0393)
Gamma prior 1.0368 0.4557 0.3030 0.5919 0.24807 0.1659
(1.8688) (0.3422) (0.1449) (0.5873) (0.1055) (0.0460)
Jeffrey’s prior 0.9578 0.4536 0.3020 0.5465 0.2466 0.1661
(1.6174) (0.3380) (0.1443) (0.5124) (0.1039) (0.0461)
MDIP 0.7924 0.3410 0.2636 0.2828 0.1757 0.1454
(1.3917) (0.2110) (0.1102) (0.1328) (0.0482) (0.0331)
Singpurwalla’s prior 1.0689 0.4324 0.2965 0.3593 0.2054 0.1543
(2.2494) (0.3157) (0.1415) (0.1916) (0.0653) (0.0374)
Elicited prior 0.2306 0.1922 0.1674 0.1071 0.0932 0.0858
(0.0664) (0.0539) (0.0432) (0.0171) (0.0130) (0.0112)

TABLE IV Coverage probabilities for the parameters 𝐜 and 𝛌

Method c=3 λ=0.02
n=10 n=50 n=100 n=10 n=50 n=100
MLE 0.95 0.95 0.95 0.73 0.87 0.91
Gamma prior 0.91 0.92 0.92 0.91 0.92 0.92
Jeffrey’s prior 0.97 0.96 0.96 0.97 0.96 0.96
MDIP 0.94 0.95 0.96 0.93 0.95 0.96
Singpurwalla’s prior 0.97 0.96 0.98 0.98 0.96 0.98
Elicited prior 1.00 0.98 0.98 1.00 0.99 0.98

TABLE V Coverage probabilities for the parameters c and λ

Method | c=5: n=10 | n=50 | n=100 | λ=2: n=10 | n=50 | n=100
MLE | 0.90 | 0.97 | 0.96 | 0.82 | 0.92 | 0.95
Gamma prior | 0.89 | 0.96 | 0.95 | 0.89 | 0.96 | 0.94
Jeffrey’s prior | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95
MDIP | 0.97 | 0.97 | 0.96 | 0.98 | 0.97 | 0.95
Singpurwalla’s prior | 0.94 | 0.95 | 0.96 | 0.92 | 0.93 | 0.95
Elicited prior | 1.00 | 0.99 | 0.99 | 1.00 | 1.00 | 0.99

TABLE VI Coverage probabilities for the parameters c and λ

Method | c=2: n=10 | n=50 | n=100 | λ=1: n=10 | n=50 | n=100
MLE | 0.92 | 0.96 | 0.95 | 0.84 | 0.93 | 0.94
Gamma prior | 0.90 | 0.95 | 0.94 | 0.92 | 0.93 | 0.94
Jeffrey’s prior | 0.96 | 0.94 | 0.94 | 0.95 | 0.94 | 0.94
MDIP | 0.97 | 0.96 | 0.95 | 0.98 | 0.96 | 0.95
Singpurwalla’s prior | 0.94 | 0.94 | 0.94 | 0.92 | 0.94 | 0.94
Elicited prior | 1.00 | 0.99 | 0.99 | 1.00 | 0.99 | 0.98
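Coverage probabilities such as those above are estimated by Monte Carlo: repeatedly draw a sample from the Gompertz distribution, compute an interval for each sample, and record the fraction of intervals that contain the true value. The sketch below illustrates this for the MLE case only, assuming the common parameterization f(t) = λ e^{ct} exp(−(λ/c)(e^{ct} − 1)) for density (1); it is written in Python (the paper's computations used R), and the standard errors come from a BFGS curvature approximation on the log scale, which is only a rough stand-in for the paper's interval construction.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2018)

def rgompertz(n, c, lam):
    # Inverse-CDF sampling from F(t) = 1 - exp(-(lam/c)(e^{ct} - 1))
    u = rng.uniform(size=n)
    return np.log1p(-(c / lam) * np.log1p(-u)) / c

def neg_loglik(theta, t):
    c, lam = np.exp(theta)  # optimize on the log scale to keep c, lam > 0
    return -(np.sum(np.log(lam) + c * t) - (lam / c) * np.sum(np.expm1(c * t)))

def coverage(c=2.0, lam=1.0, n=50, reps=200, level=0.95):
    z = norm.ppf(0.5 + level / 2)
    hits = 0
    for _ in range(reps):
        t = rgompertz(n, c, lam)
        res = minimize(neg_loglik, x0=np.log([1.0, 0.5]), args=(t,), method="BFGS")
        se = np.sqrt(np.diag(res.hess_inv))  # rough SEs of (log c, log lam)
        hits += abs(res.x[0] - np.log(c)) <= z * se[0]  # interval for log(c) covers truth?
    return hits / reps

cov = coverage()
print(cov)  # Monte Carlo coverage estimate for c
```

With 200 replications the estimate itself has Monte Carlo error of roughly ±0.03, so exact agreement with the tabulated values should not be expected.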


In this section, we use a real data set to illustrate the proposed estimation methods discussed in the previous sections.

Let us consider the following data set provided in King et al. (1979):

112, 68, 84, 109, 153, 143, 60, 70, 98, 164, 63, 63, 77, 91, 91, 66, 70, 77, 63, 66, 66, 94, 101, 105, 108, 112, 115, 126, 161, 178.

These data represent the number of tumor-days for 30 rats fed an unsaturated diet. Chen (1997) and Asgharzadeh and Abdi (2011) used the Gompertz distribution for this data set to obtain exact confidence intervals and joint confidence regions for the parameters based on two different statistical analyses. We also fit the Gompertz distribution with density (1) to the data and compare the performance of the methods discussed in this paper.
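The maximum likelihood fit to these data can be reproduced numerically. The sketch below assumes the common parameterization f(t) = λ e^{ct} exp(−(λ/c)(e^{ct} − 1)) for density (1) and uses Python/SciPy (the paper's computations used R); the estimates should be close to the MLE row of Table VII.

```python
import numpy as np
from scipy.optimize import minimize

# Tumor-day data for 30 rats (King et al. 1979)
t = np.array([112, 68, 84, 109, 153, 143, 60, 70, 98, 164, 63, 63, 77, 91, 91,
              66, 70, 77, 63, 66, 66, 94, 101, 105, 108, 112, 115, 126, 161, 178],
             dtype=float)

def neg_loglik(theta):
    c, lam = np.exp(theta)  # log-parameterization keeps c, lam > 0
    return -(np.sum(np.log(lam) + c * t) - (lam / c) * np.sum(np.expm1(c * t)))

res = minimize(neg_loglik, x0=np.log([0.02, 0.002]), method="Nelder-Mead")
c_hat, lam_hat = np.exp(res.x)
print(c_hat, lam_hat)
```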

For a Bayesian analysis, we assume independent gamma prior distributions for the parameters c and λ, with hyperparameter values a=b=α=β=0.01. The Bayes estimates cannot be obtained in closed form; therefore, we use an MCMC procedure to compute them and to construct credible intervals. Using the software R, we simulated 50,000 MCMC samples (with 5,000 burn-in samples) from the joint posterior distribution. The convergence of the chains was monitored through trace plots of the simulated samples. The maximum likelihood estimates with 95% confidence intervals and the Bayes estimates with 95% credible intervals for the parameters c and λ of the Gompertz distribution are given in Table VII. The results show that, among the Bayes estimators, the one based on Singpurwalla’s prior performs best in terms of credible intervals for both parameters.
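The MCMC step for the independent-gamma-prior case can be sketched with a random-walk Metropolis sampler on the log-parameters. This is an illustrative Python version rather than the paper's R implementation: the Gompertz density f(t) = λ e^{ct} exp(−(λ/c)(e^{ct} − 1)) is assumed for density (1), the Gamma(0.01, 0.01) priors and 5,000 burn-in draws are carried over from the text, and the proposal step size 0.1 is an arbitrary tuning choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tumor-day data for 30 rats (King et al. 1979)
t = np.array([112, 68, 84, 109, 153, 143, 60, 70, 98, 164, 63, 63, 77, 91, 91,
              66, 70, 77, 63, 66, 66, 94, 101, 105, 108, 112, 115, 126, 161, 178],
             dtype=float)

a = b = alpha = beta = 0.01  # hyperparameters from the text

def log_post(theta):
    c, lam = np.exp(theta)  # sample on the log scale to keep c, lam > 0
    loglik = np.sum(np.log(lam) + c * t) - (lam / c) * np.sum(np.expm1(c * t))
    logprior = (a - 1) * np.log(c) - b * c + (alpha - 1) * np.log(lam) - beta * lam
    return loglik + logprior + theta.sum()  # theta.sum() is the log-Jacobian

theta = np.log([0.02, 0.002])  # start near the MLE
lp = log_post(theta)
draws = []
for i in range(15000):
    prop = theta + 0.1 * rng.standard_normal(2)  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if i >= 5000:                                # discard burn-in
        draws.append(np.exp(theta))

post_mean = np.array(draws).mean(axis=0)
print(post_mean)  # posterior means of (c, lam)
```

Fewer draws are used here than in the paper, purely to keep the sketch quick; the posterior means should nonetheless be close to the gamma-prior row of Table VII.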

The marginal posterior distributions for the parameters c and λ considering the proposed prior distributions are shown in Figures 2 and 3, respectively.

TABLE VII Estimators and 95% confidence/credible intervals of c and λ of the Gompertz distribution for different estimation methods.

Method | ĉ | 95% CI | λ̂ | 95% CI
MLE | 0.0241 | (0.0160, 0.0322) | 0.0016 | (0.0002, 0.0031)
Gamma prior | 0.0232 | (0.0150, 0.0312) | 0.0019 | (0.0007, 0.0038)
Jeffrey’s prior | 0.0234 | (0.0152, 0.0318) | 0.0018 | (0.0007, 0.0038)
MDIP | 0.0226 | (0.0147, 0.0304) | 0.0020 | (0.0008, 0.0045)
Singpurwalla’s prior | 0.0242 | (0.0165, 0.0317) | 0.0016 | (0.0006, 0.0033)

Figure 2 The posterior densities for the parameter c of the Gompertz distribution fitted by the data. 

Figure 3 The posterior densities for the parameter λ of the Gompertz distribution fitted by the data. 


In this paper, we have considered estimation of the parameters of the Gompertz distribution using frequentist and Bayesian methods. For the Bayesian methods, we have considered objective priors (Jeffreys and MDIP), independent gamma priors, Singpurwalla’s prior and the elicited prior. We have performed an extensive simulation study to compare these methods. Regarding bias, MSE and coverage probability, we observe that in general the MDIP provides the best results for both parameters, and in some cases the results under the MDIP and Jeffreys priors are quite similar. The real data application shows the same pattern. It is worth remembering that both priors result from formal procedures for representing absence of information, that is, they are noninformative. The commonly used assumption of independent gamma priors and the prior proposed by Singpurwalla do not yield results as good as those of the objective priors. Independent gamma priors are generally used when objective priors cannot be obtained or lead to improper posterior distributions, and mainly for computational convenience. The elicited prior produces much smaller bias and MSE than the other assumed priors, although it also yields over-coverage relative to its counterparts. Hence, we conclude that, in the absence of prior information, the MDIP prior is the most suitable for Bayesian estimation of the two-parameter Gompertz distribution. On the other hand, when expert information is available, the elicited prior yields the best estimators.


1 ASGHARZADEH A AND ABDI M. 2011. Exact Confidence Intervals and Joint Confidence Regions for the Parameters of the Gompertz Distribution based on Records. Pak J Statist 271: 55-64. [ Links ]

2 BERNARDO JM. 1979. Reference Posterior Distributions for Bayesian Inference. J R Stat Soc Series B Stat Methodol 41: 113-147. [ Links ]

3 CASELLA G AND BERGER RL. 2001. Statistical Inference. Cengage Learning, 660 p. [ Links ]

4 CHEN Z. 1997. Parameter estimation of the Gompertz population. Biom J 39: 117-124. [ Links ]

5 ELANDT-JOHNSON RC AND JOHNSON NL. 1979. Survival Models and Data Analysis. J Wiley & Sons: NY, 457 p. [ Links ]

6 FRANSES PH. 1994. Fitting a Gompertz curve. J Oper Res Soc 45: 109-113. [ Links ]

7 GARG M, RAO B AND REDMOND C. 1970. Maximum-likelihood estimation of the parameters of the Gompertz survival function. J R Stat Soc Ser C Appl Stat 19: 152-159. [ Links ]

8 GOMPERTZ B. 1825. On the nature of the function expressive of the law of human mortality and on a new mode of determining the value of life contingencies. Philos Trans R Soc Lond 115: 513-583. [ Links ]

9 ISMAIL AA. 2010. Bayes estimation of Gompertz distribution parameters and acceleration factor under partially accelerated life tests with type-I censoring. J Stat Comput Simul 80: 1253-1264. [ Links ]

10 ISMAIL AA. 2011. Planning step-stress life tests with type-II censored Data. Sci Res Essays 6: 4021-4028. [ Links ]

11 JAHEEN ZF. 2003a. Prediction of Progressive Censored Data from the Gompertz Model. Commun Stat Simul Comput 32: 663-676. [ Links ]

12 JAHEEN ZF. 2003b. A Bayesian analysis of record statistics from the Gompertz model. Appl Math Comput 145: 307-320. [ Links ]

13 JEFFREYS H. 1967. Theory of Probability. Oxford University Press: London, 470 p. [ Links ]

14 KIANI K, ARASAN J AND MIDI H. 2012. Interval estimations for parameters of Gompertz model with time-dependent covariate and right censored data. Sains Malays 414: 471-480. [ Links ]

15 KING M, BAILEY DM, GIBSON DG, PITHA JV AND MCCAY PB. 1979. Incidence and growth of mammary tumors induced by 7,12-dimethylbenz(a)anthracene as related to the dietary content of fat and antioxidant. J Natl Cancer Inst 63: 656-664. [ Links ]

16 LAWLESS JF. 2003. Statistical Models and Methods for Lifetime Data. J Wiley & Sons: New Jersey, 664 p. [ Links ]

17 LENART A. 2014. The moments of the Gompertz distribution and maximum likelihood estimation of its parameters. Scand Actuar J 3: 255-277. [ Links ]

18 LENART A AND MISSOV TI. 2016. Goodness-of-fit tests for the Gompertz distribution. Commun Stat Theory Methods 45: 2920-2937. [ Links ]

19 MAKANY RA. 1991. Theoretical basis of Gompertz’s curve. Biom J 33: 121-128. [ Links ]

20 MEIJER CS. 1936. Über Whittakersche bzw. Besselsche Funktionen und deren Produkte. Nieuw Archief voor Wiskunde 18: 10-39. [ Links ]

21 MILGRAM M. 1985. The generalized integro-exponential function. Math Comp 44: 443-458. [ Links ]

22 RAO BR AND DAMARAJU CV. 1992. New better than used and other concepts for a class of life distribution. Biom J 34: 919-935. [ Links ]

23 READ CB. 1983. Gompertz Distribution. Encyclopedia of Statistical Sciences. J Wiley & Sons, NY. [ Links ]

24 SHANUBHOGUE A AND JAIN NR. 2013. Minimum Variance Unbiased Estimation in the Gompertz Distribution under Progressive Type II Censored Data with Binomial Removals. Int Sch Res Notices 2013: 1-7. [ Links ]

25 SINGH N, YADAV KK AND RAJASEKHARAN R. 2016. ZAP1-mediated modulation of triacylglycerol levels in yeast by transcriptional control of mitochondrial fatty acid biosynthesis. Mol Microbiol 1001: 55-75. [ Links ]

26 SINGPURWALLA ND. 1988. An interactive PC-based procedure for reliability assessment incorporating expert opinion and survival data. J Am Stat Assoc 83: 43-51. [ Links ]

27 TIBSHIRANI R. 1989. Noninformative Priors for One Parameter of Many. Biometrika 76: 604-608. [ Links ]

28 WILLEKENS F. 2002. Gompertz in context: the Gompertz and related distributions. In: Tabeau E, Van den Berg JA and Heathcote C (Eds), Forecasting mortality in developed countries - insights from a statistical demographic and epidemiological perspective European studies of population. Dordrecht: Kluwer Academic Publishers 9: 105-126. [ Links ]

29 WU JW, HUNG WL AND TSAI CH. 2004. Estimation of parameters of the Gompertz distribution using the least squares method. Appl Math Comput 158: 133-147. [ Links ]

30 WU JW AND LEE WC. 1999. Characterization of the mixtures of Gompertz distributions by conditional expectation of order statistics. Biom J 41: 371-381. [ Links ]

31 WU JW AND TSENG HC. 2006. Statistical inference about the shape parameter of the Weibull distribution by upper record values. Stat Pap 48: 95-129. [ Links ]

32 WU SJ, CHANG CT AND TSAI TR. 2003. Point and interval estimations for the Gompertz distribution under progressive type-II censoring. Metron LXI: 403-418. [ Links ]

33 ZELLNER A. 1977. Maximal Data Information Prior Distributions. In: Aykac A and Brumat C (Eds), New Methods in the applications of Bayesian Methods. North-Holland Amsterdam, p. 211-232. [ Links ]

34 ZELLNER A. 1984. Maximal Data Information Prior Distributions. Basic Issues in Econometrics, U. of Chicago Press. [ Links ]

35 ZELLNER A. 1990. Bayesian Methods and Entropy in Economics and Econometrics. In: Grandy Junior WT and Schick LH (Eds), Maximum Entropy and Bayesian Methods Dordrecht Netherlands: Kluwer Academic Publishers, p. 17-31. [ Links ]

36 ZELLNER A AND MIN C. 1993. Bayesian analysis model selection and prediction. In: Grandy Junior WT and Milonni PW (Eds), Physics and Probability: Essays in honour of Edwin T Jaynes, Cambridge University Press, Cambridge, UK. [ Links ]

Received: December 29, 2017; Accepted: March 26, 2018

Correspondence to: Fernando Antonio Moala E-mail:

Creative Commons License This is an open-access article distributed under the terms of the Creative Commons Attribution License