# ABSTRACT

This paper addresses different methods of estimation of the unknown parameters of the two-parameter unit-Logistic distribution from the frequentist point of view. We briefly describe several approaches, namely maximum likelihood estimators, percentile-based estimators, least squares estimators, maximum product of spacings estimators, and methods of minimum distances: Cramér-von Mises, Anderson-Darling and four variants of Anderson-Darling. Monte Carlo simulations are performed to compare the performance of the proposed estimation methods for both small and large samples. The estimators are compared in terms of their relative bias, root mean squared error, the average absolute difference between the theoretical and empirical estimates of the distribution function, and the maximum absolute difference between the theoretical and empirical distribution functions, computed on simulated samples. Also, for each method of estimation, we consider interval estimation via the Bootstrap confidence interval and calculate the coverage probability and the average width of the Bootstrap confidence intervals. Finally, two real data sets are analyzed for illustrative purposes.

Keywords:
Unit-Logistic distribution; Monte Carlo simulations; estimation methods; parametric Bootstrap

# 1 INTRODUCTION

Tadikamalla & Johnson [30] introduced a new probability distribution with support on (0, 1), which they named the L_B distribution, by using transformations of Logistic variables. The distribution is obtained as follows:

$$X = g^{-1}\left(\frac{Y - \gamma}{\delta}\right), \tag{1}$$

where Y follows the standard Logistic distribution, g(·) is a suitable monotone function and γ ∈ ℝ, δ > 0 are parameters. The choice of g(·) determines the support of the distribution; hence, following [30], by taking:

$$g(X) = \log\left(\frac{X}{1-X}\right), \tag{2}$$

we obtain the L_B distribution, hereafter referred to as the unit-Logistic distribution, with probability density function (PDF) given by:

$$f(x \mid \gamma, \delta) = \frac{\delta\, e^{\gamma}\, x^{\delta-1} (1-x)^{\delta-1}}{\left[x^{\delta} e^{\gamma} + (1-x)^{\delta}\right]^{2}}, \quad 0 < x < 1. \tag{3}$$

In spite of its versatility, this distribution did not receive much attention in the literature. Recently, however, its basic properties and a regression analysis were studied by [5]. The authors introduced an alternative parametrization in which one parameter is the median. Following this parametrization, they defined the PDF as:

$$f(x \mid \mu, \beta) = \frac{\beta\, \mu^{\beta} (1-\mu)^{\beta}\, x^{\beta-1} (1-x)^{\beta-1}}{\left[(1-\mu)^{\beta} x^{\beta} + \mu^{\beta} (1-x)^{\beta}\right]^{2}}, \quad 0 < x < 1, \tag{4}$$

where 0 < μ < 1 is the median of X and β > 0 is the shape parameter. The corresponding cumulative distribution function and quantile function are written respectively as:

$$F(x \mid \mu, \beta) = \left[1 + \left(\frac{\mu(1-x)}{x(1-\mu)}\right)^{\beta}\right]^{-1}, \quad 0 < x < 1, \tag{5}$$

and

$$Q(p \mid \mu, \beta) = \frac{\mu\, p^{1/\beta}}{(1-\mu)(1-p)^{1/\beta} + \mu\, p^{1/\beta}}, \quad 0 < p < 1. \tag{6}$$

Note that setting μ = 0.5 and β = 1 in (4) reduces the PDF of the unit-Logistic distribution to that of the standard uniform distribution. As can be seen in Figure 1, the unit-Logistic density is uni-modal (or uni-antimodal), increasing, decreasing, or constant, depending on the values of the parameters.
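As a quick sketch, the three functions (4)-(6) are straightforward to implement. The snippet below is our illustration (function names are not from the paper) and doubles as a check of the median interpretation of μ and of the uniform special case:

```python
import numpy as np

def ul_pdf(x, mu, beta):
    """PDF (4) of the unit-Logistic distribution, 0 < x < 1."""
    num = beta * (mu * (1 - mu)) ** beta * (x * (1 - x)) ** (beta - 1)
    den = ((1 - mu) ** beta * x ** beta + mu ** beta * (1 - x) ** beta) ** 2
    return num / den

def ul_cdf(x, mu, beta):
    """CDF (5): F(x) = [1 + (mu(1-x) / (x(1-mu)))^beta]^(-1)."""
    return 1.0 / (1.0 + (mu * (1 - x) / (x * (1 - mu))) ** beta)

def ul_quantile(p, mu, beta):
    """Quantile function (6); inverts the CDF."""
    a = mu * p ** (1.0 / beta)
    return a / ((1 - mu) * (1 - p) ** (1.0 / beta) + a)

# mu is the median: F(mu | mu, beta) = 1/2, and setting mu = 0.5,
# beta = 1 yields the standard uniform density f(x) = 1.
```

The quantile function also gives an immediate inverse-transform sampler: feed it standard uniform draws.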

Figure 1
Behavior of the probability density function of unit-Logistic distribution for some values of μ and β.

The objective of this paper is to introduce different methods of estimation for the unknown parameters that index the unit-Logistic distribution and to study the behavior of these estimators for different sample sizes and parameter values. In particular, we compare the maximum likelihood estimators, maximum product of spacings estimators, percentile-based estimators, least squares estimators, weighted least squares estimators, Cramér-von Mises estimators, and Anderson-Darling estimators together with four of their variants. Since it is difficult to compare the performance of the different estimators theoretically, we perform extensive simulations and compare the estimation methods in terms of relative bias, root mean squared error, the average absolute difference between the theoretical and empirical estimates of the distribution function, and the maximum absolute difference between the theoretical and empirical distribution functions. Also, for each method of estimation, we

consider the interval estimation using the Bootstrap confidence interval [12] and calculate the coverage probability and the average width of the confidence interval.

The uniqueness of this study comes from the fact that, thus far, no attempt has been made to compare all these estimators for the two-parameter unit-Logistic distribution. Comprehensive comparisons of estimation methods for other distributions have been performed in the literature: see [13] for the generalized Exponential distribution, [17] for the generalized Rayleigh distribution, [31] for the Weibull distribution, [22] for the weighted Lindley distribution, [10] for the Marshall-Olkin extended Lindley distribution, [7] for the weighted Exponential distribution, [21] for the Marshall-Olkin extended Exponential distribution, [9] for the Kumaraswamy distribution, [8] for the exponentiated Chen distribution, [27] for the Poisson-exponential distribution, [24] for the alpha logarithmic transformed Weibull distribution, and [23] for the power inverse Lindley distribution.

The final motivation of the paper is to show how different frequentist estimators of this distribution perform for different sample sizes and different parameter values and to develop a guideline for choosing the best estimation method for the unit-logistic distribution, which we think would be of interest to applied statisticians.

The paper is organized as follows. In Section 2 we discuss the eleven estimation methods considered in this paper. The performance of the proposed estimation procedures is studied through a Monte Carlo simulation presented in Section 3. In Section 4, the methodology developed in this manuscript and the usefulness of the unit-Logistic distribution are illustrated using two real data examples. Some concluding remarks are presented in Section 5.

# 2 METHODS OF ESTIMATION

In this section, we describe seven methods, plus four variants of the Anderson-Darling (AD) statistic, for estimating the parameters μ and β of the unit-Logistic distribution. For all methods, it is assumed that x = (x₁, ..., xₙ) is a random sample of size n from the unit-Logistic distribution with PDF (4) and unknown parameters μ and β. Also, let x₍₁₎ < ... < x₍ₙ₎ denote the corresponding order statistics.

## 2.1 Method of Maximum Likelihood

Undoubtedly the method of maximum likelihood is the most popular method in statistical inference, mainly because of its many appealing properties. For instance, the maximum likelihood estimates are asymptotically unbiased, efficient, consistent, invariant under parameter transformation and asymptotically normally distributed (see, e.g., [18], [25], [28]).

The log-likelihood function of the unit-Logistic distribution based on the random sample x = (x₁, ..., xₙ) can be written as:

$$\ell(\mu, \beta \mid \mathbf{x}) = n \log \beta + n\beta \log(1-\mu) + n\beta \log \mu + (\beta-1) \sum_{i=1}^{n} \log x_i + (\beta-1) \sum_{i=1}^{n} \log(1-x_i) - 2 \sum_{i=1}^{n} \log\left[(1-\mu)^{\beta} x_i^{\beta} + \mu^{\beta} (1-x_i)^{\beta}\right]. \tag{7}$$

The maximum likelihood estimates μ^MLE and β^MLE of the parameters μ and β, respectively, can be obtained by maximizing (7), or equivalently solving the following normal equations:

$$\frac{\partial \ell}{\partial \mu} = \frac{n\beta}{\mu} - \frac{n\beta}{1-\mu} + 2\beta \sum_{i=1}^{n} \frac{(1-\mu)^{\beta-1} x_i^{\beta} - \mu^{\beta-1} (1-x_i)^{\beta}}{(1-\mu)^{\beta} x_i^{\beta} + \mu^{\beta} (1-x_i)^{\beta}} = 0, \tag{8}$$

$$\frac{\partial \ell}{\partial \beta} = \frac{n}{\beta} + n \log(1-\mu) + n \log \mu + \sum_{i=1}^{n} \log x_i + \sum_{i=1}^{n} \log(1-x_i) - 2 \sum_{i=1}^{n} \frac{(1-\mu)^{\beta} x_i^{\beta} \left[\log(1-\mu) + \log x_i\right] + \mu^{\beta} (1-x_i)^{\beta} \left[\log \mu + \log(1-x_i)\right]}{(1-\mu)^{\beta} x_i^{\beta} + \mu^{\beta} (1-x_i)^{\beta}} = 0. \tag{9}$$

Confidence intervals can be obtained from the large-sample distribution of the MLEs, which is asymptotically normal with covariance matrix given by the inverse of the Fisher information matrix, since the regularity conditions are satisfied.
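In practice the normal equations need not be solved directly; maximizing (7) numerically is simpler. The sketch below is our illustration (the paper itself uses Ox's MaxBFGS routine, not SciPy): it simulates a sample by the inverse-transform method and fits by L-BFGS-B.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, x):
    """Negative of the log-likelihood (7)."""
    mu, beta = theta
    n = x.size
    core = (1 - mu) ** beta * x ** beta + mu ** beta * (1 - x) ** beta
    ll = (n * np.log(beta) + n * beta * (np.log(mu) + np.log(1 - mu))
          + (beta - 1) * np.sum(np.log(x) + np.log(1 - x))
          - 2 * np.sum(np.log(core)))
    return -ll

# Simulate a sample via the quantile function (6) and fit.
rng = np.random.default_rng(2018)
mu0, beta0 = 0.3, 2.0
p = rng.uniform(size=5000)
x = mu0 * p**(1/beta0) / ((1 - mu0) * (1 - p)**(1/beta0) + mu0 * p**(1/beta0))
res = minimize(neg_loglik, x0=np.array([0.5, 1.0]), args=(x,),
               method="L-BFGS-B",
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
mu_mle, beta_mle = res.x
```

With n = 5000 the estimates land close to the true (0.3, 2.0); the bounds keep the optimizer inside the parameter space 0 < μ < 1, β > 0.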

## 2.2 Method of Maximum Product of Spacings

The maximum product of spacings (MPS) method was introduced by Cheng & Amin [2, 3] as an alternative to maximum likelihood for estimating the parameters of continuous univariate distributions. Ranneby [26] independently derived the same method as an approximation to the Kullback-Leibler measure of information.

The uniform spacings of a random sample from the unit-Logistic distribution are defined as:

$$D_i(\mu, \beta) = F(x_{(i)} \mid \mu, \beta) - F(x_{(i-1)} \mid \mu, \beta), \quad i = 1, \ldots, n+1,$$

where $F(x_{(0)} \mid \mu, \beta) = 0$ and $F(x_{(n+1)} \mid \mu, \beta) = 1$.

Clearly, D₁(μ, β) + D₂(μ, β) + ... + D₍ₙ₊₁₎(μ, β) = 1.

Following [2, 3], the MPS estimates μ̂MPS and β̂MPS are the values of μ and β that maximize the geometric mean of the spacings:

$$G(\mu, \beta \mid \mathbf{x}) = \left[\prod_{i=1}^{n+1} D_i(\mu, \beta)\right]^{\frac{1}{n+1}}, \tag{10}$$

or, equivalently, that maximize its logarithm:

$$H(\mu, \beta \mid \mathbf{x}) = \frac{1}{n+1} \sum_{i=1}^{n+1} \log D_i(\mu, \beta). \tag{11}$$

The estimators μ̂MPS and β̂MPS of the parameters μ and β can also be obtained by solving the nonlinear equations:

$$\frac{\partial H}{\partial \mu} = \frac{1}{n+1} \sum_{i=1}^{n+1} \frac{1}{D_i(\mu, \beta)} \left[\Delta_1(x_{(i)} \mid \mu, \beta) - \Delta_1(x_{(i-1)} \mid \mu, \beta)\right] = 0,$$

$$\frac{\partial H}{\partial \beta} = \frac{1}{n+1} \sum_{i=1}^{n+1} \frac{1}{D_i(\mu, \beta)} \left[\Delta_2(x_{(i)} \mid \mu, \beta) - \Delta_2(x_{(i-1)} \mid \mu, \beta)\right] = 0,$$

where

$$\Delta_1(x_{(i)} \mid \mu, \beta) = -\frac{\beta\, \mu^{\beta-1} (1-x_{(i)})^{\beta}}{x_{(i)}^{\beta} (1-\mu)^{\beta+1} \left[1 + \left(\frac{\mu(1-x_{(i)})}{x_{(i)}(1-\mu)}\right)^{\beta}\right]^{2}} \tag{12}$$

and

$$\Delta_2(x_{(i)} \mid \mu, \beta) = -\left[\log \mu + \log(1-x_{(i)}) - \log x_{(i)} - \log(1-\mu)\right] \frac{\left(\frac{\mu(1-x_{(i)})}{x_{(i)}(1-\mu)}\right)^{\beta}}{\left[1 + \left(\frac{\mu(1-x_{(i)})}{x_{(i)}(1-\mu)}\right)^{\beta}\right]^{2}}. \tag{13}$$

It is noteworthy that the MPS estimators are asymptotically as efficient as the ML estimators and are consistent under more general conditions [3].
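As a numerical sketch of the MPS method (our code, not the paper's), one can maximize H in (11) directly rather than solve the nonlinear equations:

```python
import numpy as np
from scipy.optimize import minimize

def ul_cdf(x, mu, beta):
    """CDF (5) of the unit-Logistic distribution."""
    return 1.0 / (1.0 + (mu * (1 - x) / (x * (1 - mu))) ** beta)

def neg_H(theta, xs):
    """Negative mean log-spacing (11); xs must be sorted."""
    mu, beta = theta
    if not (0.0 < mu < 1.0) or beta <= 0.0:
        return np.inf                   # outside the parameter space
    F = np.concatenate(([0.0], ul_cdf(xs, mu, beta), [1.0]))
    D = np.diff(F)                      # spacings D_1, ..., D_{n+1}
    if np.any(D <= 0):
        return np.inf                   # guard against ties/underflow
    return -np.mean(np.log(D))

rng = np.random.default_rng(7)
mu0, beta0 = 0.6, 1.5
p = rng.uniform(size=2000)
xs = np.sort(mu0 * p**(1/beta0)
             / ((1 - mu0) * (1 - p)**(1/beta0) + mu0 * p**(1/beta0)))
res = minimize(neg_H, x0=np.array([0.5, 1.0]), args=(xs,),
               method="Nelder-Mead")
mu_mps, beta_mps = res.x
```

Returning infinity outside the admissible region lets the derivative-free Nelder-Mead search stay inside the parameter space without explicit bounds.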

## 2.3 Method of Percentiles

If the data come from a distribution whose distribution function has a closed form, then the unknown parameters can be estimated by fitting a straight line to the theoretical points obtained from the distribution function and the sample percentile points. This method was developed by Kao [15, 16] to estimate the parameters of the Weibull distribution.

Since the unit-Logistic distribution has the explicit cumulative distribution function (5), it is feasible to use the same idea to derive estimators for μ and β. If pᵢ denotes some estimate of F(x₍ᵢ₎ | μ, β), then the percentile estimates μ̂PCE and β̂PCE can be obtained by minimizing, with respect to μ and β, the nonlinear function:

$$P(\mu, \beta \mid \mathbf{x}) = \sum_{i=1}^{n} \left[x_{(i)} - \frac{\mu\, p_i^{1/\beta}}{(1-\mu)(1-p_i)^{1/\beta} + \mu\, p_i^{1/\beta}}\right]^{2}, \tag{14}$$

where pᵢ = i/(n+1) is an unbiased estimator of F(x₍ᵢ₎ | μ, β). It should be mentioned that there are several possible choices for pᵢ; interested readers may refer to [20].
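A minimal sketch of the percentile method (our code; the plotting-position choice pᵢ = i/(n+1) follows the text):

```python
import numpy as np
from scipy.optimize import minimize

def ul_quantile(p, mu, beta):
    """Quantile function (6) of the unit-Logistic distribution."""
    a = mu * p ** (1.0 / beta)
    return a / ((1 - mu) * (1 - p) ** (1.0 / beta) + a)

def pce_objective(theta, xs):
    """Sum of squared distances (14) between the order statistics
    and the model quantiles at p_i = i / (n + 1); xs must be sorted."""
    mu, beta = theta
    if not (0.0 < mu < 1.0) or beta <= 0.0:
        return np.inf
    n = xs.size
    p = np.arange(1, n + 1) / (n + 1)
    return np.sum((xs - ul_quantile(p, mu, beta)) ** 2)

rng = np.random.default_rng(11)
mu0, beta0 = 0.4, 2.0
u = rng.uniform(size=2000)
xs = np.sort(ul_quantile(u, mu0, beta0))
res = minimize(pce_objective, x0=np.array([0.5, 1.0]), args=(xs,),
               method="Nelder-Mead")
mu_pce, beta_pce = res.x
```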

## 2.4 Methods of Least Squares

The least squares methods were originally proposed by Swain et al. [29] to estimate the parameters of Beta distributions. Suppose that F(X₍ᵢ₎) denotes the distribution function evaluated at the order statistics of the random sample x = (x₁, ..., xₙ). A well-known result from probability theory states that F(X₍ᵢ₎) ∼ Beta(i, n − i + 1). Therefore, we have:

$$\mathbb{E}\left[F(X_{(i)})\right] = \frac{i}{n+1} \quad \text{and} \quad \mathbb{V}\left[F(X_{(i)})\right] = \frac{i(n-i+1)}{(n+1)^{2}(n+2)}; \tag{15}$$

for further details see [14]. Using these expectations and variances, we obtain two variants of the least squares method.

### 2.4.1 Ordinary Least Squares

In the case of the unit-Logistic distribution, the ordinary least squares estimates μ̂OLS and β̂OLS of the parameters μ and β can be obtained by minimizing the function:

$$S(\mu, \beta \mid \mathbf{x}) = \sum_{i=1}^{n} \left[F(x_{(i)} \mid \mu, \beta) - \frac{i}{n+1}\right]^{2} \tag{16}$$

with respect to μ and β. Alternatively, these estimates can also be obtained by solving the following nonlinear equations:

$$\sum_{i=1}^{n} \left[F(x_{(i)} \mid \mu, \beta) - \frac{i}{n+1}\right] \Delta_1(x_{(i)} \mid \mu, \beta) = 0, \qquad \sum_{i=1}^{n} \left[F(x_{(i)} \mid \mu, \beta) - \frac{i}{n+1}\right] \Delta_2(x_{(i)} \mid \mu, \beta) = 0,$$

where Δ₁(· | μ, β) and Δ₂(· | μ, β) are given by Equations (12) and (13), respectively.

### 2.4.2 Weighted Least Squares

For the unit-Logistic distribution, the weighted least squares estimates of μ and β, denoted μ̂WLS and β̂WLS respectively, are obtained by minimizing the function:

$$W(\mu, \beta \mid \mathbf{x}) = \sum_{i=1}^{n} \frac{(n+1)^{2}(n+2)}{i(n-i+1)} \left[F(x_{(i)} \mid \mu, \beta) - \frac{i}{n+1}\right]^{2} \tag{17}$$

with respect to μ and β. Equivalently, these estimates are the solution of the following nonlinear equations:

$$\sum_{i=1}^{n} \frac{(n+1)^{2}(n+2)}{i(n-i+1)} \left[F(x_{(i)} \mid \mu, \beta) - \frac{i}{n+1}\right] \Delta_1(x_{(i)} \mid \mu, \beta) = 0, \qquad \sum_{i=1}^{n} \frac{(n+1)^{2}(n+2)}{i(n-i+1)} \left[F(x_{(i)} \mid \mu, \beta) - \frac{i}{n+1}\right] \Delta_2(x_{(i)} \mid \mu, \beta) = 0,$$

where Δ₁(· | μ, β) and Δ₂(· | μ, β) are defined in Equations (12) and (13), respectively.
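Since the two criteria differ only in the weights, one objective function covers both (16) and (17). The sketch below is our illustration (names and seeds are our choices):

```python
import numpy as np
from scipy.optimize import minimize

def ul_cdf(x, mu, beta):
    """CDF (5) of the unit-Logistic distribution."""
    return 1.0 / (1.0 + (mu * (1 - x) / (x * (1 - mu))) ** beta)

def ls_objective(theta, xs, weighted=False):
    """OLS criterion (16), or WLS criterion (17) when weighted=True;
    xs must be sorted."""
    mu, beta = theta
    if not (0.0 < mu < 1.0) or beta <= 0.0:
        return np.inf
    n = xs.size
    i = np.arange(1, n + 1)
    resid = ul_cdf(xs, mu, beta) - i / (n + 1)
    if weighted:
        # weights are 1 / Var[F(X_(i))] from (15)
        w = (n + 1) ** 2 * (n + 2) / (i * (n - i + 1))
        return np.sum(w * resid ** 2)
    return np.sum(resid ** 2)

rng = np.random.default_rng(3)
mu0, beta0 = 0.2, 1.5
u = rng.uniform(size=2000)
xs = np.sort(mu0 * u**(1/beta0)
             / ((1 - mu0) * (1 - u)**(1/beta0) + mu0 * u**(1/beta0)))
ols = minimize(ls_objective, [0.5, 1.0], args=(xs, False),
               method="Nelder-Mead").x
wls = minimize(ls_objective, [0.5, 1.0], args=(xs, True),
               method="Nelder-Mead").x
```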

## 2.5 Methods of Minimum Distances

Here, we discuss methods based on the test statistics of Cramér-von Mises, Anderson-Darling and four variants of the Anderson-Darling test, whose acronyms are ADR, AD2R, AD2L and AD2. Essentially, these methods determine the parameter values that minimize the distance between the theoretical and empirical cumulative distribution functions (for further details see, e.g., [6], [19]). The expressions for each method are presented in Table 1.

Table 1
Expressions for the methods based on the minimum distances.

For illustrative purposes, we have presented only the expressions used for the estimation of the parameters for the Cramér-von Mises and Anderson-Darling methods.

### 2.5.1 Method of Cramér-von Mises

With regard to the unit-Logistic distribution, the Cramér-von Mises estimates μ̂CvM and β̂CvM are obtained by minimizing, with respect to μ and β, the function:

$$C(\mu, \beta \mid \mathbf{x}) = \frac{1}{12n} + \sum_{i=1}^{n} \left[F(x_{(i)} \mid \mu, \beta) - \frac{2i-1}{2n}\right]^{2}. \tag{18}$$

The estimates can also be obtained by solving the following nonlinear equations:

$$\sum_{i=1}^{n} \left[F(x_{(i)} \mid \mu, \beta) - \frac{2i-1}{2n}\right] \Delta_1(x_{(i)} \mid \mu, \beta) = 0, \qquad \sum_{i=1}^{n} \left[F(x_{(i)} \mid \mu, \beta) - \frac{2i-1}{2n}\right] \Delta_2(x_{(i)} \mid \mu, \beta) = 0,$$

where Δ₁(· | μ, β) and Δ₂(· | μ, β) are specified in Equations (12) and (13), respectively.

### 2.5.2 Method of Anderson-Darling

Anderson & Darling [1] developed their test as an alternative to other statistical tests for detecting departures of a sample distribution from normality. Using this test statistic, we can obtain the Anderson-Darling estimates μ̂ADE and β̂ADE by minimizing the function:

$$A(\mu, \beta \mid \mathbf{x}) = -n - \frac{1}{n} \sum_{i=1}^{n} (2i-1) \left[\log F(x_{(i)} \mid \mu, \beta) + \log \bar{F}(x_{(n+1-i)} \mid \mu, \beta)\right] \tag{19}$$

with respect to μ and β, where F̄(· | μ, β) = 1 − F(· | μ, β). Equivalently, these estimates are the solution of the following nonlinear equations:

$$\sum_{i=1}^{n} (2i-1) \left[\frac{\Delta_1(x_{(i)} \mid \mu, \beta)}{F(x_{(i)} \mid \mu, \beta)} - \frac{\Delta_1(x_{(n+1-i)} \mid \mu, \beta)}{\bar{F}(x_{(n+1-i)} \mid \mu, \beta)}\right] = 0,$$

$$\sum_{i=1}^{n} (2i-1) \left[\frac{\Delta_2(x_{(i)} \mid \mu, \beta)}{F(x_{(i)} \mid \mu, \beta)} - \frac{\Delta_2(x_{(n+1-i)} \mid \mu, \beta)}{\bar{F}(x_{(n+1-i)} \mid \mu, \beta)}\right] = 0,$$

where Δ₁(· | μ, β) and Δ₂(· | μ, β) are given by (12) and (13), respectively.
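The minimum-distance criteria (18) and (19) can likewise be minimized numerically. The sketch below is our illustration (the AD variants of Table 1 follow the same pattern with modified weights):

```python
import numpy as np
from scipy.optimize import minimize

def ul_cdf(x, mu, beta):
    """CDF (5) of the unit-Logistic distribution."""
    return 1.0 / (1.0 + (mu * (1 - x) / (x * (1 - mu))) ** beta)

def cvm_objective(theta, xs):
    """Cramer-von Mises criterion (18); xs must be sorted."""
    mu, beta = theta
    if not (0.0 < mu < 1.0) or beta <= 0.0:
        return np.inf
    n = xs.size
    i = np.arange(1, n + 1)
    return 1 / (12 * n) + np.sum(
        (ul_cdf(xs, mu, beta) - (2 * i - 1) / (2 * n)) ** 2)

def ad_objective(theta, xs):
    """Anderson-Darling criterion (19); xs must be sorted."""
    mu, beta = theta
    if not (0.0 < mu < 1.0) or beta <= 0.0:
        return np.inf
    n = xs.size
    F = ul_cdf(xs, mu, beta)
    if np.any(F <= 0.0) or np.any(F >= 1.0):
        return np.inf                   # guard against log(0)
    i = np.arange(1, n + 1)
    # F[::-1] gives F(x_(n+1-i)); np.mean supplies the 1/n factor
    return -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))

rng = np.random.default_rng(5)
mu0, beta0 = 0.8, 2.0
u = rng.uniform(size=2000)
xs = np.sort(mu0 * u**(1/beta0)
             / ((1 - mu0) * (1 - u)**(1/beta0) + mu0 * u**(1/beta0)))
cvm = minimize(cvm_objective, [0.5, 1.0], args=(xs,),
               method="Nelder-Mead").x
ad = minimize(ad_objective, [0.5, 1.0], args=(xs,),
              method="Nelder-Mead").x
```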

# 3 SIMULATION RESULTS

In this section we conduct a Monte Carlo simulation study to compare the performance of the frequentist estimators discussed in the previous sections. The methods are compared for sample sizes n ∈ {20, 50, 100, 200}. We generate M = 5,000 pseudo-random samples from the unit-Logistic distribution using the inverse transform method, with parameters μ ∈ {0.2, 0.4, 0.6, 0.8} and β ∈ {0.5, 1.5, 2.0}.

All simulations are carried out in Ox version 7.10 (see [11]), using the MaxBFGS subroutine for numerical optimization. For each estimate, we calculate the relative bias, the root mean squared error (RMSE), the average absolute difference between the theoretical and empirical estimates of the distribution function (D_abs), and the maximum absolute difference between the theoretical and empirical distribution functions (D_max). These measures are obtained using the following formulae:

$$\text{Bias}(\hat{\Theta}) = \frac{1}{M} \sum_{i=1}^{M} \frac{\hat{\Theta}_i - \Theta}{\Theta}, \tag{20}$$

$$\text{RMSE}(\hat{\Theta}) = \sqrt{\frac{1}{M} \sum_{i=1}^{M} \left(\hat{\Theta}_i - \Theta\right)^{2}}, \tag{21}$$

$$D_{abs} = \frac{1}{M \times n} \sum_{i=1}^{M} \sum_{j=1}^{n} \left|F(y_{ij} \mid \Theta) - F(y_{ij} \mid \hat{\Theta}_i)\right|, \tag{22}$$

$$D_{max} = \frac{1}{M} \sum_{i=1}^{M} \max_{j} \left|F(y_{ij} \mid \Theta) - F(y_{ij} \mid \hat{\Theta}_i)\right|, \tag{23}$$

where Θ = (μ, β). Due to space constraints, we report results only for μ ∈ {0.2, 0.8} and β ∈ {0.5, 2.0}. The results for the other combinations are summarized by their ranks in Tables 6 and 11; the full results can be obtained from the corresponding author on request.
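The four measures (20)-(23) can be reproduced on a small scale. The sketch below (our code, with a much smaller M than the paper's 5,000 to keep it fast) runs a miniature Monte Carlo for the maximum likelihood estimator:

```python
import numpy as np
from scipy.optimize import minimize

def ul_cdf(x, mu, beta):
    """CDF (5) of the unit-Logistic distribution."""
    return 1.0 / (1.0 + (mu * (1 - x) / (x * (1 - mu))) ** beta)

def neg_loglik(theta, x):
    """Negative of the log-likelihood (7)."""
    mu, beta = theta
    n = x.size
    core = (1 - mu) ** beta * x ** beta + mu ** beta * (1 - x) ** beta
    return -(n * np.log(beta) + n * beta * np.log(mu * (1 - mu))
             + (beta - 1) * np.sum(np.log(x * (1 - x)))
             - 2 * np.sum(np.log(core)))

rng = np.random.default_rng(42)
mu0, beta0, n, M = 0.2, 0.5, 100, 200
est = np.empty((M, 2))
d_abs = d_max = 0.0
for m in range(M):
    u = rng.uniform(size=n)
    x = mu0 * u**(1/beta0) / ((1 - mu0)*(1 - u)**(1/beta0) + mu0 * u**(1/beta0))
    r = minimize(neg_loglik, [0.5, 1.0], args=(x,), method="L-BFGS-B",
                 bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
    est[m] = r.x
    diff = np.abs(ul_cdf(x, mu0, beta0) - ul_cdf(x, *r.x))
    d_abs += diff.mean() / M          # measure (22)
    d_max += diff.max() / M           # measure (23)

truth = np.array([mu0, beta0])
rel_bias = (est.mean(axis=0) - truth) / truth        # measure (20)
rmse = np.sqrt(((est - truth) ** 2).mean(axis=0))    # measure (21)
```

Since each replicate's mean absolute difference cannot exceed its maximum, d_abs ≤ d_max always holds, which matches the pattern reported in the tables.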

Table 2
Simulations results for μ = 0.2 and β = 0.5.

Table 3
Simulations results for μ = 0.2 and β = 2.0.

Table 4
Simulations results for μ = 0.8 and β = 0.5.

Table 5
Simulations results for μ = 0.8 and β = 2.0.

Table 6
Overall performance of estimation methods.

Table 7
Simulations results of interval estimation for μ = 0.2 and β = 0.5.

Table 8
Simulations results of interval estimation for μ = 0.2 and β = 2.0.

Table 9
Simulations results of interval estimation for μ = 0.8 and β = 0.5.

Table 10
Simulations results of interval estimation for μ = 0.8 and β = 2.0.

Table 11
Overall performance of estimation methods with respect the interval estimation.

In Tables 2-5 we report the empirical values of (20)-(23). A superscript indicates the rank of each estimator among all estimators for that metric. For example, the first row of Table 2 presents the bias of the MLE of β for n = 20 as 0.070⁹, the superscript indicating that the bias of β̂ obtained using the method of maximum likelihood ranks 9th among all the estimators.

The following observations can be drawn from Tables 2-5.

1. All the estimators exhibit the property of consistency, i.e., the RMSE decreases as the sample size increases.

2. The bias of β̂ decreases as n increases for all estimation methods.

3. The bias of μ̂ decreases as n increases for all estimation methods.

4. The bias of μ̂ generally decreases as β increases, for any given μ and n, for all estimation methods.

5. The bias of β̂ generally decreases as μ increases, for any given β and n, for all estimation methods.

6. D_abs is smaller than D_max for all estimation techniques. Again, both statistics become smaller as n increases.

The overall ranks of the estimation methods are presented in Table 6. For the parameter combinations considered in our study, the Anderson-Darling (AD) estimator turns out to be the best in the overall ranking (overall score of 159), closely followed by the weighted least squares (WLS) method (overall score of 178).

The previous tables concern the point estimates obtained by each method. However, it is also important to study the behaviour of interval estimation for each method. Therefore, we computed parametric Bootstrap confidence intervals [12] and evaluated the coverage probability and the average width of the simulated confidence intervals. The results are presented in Tables 7-10.
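As a sketch of a parametric Bootstrap interval for the MLE (our code; the paper does not state which Bootstrap interval variant is used, so we show the percentile method on simulated data):

```python
import numpy as np
from scipy.optimize import minimize

def ul_quantile(p, mu, beta):
    """Quantile function (6); also serves as inverse-transform sampler."""
    a = mu * p ** (1.0 / beta)
    return a / ((1 - mu) * (1 - p) ** (1.0 / beta) + a)

def neg_loglik(theta, x):
    """Negative of the log-likelihood (7)."""
    mu, beta = theta
    n = x.size
    core = (1 - mu) ** beta * x ** beta + mu ** beta * (1 - x) ** beta
    return -(n * np.log(beta) + n * beta * np.log(mu * (1 - mu))
             + (beta - 1) * np.sum(np.log(x * (1 - x)))
             - 2 * np.sum(np.log(core)))

def mle(x):
    return minimize(neg_loglik, [0.5, 1.0], args=(x,), method="L-BFGS-B",
                    bounds=[(1e-6, 1 - 1e-6), (1e-6, None)]).x

rng = np.random.default_rng(99)
x = ul_quantile(rng.uniform(size=100), 0.4, 1.5)   # "observed" sample
theta_hat = mle(x)

# Resample B parametric samples from the fitted model and re-estimate.
B = 500
boot = np.array([mle(ul_quantile(rng.uniform(size=x.size), *theta_hat))
                 for _ in range(B)])
ci = np.percentile(boot, [2.5, 97.5], axis=0)       # 95% percentile CI
```

The same wrapper works for any of the other estimators: replace `mle` with the corresponding objective-minimizing function.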

From the results reported in Tables 7-10, it is observed that, as the sample size increases, the coverage probability increases for both parameters and all estimation methods, while the average width of the confidence intervals decreases.

The overall positions of the interval estimates are presented in Table 11. It is observed that WLS is the best method for interval estimation based on parametric Bootstrap confidence intervals, followed by AD and then MLE.

Thus, based on our study, we may conclude that AD and WLS are the best methods for estimating the parameters of the unit-Logistic distribution for both point and interval estimation. Therefore, we suggest using the AD and WLS methods in practice.

# 4 ILLUSTRATIVE EXAMPLES

In this section, the performance of the eleven estimation methods is compared through two real data applications.

The first data set (data set I) is available in the R software and corresponds to 48 observations on twelve core samples from petroleum reservoirs, sampled by four cross-sections. The second data set (data set II) can be found in [4] and represents the total milk production in the first birth of 107 cows of the SINDI breed.

The parameter estimates and their corresponding Bootstrap confidence intervals for all estimation methods are summarized in Tables 12 and 13. We also present the results of a formal goodness-of-fit test, the Kolmogorov-Smirnov (KS) test, in order to show that the unit-Logistic distribution can be used to model these two data sets.

Table 12
Parameter estimates, 95% confidence intervals based on parametric Bootstrap and K-S test: data set I.
Table 13
Parameter estimates, 95% confidence intervals based on parametric Bootstrap and KS test: data set II.

From Table 12 we can see that all estimates provide a good fit to the data set. It is also observed that the AD2L and MPS estimators give the shortest confidence intervals for μ and β, respectively.

The results in Table 13 indicate that the CvM estimates do not provide a good fit to this data set as far as the KS statistic is concerned. It is also observed that the MLE has the lowest KS value, and that the MLE and ADR give the shortest confidence intervals for μ and β, respectively.

# 5 CONCLUDING REMARKS

In this paper, we have performed an extensive simulation study to compare the eleven aforementioned methods of estimation. We have compared the estimators with respect to bias, root mean squared error, the average absolute difference between the theoretical and empirical estimates of the distribution function, and the maximum absolute difference between the theoretical and empirical distribution functions. We have also calculated the coverage probability and the average width of the Bootstrap confidence intervals, and compared the estimators on two real data applications. The simulation results show that the AD estimator performs best in terms of bias and RMSE, followed by the WLS estimator and then the MLE. The real data applications show that the AD2L and MPS estimators give the shortest confidence intervals for μ and β, respectively, for data set I, while the MLE and ADR give the shortest confidence intervals for data set II. Hence, we can argue that the AD, WLS, AD2L, MPS, ADR and ML estimators are among the best performing estimators for the unit-Logistic distribution.

# REFERENCES

• 1
ANDERSON TW & DARLING DA. 1952. Asymptotic theory of certain “goodness-of-fit” criteria based on stochastic processes. Ann. Math. Statist., 2: 193-212.
• 2
CHENG RCH & AMIN NAK. 1979. Maximum product-of-spacings estimation with applications to the log-Normal distribution. Tech. rep., Department of Mathematics, University of Wales.
• 3
CHENG RCH & AMIN NAK. 1983. Estimating parameters in continuous univariate distributions with a shifted origin. Journal of the Royal Statistical Society. Series B (Methodological), 45(3): 394-403.
• 4
CORDEIRO GM & DOS SANTOS RB. 2012. The Beta power distribution. Brazilian Journal of Probability and Statistics, 26(1): 88-112.
• 5
DA PAZ RF, BALAKRISHNAN N & BAZÁN JL. 2016. L-logistic distribution: Properties, inference and an application to study poverty and inequality in Brazil. Tech. Rep., Programa Interinstitucional de Pós Graduação em Estatística UFSCar - USP, São Carlos, SP, Brazil.
• 6
D’AGOSTINO RB & STEPHENS MA. 1986. Goodness-of-Fit Techniques. Taylor & Francis.
• 7
DEY S, ALI S & PARK C. 2015. Weighted exponential distribution: properties and different methods of estimation. Journal of Statistical Computation and Simulation, 85(18): 3641-3661.
• 8
DEY S, KUMAR D, RAMOS PL & LOUZADA F. 2017a. Exponentiated Chen distribution: Properties and estimation. Communications in Statistics - Simulation and Computation 46(10): 8118-8139.
• 9
DEY S, MAZUCHELI J & NADARAJAH S. 2017b. Kumaraswamy distribution: different methods of estimation. Computational and Applied Mathematics, (to appear).
• 10
DO ESPIRITO SANTO APJ & MAZUCHELI J. 2015. Comparison of estimation methods for the Marshall-Olkin extended Lindley distribution. Journal of Statistical Computation and Simulation, 85(17): 3437-3450.
• 11
DOORNIK JA. 2007. Object-Oriented Matrix Programming Using Ox, 3rd ed. London: Timberlake Consultants Press and Oxford.
• 12
EFRON B. 1982. The Jackknife, the Bootstrap and other resampling plans. Vol. 38. SIAM.
• 13
GUPTA RD & KUNDU D. 2001. Generalized Exponential distribution: Different method of estimations. Journal of Statistical Computation and Simulation, 69(4): 315-337.
• 14
JOHNSON NL, KOTZ S & BALAKRISHNAN N. 1995. Continuous Univariate Distributions, 2nd Edition. Vol. 2. John Wiley & Sons Inc., New York.
• 15
KAO JHK. 1958. Computer methods for estimating Weibull parameters in reliability studies. IRE Transactions on Reliability and Quality Control PGRQC-13, 15-22.
• 16
KAO JHK. 1959. A graphical estimation of mixed Weibull parameters in life-testing of electron tubes. Technometrics, 1(4): 389-407.
• 17
KUNDU D & RAQAB MZ. 2005. Generalized Rayleigh distribution: different methods of estimations. Computational Statistics & Data Analysis, 49(1): 187-200.
• 18
LEHMANN EJ & CASELLA G. 1998. Theory of Point Estimation. Springer Verlag.
• 19
LUCEÑO A. 2006. Fitting the Generalized Pareto distribution to data using maximum goodness-of-fit estimators. Comput. Stat. Data Anal., 51(2): 904-917.
• 20
MANN NR, SCHAFER RE & SINGPURWALLA ND. 1974. Methods for Statistical Analysis of Reliability and Life Data. Wiley.
• 21
MAZUCHELI J, GHITANY ME & LOUZADA F. 2017. Comparisons of ten estimation methods for the parameters of Marshall-Olkin extended Exponential distribution. Communications in Statistics - Simulation and Computation, 46(7): 5627-5645.
• 22
MAZUCHELI J, LOUZADA F & GHITANY ME. 2013. Comparison of estimation methods for the parameters of the weighted Lindley distribution. Applied Mathematics and Computation, 220: 463-471.
• 23
MENEZES AFB, MAZUCHELI J & BARCO KVP. 2018. The power inverse Lindley distribution: different methods of estimation. Ciência e Natura, 40: 24-26.
• 24
NASSAR M, AFIFY AZ, DEY S & KUMAR D. 2018. A new extension of Weibull distribution: Properties and different methods of estimation. Journal of Computational and Applied Mathematics, 336: 439-457.
• 25
PAWITAN Y. 2001. In All Likelihood: Statistical Modelling and Inference Using Likelihood. Oxford University Press, Oxford.
• 26
RANNEBY B. 1984. The maximum spacing method. An estimation method related to the maximum likelihood method. Scandinavian Journal of Statistics, 11(2): 93-112.
• 27
RODRIGUES GC, LOUZADA F & RAMOS PL. 2018. Poisson-exponential distribution: different methods of estimation. Journal of Applied Statistics, 45(1): 128-144.
• 28
ROHDE CA. 2014. Introductory Statistical Inference with the Likelihood Function. Springer-Verlag, New York.
• 29
SWAIN JJ, VENKATRAMAN S & WILSON JR. 1988. Least-squares estimation of distribution functions in Johnson's translation system. Journal of Statistical Computation and Simulation, 29(4): 271-297.
• 30
TADIKAMALLA PR & JOHNSON NL. 1982. Systems of frequency curves generated by transformations of Logistic variables. Biometrika, 69(2): 461-465.
• 31
TEIMOURI M, HOSEINI SM & NADARAJAH S. 2013. Comparison of estimation methods for the Weibull distribution. Statistics, 47(1): 93-109.

# Publication Dates

• Publication in this collection
Sep-Dec 2018