Abstract
For some ranges of its parameters and arguments, the series for Tweedie probability density functions can be exceedingly difficult to sum numerically. Existing numerical implementations utilizing inversion techniques and properties of stable distributions can cope with these problems, but no single one is successful in all cases. In this work we investigate heuristically the nature of the problem, and show that it is not related to the order of summation of the terms. Using a variable involved in the analytical proof of convergence of the series, the critical parameter for numerical nonconvergence (“alpha”) is identified, and a heuristic criterion is developed to avoid numerical nonconvergence for a reasonably large subinterval of the latter. With these practical rules, simple summation algorithms provide sufficiently robust results for the calculation of the density function and its definite integrals. These implementations need to utilize high-precision arithmetic, and are programmed in the Python programming language. A thorough comparison with existing R functions allows the identification of cases where the latter fail, and provides further guidance for their use.
Key words
Python; R Tweedie package; Tweedie probability density; Tweedie series
Introduction
The Tweedie distribution is a member of the family of exponential dispersion models, discussed for example in Jørgensen (1987) and Jørgensen (1992). The Tweedie is a distribution for which Taylor’s (1961) empirical law relating the mean to the variance, viz.,

$\mathrm{Var}(Z)=a\,{[\mathrm{E}(Z)]}^{p},\qquad(1)$
holds. In this work, we are concerned with particular ranges of parameters of the Tweedie distribution, for which serious numerical difficulties arise when computing the probability density function (pdf) with the form (cf. Jørgensen (1992), Section 2.7)
Clearly, Eqs. (2)–(4) define a two-parameter distribution. The ranges of the parameters that we are interested in are $0<\alpha <1$ and $\theta <0$. The exponent $p$ in Eq. (1) and $\alpha $ are related by

$\alpha =\dfrac{p-2}{p-1}.\qquad(5)$
The central problem here is that for certain values of $z$ and $\alpha $ the series given by Eq. (3) is exceedingly difficult, if not impossible, to sum employing “standard” double-precision (64-bit) floating-point arithmetic. A head-on approach to summing the series fails in many cases (Dunn and Smyth 2005). To the best of our knowledge, the most successful current approaches to summing Eq. (3) are either to employ Fourier inversion (Dunn and Smyth 2008) or to exploit the link between the stable distribution and the Tweedie distribution (Dunn 2015).
In spite of the aforementioned difficulties, it turns out that, for a fairly wide range of the variables $\alpha $ and $z$, accurate calculations are possible, at the price of using much higher precision and correspondingly much slower arithmetic. Unless stated otherwise, we use Python’s (Python Software Foundation 2015) mpmath module (http://mpmath.org/) to perform all calculations with 1 000-bit precision for the mantissa (compare with the 53-bit precision of usual double-precision arithmetic). Being interpreted, Python is intrinsically much slower than compiled languages, and the use of mpmath makes our calculations slower still. Yet for many practical purposes (for example, integrating ${f}_{Z}(z)$ with 1 000 points and Gaussian quadrature), our approach is entirely feasible on a desktop computer with the rules devised here. It also has the advantage that the algorithms (and the underlying mathematics) are simple and straightforward, and therefore easy to implement and use.
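As an illustration of the kind of setup involved, here is a minimal sketch of configuring mpmath for 1 000-bit arithmetic; the function names follow mpmath's documented interface, and the specific computation is only an example, not code from the nptweedie module:

```python
from mpmath import mp, mpf

# Ask mpmath for a 1000-bit mantissa (roughly 300 significant decimal
# digits), versus the 53 bits of ordinary double precision.
mp.prec = 1000

# Quantities that underflow to 0.0 in double precision remain
# representable: exp(-820) is of order 10^(-357), the order of magnitude
# of the smallest densities discussed later in the text.
tiny = mp.exp(mpf(-820))
print(mp.nstr(tiny, 10))  # a nonzero multiple-precision float
```

Converting such a value back to an ordinary Python float underflows to 0.0, which is precisely why the intermediate work must stay in multiple precision.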
In this work, we compare our computations of the Tweedie densities with the ones obtained using the tweedie package (Dunn 2015) for R (R Core Team 2015). There are four alternatives for computations using the tweedie package, depending on the approach desired:
Summing the series: dtweedie.series (Dunn and Smyth 2005).
Fourier series inversion: dtweedie.inversion (Dunn and Smyth 2008).
Using a stable distribution: dtweedie.stable (Nolan 1997); it uses R’s stabledist library (Wuertz et al. 2016).
Using a saddlepoint approximation to the density: dtweedie.saddle (Dunn and Smyth 2001).
A generic call to dtweedie uses either dtweedie.series or dtweedie.inversion, depending on the parameter values, whereas dtweedie.stable and dtweedie.saddle must be called explicitly. Note that in the present work we never use the dtweedie.saddle function.
In spite of the mathematically more sophisticated – and usually effective, as we will see – approaches of Dunn and Smyth (2008) and Nolan (1997), the nature of the problems involved in summing Eq. (3) remains not fully explored. Considerable insight, as well as useful practical guidance for the calculation of Tweedie probability density functions, can be gained by looking at them in more detail. Incidentally, this will also give us the opportunity to assess the implementations in the tweedie R package for calculating densities in some numerically challenging situations.
We have three objectives. First, we attempt to explain (empirically, by way of example) why it is in some cases so difficult to sum Eq. (3); often, it is in practice impossible to calculate it with double-precision (53-bit mantissa) arithmetic. As we shall see, the explanation involves difficulties in calculating individual terms of the series for large $k$ when $\alpha $ is very close to 0 or to 1, and/or when $z$ is small.
Then, we also show that some simple practical rules can be devised to avoid numerical errors or the need to sum too many terms in Eq. (3). It is fortunate that in many cases the numerical problems will arise for a range of $z$ at which ${f}_{Z}(z)$ is exceedingly small. In such cases, instead of attempting to sum the series and keeping errors under control, it is much more effective to force the calculating function to return zero. As shown in Section Results using Python, this does not affect the accuracy of the calculation of definite integrals of ${f}_{Z}(z)$, and therefore of the probabilities. The criterion for returning zero in these cases is explained below in Section Practical rules and tests after Eq. (21).
Lastly, it will be possible to reassess the capabilities of the tweedie package, so that the practical rules devised here can also be useful for its users. The comparison with the highprecision approach using Python adopted in this work will be mutually beneficial for assessing both the tweedie R package and the nptweedie.py module developed for the present work.
In the next section we give a few examples of the numerical difficulties that arise in the summation of the series, without going into the details of how the terms are actually summed. This is left for Sections Absolute convergence, where we show that the series always converges (in an analytical sense), and Numerical implementation, where it is shown how a standard algorithm for the summation (with 1 000-bit precision) is enough in most cases. Then, with a “good enough” algorithm at hand, a practical rule is presented in Section Practical rules and tests to predict cases when, even with 1 000 bits, the sum will fail (in the numerical sense) to converge in fewer than $10\,000$ terms. In this section, a first comparison with R’s tweedie package is also made.
As already mentioned, because failure of numerical convergence is associated with very small values of ${f}_{Z}(z)$, it is enough to return zero in such cases. This rule is then incorporated into a simple summation algorithm that is always successful (in this narrower sense) for $0.01\le \alpha \le 0.99$. It is important to note that reliable computation of the pdf for the range of $\alpha $ considered here matters not only for problems characterized by parameter values in that range, but also when running algorithms for statistical inference based on optimization or on sampling methods, such as Markov chain Monte Carlo, which may make excursions outside those regions of the parameter space.
In Section Results using Python, several tests of the proposed algorithm are performed: the results are tested against an analytical closed form for ${f}_{Z}(z)$ when $\theta =-1/2$, $\alpha =1/2$, which corresponds to the inverse Gaussian distribution; and the accuracy of the integrals of ${f}_{Z}(z)$ is checked by numerical integration. We will show that the proposed algorithm always integrates to 1 (given enough integration points) within an error of ${10}^{-6}$ (at most). In this section we also go back to calculating Eq. (3) with double precision, obtaining additional insight into where and why standard double-precision implementations fail. Many of the tests performed in Section Results using Python are then run again in R, using the tweedie package, in Section Performance in comparison with R’s tweedie package: this provides an objective assessment of current R implementations for situations where straightforward summation is very difficult. Our conclusions are given in the final section.
Termwise behavior and the difficulty of computing $N(z;\alpha )$
Let
it is worth writing

An upper bound for ${\max }_{k}{b}_{k}$ can be found as follows (Dunn and Smyth 2005):
We now discuss the problems involved in the summation of Eq. (3) by means of three examples. In this section, we do not yet correct those problems, and the discussion that follows has the sole objective of shedding light on the nature of the numerical problems discovered by Dunn and Smyth (2005). We proceed to sum the series almost straightforwardly, except that we use 1 000-bit precision arithmetic, and avoid loss of precision due to large variations in the order of magnitude of the terms in the sum. The numerical convergence criterion used is
Figures 1(a)–1(c) show the behavior of ${b}_{k}$, ${S}_{d}(k)$, and $\epsilon $. The parameters chosen for this example are $\alpha =1/2$, $\theta =-1/2$, because then the Tweedie coincides with an inverse Gaussian distribution, the only case in the range $0<\alpha <1$ for which an alternative analytical expression for ${f}_{Z}(z)$ is available. Figures 1(a)–1(c) correspond to the cases (a) $z=0.0006$, (b) $z=0.0008$ and (c) $z=0.0020$.
Individual terms ${b}_{k}/{b}_{{k}_{max}}$ (solid line), sum ${S}_{d}(k)/{b}_{{k}_{max}}$ of the $N(z;\alpha )$ series (filled circles), and relative error $\epsilon $ (dashed line), with $\alpha =1/2$, $\theta =-1/2$. (a) $z=0.0006$, (b) $z=0.0008$ and (c) $z=0.0020$. Only (c) converged to the correct value in fewer than 10 000 terms.
The first striking feature in the figures is the range covered by the ${b}_{k}$’s (shown as a solid line): in excess of ${10}^{300}$ in the first two cases, and $2.82\times {10}^{216}$ in the last one. Indeed, the range is so wide that the plotting program used to produce the figures, which itself uses double-precision arithmetic, is unable to plot all the values calculated, thereby forcing the graphs to be clipped at the lower ${10}^{-300}$ end.
To prevent the summation of terms varying across this very wide range of orders of magnitude from suffering the “catastrophic loss of precision” mentioned by Malcolm (1971), we implemented an adaptation of the msum function found at http://code.activestate.com/recipes/393090/ (Shewchuk 1996), which is designed to deal with this kind of problem. The adaptation, in the form of three functions in the module nsum (written by us), is presented in the Appendix. Therefore, in principle, the sum in Eq. (12) is being carried out without loss of precision. Yet, in cases (a) and (b), correct numerical convergence was not achieved for $k\le 10\,000$ (although this is not readily apparent in the figures themselves): later on, we will find more evidence that loss of precision is not the critical issue in summing Eq. (3). Rather, the lack of convergence to the right value can happen for the following reasons:

1. Although the sum itself is carried out without loss of precision, apparently inaccurate evaluation of the terms ${d}_{k}$ for large values of $k$ leads the algorithm to “converge” to a wrong value.

2. The algorithm reaches the stopping criterion $k=10\,000$ without achieving the convergence criterion specified in Eq. (15).
Case (a) is perhaps the worst of all: the algorithm’s convergence criterion given by Eq. (15) is satisfied, but the value obtained for the density is wrong and positive (reason 1 above). In this particular case, because we have the alternative of calculating the density with the inverse Gaussian formula, this is easily verifiable; for $\alpha \ne 1/2$, however, there seems to be no way to check within the algorithm itself whether reason 1 is happening. In the case of Figure 1(a), the algorithm (employing 1 000 bits of precision!) calculated a density of $2.735443272\times {10}^{+68}$, whereas the correct value given by the inverse Gaussian formula is $9.03190022\times {10}^{-358}$.
Case (b)’s failure is another instance of reason 1: here, the algorithm found a negative value of ${S}_{d}$. This case is somewhat easier to handle, because the user can verify the implausibility of a negative density with a simple if clause. Note that the negative values of ${S}_{d}(k)/{b}_{{k}_{max}}$ to which the algorithm converges do not appear in the log-log plot.
In short, neither case (a) nor case (b) failed by reaching the stopping criterion $k=10\,000$: they converged either to a wrong positive value (a) or to a wrong negative value (b), in spite of being calculated with $1\,000$ bits of precision and with a summation algorithm specifically designed to avoid loss of precision.
Finally, case (c) is successful. The range of the ${b}_{k}$ terms is somewhat smaller (albeit still daunting). Convergence to the right value of $3.2329931463\times {10}^{-105}$, confirmed by the inverse Gaussian formula, is only achieved at $k=1826$.
In all cases, the relative error (shown as a dashed line, with the corresponding axis scale on the right) takes a long time to reach the convergence criterion $\epsilon \le {10}^{-10}$. Close to spurious (cases (a) and (b)) or correct (case (c)) convergence, $\epsilon $ plunges down abruptly, there being no qualitative difference between the three cases that could help to devise an identification pattern for spurious convergence within the algorithm.
Figure 1 also shows a last disconcerting feature of Eq. (3): because of the alternating character of Eq. (11), compounded by the $\mathrm{sin}(k\pi \alpha )$ term in Eq. (12), successive values of ${d}_{k}$ tend to almost cancel each other out (in order of magnitude), the residue being carried over to the next term. For this reason, the order of magnitude of ${S}_{d}(k)$ (given by the points in the figure) and of ${b}_{k}$ (and therefore of ${c}_{k}$ and ${d}_{k}$ as well) remains the same for a very long time, while the magnitude of ${b}_{k}$ varies non-monotonically with $k$ over a very wide range of values (the reader should be aware that the points and the solid line in Figure 1 are not exactly equal, the visual impression being an artifact of the very wide range in the plot).
All this adds to the difficulty of summing the series, casting serious doubts on the feasibility of a direct attack, as found by Dunn and Smyth (2005, 2008).
If one accepts the use of very large floating-point precision as a useful, and perhaps unavoidable, tool, however, there is hope in practical situations, although definitely not for all possible combinations of $z$ and $\alpha $. In the next section, we discuss the analytical convergence of the series. In Section Numerical implementation, we give the details of how the series summations, in this section and elsewhere in the manuscript, were implemented. In Section Practical rules and tests, we give a practical procedure for summing the series using 1 000-bit precision that avoids the errors observed, for example, in cases (a) and (b) of this section. The procedure is able to predict when the density ${f}_{Z}(z)$ will be very close to zero, and in that case returns zero instead of attempting the sum. Unavoidably, this results in a relative error of 100% for the null estimate of ${f}_{Z}(z)$, but it has no practical consequences insofar as integration of the density and calculation of probabilities are concerned.
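For reference, the "no loss of precision" summation used above follows the ActiveState msum recipe (Shewchuk-style exact summation with a list of non-overlapping partials); the sketch below reproduces the core of that published recipe, not our nsum module itself:

```python
def msum(iterable):
    """Full-precision summation of floats (ActiveState recipe 393090).

    Each incoming value is combined with every stored partial via an
    exact two-sum, so no rounding error is discarded until the final
    conversion back to a single float."""
    partials = []
    for x in iterable:
        i = 0
        for y in partials:
            if abs(x) < abs(y):
                x, y = y, x
            hi = x + y          # rounded sum
            lo = y - (hi - x)   # exact rounding error of hi
            if lo:
                partials[i] = lo
                i += 1
            x = hi
        partials[i:] = [x]
    return sum(partials, 0.0)

# Naive summation loses the middle term entirely; msum keeps it:
print(sum([1e16, 1.0, -1e16]))   # 0.0
print(msum([1e16, 1.0, -1e16]))  # 1.0
```

The second pair of lines is exactly the "catastrophic loss of precision" scenario of Malcolm (1971): the naive left-to-right sum absorbs the 1.0 into 1e16 and returns 0.0, while the exact summation recovers it.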
Absolute convergence
The series ${S}_{d}(k)$ is absolutely convergent. The proof is based on two theorems:

1. If a series is absolutely convergent, then it is convergent (Dettman 1984, Theorem 4.3.4). We shall show that ${S}_{b}(k)$ converges. Therefore, ${S}_{c}(k)$ is absolutely convergent (${b}_{k}=|{c}_{k}|$), and ${S}_{c}(k)$ converges.

2. If a series is bounded term-by-term by a convergent series of positive terms, the series is absolutely convergent (Dettman 1984, Theorem 4.3.12). Since $|{d}_{k}|\le {b}_{k}$ and the latter is convergent, ${S}_{d}(k)$ is absolutely convergent, and therefore it is convergent.
To prove that ${S}_{b}(k)$ is convergent, we use the ratio test:
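For reference, the ratio test in its textbook form reads:

```latex
\[
  L \;=\; \lim_{k\to\infty}\left|\frac{b_{k+1}}{b_{k}}\right| \;<\; 1
  \quad\Longrightarrow\quad
  \sum_{k} b_{k}\ \text{converges.}
\]
```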
Numerical implementation
We use a very simple scheme to calculate ${S}_{d}(k)$. The ${b}_{k}$ terms are calculated with
The summation stops when the criterion given in Eq. (15) is reached. If ${S}_{d}(k)$ is found to be negative, $k$ is incremented and the algorithm continues. Usually, this forces the algorithm to reach $k=10\phantom{\rule{0.167em}{0ex}}000$, at which point it flags nonconvergence.
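The scheme just described can be sketched as follows; here `term(k)` is a hypothetical stand-in for the computation of ${d}_{k}$ from Eqs. (11)–(12) (which in the actual implementation is done with 1 000-bit mpmath arithmetic), and the constants mirror the stopping rules stated in the text:

```python
def sum_series(term, tol=1e-10, kmax=10_000):
    """Sum S_d = sum over k of term(k) with the stopping rules above.

    Convergence requires the relative contribution of the last term to
    fall below `tol` (cf. Eq. (15)); a negative partial sum is never
    accepted as converged, so the loop keeps going, usually until
    k = kmax, where nonconvergence is flagged."""
    s = 0.0
    for k in range(1, kmax + 1):
        t = term(k)        # hypothetical stand-in for d_k
        s += t
        if s > 0.0 and abs(t) <= tol * abs(s):
            return s, k
    raise ArithmeticError("nonconvergence after %d terms" % kmax)

# A well-behaved stand-in series, just to exercise the control flow:
s, k = sum_series(lambda k: 0.5 ** k)
```

This is only the control flow; the hard part, as the previous sections showed, lies in evaluating the terms themselves accurately for large $k$.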
Practical rules and tests
As mentioned in the Introduction, the algorithm as is will often fail to converge to the right numerical value with fewer than 10 000 terms in the sum. In order to make it more robust, and to curtail unnecessary calculations when ${f}_{Z}(z)$ is actually very small, we observe that $B$, defined in Eq. (6), is a good predictor of the “difficulty” of the summation. A similar observation is found in the caption of Table I of Dunn and Smyth (2008) about a variable they call $\xi $, which is given by (in our notation, and setting $\sigma =1$)
For comparison, our $B$ is not equal to $\xi $, but it is related to it. The role of $B$ in the rate of convergence is seen in Eq. (16). For a specified (desired) rate of convergence $\eta $ after $n$ terms, one has
On the bright side, Eq. (20) is independent of $\theta $: only $\alpha $ appears in it. The idea therefore is to vary $\alpha $ over its allowed range of $0<\alpha <1$, and for each $\alpha $ to vary $z$ from below until the algorithm starts to converge to the “correct” value of ${f}_{Z}(z)$. For a single $z$ it may be difficult to identify whether the convergence is to the true value, but a synoptic view of several values of $z$ in a graph is quite enough to identify the correct values.
For a fixed $\alpha $, therefore, it is possible to determine, by trial and error, the values of $z$ and $B$ at which the algorithm converges numerically with less than 10 000 terms.
To pursue this idea, we (manually) searched for the $z$-range where the algorithm converges, using the values $\alpha =0.01$, $0.1$, $0.2$, $0.3$, $0.4$, $0.5$, $0.6$, $0.7$, $0.8$, $0.9$, and $0.99$. For $\alpha <0.01$ or $\alpha >0.99$, manually trying to identify a range of $z$’s where the algorithm converges proved unsuccessful. We consider that the interval $\alpha \in [0.01,0.99]$ is wide enough in practice; therefore, if $\alpha $ falls outside of it, we simply do not attempt to sum the series. The value $\theta =-1/2$ (the same as for the inverse Gaussian) was used.
The graphs obtained with 6 of the 11 values of $\alpha $ above are shown in Figure 2. Each subfigure shows the calculated densities ${f}_{Z}(z)$ (with 1 000-bit precision) as well as $1/B$ for a range of $z$ values around which the algorithm starts to be successful. Nonconvergence in 10 000 iterations (or convergence to a negative value) is flagged as ${10}^{+150}$. Also shown is the value of the density calculated by the tweedie package with the “generic” call to dtweedie (see the comment in the Introduction about how it chooses which method to employ). In contrast to the 1 000-bit calculation of ${f}_{Z}(z)$ by series summation, dtweedie usually does not suffer from spurious values at the left tail of the distribution, but for $\alpha =0.9$ it clearly does not return correct values there, and in fact high-precision series summation fares a little better (Figure 2, lower right panel). We observed the same problem for our maximum “acceptable” $\alpha $, 0.99 (not shown). We did not try to determine a more precise value of $\alpha $, above 0.8, at which dtweedie starts to give incorrect values at the left tail.
Wrong convergence to positive values simply shows up as discernible spurious values. Similar behavior is observed for the other values of $\alpha $ mentioned above. Note the monotonically increasing $1/B$ in all cases.
Transition from unsuccessful to successful calculation of Tweedie densities, for $\alpha =$ 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9. On the left axis scale, the solid line shows the density calculated by summing the series with 1 000-bit precision, and the grey circles are the densities calculated by dtweedie in R. On the right axis scale, the dashed line is $1/B$. The arrows indicate the first point at which the series summation is considered “successful”. In all cases, $\theta =-1/2$.
In Figure 2, the very small values of ${f}_{Z}(z)$ for the first successful value of $z$ are noteworthy. Table I lists $\alpha $, $z$, ${f}_{Z}(z)$ and $1/B$ for the first successful instance of the algorithm (subjectively decided) in each case.
Early values, in the direction of increasing $z$, of $\alpha $, $z$, ${f}_{Z}(z)$ and $1/B$ for which the Tweedie densities are calculated correctly.
With the values in Table I, it is possible to plot $1/B$ versus $\alpha $, in Figure 3. We can now draw a smooth curve through the data points. This can be done by adjusting a suitable equation using (in our case) the nonlinear Levenberg-Marquardt least-squares procedure built into the plotting program (Gnuplot: www.gnuplot.info). We used a weighted version, giving weight 1 to the ten lowest values of $\alpha $, and weight 100 to the largest ($0.99$). This is necessary because, as $\alpha $ gets closer to 1, the pdf becomes more and more sharply peaked close to zero, and small changes in $z$ and in $1/B$ make all the difference between numerical nonconvergence and convergence. Thus, an accurate value of the minimal $1/B$ is much more critical for $\alpha =0.99$.
Our first try was to use Eq. (20) and to adjust $n$ and $\eta $ as parameters of the least-squares fit. Although Eq. (20) is (encouragingly) able to capture the overall $\alpha $-dependency, the result, shown in Figure 3 as a dashed line, is relatively poor. Therefore, we changed to a purely empirical equation with 3 instead of 2 adjustable parameters, viz.
This is shown as a solid line in Figure 3. Not surprisingly, the fit is better. The adjusted parameters are directly coded in the module Psialpha.py, given in the Appendix. It is very simple, and just defines the values of $a$, $b$ and $c$ in Eq. (21) above.

Early values of $1/B$ for which the Tweedie densities are calculated correctly, in the direction of increasing $z$, for each $\alpha $, and adjusted curves $\Phi (\alpha )$ (dashed line) and $\Psi (\alpha )$ (solid line).
With $\Psi (\alpha )$ thus obtained, there are only two changes that need to be made to the algorithm of calculating Tweedie densities:

1. Do not allow $\alpha <0.01$ or $\alpha >0.99$: in these cases we do not attempt to sum the series, and the calculating function aborts. As discussed above, the range $[0.01,0.99]$ is probably wide enough for most applications.

2. If $1/B<\Psi (\alpha )$, return 0; otherwise, sum using 1 000-bit precision.
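A sketch of how these two rules wrap the summation; here `psi` stands for the fitted $\Psi (\alpha )$ of Eq. (21) and `series_sum` for the 1 000-bit series summation, both passed in as callables since their actual definitions are in the Appendix:

```python
def tweedie_density_guarded(alpha, inv_B, psi, series_sum):
    """Apply the two practical rules before attempting the summation.

    alpha      : Tweedie index, must lie in [0.01, 0.99]
    inv_B      : the value 1/B for the requested z (cf. Eq. (6))
    psi        : callable, the fitted threshold Psi(alpha) of Eq. (21)
    series_sum : callable performing the 1000-bit series summation
    """
    if not 0.01 <= alpha <= 0.99:
        raise ValueError("alpha outside [0.01, 0.99]: summation not attempted")
    if inv_B < psi(alpha):
        return 0.0          # density is negligibly small here
    return series_sum()

# Exercising the control flow with dummy stand-in callables:
psi = lambda a: 1.0
density = lambda: 42.0
```

Returning 0.0 below the threshold is deliberately conservative: as discussed above, it trades a 100% relative error at a negligibly small density for robust integrals and probabilities.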
The full implementation is given in the Appendix. The probability density function is called pdfz_tweedie. It is designed to receive and return standard floating-point variables: all the multiple-precision work is done internally and kept hidden from the user. In this way, the user can work with the ordinary Python float type transparently while using the module nptweedie.
Results using Python
A first comparison is to check the proposed algorithm against the densities calculated with the inverse Gaussian distribution. This is only possible for $\alpha =1/2$, $\theta =-1/2$, and is done visually in Figure 4. The agreement is actually very good: for 1 000 points evenly distributed between 0 and 20, the two methods agree exactly up to the eighth decimal place.
Comparison between the proposed algorithm and the inverse Gaussian distribution, using $\alpha =1/2$, $\theta =-1/2$.
For cases when an analytical benchmark is not available, a good alternative is to check (numerically) how close the integral $I={\int}_{0}^{\infty}{f}_{Z}(z)\phantom{\rule{0.167em}{0ex}}dz$ is to 1.
At this point, one also asks to what extent the 1 000-bit precision in use is overkill: would double precision suffice? Moreover, how important is it to have sophisticated handling of possible loss of precision in the summation?
In order to answer these questions, we reimplemented the pdfz_tweedie function using double precision only, while keeping most of the remainder of the algorithm intact. Two pure double-precision versions were implemented: the first uses the nsum module and performs the sum without loss of precision (“no loss”); it is called here “dpnl”. The other does not use the nsum module, and performs all the sums with standard arithmetic only (“with loss”); it is called here “dpwl”. For easy reference, we call the 1 000-bit precision version “npnl”. Note that “dpnl”, “dpwl” and “npnl” are mnemonic names for the versions of the series summation, not actual function names.
We now show the results of integrating the density numerically for these three versions, for all values of $\alpha $ considered above, keeping $\theta =-1/2$ fixed. Except for $\alpha =0.99$, all numerical integrals were calculated for the range $z\in [1.0\times {10}^{-6},50]$ with 1 000 points, using Gauss-Legendre quadrature (we used a translation to Python of the routine gauleg in Numerical Recipes (Press et al. 1992)). For $\alpha =0.99$, the integration was over $z\in [1.0\times {10}^{-6},20]$ with 10 000 points. The adjustment of range and number of points is relatively easy to do, and somewhat unavoidable.
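A pure-Python transcription of gauleg in the spirit described (the classical Newton-iteration construction of the nodes and weights; this is a sketch of the well-known algorithm, not necessarily the authors' exact routine):

```python
import math

def gauleg(a, b, n):
    """Nodes and weights for n-point Gauss-Legendre quadrature on [a, b].

    Each node is found by Newton iteration on the Legendre polynomial
    P_n, evaluated via its three-term recurrence; symmetry supplies the
    mirrored half of the nodes."""
    x = [0.0] * n
    w = [0.0] * n
    m = (n + 1) // 2                 # only half the roots are computed
    xm = 0.5 * (b + a)
    xl = 0.5 * (b - a)
    for i in range(m):
        z = math.cos(math.pi * (i + 0.75) / (n + 0.5))  # initial guess
        for _ in range(100):
            p1, p2 = 1.0, 0.0
            for j in range(1, n + 1):                   # recurrence for P_n
                p1, p2 = ((2 * j - 1) * z * p1 - (j - 1) * p2) / j, p1
            pp = n * (z * p1 - p2) / (z * z - 1.0)      # derivative P_n'
            z1, z = z, z - p1 / pp                      # Newton step
            if abs(z - z1) < 1e-13:
                break
        x[i] = xm - xl * z
        x[n - 1 - i] = xm + xl * z
        w[i] = 2.0 * xl / ((1.0 - z * z) * pp * pp)
        w[n - 1 - i] = w[i]
    return x, w

# Example of the kind of check used in the text: a rapidly decaying
# integrand over [0, 50] integrates to 1 with 100 points.
x, w = gauleg(0.0, 50.0, 100)
approx = sum(wi * math.exp(-xi) for xi, wi in zip(x, w))
```

With smooth integrands such as these densities, Gauss-Legendre converges very fast, which is why 1 000 points already give integrals accurate to well below the $10^{-6}$ tolerance quoted above.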
Results are shown in Table II: for each $\alpha $, the rows show the number of points used in the quadrature, and the value of the numerical integral $I$ and its absolute difference from 1, $\delta I$, for each of “npnl”, “dpnl” and “dpwl”.
Numerical integration of Tweedie densities using Gaussian quadrature for versions npnl, dpnl and dpwl of the bit-precision/summation method: $I=\int {f}_{Z}(z)\,dz$ (the integral calculated with each version) and $\delta I=I-1$, with $\theta =-1/2$.
Clearly, with 1 000bit precision, we are able to calculate very accurate integrals and, consequently, probabilities, for all values of $\alpha $ in the range.
The double-precision versions, on the other hand, are at first sight disappointing. Except for $\alpha =0.5$ and $\alpha =0.99$, these versions are unable to produce accurate values of $I$. At the same time, they fail or succeed simultaneously and produce essentially the same results. This is an indication that “catastrophic loss of precision” is not an issue, because dpnl avoids loss of precision, while dpwl does not. In hindsight, this is probably because the order of magnitude of the ${b}_{k}$ terms varies slowly with $k$, so that each new term added is close, in order of magnitude, to the previous one and to the partial sum as well (cf. Figure 1).
Moreover, in practice the double-precision versions fare better than Table II suggests. This can be verified in Figure 5, where we show the pointwise numerical convergence (or not) of the double-precision version “dpnl” (gray dots) against the 1 000-bit precision version “npnl” (solid black lines). The behavior of “dpwl” is essentially the same as that of “dpnl”, and is not shown. In the figure, nonconvergence of the sum in 10 000 terms (or spurious convergence to values greater than 10) is flagged as ${f}_{Z}(z)=10$ for easy identification. Negative spurious values are simply not shown on the log-log plot.
Detailed verification of the convergence behavior of versions “npnl” (continuous black line), and “dpnl” (gray dots, with “dpwl” being the same as “dpnl”): (a), $\alpha =0.01$; (b), $\alpha =0.2$; (c), $\alpha =0.4$; (d), $\alpha =0.64$; (e), $\alpha =0.8$ and (f), $\alpha =0.99$.
One can see that the same problems that afflicted the 1 000-bit version are present, but with greater severity, in the double-precision version. Thus, for small values of $z$, the simple algorithm used here fails, except that these $z$ values are now greater than they were with the 1 000-bit version. Not surprisingly, the filter of Eq. (21), which was designed and calibrated with the 1 000-bit version, “lets through” a few points on the left tail of the distribution that lead to nonconvergence.
Of course, one can recalibrate the parameters in Eq. (21) to censor those points, but especially for the smaller $\alpha $’s, this may result in “chopping off” too much of the left tail (in the case $\alpha =0.01$, the whole pdf is lost with double precision).
Performance in comparison with R’s tweedie package
We now repeat the analysis of the previous section except that, instead of running npnl against its double-precision versions, we run it against two options available in R: dtweedie and dtweedie.stable. The results are shown in Table III. In this table, columns 3 and 4 are repeated from Table II for ease of comparison. dtweedie.stable now fails for $\alpha =0.01$ (we did not determine the exact value of $\alpha <0.1$ at which it first fails), whereas, at the other extreme, dtweedie fails for $\alpha =0.9$ and $\alpha =0.99$ (we did not determine the exact value of $\alpha >0.8$ at which it first fails).
Numerical integration of Tweedie densities using Gaussian quadrature for npnl (1 000-bit precision), dtweedie and dtweedie.stable: $I=\int {f}_{Z}\left(z\right)dz$ (the integral calculated with each version) and $\delta I = I - 1$, with $\theta = -1/2$.
The same parameters of Gaussian quadrature previously employed to generate Table II were used. Figure 6 shows the corresponding densities for six cases: in the figure, failures are flagged at a constant value equal to $1.0\times {10}^{-6}$.
Detailed verification of the convergence behavior of 1 000-bit densities “npnl” (continuous gray line), in comparison with dtweedie (open squares) and dtweedie.stable (filled circles): (a), $\alpha =0.01$; (b), $\alpha =0.2$; (c), $\alpha =0.4$; (d), $\alpha =0.64$; (e), $\alpha =0.8$ and (f), $\alpha =0.99$. In (a), dtweedie.stable values lower than $1.0\times {10}^{-6}$ are shown at this value. In (e), dtweedie values lower than $1.0\times {10}^{-6}$ are shown at this value.
There is always at least one function in package tweedie that performs as well, in practice, as our high-precision implementation in Python, but the choice must be made by the user. In its interval of validity, $0.01\le \alpha \le 0.99$, nptweedie always succeeds and calculates the integral of the density with an accuracy better than $1\times {10}^{-6}$.
Conclusions
In this work we have investigated empirically a few of the problems that afflict the calculation of Tweedie probability density functions for the range $0<\alpha <1$ of one of its parameters. The $\alpha $ parameter is identified as the one whose values affect the summing of the corresponding series, the $\theta $ parameter playing no role in the problem. We have also shown that the series converges in the analytical sense, in spite of the occurrence of severe numerical problems in its summation for some values of $z$ and $\alpha $.
The terms in the series span a very large range of orders of magnitude, close to that allowed by double precision. This wide range of values, however, turns out not to lead to catastrophic loss of precision, likely because the order of magnitude varies slowly in the summation (nearby terms have close to the same orders of magnitude).
Still, for very large indices $k$ in the summation, the calculation of the individual terms remains inaccurate, leading in some cases either to the inability to reach a stable value even after 10 000 terms, or to convergence to spurious values. A key parameter that allows this event to be predicted, here called $1/B$, was identified and used to construct an ad hoc procedure to avoid it.
We have visually identified the thresholds in $z$ and the corresponding $1/B$ values as a function of $\alpha $. For a practical range of $\alpha \in [0.01,0.99]$, this allows the Tweedie pdfs to be calculated successfully using 1 000-bit precision.
Attempts to use only double precision fail to various degrees, with the worst cases being $\alpha $ close to the lower limit above (and vice versa). In specific cases, the procedures adopted here may be adapted for double precision, if one is willing to accept a shorter range of $\alpha $ than $[0.01,0.99]$. One of the three functions available in R’s tweedie package that we tested is always able to deal with the numerical problems satisfactorily, but the specific function varies, and it must be selected explicitly by the user: the results presented here may be useful in this regard.
ACKNOWLEDGMENTS
The authors wish to thank two anonymous reviewers for their comments and suggestions, which contributed substantially to improving the manuscript.
REFERENCES
DETTMAN JW. 1984. Applied Complex Variables. New York: Dover Publications.
DUNN PK. 2015. Package ‘tweedie’ (Tweedie exponential family models), version 2.2.1. URL http://www.r-project.org/package=tweedie
DUNN PK AND SMYTH GK. 2001. Tweedie family densities: methods of evaluation. In: Proceedings of the 16th International Workshop on Statistical Modelling, Odense, Denmark.
DUNN PK AND SMYTH GK. 2005. Series evaluation of Tweedie exponential dispersion model densities. Stat Comp 15(4): 267-280.
DUNN PK AND SMYTH GK. 2008. Evaluation of Tweedie exponential dispersion model densities by Fourier inversion. Stat Comp 18(1): 73-86.
JØRGENSEN B. 1987. Exponential dispersion models. J Roy Stat Soc B Met 49(2): 127-162.
JØRGENSEN B. 1992. The theory of exponential dispersion models and analysis of deviance. 2nd edition. Rio de Janeiro: Instituto de Matemática Pura e Aplicada.
MALCOLM MA. 1971. On accurate floating-point summation. Commun ACM 14(11): 731-736.
NOLAN JP. 1997. Numerical calculation of stable densities and distribution functions. Comm Statist Stochastic Models 13(4): 759-774.
PRESS WH, TEUKOLSKY SA, VETTERLING WT AND FLANNERY BP. 1992. Numerical Recipes in C: The Art of Scientific Computing. 2nd edition. New York, NY, USA: Cambridge University Press.
PYTHON SOFTWARE FOUNDATION. 2015. Python Language Reference, version 3.5. URL https://docs.python.org/3/reference/index.html
R CORE TEAM. 2015. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. URL https://www.R-project.org/
SHEWCHUK JR. 1996. Adaptive precision floating-point arithmetic and fast robust geometric predicates. Technical Report CMU-CS-96-140, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.
TAYLOR LR. 1961. Aggregation, variance and the mean. Nature 189: 732-735.
WUERTZ D, MAECHLER M AND RMETRICS CORE TEAM MEMBERS. 2016. stabledist: Stable distribution functions. R package version 0.7-1. URL https://CRAN.R-project.org/package=stabledist
Appendix Python implementation
For completeness, we give here the full Python implementation of three modules (nsum, Psialpha and nptweedie). The following observations may be helpful for non-Python programmers:

In Python, there is no end, endif, end do, etc., statement. The scope of if statements or loops is defined by indentation. To close long stretches of if statements or loops, we use the Python keyword pass, which does nothing.

Python modules are libraries, and functions in a module can be imported into other programs, in a fashion similar to MODULA2, ADA and R.

Although Python functions are much more flexible, the functions in the present modules are similar to C functions (returning a single value, or none), and they can be used on the right-hand side of an assignment in exactly the same way.

Calling programs do not need to manage the inner workings of the modules. In particular, there is no need to understand what a dictionary is in nsum (just use the recipe) or multiple precision in nptweedie (just use ordinary floating point; all conversions necessary are performed on the input and output values).
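As an illustration of the first observation (a standalone snippet, not a fragment of the modules themselves), a loop written in this style looks as follows:

```python
# Indentation defines scope; `pass` is a no-op used here only to mark
# visually where the if block and the loop end.
total = 0
for k in range(1, 4):
    if k % 2 == 1:
        total += k
        pass  # end if
    pass  # end for
# total is now 1 + 3 = 4
```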
nsum.py
The adaptation of the function msum found at http://code.activestate.com/recipes/393090/, which is what actually performs the sums, is a small Python module nsum with three functions: startsum, sumthis, and sumall. startsum creates a new entry in an internal dictionary called partials. A Python dictionary is like an associative array: instead of an integer, it is indexed by a key; in our case, the name of the sum as a string. This allows the flexibility of keeping tabs on several partial sums in parallel. Although this feature was not actually used in our implementation for summing the series (see module nptweedie in this appendix), it is useful, for example, for debugging.
Although not necessary, we adopt the convention of using strings as dictionary keys with the same name as the floatingpoint variable that ultimately stores the sum.
It is easy to understand how to use the functions in nsum.py, and this is all that is needed to understand how sums are calculated in the implementation nptweedie.py given below: startsum(’sumd’) starts a sum that will ultimately be stored in variable sumd; sumthis(dk,’sumd’) adds a new term dk, without actually calculating the sum; and sumd = sumall(’sumd’) calculates the partial sum. Note that, because the Python dictionary is internal to nsum, knowledge of how Python dictionaries work, although useful, is not strictly necessary.
Here is the full implementation of nsum.py:
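A minimal sketch of such a module, following the msum recipe cited above (an illustrative reconstruction, not necessarily identical to the original listing):

```python
# nsum.py (sketch): named exact sums built on the msum recipe
# (http://code.activestate.com/recipes/393090/), which represents the
# exact running sum as a list of non-overlapping partial sums.

partials = {}  # maps the name of a sum (a string) to its list of partials

def startsum(name):
    # Create a new entry in the internal dictionary for this sum.
    partials[name] = []

def sumthis(x, name):
    # Accumulate term x into the named sum using error-free transformations;
    # no rounding error is lost, only redistributed into the partials.
    ps = partials[name]
    i = 0
    for y in ps:
        if abs(x) < abs(y):
            x, y = y, x
        hi = x + y
        lo = y - (hi - x)  # exact rounding error of hi = x + y
        if lo:
            ps[i] = lo
            i += 1
        x = hi
    ps[i:] = [x]

def sumall(name):
    # Return the current value of the named (partial) sum.
    return sum(partials[name])
```

With this sketch, the usage pattern described above works as stated: startsum(’sumd’), repeated calls sumthis(dk,’sumd’), and finally sumd = sumall(’sumd’).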
Psialpha.py
nptweedie.py
This module implements the summation of the Tweedie series using 1 000-bit precision. The most useful function in the module is pdfz_tweedie(z,theta,alpha); its use is self-explanatory.
Publication Dates
Publication in this collection: 02 Dec 2019
Date of issue: 2019
History
Received: 20 Mar 2018
Accepted: 4 Dec 2018