
## Brazilian Journal of Physics

*Print version* ISSN 0103-9733

### Braz. J. Phys. vol.31 no.1 São Paulo Mar. 2001

#### http://dx.doi.org/10.1590/S0103-97332001000100002

**Variances and covariances in deconvolution of multichannel spectra: ^{34}S (γ, n) cross section**

O. Helene^{*}, C. Takiya^{*, †}, and V. R. Vanin^{*}

^{*} *Instituto de Física, Universidade de São Paulo, CP 66318, CEP 05315-970, São Paulo, SP, Brazil*

^{†} *Universidade Estadual do Sudoeste da Bahia, Departamento de Ciências Exatas, Estrada do Bem Querer Km 4, CEP 45083-900, Vitória da Conquista, BA, Brazil*

**Received on 8 March, 2000. Revised version received on 14 September, 2000**

This paper discusses some aspects of the deconvolution of one-dimensional spectra in the framework of the Least Squares Method and presents a minimum-variance regularization procedure. Covariance matrices are taken into account in every step. Fluctuations and artifacts, both in the deconvolved and in the regularized spectra, are related to the structure of the covariance matrices. The method is applied to a simulated spectrum and to the determination of the ^{34}S (γ, n) cross section from actual yield data.

**I Introduction**

Deconvolution of one-dimensional spectra has been extensively used in the experimental sciences, in connection with gamma-ray spectroscopy [1], neutron [2, 3], mass [4], and beta [5] spectra studies, cross-section measurements [7, 8], the energy distribution of annihilation radiation [6, 9], and microprobe scans [10], among others. The basic goal of deconvolution procedures is to obtain the intrinsic distribution of a signal blurred by the response function of a detector system and affected by statistical fluctuations. Most of the trouble in deconvolution algorithms stems from the fact that even small statistical fluctuations in the original data are strongly amplified [11], frequently producing oscillatory behavior of the deconvolved data and even unphysical results. In order to reduce the fluctuations of the deconvolved spectra, many regularization procedures have been developed [1-15]. Such regularization procedures have usually been studied from ad hoc observations of the results obtained in practical and simulated cases. However, regularization methods give rise to biased estimates and artifacts. As a consequence, the choice of the regularization parameter depends on a compromise between artifacts and noise [14].

In the study of deconvolution procedures, not enough attention has been paid to the covariance matrices. This paper discusses some aspects of the deconvolution procedure within the Least Squares Method (LSM) and takes the covariance matrices into account in every step; these matrices are not ill-behaved, although they show large diagonal elements and negative covariances between adjacent channels. A regularization procedure based on a minimum variance criterion was developed in order to reduce the large fluctuations. When the covariance matrices are taken into account, the results obtained are unbiased, the artifacts can be understood, functions can be fitted to the data, and the goodness of fit can be tested by the usual chi-square test.

**II Least Squares Method and regularization procedures**

We assume that the relationship between an unknown spectrum *S*(*x*′) and a measured spectrum *Y*_{i}, *i* = 1, 2, …, *n*, can be described by

$$Y_i = \int R_i(x')\, S(x')\, dx' + \epsilon_i, \qquad (1)$$

where *R*_{i}(*x*′) is the detector response function and ε_{i} is a measurement error. Eq. (1) can be approximated by

$$Y_i = \sum_{j=1}^{m} R_{ij}\, S(x_j) + \epsilon_i, \qquad (2)$$

where

$$R_{ij} = \int_{x_j}^{x_{j+1}} R_i(x')\, dx'. \qquad (3)$$

Eq. (2) can be written in a more suitable form as

$$\mathbf{Y} = \mathbf{R} \cdot \mathbf{S} + \boldsymbol{\epsilon}, \qquad (4)$$

where **Y** is the known column vector with elements *Y*_{i}, **R** is the response matrix, **S** is the unknown (column) vector with elements *S*(*x*_{i}), and **ε** is the error vector. Although **ε** is unknown, we have ⟨**ε**⟩ = 0 and ⟨**ε** · **ε**^{t}⟩ = **V**, where **V** is the covariance matrix of **Y** and ⟨ ⟩ stands for expectation value. We assume that **V** is known, as is usual in experimental physics.

Due to the linear relation between the vector of observations, **Y**, and the vector of parameters, **S**, the estimate given by the LSM,

$$\hat{\mathbf{S}} = \left(\mathbf{R}^{t}\, \mathbf{V}^{-1}\, \mathbf{R}\right)^{-1} \mathbf{R}^{t}\, \mathbf{V}^{-1}\, \mathbf{Y}, \qquad (5)$$

is consistent, unbiased, and has the minimum attainable variance among linear estimates, even in the case of deconvolution procedures [16]. The covariance matrix of **Ŝ** is given by

$$\mathbf{V}_{\hat{S}} = \left(\mathbf{R}^{t}\, \mathbf{V}^{-1}\, \mathbf{R}\right)^{-1} \qquad (6)$$

and can be easily calculated from **R** and **V**. Generally, **R** is an *n* × *m* rectangular matrix with *n* ≥ *m*, where *n* is the number of experimental data points in **Y** and *m* is the number of channels in **S**.
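As a concrete illustration, eqs. (5) and (6) amount to a few lines of linear algebra. The sketch below is ours, not the authors' code; the function name and the use of numpy are illustrative assumptions:

```python
import numpy as np

def lsm_deconvolve(Y, R, V):
    """Least-squares deconvolution (eqs. 5 and 6):
    S_hat = (R^t V^-1 R)^-1 R^t V^-1 Y,  V_S = (R^t V^-1 R)^-1."""
    Vinv = np.linalg.inv(V)
    V_S = np.linalg.inv(R.T @ Vinv @ R)   # covariance matrix of the estimate
    S_hat = V_S @ R.T @ Vinv @ Y          # minimum-variance linear estimate
    return S_hat, V_S
```

With an identity response matrix the estimate simply reproduces the data, as expected for a detector with perfect resolution.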

Eqs. (5) and (6) can fruitfully be used in the deconvolution procedure and in the interpretation of the obtained spectra. However, deconvolved spectra are usually poorly defined, showing very large fluctuations. If we know the function to be fitted to the deconvolved spectrum, the large fluctuations are not cumbersome. However, when we need to examine the spectrum for clues about the fitting function, a regularization procedure provides important guidance.

**III Regularization**

The LSM is the best linear estimator for linear models like that given by eq. (4), not only for the parameters but also for any linear combination of them. As a consequence, we choose a linear regularization,

$$\mathbf{S}_T = \mathbf{T} \cdot \hat{\mathbf{S}}, \qquad (7)$$

where **T** is the regularization matrix. The covariance matrix of the regularized spectrum **S**_{T} is given by

$$\mathbf{V}_T = \mathbf{T} \cdot \mathbf{V}_{\hat{S}} \cdot \mathbf{T}^{t}. \qquad (8)$$

In addition to linearity, **T** obeys the following rules:

- i) Only three non-vanishing elements are taken in every row of **T**, in order to reduce the structure loss due to the smoothing caused by the linear combination of the parameters in **Ŝ**;

- ii) in order to avoid skewness and to preserve symmetry and normalization, **T** is given by

$$T_{ij} = a_i\,\delta_{j,i-1} + (1 - 2a_i)\,\delta_{j,i} + a_i\,\delta_{j,i+1}, \qquad i = 2, \ldots, m-1, \qquad (9)$$

where *m* is the number of rows in **T**;

- iii) the values of *a*_{i} are chosen to minimize the variances of the regularized spectrum. Minimizing the diagonal elements of **V**_{T} (eq. 8) we obtain

$$a_i = \frac{2V_{i,i} - V_{i-1,i} - V_{i,i+1}}{V_{i-1,i-1} + V_{i+1,i+1} + 2V_{i-1,i+1} + 4V_{i,i} - 4V_{i-1,i} - 4V_{i,i+1}}, \qquad (10)$$

where the *V*'s are elements of **V**_{Ŝ} (eq. 6).

If the regularized spectrum is not sufficiently defined, the regularization can be repeated with **S**_{T} as input, giving rise to an iterative procedure.
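A minimal sketch of one regularization step follows; treating the first and last rows of **T** as identity rows is our assumption, since the paper does not show its edge convention:

```python
import numpy as np

def regularize(S_hat, V_S, a):
    """One regularization step S_T = T S_hat, V_T = T V_S T^t (eqs. 7 and 8)
    with a three-band, symmetric, normalized T:
    T[i, i-1] = T[i, i+1] = a[i], T[i, i] = 1 - 2*a[i].
    Edge rows are kept as identity rows (an assumption)."""
    m = len(S_hat)
    T = np.eye(m)
    for i in range(1, m - 1):
        T[i, i - 1] = a[i]
        T[i, i] = 1.0 - 2.0 * a[i]
        T[i, i + 1] = a[i]
    return T @ S_hat, T @ V_S @ T.T
```

With all *a*_{i} = 0 the step is the identity, and repeated calls give the iterative procedure described above.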

**IV Simulation**

In order to study some practical aspects of the deconvolution procedure, we simulated a Gaussian signal convolved with a Gaussian response function, which gives rise to a Gaussian spectrum with variance equal to the sum of the variances of the signal and of the response function. The simulated spectrum consists of a Gaussian peak with a standard deviation of 5.00 channels and an area of 100,000 counts, superimposed on a uniform background of 200 counts per channel. Poisson random fluctuations were simulated in order to obtain the typical statistical fluctuations of real spectra. As a consequence, the covariance matrix **V** of the simulated data is diagonal, with *V*_{ii} = *Y*_{i}, where *Y*_{i} is the number of counts in channel *i*. Fig. 1 shows the simulated peak. Results of the fit of a Gaussian peak are shown in the second column of Table I.

Figure 1. Simulated spectrum and the fitted peak plus background.
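A spectrum with these characteristics can be reproduced, for instance, as follows; the peak position and the number of channels (taken as 200, matching the square 200 × 200 response matrix used in the deconvolution) are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200                                           # number of channels (assumed)
x = np.arange(n)
mu, sigma, area, bkg = 100.0, 5.0, 1.0e5, 200.0   # peak position mu is assumed
expected = area * np.exp(-(x - mu)**2 / (2.0 * sigma**2)) \
           / (sigma * np.sqrt(2.0 * np.pi)) + bkg
Y = rng.poisson(expected).astype(float)           # Poisson statistical fluctuations
V = np.diag(Y)                                    # V_ii = Y_i, as for counting data
```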

**Deconvolution**

The simulated peak was deconvolved using eq. (5) with the Gaussian response matrix

$$R_{ij} = \frac{1}{\sigma_r \sqrt{2\pi}}\, \exp\!\left[-\frac{(i-j)^2}{2\sigma_r^2}\right],$$

with σ_{r} = 1.2 channels. In this example a square **R** (200 × 200) was used. Fig. 2 shows the obtained spectrum, where the enormous fluctuations are typical of deconvolution procedures. The peak structure was lost, and nonphysical (yet statistically meaningful) negative counts appear. The structure of the spectrum in Fig. 2 can be understood by inspecting the covariance matrix calculated from eq. (6), whose central part is shown in Table II. The variances of **Ŝ** are about 10^{9}, corresponding to standard deviations of about 3 · 10^{4}, greater than the typical values of **Y** in the peak region (about 10^{4}) and many times greater than the values of **Ŝ** themselves. These standard deviations explain the enormous fluctuations of **Ŝ**.

Figure 2. Spectrum deconvolved using the LSM, and the fitted peak plus background.

The typical oscillation pattern of **Ŝ** can also be understood from inspection of the correlation between counts in adjacent channels, defined as

$$\rho_{ij} = \frac{V_{ij}}{\sqrt{V_{ii}\, V_{jj}}}.$$

As can be seen from Table II, the correlation coefficients between counts in adjacent channels are about -0.98. A negative correlation between two data points means that if one is underestimated (overestimated), the other is probably overestimated (underestimated). Since the values obtained are very near -1, which means total anticorrelation, the deconvolved spectrum must show a strong oscillatory pattern.
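The correlation coefficients of Table II follow directly from the covariance matrix; a one-line helper (ours, for illustration) suffices:

```python
import numpy as np

def correlation(V):
    """Correlation matrix rho_ij = V_ij / sqrt(V_ii * V_jj)
    computed elementwise from a covariance matrix V."""
    d = np.sqrt(np.diag(V))
    return V / np.outer(d, d)
```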

Table I. Result of the fit of a Gaussian peak plus background to the original (second column), the deconvolved (third column), and the regularized (fourth column) spectra.

Table II. Covariance (upper triangle, including the main diagonal) and correlation (lower triangle) matrix of the central part of the deconvolved spectrum. The numbers in parentheses refer to the channel numbers of the spectrum.

Despite the strange pattern of the spectrum in Fig. 2, it is possible to fit the peak and background by the LSM, taking into account the covariance matrix of **Ŝ**. Table I shows the result of fitting a Gaussian peak to **Ŝ**. The obtained standard deviation, σ = 4.863(14), agrees with the expected value (√(5.00² - 1.2²) = 4.854). Likewise, the area, position, and background agree with the expected values. The obtained reduced chi-square value, 1.04, has a 27% probability of being exceeded, showing both that the fit is acceptable and that no further hypotheses are needed to explain the structure of the spectrum.

Indeed, it should be expected that a Gaussian could be fitted to the ill-behaved spectrum of Fig. 2, because all the assumptions required for its success are satisfied. This result is shown here only to stress the fact that if the function to be fitted is known a regularization procedure is not necessary. Regularization procedures are required, however, when we need visual information to decide which function to fit.

**Regularization**

Fig. 3 shows the same data as Fig. 2, regularized by the procedure of section III. Table III shows the covariance/correlation matrix of the central part of **S**_{T} = **T** · **Ŝ**. As can be seen by comparing the diagonal elements of **V**_{T} (Table III) with those of **V**_{Ŝ} (Table II), the variances were reduced by a factor of about 3 · 10^{-4}. This reduction can be understood by inspecting the regularization procedure. The variance of the number of counts in a regularized channel can be estimated, assuming *a*_{i} ≈ 0.25, a typical value, by

$$\sigma_T^2 = a^2\sigma_1^2 + (1-2a)^2\sigma_2^2 + a^2\sigma_3^2 + 2a(1-2a)\,\rho_{12}\,\sigma_1\sigma_2 + 2a^2\,\rho_{13}\,\sigma_1\sigma_3 + 2a(1-2a)\,\rho_{23}\,\sigma_2\sigma_3,$$

Table III. Covariance (upper triangle, including the main diagonal) and correlation (lower triangle) matrix of the central part of the regularized spectrum.

where σ_{i}, *i* = 1, 2, 3, are the standard deviations of three adjacent channels and ρ_{ij} their correlation coefficients. Since ρ_{12} ≈ ρ_{23} ≈ -ρ_{13}, ρ_{12} ≈ -1, and σ_{1} ≈ σ_{2} ≈ σ_{3} ≈ σ, we have σ_{T}^{2} << σ^{2}. In conclusion, the regularization procedure relies on the strong negative covariances between adjacent channels for its success. Table I shows the results of fitting a Gaussian peak superimposed on a uniform background to the data of Fig. 3, using the LSM and taking into account the whole covariance matrix of **S**_{T}. The reduced χ^{2} is 1.019 with 195 degrees of freedom, corresponding to a 34% confidence level.
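This limiting case can be checked numerically. With equal standard deviations, ρ_{12} = ρ_{23} = -1, ρ_{13} = +1, and weights (a, 1-2a, a), the variance of the combined channel works out to σ²(4a - 1)², which vanishes exactly at a = 0.25:

```python
import numpy as np

a, sigma = 0.25, 1.0
w = np.array([a, 1.0 - 2.0 * a, a])      # regularization weights for one row of T
rho = np.array([[ 1.0, -1.0,  1.0],      # idealized correlations of three
                [-1.0,  1.0, -1.0],      # adjacent deconvolved channels
                [ 1.0, -1.0,  1.0]])
V3 = sigma**2 * rho                      # their covariance matrix
var_T = float(w @ V3 @ w)                # variance of the regularized channel
```

In practice the correlations are near but not exactly ±1, so the variance is strongly reduced rather than zero.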

Figure 3. Regularized spectrum and the fitted curve.

**V Application example: cross section of ^{34}S**

The ^{34}S (γ, *n*) yield was measured by Assafiri *et al.* [17] from 10.4 MeV to 29.4 MeV in 100 keV intervals. Determination of the cross section from yield data is an inverse problem like that given by eq. (1), where *Y*_{i} are the yield data, *S*(*x*′) is the unknown cross section, and *R*_{i}(*x*′) is the bremsstrahlung spectrum. Some procedures have been used to determine the cross section [7, 8] from yield data; however, those procedures do not consider the whole covariance matrices and, as a consequence, statistical tests cannot be applied. Here we apply the method of section III to the analysis of the ^{34}S (γ, *n*) cross section.

Fig. 4 shows the 182 experimental yield data points taken from ref. [17]. The analysis of the yield data was performed in two steps. First, the best dimension of **R** and the number of regularizations were searched for by inspection, such that the shape of the deconvolved and regularized spectrum became well defined and the fitted cross section passed a quantitative acceptance test. A deconvolution using a 182 × 90 response matrix, followed by a threefold regularization, was performed. If a more compressed response matrix *n* × *m*, with *m* < 90, or a stronger regularization were used, some narrow structures would be lost. Conversely, if a less compressed response matrix or a weaker regularization were adopted, the large fluctuations of the obtained spectrum would hide the cross-section structure. Fig. 5 shows the obtained spectrum. Four Lorentz curves,

$$\sigma(E) = \sum_{k=1}^{4} \frac{\sigma_k}{1 + \left(E^2 - E_k^2\right)^2 / \left(E^2\, \Gamma_k^2\right)},$$

were fitted to the spectrum, and the results are shown in the last three columns of Table IV. The obtained *P*(χ^{2} > χ^{2}_{obs}), about 9%, validates the regularization procedure.

Figure 4. Experimental yield of the ^{34}S (γ, n) reaction.

Figure 5. Deconvolved (182 × 90 response matrix) and regularized ^{34}S (γ, n) cross section, and the fitted curve.

Table IV. Result of the fit of four Lorentzians to the experimental ^{34}S (γ, n) cross section.

Fig. 6 shows the same cross section obtained by Assafiri *et al.* [17] using the Penfold-Leiss [8] deconvolution method. As can be seen, Figs. 5 and 6 show the same features: a splitting of the giant dipole resonance into two components, near 17 MeV and 21 MeV, with two small peaks at 13 MeV and 15 MeV. The main difference between the data in Figs. 5 and 6 is that the data in Fig. 5 have a known covariance matrix, so a model (four Lorentz curves) can be fitted and a chi-square test can be performed.

Figure 6. ^{34}S (γ, n) cross section obtained by Assafiri *et al.* [17] with the Penfold-Leiss method.

However, as stated above, regularization smooths the spectrum, and some narrow structures can be lost. In order to examine this possibility, the yield data were unfolded once more with a large (182 × 182) response matrix, without regularization. The obtained spectrum is shown in Fig. 7. Four Lorentz curves were also fitted to this spectrum, and the results are shown in columns 1 to 3 of Table IV. The results obtained, as well as the chi-square value (corresponding to a confidence level of 16%), show that the hypothesis of four Lorentz curves agrees with the data even without regularization.

Figure 7. Deconvolved (182 × 182 response matrix) ^{34}S (γ, n) cross section, without regularization.

In conclusion, the cross section of the ^{34}S (γ, *n*) reaction can be explained by four Lorentz peaks; no more than four peaks are required to describe the data.

**VI Conclusion**

We developed a deconvolution-with-regularization procedure based on the LSM, taking advantage of the linearity of the convolution equation and of the optimum properties of the least-squares estimator. The regularization procedure is linear and can be represented by a rectangular band matrix. The covariance matrix of the regularized spectrum can be calculated by a closed formula, making statistical hypothesis tests on the obtained spectrum possible.

The oscillatory pattern and the artifacts of deconvolved and regularized spectra were understood and explained. They follow from the structure of the covariance matrix of the deconvolved spectrum.

Finally, it is worth emphasizing that, whenever the functional form of the signal is known, a straightforward data-fitting procedure should be preferred, because it avoids the artifacts that are inevitable in the deconvolution-with-regularization procedure.

**Acknowledgments**

We acknowledge Dr. P. Gouffon for a critical reading of the manuscript, and Dr. M. N. Martins for comments and suggestions. This work was partially supported by CNPq and FAPESP.

**References **

[1] Cs. Süsköd, W. Galster, I. Licot and M. P. Simonart, Nucl. Instr. and Meth. A **355**, 552 (1995).

[2] J. Pulpán and M. Králík, Nucl. Instr. and Meth. A **325**, 314 (1993).

[3] W. R. Burrus and V. V. Verbisnki, Nucl. Instr. and Meth. **67**, 181 (1969).

[4] A. A. Marchetti and A. C. Mignerey, Nucl. Instr. and Meth. A **324**, 288 (1993).

[5] Thomas M. Semkow, Appl. Radiat. Isot. **46**, 341 (1995).

[6] J. Dryzek and C. A. Quartes, Nucl. Instr. and Meth. A **378**, 337 (1996).

[7] B. C. Cook, Nucl. Instr. and Meth. **24**, 256 (1963).

[8] A. S. Penfold and J. E. Leiss, Phys. Rev. **114** (1959).

[9] L. Hoffmann, A. Shukla, M. Peter, B. Barbiellini and A. A. Manuel, Nucl. Instr. and Meth. A **335**, 276 (1993).

[10] G. E. Coote and Betty P. Kwan, Nucl. Instr. and Meth. B **104**, 228 (1995).

[11] Per Christian Hansen, Inverse Problems **8**, 849 (1992).

[12] V. B. Anikeyev and V. P. Zhigunov, Phys. Part. Nucl. **24**, 424 (1993).

[13] F. M. Ramos, H. F. C. Velho, J. C. C. Carvalho and N. J. Ferreira, Inverse Problems **15**, 1139 (1999).

[14] V. B. Anikeev, A. A. Spiridonov and V. P. Zhigunov, Nucl. Instr. and Meth. A **303**, 350 (1991).

[15] N. D. Gagunashvili, Nucl. Instr. and Meth. A **343**, 606 (1994).

[16] M. G. Kendall and A. Stuart, *The Advanced Theory of Statistics* (Charles Griffin and Co Ltd., London, 1967), Vol. 2.

[17] Y. I. Assafiri, G. F. Egan and M. N. Thompson, Nucl. Phys. A **413**, 416 (1984).