Revista Brasileira de Geofísica

Print version ISSN 0102-261X

Rev. Bras. Geof. vol.29 no.2 São Paulo Apr./June 2011 

Seismic ray tomography using L1 integral norm



Vânia G. de Brito dos Santos (I); Wilson M. Figueiró (II)

(I) Universidade Estadual de Feira de Santana, Departamento de Ciências Exatas, módulo V, Km 3, BR-116, Campus Universitário, s/n, Caixa Postal 252294, 44031-460 Feira de Santana, BA, Brazil. Phone: +55 (75) 3161-8086
(II) Universidade Federal da Bahia, CPGG-UFBA, Centro de Pesquisas em Geofísica e Geologia da Universidade Federal da Bahia, Instituto de Geociências, Rua Barão de Jeremoabo, s/n, 40170-290, Federação, Salvador, BA, Brazil. Phone: +55 (71) 3283-8520/8551; Fax: +55 (71) 3283-8501/3235-2423




Abstract

Seismic ray tomography methods are usually associated with substantial computer processing time. The reason for this is that, at each step of the iterative inversion process defined by the tomographic method, the two-point ray tracing problem must be solved for each source-receiver pair. In order to resolve this, the Euclidean norm (L2 vector norm), commonly used in the error functions to be minimized in inversion procedures, is substituted by an L1 integral norm, which enables the estimation of model parameters by minimizing the area between observed and calculated traveltime curves that are interpolated (or adjusted) to the data points. Relatively simple mathematical developments and numerical experiments with two-dimensional compressional seismic wave velocity field models show that the L1 integral norm saves an enormous amount of processing time with no significant loss of accuracy. Occasionally, parameters of the model can be better estimated using the L1 integral norm than the L2 vector norm that is traditionally utilized in seismic tomographic inversion.

Keywords: seismic ray, tomography, polynomial parameterization, seismic velocity field, two-point ray tracing problem, L2 vector norm, L1 integral norm.






Introduction

The concept of distance between observed and calculated data plays a central role in inverse geophysical problems, where it can be expressed in terms of norms. Over the years, several works have attempted to improve norm-based objective functions, with gains in robustness, regularization, stability, accuracy, and processing time. Claerbout & Muir (1973) suggested that some norms are more appropriate than others depending on the application; for example, the L1-norm is more robust than the L2-norm. Scales & Gersztenkorn (1988) presented an alternative to the regularization of inverse problems based on the L1-norm (least-absolute deviation) instead of damped least squares. Bube & Langan (1997) applied iteratively reweighted least squares to a hybrid L1/L2 minimization problem that handles data with outliers better than the L2 algorithm, at a small additional computational cost. A combination of the L2-norm and the L1-norm is realized by the so-called Huber norm, which gives more robust model parameter estimation than the L2-norm while retaining better smoothness than the L1-norm for small residuals (Guitton & Symes, 2003). When wavelets are used to represent models, L1-norm regularization of the wavelet decomposition outperforms L2 regularization in representing a model solution of a linearized seismic tomographic problem (Loris et al., 2007). Different norms have also been compared and combined with nonlinear regularization techniques in order to discuss aspects of seismic tomography such as ill-conditioned matrices, model reconstruction, and time consumption (Loris et al., 2010).

Amongst other issues, seismic ray tomography faces the problem of large amounts of processing time being spent estimating model parameters. The traditional objective functions (Bishop et al., 1985) that use the L2 vector norm need a traveltime calculated at the same receiver where a seismic ray arrives and where a real traveltime is observed. This means that, for each source-receiver pair, the two-point ray tracing problem must be solved. The problem is accentuated because tomographic work calls for good coverage of the velocity field model, and therefore a large number of source-receiver pairs connected by seismic ray trajectories, along which traveltimes are calculated. In addition, even for relatively simple two-dimensional models, two-point ray tracing is an iterative process that can consume considerable computer time. Given that seismic tomography is itself an iterative method, the procedure of connecting source and receiver positions (for each of the many source-receiver pairs) must be repeated at every iteration until convergence. In order to mitigate this problem, we propose substituting the traditional L2 vector norm with an alternative L1 integral norm that does not demand connecting positions by ray trajectories and, consequently, saves a significant amount of processing time.

Seismic Ray Tracing and Traveltime Calculation

If V(x, z) is a two-dimensional velocity field model, T represents traveltime, and ds is an arc length element; then to find the trajectory C that minimizes the functional

T = ∫C ds / V(x, z)    (1)

requires that the following system of equations be solved (Červený, 1987):

dx/dτ = p(τ),    dp/dτ = ∇(1/(2V²)),    (2)

where τ is a ray parameter, x(τ) is the vector position of a ray trajectory point at τ, and p(τ) is the slowness vector, tangent to the ray trajectory at τ. Numerically, rays are traced by means of the following recursive system,

xk+1 = xk + Δτ · pk,    pk+1 = pk + Δτ · ∇(1/(2V²))(xk),    (3)

obtained by a Taylor expansion of Equation (2). In this work, the two-point ray tracing problem is solved by the Heun method (Santos & Figueiró, 2006). The traveltime along the ray trajectory is calculated numerically by the following expression:

T = Σi ΔTi,  with  ΔTi = ||xi+1 − xi||2 / Vi,    (4)

where the calculation of the traveltime element ΔTi at each knot of the polygonal line, which represents the ray trajectory, is achieved concomitantly with its tracing; Vi is the wave velocity at the point xi = (xi, zi); N + 1 is the total number of small straight segments that constitute the ray trajectory up to xN+1; and || · ||2 is the Euclidean norm. At the end of each step in the process of ray trajectory construction, the slowness vector p must be updated in order to satisfy the eikonal equation in the following way:

||p||2 = 1 / V(x, z),    (5)

where p = (p1, p2), with p1 = ||p||2 · cos(θ), p2 = ||p||2 · sin(θ), and θ is the departure angle, meaning the angle between p(0) and the positive orientation of the X axis (see Figure 1). In other words, the direction of p is preserved, but its magnitude is altered in order to satisfy Equation (5). The initial conditions necessary to solve the ray tracing equations are: the source position s = (x = 0, z = 0) = 0 and the slowness vector p(0), whose direction is given by (cos(θ), sin(θ)).
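The stepping scheme described above can be sketched in stdlib Python. The linear velocity field, the step size, and the stopping rules below are illustrative assumptions (the paper's own experiments were run in MATLAB); only the structure, Heun's predictor-corrector step on the ray system, the traveltime accumulation of Equation (4), and the eikonal rescaling of Equation (5), follows the text:

```python
import math

def V(x, z):
    # Hypothetical linear velocity field in the spirit of model V1 (km/s)
    return 2.0 + 0.45 * x + 0.66 * z

def grad_inv_half_V2(x, z, h=1e-6):
    # Gradient of 1/(2 V^2), the right-hand side of dp/dtau, by
    # central finite differences
    f = lambda x, z: 1.0 / (2.0 * V(x, z) ** 2)
    return ((f(x + h, z) - f(x - h, z)) / (2 * h),
            (f(x, z + h) - f(x, z - h)) / (2 * h))

def trace_ray(theta, dtau=0.01, max_steps=20000, z_max=3.0, x_max=9.0):
    """Shoot a ray from the origin with departure angle theta using Heun's
    (predictor-corrector) method on dx/dtau = p, dp/dtau = grad(1/(2V^2)).
    Traveltime is accumulated segment by segment as |dx|/V, as in Eq. (4).
    Returns the polygonal ray path and the total traveltime."""
    x, z = 0.0, 0.0
    v = V(x, z)
    px, pz = math.cos(theta) / v, math.sin(theta) / v  # eikonal: |p| = 1/V
    path, T = [(x, z)], 0.0
    for _ in range(max_steps):
        gx1, gz1 = grad_inv_half_V2(x, z)
        # predictor (Euler) step
        xe, ze = x + dtau * px, z + dtau * pz
        pxe, pze = px + dtau * gx1, pz + dtau * gz1
        gx2, gz2 = grad_inv_half_V2(xe, ze)
        # corrector: average the slopes at both ends of the step
        xn = x + 0.5 * dtau * (px + pxe)
        zn = z + 0.5 * dtau * (pz + pze)
        px += 0.5 * dtau * (gx1 + gx2)
        pz += 0.5 * dtau * (gz1 + gz2)
        # rescale p so |p| = 1/V holds exactly at the new point (Eq. (5))
        s = 1.0 / (V(xn, zn) * math.hypot(px, pz))
        px *= s; pz *= s
        T += math.hypot(xn - x, zn - z) / V(x, z)  # traveltime element
        x, z = xn, zn
        path.append((x, z))
        if not (0.0 <= z <= z_max and 0.0 <= x <= x_max):
            break
    return path, T
```

Rescaling p at every knot, rather than only at the end, keeps the numerically traced ray consistent with the eikonal equation throughout the trajectory.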



In the iterative process performed to solve the two-point ray tracing problem that connects source to receiver, the bisection method is used to determine the departure angle θ of the ray trajectory. Figure 2 shows a ray network obtained by employing numerical methods such as those presented here for the seismic velocity target model V2, used for inversions in this work.
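A minimal sketch of that bisection search follows. To keep the example self-contained, a closed-form arrival offset for a constant-gradient medium (where rays are circular arcs) stands in for the full numerical ray tracer; the medium constants and function names are our own assumptions, not the paper's implementation:

```python
import math

V0, G = 2.0, 0.66   # assumed constant-gradient medium: V(z) = V0 + G*z

def arrival_offset(i0):
    """Surface offset where a ray shot at angle i0 (measured from the
    vertical) re-emerges. In a constant-gradient medium the ray is a
    circular arc and the offset is (2*V0/G) * cot(i0); this closed form
    replaces a full ray tracer purely for illustration."""
    return (2.0 * V0 / G) / math.tan(i0)

def shoot_to(receiver_x, lo=1e-4, hi=math.pi / 2 - 1e-4, tol=1e-12):
    """Bisection on the departure angle. arrival_offset is monotonically
    decreasing on (0, pi/2), so the bracket [lo, hi] can be halved until
    the ray lands on the receiver."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if arrival_offset(mid) > receiver_x:
            lo = mid   # landed beyond the receiver: flatten the take-off
        else:
            hi = mid   # landed short: steepen the take-off
    return 0.5 * (lo + hi)
```

In the tomographic setting, `arrival_offset` would be replaced by a call to the numerical ray tracer, which is exactly why each source-receiver connection is expensive: every bisection step re-traces a ray.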



Gauss-Newton Method

The solution of an inverse problem begins with a set of observed data (real or synthetic) and can be divided into three steps: model parameterization, direct modeling, and inverse modeling (Figueiró, 1994). The latter consists of determining (or estimating) the model parameters from the aforementioned data set. The local optimization method called Gauss-Newton is an iterative procedure that enables us to solve a non-linear problem by means of successive linearizations. In fact, the problem is solved step by step by repeatedly applying the least squares method.

In the modeling process, the input of the system is described by the model vector m, whose components are the model parameters. As the problem is linearized, the output of forward modeling can be expressed by Gm, where G is a matrix describing the wave propagation process, which depends on the geometrical structure of the problem. The model m can be written as:

m = (m1, m2, m3, ..., mm)ᵀ.    (6)

If the vector d = (d1, d2, d3, ..., dn)ᵀ contains the observed data (synthetic for the purposes of this work, but possibly real, if available), then in order to solve the inverse problem we need to find the model m that minimizes the following residual error function:

e(m) = ||d − Gm||2.    (7)

A candidate for a minimum of Equation (7) is a least squares solution for the objective function Φ(m), written as follows:

Φ(m) = (d − Gm)ᵀ(d − Gm).    (8)

The so-called least squares solution of Equation (8) is provided by (Menke, 1989):

m = (GᵀG)⁻¹Gᵀd.    (9)

The residual error function is given by

Δd(m) = dobs − dcalc(m),    (10)

where dobs and dcalc(m) are, respectively, the observed data and the data calculated for the model m. If Δd is nonlinear, it can be linearized using its first-order Taylor series expansion, which produces:

Δdap = Δd(m0) − S(m0)Δm,    (11)

where Δdap is the linear approximation of Δd, m0 is a reference model (not necessarily an initial one, but more appropriately a current model), Δm = m − m0 is a model perturbation, and S(mk) is the sensitivity matrix calculated at the current model mk. The ij-th entry of S(mk) is given by:

Sij(mk) = ∂(dcalc)i / ∂mj, evaluated at m = mk.    (12)

The solution of Equation (11) is then:

Δm = [S(m0)ᵀS(m0)]⁻¹S(m0)ᵀΔd(m0).    (13)

This enables the following iterative procedure:

mk+1 = mk + [S(mk)ᵀS(mk)]⁻¹S(mk)ᵀΔd(mk).    (14)

This characterizes the iterative inverse process employing the L2 vector norm.
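The Gauss-Newton update described above can be sketched with a toy scalar forward model standing in for the traveltime computation. The model d_i = m0 · exp(m1 · x_i), the finite-difference sensitivity matrix, and all numbers are illustrative assumptions, chosen only so the two-parameter normal equations can be solved by hand:

```python
import math

def forward(m, xs):
    # Toy forward model standing in for traveltime computation:
    # d_i = m[0] * exp(m[1] * x_i). Purely illustrative.
    return [m[0] * math.exp(m[1] * x) for x in xs]

def gauss_newton(d_obs, xs, m0, iters=20, h=1e-6):
    """Gauss-Newton: linearize, build the sensitivity matrix S by finite
    differences, and solve the 2x2 normal equations (S^T S) dm = S^T dd."""
    m = list(m0)
    for _ in range(iters):
        d = forward(m, xs)
        dd = [o - c for o, c in zip(d_obs, d)]       # data residual
        # sensitivity matrix: S[j][i] = d d_i / d m_j (finite differences)
        S = []
        for j in range(2):
            mp = list(m); mp[j] += h
            dp = forward(mp, xs)
            S.append([(a - b) / h for a, b in zip(dp, d)])
        # 2x2 normal equations solved by Cramer's rule
        a11 = sum(s * s for s in S[0])
        a22 = sum(s * s for s in S[1])
        a12 = sum(s * t for s, t in zip(S[0], S[1]))
        b1 = sum(s * r for s, r in zip(S[0], dd))
        b2 = sum(s * r for s, r in zip(S[1], dd))
        det = a11 * a22 - a12 * a12
        m[0] += (b1 * a22 - b2 * a12) / det
        m[1] += (a11 * b2 - a12 * b1) / det
    return m
```

For the tomographic problem each residual evaluation hides a two-point ray tracing, which is precisely the cost the L1 integral norm of the next section avoids.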

Iterative Process defined by the L1 Integral Norm

Let mt = (µ1, µ2, µ3, ..., µm)ᵀ be a target model, where m is its number of parameters. All the parameters of mt are fixed and unknown. For mt we have n pairs of data (xi, ti), where ti is the synthetic observed traveltime for the receiver position xi.

Then, we can construct a polynomial function,

tobs(x) = a0 + a1x + a2x² + ... + apx^p,    (15)

by means of cubic spline interpolation (Santos, 2009) using the pairs (xi, ti), i = 1, 2, ..., 18, xi ∈ I = [0, xmax], obtained by the ray-tracing method. The SPLINE((xi, ti), x) MATLAB routine was used to generate a piecewise third-degree polynomial S3(x), i.e., a cubic spline on the interval I. After that, S3(x) is sampled at the points xk = k · 10⁻², k = 0, 1, 2, ..., 900, in order to use the POLYFIT(xk, S3(xk), p) MATLAB routine, producing a polynomial of degree p = 8 by adjustment to the data (xk, S3(xk)). This choice was motivated by the need for enough degrees of freedom without excessive oscillation, in agreement with the behavior of the synthetic observed data.
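The adjustment step can be sketched in stdlib Python as a stand-in for MATLAB's POLYFIT: a least-squares polynomial fit via the normal equations, solved by Gaussian elimination. This is only an illustration of the idea; for degrees as high as 8 the normal equations become badly conditioned, and production code should use an orthogonalized least-squares solver instead:

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations
    (A^T A) c = A^T y, where A is the Vandermonde matrix A[i][k] = x_i^k.
    Returns coefficients c with c[k] multiplying x^k."""
    n = deg + 1
    ata = [[sum(x ** (j + k) for x in xs) for k in range(n)]
           for j in range(n)]
    aty = [sum(y * x ** j for x, y in zip(xs, ys)) for j in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    # back substitution
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = aty[r] - sum(ata[r][c] * coef[c] for c in range(r + 1, n))
        coef[r] = s / ata[r][r]
    return coef
```

Fitting the dense spline samples (xk, S3(xk)) with such a routine yields the degree-8 coefficients a0, ..., a8 of Equation (15).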

If m = (η1, η2, η3, ..., ηm)ᵀ represents a general (or current) model that can vary freely and produces n pairs (x̃i, t̃i) of calculated traveltimes t̃i at the arrival positions x̃i of seismic rays, then we can obtain (applying the same procedure used to obtain Equation (15)) a polynomial function

tcalc[m](x) = b0 + b1x + b2x² + ... + bqx^q,    (16)

where [m] is to remind us that the calculated traveltime depends on m. These two kinds of data pairs (observed and calculated) and the two functions tobs(x) and tcalc[m](x) are represented in Figure 3.



The problem consists in finding a model m that minimizes the area function

A(m) = ∫ from 0 to xmax of | tobs(x) − tcalc[m](x) | dx,    (17)

where A(m) is a non-negative scalar functional defined on Rᵐ. It is supposed that the model mk+1 is produced by a perturbation Δmk, i.e., mk+1 = mk + Δmk. Then, A(mk+1) = A(mk + Δmk). Using a Taylor expansion of A(mk+1), we have:

A(mk + Δmk) = A(mk) + gkᵀΔmk + (1/2) ΔmkᵀH(mk)Δmk + ...,    (18)

where gk = ∇A(mk) is the gradient of A at mk and H(mk), with entries

Hij(mk) = ∂²A/∂ηi∂ηj (mk),    (19)

is the Hessian matrix.

As we expect A(mk+1) to assume a minimum (in fact, the desired minimum value is zero, as can be seen in (17)), we can set its linear approximation, the first two terms of Equation (18), equal to zero, so that

A(mk) + gkᵀΔmk = 0,  with minimum-norm solution  Δmk = −A(mk) gk / ||gk||2²,    (20)

defines an iterative process. The partial derivative ∂A/∂ηi (m) is a real number and is calculated numerically by means of the expression:

∂A/∂ηi (m) ≈ [A(η1, ..., ηi + Δηi, ..., ηm) − A(m)] / Δηi,    (21)

where Δηi is a small perturbation of the parameter ηi. It quantifies the variation of the area between the curves tcalc[m](x) and tobs(x) when the model parameter ηi is perturbed.
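The area function and its finite-difference sensitivity can be sketched in stdlib Python; the trapezoid rule below stands in for MATLAB's integration routine, and the function names, sampling density, and perturbation size are our own illustrative choices:

```python
def area_between(t_obs, t_calc, x_max=9.0, n=9000):
    """A(m): the integral over [0, x_max] of |t_obs(x) - t_calc(x)|,
    evaluated by the composite trapezoid rule on n subintervals."""
    h = x_max / n
    vals = [abs(t_obs(i * h) - t_calc(i * h)) for i in range(n + 1)]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

def dA_deta(t_obs, make_t_calc, m, i, d=1e-6):
    """One-sided finite-difference partial derivative of the area with
    respect to the i-th model parameter: perturb eta_i by d, rebuild the
    calculated-traveltime curve, and difference the two areas."""
    mp = list(m); mp[i] += d
    return (area_between(t_obs, make_t_calc(mp))
            - area_between(t_obs, make_t_calc(m))) / d
```

Note that evaluating this derivative requires only re-sampling two polynomial curves, not re-solving any two-point ray tracing problem, which is the source of the processing-time savings reported later.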

Some Special Comments

To obtain ∂A/∂ηi (m), which is a component of the vector gk, it is necessary to calculate A(m). This is done by means of the ABS and INT MATLAB routines. However, it is important to note that an analytical expression for A(m) can also be obtained and, for the sake of illustration, it is derived as follows. Initially, we have ΔT[m](x) = tobs(x) − tcalc[m](x). Without loss of generality, we can consider the polynomials (15) and (16) to be of the same degree, i.e., p = q, and xmax > 0. Let r1, r2, r3, ..., ru (u < p) be the real roots of ΔT[m](x) at which its sign changes and, for the sake of uniformity, let r0 = 0 and ru+1 = xmax (it is possible to choose adjusted polynomials tobs(x) and tcalc[m](x) such that 0 and xmax are not roots of ΔT[m](x)).


This implies:

and finally

It is interesting to observe that

and if the polynomial coefficients of tobs and tcalc[m](x) are obtained satisfying the inequality constraint

we have:

Equations (25) and (27) show that A(m) lies between the L2 vector norm and a weighted L1 vector norm, where the vectors a = (a0, a1, ..., a8) and b = (b0, b1, ..., b8) are composed of the polynomial coefficients of tobs(x) and tcalc[m](x), respectively, with the bi dependent on the model parameters. Therefore, it represents an intermediate norm that has the robustness and the smoothness (near the minimum) lacking in the L2 vector norm (in the case of large residuals) and in the weighted L1 vector norm (in the case of small residuals), respectively. This helps explain why good accuracy was achieved in some experiments where it was not expected. In fact, the L1 integral norm is an L1-norm with well-defined derivatives near the minimum, and it works like a hybrid L1/L2-norm with low computational cost.

As previously mentioned, the formula

can be understood as an L1 vector norm. Despite this, it is not appropriate for improving our understanding of the results. The problem lies in the fact that the vectors a = (a1, a2, ..., ap) and b = (b1, b2, ..., bp) are composed of coefficients of polynomials ("data parameters") and not model parameters, which makes it unsuitable as an objective function to be minimized for estimating the parameters of the model m. Moreover, this formula cannot be used to calculate A(m), because, in general,

seeing that some of the terms (ai − bi)xⁱ may be negative, because some bi can be larger than ai, and such a situation implies

Model Parameterization

Below are the three seismic compressional wave velocity field models considered in this work:

V1(x, z) = c0,0 + c1,0 x + c0,1 z,    (28)

V2(x, z) = c0,0 + c1,0 x + c0,1 z + c2,0 x² + c1,1 xz + c0,2 z²,    (29)

V3(x, z) = c0,0 + c1,0 x + c0,1 z + c2,0 x² + c1,1 xz + c0,2 z² + c3,0 x³.    (30)
They are two-dimensional, with a horizontal length of 9.0 km and a depth of 3.0 km. This kind of model representation is able to simulate interfaces by means of strong variations in the velocity field. In order to construct the first target model, the coefficients of Equation (28) are chosen to keep V1(x, z) within a range of velocities defined by the extreme values 2.0 km/s and 8.0 km/s. To achieve this, the coefficient values were taken as: c0,0 = 2.0 km.s-1, c1,0 = 0.45 s-1, and c0,1 = 0.66 s-1. To construct the second target model, with a range of velocities defined by the extreme values 1.0 km/s and 8.0 km/s, the numerical coefficient values used for V2(x, z) are: c0,0 = 1.0 km.s-1, c1,0 = 0.098 s-1, c0,1 = 1.246 s-1, c2,0 = 0.0285 km-1.s-1, c1,1 = 0.0015 km-1.s-1, and c0,2 = 0.0035 km-1.s-1. Under the same considerations, the numerical coefficient values employed for Equation (30), V3(x, z), are: c0,0 = 1.0 km.s-1, c1,0 = 0.058 s-1, c0,1 = 1.326 s-1, c2,0 = 0.0195 km-1.s-1, c1,1 = 0.0016 km-1.s-1, c0,2 = 0.0055 km-1.s-1, and c3,0 = 0.0012 km-2.s-1. The graphical images of these three target models (V1, V2 and V3) are shown in Figures 4, 8 and 12, respectively.
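With the stated coefficients, the target fields can be written down directly and their velocity ranges checked numerically. The sketch below (function names and grid density are our own choices) verifies the 1.0-8.0 km/s bounds quoted above for V2:

```python
def V1(x, z):
    # Target model V1: linear field with the paper's coefficients (km/s)
    return 2.0 + 0.45 * x + 0.66 * z

def V2(x, z):
    # Target model V2: quadratic field with the paper's coefficients (km/s)
    return (1.0 + 0.098 * x + 1.246 * z
            + 0.0285 * x ** 2 + 0.0015 * x * z + 0.0035 * z ** 2)

def velocity_range(V, x_max=9.0, z_max=3.0, n=91):
    """Scan the model on an n-by-n grid over [0, x_max] x [0, z_max] and
    report the (min, max) velocity, to check that the coefficients keep
    the field inside the stated bounds."""
    vals = [V(i * x_max / (n - 1), j * z_max / (n - 1))
            for i in range(n) for j in range(n)]
    return min(vals), max(vals)
```

Since all coefficients are positive, the minimum sits at the origin and the maximum at the deep far corner (x, z) = (9.0, 3.0), where V2 evaluates to almost exactly 8.0 km/s.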



















With respect to the application of the proposed method to the resolution of the inverse problem described in this work, it should be said that the geological interpretation of the model is not particularly relevant. Either way, for the above-mentioned coefficients, Equation (28) can be seen to represent a system of sedimentary layers inclined with respect to the horizontal. In an analogous way, expressions (29) and (30) can represent a similar geological situation with curved interfaces.



For all the situations considered, the source position s is kept on the observation surface (z = 0) at xs = 0.0 km. The 18 receiver positions are distributed along a line on the observation surface, obeying the rule Ri = 0.5 × i km, i = 1, 2, 3, ..., 18. A seismic ray network for the model given by Equation (29) is shown in Figure 2. All polynomials interpolated to the traveltime data have degree 8.

For the three models under consideration, the calculated traveltimes are obtained numerically using Equation (4), and several inversions are performed using the L2 vector norm. The inversions were run on a PC with the following configuration: Intel Pentium 4 processor, 3.4 GHz, with 1.0 GB of RAM.

In the case of model V1, each iteration of the inversion process takes approximately 1 hour. As the total number of iterations varies between 5 and 9, the total processing time required for convergence is between 5 and 9 hours. This drops dramatically when the L2 vector norm is replaced by the L1 integral norm: for the same experiment (with all parameters fixed except the norm), the total computation time falls to 3 minutes, distributed over 7 iterations, each taking a matter of seconds.

With respect to model V2, inversions are more unstable and are highly inaccurate in estimating the crossed term c1,1. Using the L2 vector norm and starting from an initial model in which each parameter is perturbed by 30% relative to its counterpart in the target model, the total processing time is 7 hours and 15 minutes, distributed over 7 iterations. Applying the L1 integral norm, however, the same inversion took 20 iterations and a total processing time of 6 minutes. This equates to a 98.85% saving in processing time relative to the inversion using the L2 vector norm.

In the case of model V1, Figure 5 shows the initial model and its difference with respect to the target model (this difference is computed as the least-absolute vector deviation and is approximately 50% of the target model, considering all parameters with the same weight). Employing the L2 vector norm, a difference of 7% is obtained between the inverted and target models. This difference became 5.21% using the L1 integral norm. Graphical views of the inverted models and of their differences with respect to the target model can be seen in Figures 6 and 7, for the L2 vector norm and the L1 integral norm, respectively. Results for this model can also be found in Santos & Figueiró (2007).

Similarly, an inversion experiment is undertaken involving model V2. Figure 9 gives graphical views of the initial model and of the difference between the initial and target models (calculated as the least-absolute vector deviation, approximately 30% of the target model, considering all parameters with the same weight). Using the L2 vector norm, the difference between inverted and target models is 19.08%, which dropped to 6.47% when the L1 integral norm was applied. Graphical views of the inverted models and of their differences with respect to the target model are shown in Figures 10 and 11, for the L2 vector norm and the L1 integral norm, respectively.

In addition, an inversion experiment is carried out with model V3. Figure 13 gives graphical views of the initial model and of the difference between the initial and target models (calculated as the least-absolute vector deviation, approximately 40% of the target model, considering all parameters with the same weight). Using the L2 vector norm, the difference between inverted and target models is 7.23%, which drops to 6.83% when using the L1 integral norm. Graphical views of the inverted models and of their differences with respect to the target model are shown in Figure 14 (using the L2 vector norm) and Figure 15 (using the L1 integral norm).







For this last model, inversions show high inaccuracy in the estimation of the parameters c1,1 and c1,2 using the L2 vector norm. The same occurs with the L1 integral norm for c2,1 and c1,2. For each iteration, the processing time using the L2 vector norm is approximately 2 hours, whereas with the L1 integral norm it was between 12 and 15 seconds. The total time spent was 9 hours and 50 minutes using the L2 vector norm and 5 minutes and 15 seconds using the L1 integral norm.

In the case of the L1 integral norm, the matrix G occasionally becomes singular during the iterative process. To overcome this problem, the regularization method known as Levenberg-Marquardt is applied, using the matrix G + λI in place of G, where λ is a small positive real number and I is the identity matrix. One way of avoiding the numerical artifacts introduced by regularization is to apply the Newton method, in which case the matrix G will often not be singular. Experiments with model V1 were carried out using the same iterative process defined by the Newton method, employing the L1 integral norm. The results obtained also presented better accuracy when compared with a similar experiment using the L2 vector norm, and singular matrices did not appear during the iterative inversion process.
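The damping described above can be sketched for a 2x2 system; the Cramer's-rule solver and the sample values are illustrative only, but the mechanism, adding λ to the diagonal so a singular G becomes invertible, is exactly the one stated in the text:

```python
def solve_damped(G, b, lam=1e-3):
    """Levenberg-Marquardt-style damping: solve (G + lam*I) x = b for a
    2x2 system via Cramer's rule. Any lam > 0 shifts the eigenvalues of a
    symmetric positive semi-definite G away from zero, so the damped
    matrix is invertible even when G itself is singular."""
    a11, a12 = G[0][0] + lam, G[0][1]
    a21, a22 = G[1][0], G[1][1] + lam
    det = a11 * a22 - a12 * a21   # nonzero thanks to the damping
    x0 = (b[0] * a22 - b[1] * a12) / det
    x1 = (a11 * b[1] - a21 * b[0]) / det
    return [x0, x1]
```

With the singular matrix G = [[1, 1], [1, 1]] and b = [1, 1], the undamped system has no unique solution, while the damped one returns a well-defined answer close to the minimum-norm solution [0.5, 0.5].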



Discussion

Some scientists agree that two-point ray tracing is a time-consuming procedure, and even a dangerous one in the vicinity of caustics when using methods like bisection, but disagree that these problems persist when the method is based on the paraxial approximation (Figueiró & Madariaga, 2000). Regarding these reservations, it can be said that the paraxial method is iterative like bisection and, depending on the specific situation, can be as time-consuming as any other method. In addition, the problem with caustics does not disappear when the paraxial method is used. For theoretical, experimental, specific, or intrinsic use, two-point ray tracing may well be a relatively inexpensive procedure; for tomographic use, however, it is undoubtedly an enormous time consumer. At least 70% of the processing time is spent solving the two-point ray tracing problem, if we assume that approximation techniques are not to be abused. It should be stressed that in tomography the two-point ray tracing problem must be solved for each source-receiver pair, and it is then solved many times over across all the iterations of a tomographic process. In any case, we are not proposing the complete abandonment of ray tracing in tomography, as rays are inherent to tomography; we are simply proposing a better use of rays. Likewise, this work is not a manifesto against ray tracing: we are comparing two kinds of norms, and that is the essence of the work. If an integral norm is used, ray tracing must be adapted to be conveniently used by such a norm. If a more efficient ray tracing procedure is used, the inversion will be faster.

The models utilized cannot be considered unrealistic, for the following reasons: they cover a large range of seismic velocities, from 1.0 to 8.0 km/s (which, for practical purposes, is nearly the full range of seismic velocities found in real situations); the polynomial parameterization creates an infinite number of possibilities for representing the heterogeneities of seismic velocity fields found in a huge variety of real geological situations; our models are far from the set of models considered simple; they have strong horizontal and vertical variations (at all points); every continuous function defined on a compact set is the limit of a sequence of polynomial functions (the Stone-Weierstrass theorem; see Bartle (1983)); and field discontinuities can be treated as strong variations, in order to preserve continuity.

When using the integral norm, we benefit from a property of polynomial functions: polynomials can be easily integrated. Figure 3 has a central role, because it expresses the essence of the work and shows the area that we want to minimize. However, polynomials are used to represent both data and models and, in the latter case, their oscillatory character can cause problems during ray tracing; for this reason we tend to avoid polynomials of very high degree. In addition, as the coefficients of polynomials can vary freely, such a parameterization can represent a very large family of models: in fact, an infinite set of models. Oscillations of polynomials can be controlled, since we can impose a small value on their derivatives. Polynomials can represent realistic, though not real, oscillations.

For the sake of simplicity, and out of a desire to isolate the experiment from other potential problems, discontinuities in the synthetic observed traveltime data are avoided. However, if the presence of this kind of problem is required for a more general experiment, the calculated data must also be allowed to contain discontinuities, in order to provide appropriate conditions for comparison and adjustment.



Conclusions

The use of the L1 integral norm, instead of the L2 vector norm, in seismic ray tomography procedures produces an impressive saving in processing time without any significant loss of accuracy (in some cases accuracy actually improves). This saving occurs because, when the L1 integral norm is used, it is not necessary to solve the two-point ray tracing problem many times in every iteration of the inversion process. The parameterization employed here is polynomial, but we believe that this saving in processing time is independent of the parameterization used. Episodic instabilities occur with the L1 integral norm, but these are not especially problematic, unlike in some cases that arise with a different norm. For conceptual reasons, and because of the strength of the results, we believe this approach will also work well when applied to more complicated and realistic models. In addition, integrals and, more importantly, areas tend to be more stable (possibly due to their global character) than vectors and points, which have only local properties.



Acknowledgments

The authors would like to express their gratitude to the State University of Feira de Santana (UEFS) and the Federal University of Bahia (UFBA), Bahia, Brazil, and to the educational and scientific agency CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) for providing the necessary conditions for the development of this work.



References

BARTLE RG. 1983. Elementos de Análise Real. Editora Campus Ltda. 429 pp.

BISHOP TN, BUBE KP, CUTLER RT, LANGAN RT, LOVE PL, RESNICK JR, SHUEY RT, SPINDLER DA & WYLD HW. 1985. Tomographic determination of velocity and depth in laterally varying media. Geophysics, 50: 903-923.

BUBE KP & LANGAN RT. 1997. Hybrid L1/L2 minimization with applications to tomography. Geophysics, 62: 1183-1195.

ČERVENÝ V. 1987. Ray Methods for Three-Dimensional Seismic Modeling. Lecture notes, Continuing Education Course, University of Trondheim, NTH and Mobil Exploration Norway Inc., Trondheim, 830 pp.

CLAERBOUT JF & MUIR F. 1973. Robust modeling with erratic data. Geophysics, 38: 826-844.

FIGUEIRÓ WM. 1994. Tomografia de Reflexão no caso de Refletor Curvo. Doctoral thesis, PPPG-UFBA, Salvador. 150 pp.

FIGUEIRÓ WM & MADARIAGA RI. 2000. A method to avoid arrival caustic points. In: 2000 Technical Program Expanded Abstracts, SEG 70th Annual Meeting, Calgary, Canada. CD-ROM.

GUITTON A & SYMES WW. 2003. Robust inversion of seismic data using the Huber norm. Geophysics, 68: 1310-1319.

LORIS I, NOLET G, DAUBECHIES I & DAHLEN FA. 2007. Tomographic inversion using L1-norm regularization of wavelet coefficients. Geophys. J. Int., 170: 359-370.

LORIS I, DOUMA H, NOLET G, DAUBECHIES I & REGONE C. 2010. Nonlinear regularization techniques for seismic tomography. J. Comp. Phys., 229: 890-905.

MENKE W. 1989. Geophysical Data Analysis: Discrete Inverse Theory. Academic Press, New York, USA. 289 pp.

SANTOS VGB. 2009. Inversão Sísmica Tomográfica usando Norma de Integral de Função. Doctoral thesis, CPGG-UFBA, Salvador. 91 pp.

SANTOS VGB & FIGUEIRÓ WM. 2006. Inversão sísmica tomográfica usando traçado analítico de raios. In: II Simpósio Brasileiro de Geofísica, Natal. CD-ROM.

SANTOS VGB & FIGUEIRÓ WM. 2007. Seismic ray reflection tomography using integral function norm. In: 2007 Technical Program Expanded Abstracts, SEG 77th Annual Meeting, San Antonio, Texas, USA. CD-ROM.

SCALES JA & GERSZTENKORN A. 1988. Robust methods in inverse theory. Inverse Problems, 4: 1071-1091.



Received on December 23, 2008 / Accepted on June 28, 2011




Vânia Gonçalves de Brito dos Santos graduated (1992) in Mathematics from the Catholic University of Salvador. She received an M.Sc. (1999) in Mathematics (differential geometry) and a Ph.D. (2009) in Geophysics (seismic tomography), both from the Federal University of Bahia. She has been a Mathematics Professor at the State University of Feira de Santana since 2001. Her research interests are geophysics and geometry.

Wilson Mouzer Figueiró received a B.Sc. (1983) in Mathematics from the Federal University of Rio de Janeiro (UFRJ). Since 1992, he has been a Geophysics Professor at the Geophysics Department of the Geoscience Institute at the Federal University of Bahia (UFBA), where he teaches mathematical geophysics. He obtained his Ph.D. in Geophysics (1994) from UFBA. From 1997 to 1999, he carried out post-doctoral studies at the Earth-Atmosphere-Ocean Department of the École Normale Supérieure of Paris (DTAO-ENS), France. He is currently a researcher at the Geophysics and Geology Research Center (CPGG) of UFBA. His main areas of interest are geophysical inversion and mathematics applied to geophysics.
