Treatment of geophysical data as a non-stationary process

Marcus P.C. Rocha; Lourenildo W.B. Leite

Department of Mathematics, Department of Geophysics, Graduate Course in Geophysics, Federal University of Pará, Brazil

E-mail: mrocha@ufpa.br / lwbleite@ufpa.br

ABSTRACT

The Kalman-Bucy method is here analyzed and applied to the solution of a specific filtering problem to increase the signal message/noise ratio. The method is a time-domain treatment of a geophysical process classified as stochastic non-stationary. The derivation of the estimator is based on the relationship between the Kalman-Bucy and Wiener approaches for linear systems. In the present work we emphasize the criterion used, the model with a priori information, the algorithm, and the quality of the results. The examples are for the ideal well-log response, and the results indicate that this method can be used in a variety of geophysical data treatments; its study clearly offers proper insight into the modeling and processing of geophysical problems.

Mathematical subject classification: 49N45, 54H20, 74H50, 93C05.

Key words: stochastic process, Kalman-Bucy filter, deconvolution, state space.

1 Introduction

A seismic signal represents the transient response of the Earth to excitation due to natural phenomena, such as earthquakes, or to artificial sources as used in geophysical exploration, and it results in non-stationary signals in noise. The aim of seismic processing is to improve the conditions for interpretation of the recorded data. A detailed representation of a seismic signal requires a rather complicated formulation, and the processing uses a group of techniques based on the stochastic properties of the model.

Two types of mathematical methods can be used to represent a seismic signal: deterministic and stochastic. The deterministic method consists of using physical theories of wave propagation, involving solutions of integral and differential equations satisfying boundary and initial conditions. The stochastic method takes the statistical description of time series, expressing the dynamic laws as statistical facts.

In this work we apply the Kalman-Bucy method to treat geophysical data. For this, we analyze the a priori conditions necessary to transform the governing integral equation of the first kind into linear and non-linear ordinary differential equations using the state-space technique. We reinforce the understanding of the limitations of the procedure for the separation of the signal message from noise in the time domain, and we make use of the property that geophysical data represent realizations of strongly white, non-stationary stochastic processes.

The importance of the Kalman method stems both from its conceptual role in the formulation of the solution of a fundamental geophysical problem, and from its versatility and applicability as an adaptive data-processing tool. Some basic geophysical references followed in the present topic are Robinson (1999), Mendel (1990, 1983), Crump (1974), Bayless and Brigham (1970) and Van Trees (1968). Engineering has a heavy machinery available on this subject for real-time applications; to name a few, we count Kailath (1981), Chui and Chen (1987), Candy (1987) and Brown and Hwang (1996).

2 The Wiener-Kolmogorov problem

For motivation, consider the classical optimum time-invariant operator obtained by using the criterion of minimum error variance between the actual output, x̂(t), and the desired output, x(t). The model is expressed by

z(t) = x(t) + v(t),   (1)

where z(t) is the measurement, and v(t) is the additive noise. The filter operation is described by the convolution integral

x̂(t) = ∫ h(t − s) z(s) ds,   (2)

where h(t) is the unknown time-invariant operator that is constrained to satisfy what is commonly referred to as the Wiener-Hopf integral equation

fxz(t) = ∫₀^∞ h(s) fzz(t − s) ds,   t ≥ 0,   (3)

where fxz(t) and fzz(t) are, respectively, the theoretical stochastic crosscorrelation and autocorrelation functions, admitted as known. In this basic formulation, x(t) and v(t) are stationary random processes, and together with z(t), h(t), fxz(t) and fzz(t) they are real functions, continuous, time unbound, and convergent.

To specify the optimum (not necessarily the best) operator, it is necessary to solve integral equation (3). This situation becomes more difficult as the complexity of the problem increases, and when the formulation does not strictly satisfy the characteristics of the geophysical problem at hand, whose description is non-stationary.
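For intuition, the discrete analogue of these stationary normal equations can be solved directly. The sketch below is ours (Python, with an illustrative square-wave message and noise level; the names and parameters are not from the paper): it estimates sample correlations and solves the resulting Toeplitz system for a causal operator h.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = np.where((np.arange(n) // 250) % 2 == 0, 1.0, -1.0)  # square-wave message x(t)
v = 0.5 * rng.standard_normal(n)                          # additive white noise v(t)
z = x + v                                                 # measurement z(t) = x(t) + v(t)

L = 21  # operator length
# biased sample correlations fzz(k) and fxz(k) for lags k = 0..L-1
fzz = np.array([np.dot(z[: n - k], z[k:]) / n for k in range(L)])
fxz = np.array([np.dot(x[k:], z[: n - k]) / n for k in range(L)])
# discrete Wiener-Hopf normal equations: sum_j h_j * fzz(|k - j|) = fxz(k)
R = np.array([[fzz[abs(i - j)] for j in range(L)] for i in range(L)])
h = np.linalg.solve(R, fxz)
x_hat = np.convolve(z, h)[:n]  # filtered (estimated) output

mse_raw = np.mean((z - x) ** 2)       # error variance before filtering
mse_filt = np.mean((x_hat - x) ** 2)  # error variance after filtering
```

On this controlled example the operator reduces the error variance well below that of the raw measurement, which is the behavior the minimum-variance criterion demands.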

The present problem, with respect to the non-stationarity and to the data window, does not satisfy the principles underlying the convolution integral. For this reason the equation is rewritten in the form of a moving average, according to what is commonly referred to as the Wiener-Kolmogorov problem. The generalization corresponds to an integral of the Booton type, and it is expressed by the matrix integral equation

fxz(t, s) = ∫ h(t, τ) fzz(τ, s) dτ   (τ from t₀ to t; t₀ ≤ s ≤ t),   (4)

for

x̂(t) = ∫ h(t, τ) z(τ) dτ   (τ from t₀ to t),   (5)

where x̂(t) is the estimated actual output, and h(t, τ) is the corresponding desired optimum time-variant operator. Again, the criterion used is the minimization of the residue covariance expressed as

min E{[x(t) − x̂(t)] [x(t) − x̂(t)]ᵀ},

which results in the normal equations between the deviations and the observations,

E{[x(t) − x̂(t)] zᵀ(s)} = 0,   t₀ ≤ s ≤ t.

Equation (4) carries the inherent difficulties of an integral equation of the first kind. On the other hand, it is the natural representation for the treatment of multidimensional non-stationary random processes: it includes finite observations, and it admits time-variant estimates. The problem as written in equation (4) can be classified as an inversion identification problem, and this setting is in general non-linear and ill-posed in the sense of Hadamard; nevertheless, solvability and stability are achieved through a priori information. For a practical solution, Kalman and Bucy (1961) proposed converting this integral equation to linear and non-linear ordinary differential equations, which turn out to be more adaptable to numerical calculations. The optimum estimator is completely specified and synthesized with the a priori information on the forward model and on the statistics of the components.

3 The state space solution

The formulation begins by expressing the original problem as an ordinary differential equation of order N − 1

The transformation to the state variables xn(t) is achieved by substituting the higher derivatives of y(t) in the form:

The resulting dynamic state equations in the general (continuous, time-variant) compact form are:

(t), (t) and (t) are matrices with variable elements in t; w(t) is the forcing function that generates the state; z(t) is the selected form of the output, given by the structure of the matrix (t); v(t) is the additive noise present (Ogata, 1995; Kuo, 1992).
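As a concrete illustration of this reduction (the coefficients and symbol names below are ours, since the paper's matrix symbols are not reproduced in this copy), the equation y'' + a1·y' + a0·y = w(t) can be put in companion form and simulated with a simple Euler step:

```python
import numpy as np

# companion-form state matrices for y'' + a1*y' + a0*y = w(t) (illustrative a0, a1)
a0, a1 = 2.0, 3.0
F = np.array([[0.0, 1.0],
              [-a0, -a1]])      # state matrix
G = np.array([[0.0],
              [1.0]])           # noise-input matrix
M = np.array([[1.0, 0.0]])      # measurement matrix: z(t) = y(t) + v(t)

dt, n = 0.01, 1000
rng = np.random.default_rng(1)
state = np.zeros((2, 1))        # state vector [y, y']
zs = []
for _ in range(n):
    w = rng.standard_normal()            # forcing function that generates the state
    v = 0.1 * rng.standard_normal()      # additive measurement noise
    state = state + dt * (F @ state + G * w)  # Euler step of dx/dt = F x + G w
    zs.append((M @ state).item() + v)    # measurement z = M x + v
```

The same companion-form construction applies for any order N, with the coefficients filling the last row of the state matrix.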

For the development of the estimator, it is necessary to define the a priori general stochastic properties of the processes z(t), w(t) and v(t), and the natural description takes into consideration average values and variances. The table of a priori conditions is formed by the autocorrelations of white noise, and by null crosscorrelations in the window of definition of the governing integral equation, and they are:

Observe that the delta functions multiplying (t) and (t) define the autocorrelations as diagonal matrices. These relations represent the strong properties of the model; they do not establish a specific distribution, but only a structure. For instance, were red noise chosen for our example, the theory could be modified to account for it. (t), (t), (t), w(t), (t), (t) and (t) are real, continuous, time unbound, and convergent.

The philosophy behind this transformation is to substitute the state equations into the integral equations, and apply the premises. We present only the main critical steps to point out where the necessary properties are inserted.

We look at the transformation of the integral (4) by first calculating the correlations (t,s) and (t,s). With this aim, we start from the definition of the crosscorrelation, (t,s), and insert the dynamic equation of the measurement; that is

With the a priori condition (14),

The autocorrelation (t,s) is in a similar manner given by the expression

With the a priori conditions (13) and (14), the above is further simplified to

Substituting (16) into the original equation (4) results in

Equations (18) and (19) are intermediate relations to express the autocorrelations (t, s) and (t, s); they will be simplified in the next section and applied to the original integral equation (4).

3.1 Inserting the state equation into the stochastic model

System equation (10) is inserted through the autocorrelation as an intermediate step toward the differential equation for the state estimate. For this purpose, it is necessary to differentiate (t, s), and therefore

Applying the a priori condition (14) gives us

Multiplying both sides by T(s), we have that

Transposing (19) into (20), we arrive at

Differentiating equation (19) we obtain,

Multiplying both sides of equation (18) by (t, t), it follows that

Substituting (19) into (24) we obtain

Substituting (26) and (23) into (24), and rearranging terms we find that

Under the condition of non-vanishing in the integration interval, we conclude that

This is the differential equation that the time-variant operator (t, t) satisfies, and it represents the result sought in this section. Equation (28) is used in the next section to arrive at the differential equation for the state estimate (t). The orthogonality of the correlation functions (diagonality for the autocorrelations and nullity for the crosscorrelations) is a necessary condition to obtain the above results; but computer experiments on controlled data show that these conditions can be weakened in practical terms.

3.2 The gain matrix and the solution of the integral equation

In this section we define the gain matrix of the problem and arrive at the solution of the integral equation (4), using relations (28), (16) and (18) defined in the previous section, while maintaining the validity conditions of the solution in the interval (t0, t).

The application of the Leibniz rule to differentiate equation (5) is an intermediate step for the derivative of the estimated value (t), and it is given by

Substituting equation (28) in the above equation and rearranging terms, we obtain

where we use equation (5) to write

This is an intermediate result for the state estimate in terms of known functions and of the a priori conditions. It is still necessary to arrive at an explicit expression for (t, t), which means that the point of definition of the operator coincides with that of the data. At this special position we identify (t, t) = (t), where K(t) is what is commonly referred to as the gain matrix, and it contains the maximum extension of data involved in the integral equation operation. We now have the integral equation (4) transformed to the state differential equation, rewritten in the form

and its solution is given by knowing (t).
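For reference, in the standard continuous Kalman-Bucy notation (the symbols F, M, K, P, R below are ours, chosen to match the usual literature rather than taken from this paper) the transformed state differential equation and gain have the familiar form:

```latex
\frac{d\hat{x}(t)}{dt} = F(t)\,\hat{x}(t) + K(t)\bigl[z(t) - M(t)\,\hat{x}(t)\bigr],
\qquad
K(t) = P(t)\,M^{T}(t)\,R^{-1}(t),
```

where P(t) is the error covariance supplied by the Riccati equation derived below, and R(t) is the measurement-noise covariance.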

The matrix (t) is identified in the derivation of the error covariance equation, defined as (t) = E{Dx(t)DxT(s)}, and expressed in terms of the known (t), (t), (t), (t) and (t) in the next section. In equation (4) the autocorrelation represents the data model, and with (11) we write it as (t, s) = (t, s) + (t, s) + (t, s) + (t, s), with null crosscorrelations. The measurement model is written as y(t) = (t)x(t), and condition (13) also applies. In equation (4) the crosscorrelation (t, s) represents the measurement, and we make z(t) → y(t), and (t, s) → (t, t). With these conditions imposed on equation (4), it takes the form

For the case s = t, we have that the integral is the estimate for (t, s); then

The measurement model gives that Dy(t) = (t)Dx(t). Also, (t, t) = (t, t)T(t). We define (t, t) = (t, t) – (t, t). With these considerations, equation (34) gives the result

3.3 The Riccati differential equation

We continue with the analysis of the error autocorrelation matrix defined by

The idea is to differentiate (t) and to apply the premises. Therefore,

The next steps are to expand both of these terms, with the system error defined by

Substituting equations (32), (10) and (11) into (37), and rearranging terms, we find that

The transition matrix of Dx(t), (t, t0), is defined as

With this definition, the solution of the differential equation (38) is given by

Using equation (38), we can expand the first term of equation (36) in the following form:

We observe that to solve equation (41) it is necessary to find the crosscorrelations (t, t) and (t, t). First, we consider the expression

This point is important because we cannot assume a priori conditions of non-correlation due to Dx(t). Substituting equation (40) into (42), and writing it with T(t, t0) = (t, t0), we obtain that

Bringing the expectation inside the integral, and applying the table of a priori conditions of non-correlation, it simplifies to

Substituting equation (13), it results in that

The integral (45) has t for its upper limit. The unit impulse can be described by the even rectangular function P(1/T) (Barton, 1999):

The integration covers only half of the interval of symmetry of d(t). Also, for the problem under study the integrand is to be causal, and (t, t) = . Applying these conditions, equation (45) simplifies to

By a similar procedure we determine fwDx(t, t) through the following relations

to obtain

Substituting equations (48) and (51) into (41), under the established a priori conditions, and with (t) = E{Dx(t)DxT(t)}, we obtain

In an analogous form, we obtain for the second term of the expression (36)

Adding equations (52) and (53), we have for equation (36) that

Finally, by substituting (t) from (35), we arrive at the non-linear Riccati matrix differential equation for the error covariance

This system of coupled equations, (55), (35) and (32), composes the solution of the transformation. The functions (t) and (t) are real, continuous, time unbound, and convergent. Our final goal is the application to synthetic and to real data, and we describe next the model that serves as an example (Rocha, 1998).
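To see the coupled system in action, the matrix Riccati equation can be integrated forward in time. The sketch below is ours (Python; an illustrative 2×2 system in the standard symbols F, G, M, Q, R): it propagates dP/dt = FP + PFᵀ + GQGᵀ − PMᵀR⁻¹MP with an Euler step and forms the gain from the result.

```python
import numpy as np

F = np.array([[0.0, 1.0], [-2.0, -3.0]])  # state matrix (illustrative)
G = np.array([[0.0], [1.0]])              # process-noise input matrix
M = np.array([[1.0, 0.0]])                # measurement matrix
Q = np.array([[1.0]])                     # process-noise covariance
R = np.array([[0.04]])                    # measurement-noise covariance
Rinv = np.linalg.inv(R)

dt = 1e-3
P = np.eye(2)                             # initial error covariance
for _ in range(20000):
    # Riccati equation: dP/dt = F P + P F' + G Q G' - P M' R^-1 M P
    dP = F @ P + P @ F.T + G @ Q @ G.T - P @ M.T @ Rinv @ M @ P
    P = P + dt * dP
K = P @ M.T @ Rinv                        # (near steady-state) gain matrix
```

After a transient, P settles at the steady-state solution; that transient is analogous to the filter warm-up discussed in the examples of the next section.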

4 Model construction and data processing

We present the model in scalar form as square waves, and this is meant to correspond to ideal measurement situations in geophysics. The signal message x(t) is measured in the presence of the noise, v(t), with a priori conditions of zero average and given variance. In this filtering simulation we look to recover the signal message in noise, which represents a first step in the search for the function that describes the geology of the medium.

4.1 Simulated example

This process is based on the convolution of the random function, s(t), representing the simple reflectivity of the medium, with a Heaviside function, u(t), to generate the message, x(t), and this is expressed as:

Continuing with the model description, we table the white stochastic properties as:

Comparing the above premises with the general dynamic Kalman equations, and adjusting the free variables as t = t1, t = t2, we have conveniently that

For the solution of the problem it is necessary to define the state variables, and here they are selected in the following form:

Comparing the above expressions with (10) and (11) we have that

and

The above relations give, for the Riccati matrix differential equation,

And for the gain matrix

Substituting equations (61) and (62) into (66) and (67), we obtain the differential equation for the error covariance in the scalar form, (t) → p(t), as given by

and the corresponding gain by

In the general closed-form solution of the Riccati equation (55) appears the hyperbolic tangent function (Lewis, 1986; Davis, 1963),
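The scalar case can be checked directly. Assuming the simplest scalar model (constant coefficients, state driven by white noise of variance q and measured in white noise of variance r; the parameter names are ours), the Riccati equation reduces to dp/dt = q − p²/r, whose solution from p(0) = 0 is p(t) = √(qr)·tanh(√(q/r)·t). The sketch below verifies the hyperbolic-tangent form against direct numerical integration:

```python
import numpy as np

q, r = 1.0, 0.25               # assumed process- and measurement-noise variances
dt, n = 1e-4, 50000            # integration step and number of steps (t_end = 5.0)

p_num = 0.0                    # numerical solution of dp/dt = q - p^2/r, p(0) = 0
for _ in range(n):
    p_num += dt * (q - p_num ** 2 / r)

t_end = n * dt
p_closed = np.sqrt(q * r) * np.tanh(np.sqrt(q / r) * t_end)  # closed form
```

The two values agree to the order of the discretization error, confirming the closed form for this scalar model.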

With the gain function as defined above, we obtain the state differential equation for estimation in the desired scalar form:

The method used here for the numerical integration of the above equation is of the Runge-Kutta type, second order.
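A minimal reproduction of this simulation (with parameter values of our own choosing: a square-wave message in white noise, the scalar Riccati gain, and a second-order Runge-Kutta (Heun) step for the estimate equation) could read:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 4000
t = np.arange(n) * dt
x = np.where((t // 10.0) % 2 == 0, 1.0, -1.0)   # square-wave message x(t)
z = x + 0.3 * rng.standard_normal(n)            # measurement z(t) = x(t) + v(t)

q, r = 0.5, 0.09                                # assumed noise variances
p, xh = 1.0, 0.0                                # initial covariance and estimate
x_hat = np.empty(n)
for i in range(n):
    k_gain = p / r                              # scalar gain k(t) = p(t)/r
    # Heun (second-order Runge-Kutta) step for dxh/dt = k (z - xh)
    f1 = k_gain * (z[i] - xh)
    f2 = k_gain * (z[i] - (xh + dt * f1))
    xh += 0.5 * dt * (f1 + f2)
    p += dt * (q - p * p / r)                   # Euler step of the Riccati equation
    x_hat[i] = xh

mse_raw = np.mean((z - x) ** 2)
mse_filt = np.mean((x_hat - x) ** 2)
```

On this controlled input the filtered error variance falls well below the raw measurement error variance, with the expected smearing at the step transitions.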

The synthetics in figure 1a obey the principles of the adopted model, which is the convolution of the impulse response function of the medium, s(t), with the Heaviside function, u(t), resulting in a square wave, x(t), with additive random noise, v(t).


The a priori conditions are displayed in figure 2, where the crosscorrelation between the desired signal, z(t), and the noise, v(t), shows itself to be small, although the theory requires it to be zero. The autocorrelations of the message, x(t), and of the noise, v(t), show that the noise better satisfies the requirement of the theory. The distribution is considered uniform in the data window.


In the present example it is necessary to define intervals (t0 = ti, t = Ti) for the application of the sequential windows (ti < t < Ti), (i = 1, 2, 3, ..., I), for the detection of the I events interpreted in the data. To simplify, we considered ti and Ti as positions immediately before and after the onset of an event interpreted by the steps of the square wave. But this operation can be organized as an automatic assisted task.
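The assisted windowing described above can be sketched as a simple automatic rule (ours, applied to an ideal noise-free square wave): mark each step onset and open a window immediately around it.

```python
import numpy as np

x = np.where((np.arange(1200) // 300) % 2 == 0, 0.0, 1.0)  # ideal square-wave log
# onsets: samples where the log steps up or down (the threshold rule is ours)
onsets = np.flatnonzero(np.abs(np.diff(x)) > 0.5) + 1
# windows (t_i, T_i) immediately before and after each onset, clipped to the record
half = 5
windows = [(max(s - half, 0), min(s + half, len(x))) for s in onsets]
```

On noisy data the difference would be smoothed first and the threshold tied to the estimated noise variance.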

The discretization is in the form t = nDt. The depth is related to the index n; Dt is equispaced, and it can represent any unit dimension of the SI system; namely, meter, kilometer, centimeter, second, etc. The signal message/noise ratio (S/N) is simply calculated by

where the quantities involved are the simple average values.
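One common definition of this ratio, the quotient of the simple average powers of message and noise, can be sketched as follows (this exact expression is our assumption, not necessarily the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.where((np.arange(1000) // 100) % 2 == 0, 1.0, -1.0)  # message
v = 0.25 * rng.standard_normal(1000)                        # noise
sn = np.mean(x ** 2) / np.mean(v ** 2)                      # average-power S/N ratio
```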

Analyzing the performance of the processor in the several controlled experiments, we observe that the time-variant operator is versatile, and it can be made more sophisticated to have applicability in different situations of natural non-stationarity. The algorithm appears to be numerically convergent, but in all the results obtained we can see the influence of the transient response of the filter operator, characterized by the symmetrical form of the hyperbolic tangent function.

4.2 Real data

The operationality of the Kalman filter was verified on real data with characteristics different from the synthetically simulated experiments. For this task we used a published log profile of clay volume measured in a well of Maracaibo Lake, Venezuela, as shown in figure 3. The initial stage consisted of the definition of each interval of the sequential windows, without overlap, for filter application (t0 < t < Ti), which begin at points where the variance is estimated. The sampling interval was fixed at Dt = 0.1, and the parameter l was varied as l = 10 and l = 20.


This data does not present a clearly descriptive noise component, which limits simple comparison with the synthetic simulations. Figure 3 shows the satisfactory result, demonstrated by the smoothing of noise in the data window.

5 Conclusions

The numerical method is stable, and the algorithm satisfies the criteria of convergence to the continuous case that describes the physical phenomena.

The present method can include sophistication through a priori knowledge in order to describe the model in more detail, which can result in a gain in resolution in the processing of non-stationary data. On the other hand, the sophistication may represent a loss in the practical applicability of the algorithm.

On real data, the filter can be structured in an interactive procedure with a continuous sweep over the data using a group of control parameters. The display on the monitor screen serves the interpreter in making a systematic evaluation of the outputs.

6 Acknowledgments

The authors wish to thank the anonymous reviewers and Prof. E.A. Robinson for their suggestions and comments, which were important for improving this paper. The second author also wishes to acknowledge CAPES for the fellowship abroad that made possible the conduction of an important part of this work, and the WIT Consortium for its support during the sabbatical year.

Received: 9/II/99.

#466/99.

  • [1] G. Barton, Elements of Green's Functions and Propagation, Oxford Science Publications, New York, USA (1999).
  • [2] J.W. Bayless and E. Brigham, Application of the Kalman filter to continuous signal restoration, Geophysics, 35 (1) (1970), 2–23.
  • [3] R.G. Brown and P.Y.C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, John Wiley and Sons, New York, USA (1996).
  • [4] J.V. Candy, Signal Processing: The Model-Based Approach, McGraw-Hill Book Company, New York, USA (1987).
  • [5] C.K. Chui and G. Chen, Kalman Filtering, Springer-Verlag, Berlin, Germany (1987).
  • [6] N. Crump, A Kalman filter approach to the deconvolution of seismic signals, Geophysics, 39 (1) (1974), 1–13.
  • [7] M.C. Davis, Factoring the spectral matrix, IEEE Trans. on Automatic Control, AC-8 (1963), 196–225.
  • [8] T. Kailath, Lectures on Wiener and Kalman Filtering, Springer-Verlag, Berlin, Germany (1981).
  • [9] R.E. Kalman and R.E. Bucy, New results in linear filtering and prediction theory, ASME, Series D, Journal of Basic Engineering, 82 (1961), 35–45.
  • [10] B.C. Kuo, Digital Control Systems, Saunders College Publishing, Orlando, USA (1992).
  • [11] F.L. Lewis, Optimal Estimation, John Wiley and Sons, New York, USA (1986).
  • [12] J.M. Mendel, Maximum-Likelihood Deconvolution: A Journey into Model-Based Signal Processing, Springer-Verlag, The Netherlands (1990).
  • [13] J.M. Mendel, Optimal Seismic Deconvolution: An Estimation-Based Approach, Academic Press, New York, USA (1983).
  • [14] K. Ogata, Modern Control Systems, Prentice-Hall Inc., New Jersey, USA (1990).
  • [15] E.A. Robinson, Seismic Inversion and Deconvolution. Part B: Dual-Sensor Technology, Pergamon Press, The Netherlands (1999).
  • [16] M.P.C. Rocha, Application of the Kalman Method to Geophysical Data, Masters Thesis, Graduate Course in Geophysics, Federal University of Pará, Belém, Pará (in Portuguese) (1998).
  • [17] H.L. Van Trees, Detection, Estimation and Modulation Theory, John Wiley and Sons, Inc., New York, USA (1968).

Publication Dates

  • Publication in this collection
    20 July 2004
  • Date of issue
    2003

Sociedade Brasileira de Matemática Aplicada e Computacional - SBMAC, Rua Maestro João Seppe, nº 900, 16º andar - Sala 163, 13561-120 São Carlos - SP, Brazil. Tel./Fax: 55 16 3412-9752.
E-mail: sbmac@sbmac.org.br