

REGULAR ARTICLE

Using augmented state Kalman filter to localize multi autonomous underwater vehicles*

Silvia Botelho (I); Renato Neves (II); Lorenzo Taddei (III); Vinícius Oliveira (I)

(I) Computer Engineering - Federal University of Rio Grande (FURG) - Av. Itália km 8, Campus Carreiros - Phone: +55 (53) 32336623 - Zip 96201-090 - Rio Grande - RS - BRAZIL - {silviacb|dmtvmo}@furg.br

(II) Institute of Computing - State University of Campinas - P.O. Box 6176, Zip 13084-971 - Campinas - SP - BRAZIL - renato.neves@students.ic.unicamp.br

(III) Electrical Engineering - Federal University of Rio Grande do Sul (UFRGS) - Av. Osvaldo Aranha, 103 - Sala 208 - Phone: +55 (51) 3316.3515 - Zip 90035-190 - Porto Alegre - RS - BRAZIL - lorenzo@delet.ufrgs.br

ABSTRACT

The present paper describes a system for the construction of visual maps ("mosaics") and motion estimation for a set of AUVs (Autonomous Underwater Vehicles). The robots are equipped with down-looking cameras, which are used to estimate their motion with respect to the seafloor and to build an online mosaic. As the mosaic increases in size, a systematic bias is introduced in its alignment, resulting in an erroneous output. The theoretical concepts associated with the use of an Augmented State Kalman Filter (ASKF) were applied to optimally estimate both the visual map and the fleet position.

Keywords: Multi-Robots, Autonomous Underwater Vehicles, Mosaics, Robotic Localization

1. INTRODUCTION

The diversity of resources found at the bottom of the sea is of well-known importance. In this context, mastering the technology to design and develop Unmanned Underwater Vehicles (UUVs) becomes a matter of strategy [2, 17]. The use of UUVs to create visual maps of the ocean floor is an important tool for underwater exploration [5, 8]. UUVs can be divided into two classes: Remotely Operated Vehicles (ROVs), which require a human pilot in the control loop, and Autonomous Underwater Vehicles (AUVs), which have the ability to perform high-level missions without user intervention. ROV operation depends on a skilled and experienced pilot to control the vehicle; an autonomous navigation capability would significantly reduce this workload during many types of ROV exploration missions. Commercial AUVs, however, have a high cost and limited energy and navigation autonomy, restricting their use to very specific and short missions. Over the past decades, ROVs have been widely used by the marine community.

For visual-based underwater exploration, UUVs are equipped with down-looking cameras that produce images of the bottom of the sea, providing a visual map during vehicle navigation. Every frame captured by the vehicle is used to compose the map: consecutive frames are aligned and a final map, called a mosaic, is generated [16]. Mosaics can also be used as reference maps in the vehicle navigation/localization process [5]. This visual map can be used either at the surface for seabed exploration or for visual-based AUV localization/navigation; thus, the vehicle is able to navigate using its own online map. Given the vehicle altitude above the ocean floor (e.g., from an altimeter) and the camera field of view, the actual area covered by the map (and by individual frames) is known, so navigation in real-world coordinates is possible.

A cumulative error, however, is introduced in the alignment of consecutive frames within the mosaic. GPS receivers placed on the surface and Acoustic Transponder Networks (ATNs) distributed in the exploration area are sensorial approaches that can be used to correct this kind of drift [15, 11], but both have similar disadvantages: a high cost of operation and a restricted range of application (small depths, ATN-covered areas, etc.).

Occasionally, the robot path may cross over itself; [5] proposes a smooth adjustment of the mosaic cumulative error detected at the crossover points during map construction. [7] uses an Augmented State Kalman Filter (ASKF) to estimate the correct position of both the vehicle and every frame of the mosaic, based on crossover and displacement measurements [4]. The ASKF strategies for mosaic update and vehicle localization take into account a simplified dynamic model of the AUV, as well as the detected crossover regions, which is very important to the accuracy of the system.

1.1. MULTI-AUVS FOR VISUAL MAPPING

Two issues are considered in this paper: i. the high costs and complexity associated with sophisticated, long missions prevent the massive use of AUVs, while, on the other hand, simple vehicles are incapable of accomplishing even trivial tasks due to their low autonomy, small number of sensors, and other limitations; ii. using the explored images themselves as information to localize the vehicle is an attractive and low-cost approach, and the ASKF seems to be a good choice, specifically when the navigation provides a set of crossover regions.

In this context, we argue that a fleet of simple robots can be more efficient than a single sophisticated AUV for seabed exploration tasks. A heterogeneous fleet can be composed of robots equipped with different sensors and without complex power supply systems. These simple vehicles can explore a region in a fraction of the time needed by a single AUV. Mosaics would be built more efficiently, and their visual information would be used to help the localization and navigation of the fleet.

We have a set of real situations where this approach could be applied. For instance, on the northeast Brazilian coast there is a protected coral reef area called "Parrachos de Maracajaú". This is a wide area where continuous visual inspection is important. Because continuous mapping is needed, diver-based inspection is very expensive; in this case, the use of a robotic vehicle may be a good choice. Besides, the vehicle can use the visual information as sensorial input for its localization/navigation system, avoiding tedious human navigation/piloting tasks at the surface. In this context, the idea is to use a fleet of AUVs that exchange information about the area. The visual map of this wide area would be built cooperatively, in a faster and more efficient way.

Therefore, this paper presents the first results of extending the ASKF proposed by [7] to a set of multiple AUVs. The fleet needs to explore a seabed region, providing its visual map. The mosaic is composed of a set of frames. These images are obtained by several simple and inexpensive robots associated with an undersea central station, which computes the mosaic. A distributed ASKF provides a position estimate for each image as well as for each robot.¹

1.2. AN ARCHITECTURE TO MULTI-AUV INSPECTION FLEET

We have developed a generic architecture for multi-robot cooperation [1]. The proposed architecture deals with issues ranging from mission planning for several autonomous robots to effective conflict-free execution in a dynamic environment. Here, this generic architecture is applied to multiple AUVs in a visual mapping task, giving our AUVs autonomy and communication capabilities. We suppose that the robots submerge from a central station (CS), which is connected to the surface by a physical cable; the built maps are sent to the surface through the CS. Besides physically connecting the AUVs to the surface, the CS decomposes missions into sets of individual robot tasks. The CS receives high-level missions through a TCP/IP web link.

After a brief analysis of the multi-robot context (section 2), section 3 presents the theoretical extension of the ASKF approach to multi-AUV mosaicking. Section 4 details the implementation of the visual system, providing preliminary results and the analysis of simulated tests. Finally, conclusions and future work are presented.

2. THE CONTEXT: MULTI AUVS AND VISUAL MAPS MODELS

Visual maps are an important tool to investigate seabed environments. An autonomous underwater vehicle equipped with a down-looking camera can take images of the bottom of the sea and send them to the surface. A visual map is constructed from these images, and it can also be used for the localization of the robot.

The cumulative error associated with mosaic construction (frame localization) decreases the performance of the system. Several approaches treat this problem [5, 7]. For instance, strategies based on crossover detection realign all frames of the mosaic according to the crossover position information. [7] proposes a crossover-based system using an Augmented State Kalman Filter, which estimates both the state of the AUV and the state of each map image. Each time a new image is added, a new state associated with it needs to be estimated, resulting in an augmented state (hence ASKF). This paper extends the theory developed by [7] to the multi-robot context.

Consider a set of M robots in a visual seabed exploration mission. During the exploration mission, each robot sends information to a central station (CS) for mosaic building. The CS congregates all information about the mosaic, adding new frames and updating their localization online.

At every time k, only one robot vadd sends to the CS the information associated with a newly captured frame. The subindex add denotes the robot which currently adds a new frame to the mosaic. The mosaic F is composed of the set of added frames. This mosaic is used to estimate the future state of each generic vehicle vr and the future state of each generic frame fi, captured by robot vr.²

2.1. THE ROBOT MODEL

Each time k, a robot vr is described by a state vector xr(k):

xr(k) = [x y z Ψ]T

where x, y are relative to a mosaic-fixed coordinate system³; z is relative to an inertial fixed coordinate system; and Ψ (yaw) is the heading of the robot with respect to a fixed coordinate system.

Each robot can have a different dynamic model; see [14] for a set of different dynamic models of AUVs. In this paper, we have chosen a simple kinematic model to demonstrate the proposed strategy:

xr(k + 1) = I xr(k) + wr(k) dt

where I is the 4-dimensional identity matrix and dt is the sampling period between states at discrete-time instants.

A process noise, wr(k), associated with each vr, can be defined as:

wr(k) ~ N(0, Qr(k)), with Qr(k) = diag(σ²x, σ²y, σ²z, σ²Ψ)

where Qr(k) is a diagonal 4-dimensional matrix of process noise variances in all coordinates (x, y, z, Ψ).
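A minimal sketch of this per-robot model in Python follows, assuming the 4-vector state [x, y, z, Ψ] above; the noise variances and sampling period are illustrative values, not the ones used in our tests:

```python
import numpy as np

DIM = 4                                  # state coordinates (x, y, z, Psi)
dt = 1.0                                 # sampling period (illustrative)

A = np.eye(DIM)                          # simple kinematic model: pose as a random walk
Q = np.diag([0.05, 0.05, 0.01, 0.002])   # illustrative per-coordinate noise variances

def predict_robot(x, P):
    """One time-update step for a single robot (x: 4-vector, P: 4x4)."""
    x_pred = A @ x                       # x_r(k+1|k) = A x_r(k|k)
    P_pred = A @ P @ A.T + Q * dt**2     # process noise scaled by the sampling period
    return x_pred, P_pred
```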

2.1.1. The Mosaic Construction Model: As described in [7], every frame has a state vector that contains the information required to pose the associated image in the map. In the multi-robot context, a frame fi, captured by vehicle vr, has the following state vector relative to a mosaic-fixed coordinate system: [x y z Ψ]T.

2.1.2. The Observation Model: Kalman Filters are based on the measurement of successive information of the system behavior. In this approach two measure vectors are used:

  • zadj(k): this measurement is provided directly by each robot vadd that is adding a new image to the mosaic map. It gives the displacement between two consecutive frames captured by vadd. In the literature we find several approaches to obtain zadj(k), for instance Corner Point Correspondence [6], Motion Estimation and HSV Algorithms [10], or Frequency Domain Analysis [13]; see [7] for others. We have used a texture matching algorithm [16], which runs onboard each robot, giving the displacement between consecutively captured frames.

  • zcross(k): this measures the displacement associated with the mosaic area where a crossover has been detected. To provide this information, we detect a crossover in the trajectory by analyzing the currently captured image and the corresponding mosaic region, applying the same algorithms used to obtain zadj(k). This process runs onboard the central station.

These vectors can be described by:

zadj(k) = [Δx Δy z ΔΨ]T,  zcross(k) = [Δx Δy z ΔΨ]T

where the subindex adj is associated with the relative coordinates (Δx, Δy, z, ΔΨ) between the current image k and the previous image of the same robot vr. Similarly, the subindex cross is related to the displacement between the crossover area and the closest node of the mosaic (node j). We suppose that z can be obtained from a sonar altimeter, being the absolute measurement of the altitude of the vehicle at time k.
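A minimal sketch of how the measurement vector z(k) can be assembled from these observations; the helper name is hypothetical, and the 4-entry layout follows the (Δx, Δy, z, ΔΨ) description above:

```python
import numpy as np

def measurement_vector(z_adj, z_cross=None):
    """Assemble z(k) with the (dx, dy, z, dpsi) layout per measurement.

    dx, dy, dpsi come from image correlation; z is the absolute altitude
    read from the sonar altimeter. On a crossover (see the correction
    stage below), z_adj and z_cross are stacked into a single 8-vector.
    """
    if z_cross is None:
        return np.asarray(z_adj, dtype=float)
    return np.concatenate([np.asarray(z_adj, dtype=float),
                           np.asarray(z_cross, dtype=float)])
```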

Two measurement sub-matrices need to be defined: the vehicle measurement sub-matrix Hv(k), which describes the vehicle measurement, and the image measurement sub-matrix Hf(k), which describes the image associated with the adjacent situation (a frame captured by a generic robot vr) and with the crossover situation (a frame captured by any other robot vs). One should observe that the component related to the z coordinate is provided directly by the altimeter sensor.

If there is no crossover, the measurement covariance matrix is

R(k) = Radj(k)

with:

Radj(k) = diag(σ²Δx(k), σ²Δy(k), σ²z(k), σ²ΔΨ(k))

where σ²Δx(k), σ²Δy(k) and σ²ΔΨ(k) are the measurement variances of the correlation of the images added by vadd to the mosaic, and σ²z(k) is the variance of this robot's sonar altimeter.

Notice that equation 7 is associated only with the covariance of the adjacent measurements made when vadd adds an image. However, if there is a crossover detection, R(k) becomes the block-diagonal matrix:

R(k) = diag(Radj(k), Rcross(k))

Similarly,

Rcross(k) = diag(σ²Δx(k), σ²Δy(k), σ²z(k), σ²ΔΨ(k))

where σ²Δx(k), σ²Δy(k) and σ²ΔΨ(k) are the measurement variances of the correlation between any images in the mosaic, and σ²z(k) is the variance of the sonar altimeter.
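A minimal sketch of the corresponding covariance assembly, assuming the diagonal blocks suggested by the variances above:

```python
import numpy as np

def make_R(adj_vars, cross_vars=None):
    """Measurement covariance R(k); each argument holds 4 variances in the
    (dx, dy, z, dpsi) layout, the third entry being the altimeter's."""
    R_adj = np.diag(adj_vars)
    if cross_vars is None:
        return R_adj                                 # adjacent-only case (equation 7)
    R_cross = np.diag(cross_vars)
    return np.block([[R_adj, np.zeros((4, 4))],
                     [np.zeros((4, 4)), R_cross]])   # crossover case (equation 9)
```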

In accordance with the kinematic model and with the zadj and zcross measurements, the ASKF estimates a new state for the fleet and the mosaic frames at every time k.

3. ASKF FOR MULTI-AUV MOSAICKING

The Kalman Filter uses two sets of equations to estimate the state variables. The Time Update equations are responsible for projecting the current state and covariance matrix forward in time, yielding an a priori estimate for the next time step. The Measurement Update equations are responsible for correcting this a priori estimate: they feed the measurement back into the filter to obtain an improved a posteriori estimate, which in turn improves the guess for the next state. The equations of our ASKF for multi-AUV and mosaic localization are presented below.

3.1. THE PREDICTION STAGE

From the kinematic model of the system (vehicles and mosaic), the ASKF propagates the following state estimate:

x̂(k + 1|k) = Aaug(k) x̂(k|k)

which contains the estimated position of each robot vr (r = 0..(M - 1)) and the estimated position of each frame (from frame 0 to (k - 1)). The covariance P(k) associated with this estimate is also propagated:

P(k + 1|k) = Aaug(k) P(k|k) Aaug(k)T + Qaug(k)
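In code, this propagation is the standard Kalman time update applied to the whole augmented vector; a sketch, with Aaug and Qaug built as in the next subsection:

```python
import numpy as np

def predict_augmented(x_aug, P_aug, A_aug, Q_aug):
    """Time update for the augmented state: M robots plus all mosaic frames."""
    x_pred = A_aug @ x_aug                     # frame blocks are identity: poses are static
    P_pred = A_aug @ P_aug @ A_aug.T + Q_aug   # only robot blocks receive process noise
    return x_pred, P_pred
```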

3.1.1. A robot vadd adds a new frame to the mosaic: When a new mosaic frame is added by vadd, new predictions and covariances (for time (k + 1)) are obtained according to the time update equations.

Notice that xaug(k + 1) is the state x(k + 1) augmented with the state of the new image added by vadd:

xaug(k + 1) = [x(k + 1)T  xvadd(k + 1)T]T

similarly, the covariance is augmented with the rows and columns of the capturing vehicle:

Paug(k + 1) = [[P(k + 1), P(k + 1)ST], [S P(k + 1), S P(k + 1)ST]]

where the selection matrix S in equation 15 selects the information from the rows and columns associated with the position of the vehicle vadd which captured this new frame.

As the position of images does not vary as a function of time, the system dynamics Aaug(k) and the noise covariance Qaug(k) can be described by:

Aaug(k) = diag(A(k), I),  Qaug(k) = diag(Q(k), 0)

where the identity matrix I has size k·dim. Since the system does not have any input, u(k) = 0 and B(k) = I; see [7] for more details.
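A minimal sketch of this augmentation step; the block-diagonal construction follows the description above, and initializing the new frame as a copy of the capturing vehicle's block, together with its covariance rows and columns, is our reading of equation 15:

```python
import numpy as np

DIM = 4  # per-block state size (x, y, z, Psi)

def build_augmented_model(A_robots, Q_robots, n_frames):
    """Block-diagonal Aaug(k) and Qaug(k): identity / zero blocks for the
    static frame states, robot dynamics and noise in the upper-left block."""
    n_r = A_robots.shape[0]
    n_f = n_frames * DIM
    A_aug = np.block([[A_robots, np.zeros((n_r, n_f))],
                      [np.zeros((n_f, n_r)), np.eye(n_f)]])
    Q_aug = np.zeros_like(A_aug)
    Q_aug[:n_r, :n_r] = Q_robots
    return A_aug, Q_aug

def augment_state(x, P, add_idx):
    """Append the new frame state, copied from vehicle block `add_idx`."""
    s = slice(add_idx * DIM, (add_idx + 1) * DIM)
    x_new = np.concatenate([x, x[s]])    # new frame pose = capturing vehicle pose
    n = P.shape[0]
    P_new = np.zeros((n + DIM, n + DIM))
    P_new[:n, :n] = P
    P_new[n:, :n] = P[s, :]              # copy vadd's cross-covariance row ...
    P_new[:n, n:] = P[:, s]              # ... and its column
    P_new[n:, n:] = P[s, s]              # the frame inherits the vehicle covariance
    return x_new, P_new
```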

3.2. THE CORRECTION STAGE

For each time step k, a robot vadd adds a new image to the visual map. The vehicle finds the observation measurement between its two consecutive (adjacent) captured frames. Notice that the adjacency concept refers to two consecutive images of the same robot vadd. Two frames can be consecutive for the robot vadd but not necessarily consecutive in the mosaic: in the capture interval between them, any other robot vs ≠ vadd can add an intermediate image, interleaving the mosaic sequence. Therefore, two mosaic frames are adjacent if they are captured in successive order by the same robot vr.

A new measurement zadj(k) is obtained at every time step by the robot vadd. The value zadj(k) measures the position of the kth image (which corresponds to the position of vadd) with respect to the (k - p)th image, the previous frame of this robot in the mosaic, so that:

z(k) = zadj(k) = Haug(k) xaug(k) + v(k)

where v(k) is the measurement noise with covariance R(k), and the adjacent measurement sub-matrices associated with the vehicles and the images are those defined in section 2.1.2.

However, when a crossover is detected, the current kth image also intersects a previous mosaic image. The measurement vector z(k) then becomes:

z(k) = [zadj(k)T  zcross(k)T]T

in this case we have two measurements: one with respect to the previous image of robot vadd, zadj(k), and the other with respect to the area where the crossover has been detected, zcross(k). Notice that the crossover region could have been captured by another robot, vcross. If we suppose that the crossover corresponds to the image at node j, the measurement matrix Haug(k) incorporates a measurement in column j, with the vehicle and image measurement sub-matrices defined as in section 2.1.2.

The innovation is the difference between the measurement z(k) and the a priori estimate; according to [7] it is given by:

ν(k) = z(k) - Haug(k) x̂(k|k - 1)

and its covariance S(k) is defined as:

S(k) = Haug(k) P(k|k - 1) Haug(k)T + R(k)

where R(k) is the measurement error covariance; see equation 9.

The adjacent and crossover measurements allow the correction of the estimated state (of the robots and frames); the state and its associated covariance are corrected according to the KF measurement update equations. The filter gain can be expressed as:

K(k) = P(k|k - 1) Haug(k)T S(k)⁻¹

Once the KF gain is computed, the corrected state estimate can be obtained:

x̂(k|k) = x̂(k|k - 1) + K(k) ν(k)

and its corrected error covariance:

P(k|k) = (I - K(k) Haug(k)) P(k|k - 1)
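Put together, the correction stage is the textbook Kalman measurement update applied to the augmented state; a sketch (a numerically robust implementation would solve the linear system instead of inverting S):

```python
import numpy as np

def correct(x_pred, P_pred, z, H_aug, R):
    """Measurement update over the augmented state (robots + frames)."""
    nu = z - H_aug @ x_pred                     # innovation
    S = H_aug @ P_pred @ H_aug.T + R            # innovation covariance
    K = P_pred @ H_aug.T @ np.linalg.inv(S)     # Kalman gain
    x_corr = x_pred + K @ nu                    # corrected state
    P_corr = (np.eye(len(x_pred)) - K @ H_aug) @ P_pred
    return x_corr, P_corr
```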

Once the estimation and correction stages have been completed, the state vector and the covariance matrices are augmented to add the position of the new kth image, captured by robot vadd. The final mosaic is composed of the whole set of added frames.

Observe that we can have either i. only one ASKF running in the Central Station, or ii. a set of decentralized ASKFs, one running in each robot of the fleet. The first is a centralized localization system, where localizing each robot is the role of the CS alone, even without crossover detection. The second approach is a decentralized localization system, where each robot is in charge of localizing itself, except when a crossover is detected. The ASKF theory, however, is the same for both workload approaches.

4. THE IMPLEMENTATION OF VISUAL MAPPING

We have a test environment where simple undersea robots can accomplish inspection tasks [3]. Following the proposed architecture, the CS can be accessed via a TCP/IP web connection. Users can specify a set of missions for the robots: navigate, localize, and inspect a specific region.

4.1. BUILDING MOSAICS

We have developed a set of image processing services, called NAVision. It is composed of two different modules: i. an individual robot module, responsible for capturing and pre-processing the frames from the down-looking camera; and ii. a module which provides the online mosaic and pattern recognition.

The visual correlation between consecutive frames is the main aspect of the mosaicking system. From the frames taken by an underwater robot equipped with a camera, the visual correlation is computed online, providing the offset between them. This offset is used as the zadj information by the ASKF.

Underwater images introduce different problems, such as lack of distinct features, low contrast, nonuniform illumination, and small particles suspended in the water (marine snow). To perform the visual correlation efficiently, it is necessary to pre-process the captured frames. A set of filters is applied, aiming to smooth the image and enhance borders. In the current version, the signum of the Laplacian of Gaussian filter was chosen because of its useful properties: the Gaussian mask smooths noise, and the signum of the Laplacian converts the smoothed image into a binary image.

A correlation window, I(k-1)(i, j), of n × m pixels is defined in the previous frame. The correlation algorithm then searches the current frame for a set of candidate matches, Ik(i, j)w, using an error measure e to find the best candidate, Ik(i, j)Best, such that eBest = min(e). The measurement zadj can be extracted from the displacement between the window in I(k-1) and the best candidate region in Ik. This process is repeated every time a new frame is added to the mosaic.
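A minimal sketch of this exhaustive correlation search, assuming a sum-of-absolute-differences error measure (the texture matching algorithm of [16] may use a different measure):

```python
import numpy as np

def best_match(window, frame):
    """Slide the n x m window from I(k-1) over the current frame I(k)."""
    n, m = window.shape
    H, W = frame.shape
    best_e, best_pos = np.inf, (0, 0)
    for i in range(H - n + 1):
        for j in range(W - m + 1):
            e = np.abs(window - frame[i:i + n, j:j + m]).sum()  # error measure e
            if e < best_e:                                      # e_Best = min(e)
                best_e, best_pos = e, (i, j)
    return best_pos, best_e   # the displacement feeds z_adj
```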

Figure 1 presents the generation of a mosaic in our underwater test environment. This mosaic was created using one robot, with the ASKF estimating the mosaic frame and robot positions. An Athlon XP 2400 with 512 MB of RAM was used to construct this mosaic and to run the simulations.


4.2. LOCALIZING MULTI-AUVS USING ASKF

We have developed an ASKF system able to receive as input the observations of the world, i.e., the displacements between consecutive frames (zadj) and crossover regions (zcross), giving as output the estimated state (localization) of each robot and of the mosaic frames.

4.2.1. Two robots in an inspection task: We have tested our system in a simulated inspection task with two robots, R1 and R2. R1 begins at coordinates (-120:-65) and R2 at (0:-70); see figure 2(a). Each vehicle has a different dynamical model, and their perception systems have different noise features. Circular points represent the true trajectory of each robot. In addition to the real trajectory, the simulator gives the trajectory estimated by the perception system without crossover detection, i.e., using only observed information (green star points). We can see the cumulative error associated with the image processing observations (star points). Cross points show the smoothed trajectory obtained with our approach. Before the crossover, both robots have a cumulative error between the real and estimated trajectories (see figures 2(b) and (c)). When R1 crosses an old mosaic area imaged by R2 (near coordinates (6:-49)), the ASKF provides a new motion estimate to R1, resetting its cumulative localization error (see figure 2(e)).







Since R2 has the lower cumulative localization error, it is used as the setpoint. Therefore, this robot keeps the same trajectory (and localization) after the crossover detection; see figures 2(d) and (f).

Figure 3(a) shows the final estimated position at the end of this mission.



4.2.2. Three robots in an inspection task: We have added a third robot, R3, to the scenario. R3 begins at (-30:125); see figure 4(a). During this inspection there is another crossover situation, near (-11.8:125.6), between R3 and R2, providing a second smoothed crossover correction in the final mosaic.









Now R3 has the lower cumulative error, so it is used as the setpoint and keeps the same trajectory (and localization); see figures 4(d) and (h). R2, however, has its estimated localization enhanced; see figures 4(c) and (g). Moreover, since the ASKF updates all states of the system and R1 had a previous intersection with R2, the former also receives a very small correction of its states; see figures 4(b) and (f). The final mosaic and robot trajectories can be seen in figure 3(b).

Tables 1 and 2 show the final localization errors of both tests: with 2 robots and 1 crossover situation, and with one more robot and one more crossover situation. The second column shows the error between the real and observed trajectories; the third column shows the error between the real trajectory and the ASKF-corrected (smoothed) trajectory. Table 1 shows a decrease of around 93% in the estimation error of R2 after one crossover detection. In table 2, we can see that the final smoothed error decreases with two crossover situations. With our approach, the mean localization errors of robots R1 and R2 have fallen by more than 85%; the mean error of R3 has not changed. As expected, the approach gives good results when a crossover between two robots happens while one of them is at the beginning of its trajectory, that is, when one of the robots has a small cumulative error.

Observe that the localization system workload is distributed among the robots. Each robot can run its own ASKF, measuring its own adjacent frame displacement (zadj). A centralized process is necessary only when a crossover possibility exists; at that moment, a centralized crossover measurement (zcross) must be calculated. The fleet shares information aiming to enhance the individual localization estimates.

5. CONCLUSION

We have proposed and discussed a theoretical scheme for cooperative multi-AUV mosaicking. A set of robots can explore the seabed in a more efficient and faster way than a single vehicle. We have built a generic architecture for multi-robot cooperation. Its interest stems from its ability to provide a framework for cooperative decisional processes at different levels: mission decomposition and high-level plan synthesis, task allocation, and task achievement.

For visual mapping, our approach is an extension of [7] to the multi-vehicle context. The ASKF is used both to estimate the state of each robot and to estimate the position of each mosaic image. This estimation changes with the observation error, based on the adjacent and crossover measurements obtained by the fleet. The proposed methodology treats the mosaic as a centralized map, where different robots add frames and provide observation information. We intend to simulate our approach with a number of significantly different dynamical models and parameters, taking into account real-time issues and other sensorial information.

  • [1] R. Alami and S. S. C. Botelho. Plan-based multirobot cooperation. In Michael Beetz, Joachim Hertzberg, Malik Ghallab, and Martha E. Pollack, editors, Advances in Plan-Based Control of Robotic Agents, volume 2466 of Lecture Notes in Computer Science. Springer, 2002.
  • [2] D. Blidberg. The development of autonomous underwater vehicles (AUVs): a brief summary. In IEEE Int. Conf. on Robotics and Automation (ICRA'01), 2001.
  • [3] S. S. C. Botelho, R. Mendes, L. Taddei, and M. Teixeira. Lambdari, um robô subaqüático autônomo. In Simpósio Brasileiro de Automação Inteligente (VI SBAI), 2003.
  • [4] R. Deaves. Covariance bounds for augmented state Kalman filter application. IEEE Electronics Letters, 35(23):2062–2063, 1999.
  • [5] S. Fleischer. Bounded-error vision-based navigation of autonomous underwater vehicles. PhD thesis, Stanford University, 2000.
  • [6] R. Garcia. A Proposal to Estimate the Motion of an Underwater Vehicle Through Visual Mosaicking. PhD thesis, Universitat de Girona, 2001.
  • [7] R. Garcia, J. Puig, P. Ridao, and X. Cufi. Augmented state Kalman filtering for AUV navigation. In IEEE Int. Conf. on Robotics and Automation (ICRA'02), 2002.
  • [8] M. G. González, P. Holifield, and M. Varley. Improved video mosaic construction by accumulated alignment error distribution. In British Machine Vision Conference, pages 377–387, 1998.
  • [9] R. Madhavan, K. Fregene, and L. Parker. Distributed heterogeneous outdoor multi-robot localization. In IEEE Int. Conf. on Robotics and Automation (ICRA'02), 2002.
  • [10] H. Madjidi and S. Negahdaripour. On robustness and localization accuracy of optical flow computation from color imagery. In 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004.
  • [11] A. Matos and N. Cruz. AUV navigation and guidance in a moving acoustic network. In IEEE Oceans 2005, 2005.
  • [12] S. Roumeliotis and G. Bekey. Distributed multirobot localization. IEEE Trans. on Robotics and Automation, 18(5), 2002.
  • [13] Y. Rzhanov, L. M. Linnett, and R. Forbes. Underwater video mosaicing for seabed mapping. In International Conference on Image Processing, 2000.
  • [14] A. Tavares. Um estudo sobre a modelagem e o controle de veículos subaquáticos não tripulados. Master's thesis, Engenharia Oceânica, Fundação Universidade Federal do Rio Grande, 2003.
  • [15] H. Thomas. Advanced techniques for underwater vehicle positioning, guidance and supervision. In IEEE Oceans 2005, 2005.
  • [16] A. Vargas, C. A. Madsen, and S. S. C. Botelho. NAVision: sistema de visão subaqüático para navegação e montagem de mosaicos em AUVs. In Seminário e Workshop em Engenharia Oceânica, 2004.
  • [17] R. Wernli. AUVs: a technology whose time has come. In IEEE International Conference on Underwater Technologies 2002, 2002.
  • * This work was sponsored by Conselho Nacional de Pesquisa (CNPq).
  • ¹ In the literature, we can find the use of Kalman Filters for multi-robot localization [9, 12]. These works use relative sensorial information as the observed variable of the system, and report wheeled robots in outdoor and indoor environments.
  • ² We use r and i to denote the rth generic robot and the ith generic frame, respectively, the frame being captured by robot vr.
  • ³ We suppose that the robots have a known inertial reference, associated with the first frame of the mosaic.