Brazilian Journal of Chemical Engineering
Print version ISSN 0104-6632 / Online version ISSN 1678-4383
Braz. J. Chem. Eng. v.17 n.4-7 São Paulo Dec. 2000
http://dx.doi.org/10.1590/S010466322000000400002
AUTOMATIC DIFFERENTIATION TOOLS IN THE DYNAMIC SIMULATION OF CHEMICAL ENGINEERING PROCESSES
M.C.Castro, R.C.Vieira and E.C.Biscaia Jr.*
Programa de Engenharia Química, PEQ/COPPE/UFRJ,
Universidade Federal do Rio de Janeiro
C.P. 68.502, CEP 21.945-970, Rio de Janeiro - RJ, Brasil
E-mail: evaristo@peq.coppe.ufrj.br
(Received: November 6, 1999; Accepted: April 6, 2000)
Abstract - Automatic Differentiation (AD) is a relatively recent technique for the differentiation of functions that is applied directly to the source code that computes the function, written in a standard programming language. The technique automates the differentiation step, which is crucial for the dynamic simulation and optimization of processes. The derivative values obtained with AD are exact (up to round-off). The theoretical exactness of AD comes from the fact that it uses the same rules of differentiation as differential calculus, but these rules are applied to an algorithmic specification of the function rather than to a formula. The main purpose of this contribution is to discuss the impact of Automatic Differentiation on the field of dynamic simulation of chemical engineering processes. The influence of the differentiation technique on the behavior of the integration code, the performance of the generated code and the incorporation of AD tools into consistent initialization tools are discussed from the viewpoint of the dynamic simulation of typical models in chemical engineering.
Keywords: Dynamic simulation, Automatic differentiation, DAE systems, Consistent initialization
INTRODUCTION
The modeling of chemical engineering processes leads naturally to systems of coupled differential and algebraic equations. The differential-algebraic approach has been exhaustively discussed in previous works (Vieira et al., 1996; Vieira and Biscaia, 2000). Experience has proved that working directly with the differential-algebraic equations (henceforth DAEs) is advantageous for many reasons.
For the numerical resolution of DAE systems, the evaluation of derivatives is crucial, either for the determination of the iteration matrix or for the determination of consistent initial conditions. These topics will be discussed in detail in the next sections. Hand coding of derivatives is error prone and can only be carried out when the system is simple and small; it is unfeasible for most systems of practical interest. Thus, robust methods of differentiation capable of dealing with a broader class of applications must be investigated.
Automatic Differentiation (AD) is a non-approximate differentiation technique that permits the fast and accurate evaluation of derivatives of any order. As it incurs no truncation error, it would lead to exact results if the floating-point operations involved could be carried out in arithmetic of infinite precision. This feature is particularly important for the determination of high-order derivatives.
In this contribution, the potential of the Automatic Differentiation technique in the dynamic simulation of chemical engineering processes is discussed. In Section 2, available differentiation techniques and their major drawbacks are briefly discussed, and in Section 3 Automatic Differentiation concepts, techniques and tools are introduced. Sections 4 and 5 show the application of AD to the determination of the iteration matrix, the construction of the extended system and the solution of the consistency system, steps necessarily involved in the numerical integration of DAE systems. Section 6 contains some final remarks and directions for future research in this field.
DIFFERENTIATION TECHNIQUES
Before comparing the accuracy of differentiation methods, the sources of the observed errors must be discussed. When derivatives are obtained with a computer, error may be introduced in two ways: truncation error and round-off error. Truncation error is associated with approximate differentiation techniques, for instance when the derivative is approximated by a truncated Taylor series expansion. Round-off error is the unavoidable consequence of performing evaluations with the finite-precision arithmetic of a computer. Hand coding of derivatives is error prone, impossible to perform automatically and unfeasible for large systems, and hence will not be discussed any further in this contribution.
Numerical Differentiation
The most common alternative to hand coding is the numerical approximation of derivatives by finite difference formulas. As an example, a simple formula can be constructed from the expansion of f(x) in Taylor series truncated after the first order term:
f(x) ≈ f(x_{k}) + f'(x_{k}) (x − x_{k})    (1)
evaluated at x = x_{k} + Δx, which after some algebraic manipulation leads to:
f'(x_{k}) ≈ [f(x_{k} + Δx) − f(x_{k})] / Δx    (2)
The order of the approximation is controlled by the term in which the series has been truncated. Note that only function evaluations are needed to calculate the derivative. Thus, the codification of the algorithm is very simple and existing codes can be used after slight modifications.
A serious disadvantage of this approach lies in the trade-off between truncation error and round-off error. As the Taylor series expansion is only valid in the neighborhood of the expansion point x_{k}, small values of Δx tend to reduce the truncation error. However, small values of Δx increase the round-off error because of the division by a small divisor. There is an optimal perturbation value Δx, which depends on the function being differentiated and on the magnitude of the independent variables. Its determination is unaffordable at runtime, so a perturbation value Δx somehow acceptable for most systems is commonly employed. Other disadvantages of this approach are the instability of higher-order differentiation formulas and the computational cost of the technique, approximately n+1 times the computational effort associated with the evaluation of the function itself.
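This trade-off can be observed directly. The sketch below (illustrative only; the perturbation values are arbitrary) differentiates the exponential function with the forward-difference formula (2) and compares the error for a large, an intermediate and a very small perturbation:

```python
import math

def forward_diff(f, x, dx):
    """First-order forward-difference approximation of f'(x), as in Eq. (2)."""
    return (f(x + dx) - f(x)) / dx

# Derivative of exp at x = 1 is exp(1).  Too large a dx gives truncation
# error; too small a dx amplifies round-off through the small divisor.
x = 1.0
exact = math.exp(x)
errors = {dx: abs(forward_diff(math.exp, x, dx) - exact)
          for dx in (1e-1, 1e-8, 1e-15)}

# The intermediate perturbation is far more accurate than either extreme.
assert errors[1e-8] < errors[1e-1]
assert errors[1e-8] < errors[1e-15]
```

The optimal perturbation lies between the two failure regimes, and its location depends on the function and on the magnitude of x, as noted above.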
Symbolic Differentiation
Symbolic differentiation is a computer-aided analog of analytical differentiation employing a graph-theoretical approach. The formula representation of a function is transformed into a formula representation of its derivative, which is either interpreted or further transformed into a program in a common programming language. In principle, evaluation of these formulas gives exact values of the derivatives of the function; symbolic differentiation incurs only the round-off error resulting from the individual floating-point operations.
The major disadvantage of the symbolic approach is its high computational cost: computer algebra systems may generate expressions containing many unnecessary instances of the same subexpressions. This obviously slows down the resulting program. Besides, for complicated functions, the representation of the final expression may be an unaffordable overhead. Therefore, if only the value of the derivative (rather than an explicit expression) is needed, symbolic differentiation is not the most efficient approach. The best symbolic differentiation tools are implemented in computer algebra systems such as Maple (Char et al., 1988) and Mathematica.
AUTOMATIC DIFFERENTIATION
Automatic Differentiation is the numerical computation of exact values of the derivative of a function at a given argument value. These derivative values are obtained without generating a formula for the derivatives, thus avoiding the unnecessary overhead of symbolic differentiation and the truncation error inherent in divided difference formulas (Griewank and Corliss, 1991).
This technique is based on the fact that all expressions, no matter how complicated, are executed in the computer as an ordered sequence of elementary operations, such as sums, products and calls to intrinsic functions (sine and exponential, for instance). The chain rule can be applied sequentially to each of these basic operations, in this way constructing the code to evaluate the derivative of the function starting from the code to evaluate the function.
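As an illustration of this idea, the sketch below implements a minimal forward-mode AD in Python using "dual numbers": each value carries its derivative along, and every elementary operation applies the corresponding rule of calculus. This is a didactic sketch, not the implementation used by the tools discussed later:

```python
import math

class Dual:
    """Forward-mode AD value: carries the function value and its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (u*v)' = u'*v + u*v'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(u):
    # chain rule through an intrinsic: (sin u)' = cos(u) * u'
    return Dual(math.sin(u.val), math.cos(u.val) * u.der)

# f(x, y) = x*y + sin(x); seed dx = 1, dy = 0 to obtain df/dx
x, y = Dual(2.0, 1.0), Dual(3.0, 0.0)
f = x * y + sin(x)
# df/dx = y + cos(x) = 3 + cos(2)
assert abs(f.der - (3.0 + math.cos(2.0))) < 1e-12
```

Note that no formula for the derivative is ever formed: the chain rule is applied operation by operation as the function itself is evaluated.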
As an example, consider the function f (x,y) represented below:
(3) 
The partial derivatives of this function are easily obtained and are equal to:
(4a) 
(4b) 
Using only binary operations, this function would be represented as shown in Table 1. Differentiating each line of that code, one obtains code that generates the derivative values without ever forming the derivative formulas shown in (4a) and (4b).
In order to calculate the partial derivative with respect to x, the vector [dx,dy]^{T} is set to [1,0]^{T}, meaning that ∂y/∂x = 0 and ∂x/∂x = 1. Analogously, to calculate the partial derivative with respect to y, the vector [dx,dy]^{T} is set to [0,1]^{T}. Table 2 shows that the evaluation of the formulas in Table 1 leads to the same expressions as (4a) and (4b).
Although great advances have been made in symbolic differentiation of formulas, AD generally requires less memory and CPU time, and it also applies to functions defined by computer programs or subroutines for which no formula may be available. The Maple research group has created a library for gradient and Jacobian calculation employing AD, which shows that the two differentiation techniques are complementary and that each one is most suitable for different types of application (Monagan and Rodoni, 1996).
Modes of Differentiation
Concerning the application of the chain rule, there are basically two modes of differentiation: the forward mode and the reverse mode. As the name implies, the forward mode consists of applying the rules of differentiation to the entries of the code list in sequence. This mode is easy to understand and implement, but ordinarily requires computational effort proportional to m·n, where m is the number of independent variables and n is the length of the code list.
The reverse mode is performed in two steps: a forward sweep, in which intermediate values are computed, and a reverse sweep, in which the elementary partials are accumulated. Several intermediate variables are created during this process, and most of them are eliminated from the final version of the code. It follows that the computational effort for the reverse mode is proportional to n, the length of the code list, and independent of m, the number of independent variables. This can result in significant savings in computational time (Rall and Corliss, 1996).
There are aspects other than mere computational cost to be considered when discussing AD modes. Reverse mode implementation is considerably more sophisticated and may employ complex structures of indirect addressing, which may prevent vectorization of the final code. Hence, available AD codes employ a combination of both strategies in order to balance complexity and computational cost.
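A minimal sketch of the reverse mode follows: the forward sweep builds the code list (here, a small graph recording each elementary operation and its local partials), and the reverse sweep accumulates the adjoints in a single pass, at a cost independent of the number of independent variables. Again, this is only an illustration of the idea:

```python
class Var:
    """Entry of the code list: value, parent entries and local partials."""
    def __init__(self, val, parents=()):
        self.val = val
        self.parents = parents      # pairs (parent, d(self)/d(parent))
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.val + other.val, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.val * other.val,
                   ((self, other.val), (other, self.val)))

def backward(out):
    """Reverse sweep: accumulate d(out)/d(node) into each node.grad."""
    order, seen = [], set()
    def topo(node):                 # order the code list once
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                topo(parent)
            order.append(node)
    topo(out)
    out.grad = 1.0
    for node in reversed(order):    # accumulate the elementary partials
        for parent, partial in node.parents:
            parent.grad += partial * node.grad

# f(x, y) = (x + y) * y at x = 3, y = 4
x, y = Var(3.0), Var(4.0)
f = (x + y) * y
backward(f)
# df/dx = y = 4 ; df/dy = x + 2y = 11 -- both from one reverse sweep
assert (x.grad, y.grad) == (4.0, 11.0)
```

Both partial derivatives come out of a single reverse sweep, whereas the forward mode would need one sweep per independent variable.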
Implementation of AD
Automatic differentiation can be implemented in various ways. More recent methods, however, use either source code transformation or operator overloading. Generally speaking, source code transformation is applicable to existing codes or even to new codes built in languages that do not permit overloading, while operator overloading is useful in languages with this capability.
Source code transformation is devoted primarily to adding the capability of AD to standard Fortran codes, and efforts in this field are due to the enormous amount of code already written in Fortran. The precompiler accepts Fortran code that evaluates the function and produces Fortran 77 code that evaluates the derivative of the function. The resulting code can be compiled and executed normally, like any other Fortran code the user may have. Source code transformation is a compile-time solution: it allows one to generate derivatives from large, complicated, existing code.
Central to the development of another class of AD tools (probably the most efficient of them) is the concept of operator overloading. Roughly speaking, the definition of the operators (instead of the sequence of operations) is changed depending on the data type being processed. For instance, the "*" operator performs the usual product of two real or integer variables a, b declared as passive, but performs the operation a·b' + b·a' if these variables are declared as active. Minor changes in the original code are required, mainly in the declaration of variables, the definition of the operators and the output format. Operator-overloading AD is only supported by more modern languages that accept user-defined data types, operators and functions, such as Ada, Pascal-XSC, C++ and Fortran 90.
Codes Available
The most widely used source code transformation program is ADIFOR 2.0 (Bischof et al., 1992). This program accepts Fortran 77 code, and the output is a Fortran 77 program from which some unnecessary arithmetic operations and temporary variables have been removed. ADIFOR uses a hybrid strategy: it combines the reverse mode at statement level with overall forward mode propagation.
The most widely used operator-overloading code is ADOL-C (Automatic Differentiation by OverLoading in C++), developed by Griewank et al. (1996b) at the Technical University of Dresden (Germany). The source code is freely distributed, along with comprehensive documentation. It produces derivatives for codes written in C++, and a Fortran 90 interface, called ADOL-F, has also been created. Unfortunately, the latter code is no longer supported by the authors (Shiriaev et al., 1995).
Several other available codes can be mentioned, such as SYPERB (Lohner, 1992) for Pascal and Odyssée (Rostaing et al., 1993) for Fortran 77. For a comprehensive discussion on novel techniques and applications, see Griewank et al. (1996a).
APPLICATION FOR THE NUMERICAL INTEGRATION OF DAEs
In the dynamic simulation of PDAE systems, the most common strategy is to employ the Method of Lines, discretizing any spatial directions present. The resulting initial value DAE, Eq. (5), is then integrated in time (or along another marching direction present in the system).
F(z, z') = 0    (5)
In this contribution DAE systems are presented in the autonomous form, which entails no loss of generality: the nonautonomous DAE system F(t, z, z') = 0 of dimension n is equivalent to the autonomous DAE (6) of dimension n+1, in which the new variable z_{n+1} = t has been created and the equation z'_{n+1} = 1 has been added to the original system.
F(z_{n+1}, z, z') = 0    (6a)
z'_{n+1} = 1    (6b)
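The reduction to autonomous form described above can be sketched in a few lines (Python, with residuals as plain lists; a toy residual is used only for illustration):

```python
def make_autonomous(F):
    """Wrap a nonautonomous DAE residual F(t, z, zdot) of dimension n into
    an autonomous residual G(w, wdot) of dimension n+1, with w[n] = t and
    the appended equation dt/dt = 1, as in Eq. (6)."""
    def G(w, wdot):
        t, z, zdot = w[-1], w[:-1], wdot[:-1]
        return F(t, z, zdot) + [wdot[-1] - 1.0]   # appended: t' - 1 = 0
    return G

# toy nonautonomous DAE:  z' - t = 0
F = lambda t, z, zdot: [zdot[0] - t]
G = make_autonomous(F)

# at t = 2, z = 2, z' = 2 the original residual vanishes; so does the
# autonomous residual with w = [z, t], wdot = [z', 1]
assert G([2.0, 2.0], [2.0, 1.0]) == [0.0, 0.0]
```

The wrapped residual has one extra state and one extra equation, exactly as in the construction of system (6).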
If an automatic integration code such as DASSL (Petzold, 1989) or DAWRS (Secchi et al., 1993) is employed, the time discretization is performed using variable-order BDF formulas, leading to an algebraic nonlinear system to be solved at each time step by a Newton-type method. The iteration matrix J, defined as follows, must be computed somehow. Codes try to keep this matrix for as many time steps as possible in order to reduce overall computing time.
J = ∂F/∂z + C_{J} ∂F/∂z'    (7)
In Eq. (7), C_{J} is a constant inversely proportional to the current time step. These codes require the user to provide code for computing J, or else they compute an approximate iteration matrix by divided differences.
In this contribution, the iteration matrix has been calculated through different approaches:
(i) the model has been rewritten in Maple syntax, differentiated using the code's symbolic differentiation features, and then exported to Fortran format;
(ii) the code ADIFOR has been used to transform source code for computing F (which the user would have to supply anyway) into source code for computing J;
(iii) the built-in differentiation routines of the DASSL code (employing divided differences) have been adopted.
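For a concrete (hypothetical, far smaller than the dryer model) illustration of approach (iii), the sketch below approximates the iteration matrix of Eq. (7) by divided differences for a toy index-1 DAE and checks it against the analytic result:

```python
def iteration_matrix(F, z, zdot, cj, eps=1e-7):
    """Divided-difference approximation of J = dF/dz + cj * dF/dz',
    the iteration matrix of Eq. (7), for a residual F(z, zdot)."""
    n = len(z)
    F0 = F(z, zdot)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        zp = z[:];     zp[j] += eps        # perturb z_j
        zdp = zdot[:]; zdp[j] += eps       # perturb z'_j
        Fz, Fzd = F(zp, zdot), F(z, zdp)
        for i in range(n):
            J[i][j] = (Fz[i] - F0[i]) / eps + cj * (Fzd[i] - F0[i]) / eps
    return J

# toy semi-explicit index-1 DAE:  z0' + z0 - z1 = 0 ;  z0*z1 - 1 = 0
F = lambda z, zd: [zd[0] + z[0] - z[1], z[0] * z[1] - 1.0]
J = iteration_matrix(F, [1.0, 1.0], [0.0, 0.0], cj=10.0)

# analytic J: row 0 -> [1 + cj, -1] ; row 1 -> [z1, z0] = [1, 1]
assert abs(J[0][0] - 11.0) < 1e-5 and abs(J[0][1] + 1.0) < 1e-5
assert abs(J[1][0] - 1.0) < 1e-5 and abs(J[1][1] - 1.0) < 1e-5
```

An AD-generated routine would return the same matrix without the divided-difference error, which is the comparison carried out in the dryer simulations below.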
Results are shown for the simulation of a cross-flow dryer. The PDAE system has been converted into a DAE system by discretization of the x direction employing implicit first-order finite-difference formulas. The resulting initial value DAE in the z direction, represented in Eq. (8), is integrated via the DASSL code. Simulations have been carried out employing 10 and 49 discretization points, which led to iteration matrices of dimension 44 and 200, respectively. All parameters involved are reproduced in Table 3. For details concerning the modeling and the parameters involved, see Vieira et al. (1996).
(8a) 
(8b) 
(8c) 
(8d) 
Initial and boundary conditions

In these simulations the drying of soybeans has been considered. The state vector z comprises [Ys Ts Yg Tg]^{T}. The necessary constitutive relations are represented as Eq. (9).
(9a) 
(9b) 
(9c) 
Simulation results and the efficiency of the integrator (measured by the average number of calls to the routines that compute F and J) have been almost identical for all simulations, indicating that all codes to compute J have been correctly implemented. However, significant differences have been observed regarding ease of utilization and the performance of the resulting code, as shown in Table 4. The computational effort was measured via the ratio between the CPU time to evaluate J and the CPU time to evaluate F.
The code obtained by symbolic differentiation seems to be the most appropriate, as it presented the best J/F ratio. That ratio was not only the smallest but, more importantly, almost independent of the number of independent variables. However, the CPU time associated with the construction of the code has not been included in the measurements, only the run time of the final code. Even more limiting are the memory requirements of computer algebra environments: the construction of J with 49 mesh points could not be performed automatically, as Maple ran out of system resources on a Pentium II 430 MHz with 128 Mbytes of RAM. A general form of the differentiated equation had to be identified and applied at each mesh point interactively. This illustrates the major drawbacks of the symbolic approach: it presents serious limitations regarding the dimension of the problems it can handle, and it cannot be applied directly to the available source code for calculating F (the code had to be rewritten in Maple syntax).
The code obtained by automatic differentiation, although accurate, was less efficient than the numerical approach. This is probably due to the fact that the ADIFOR code generates several intermediate variables and is not always capable of identifying repeated or unnecessary code in the routine (Griewank et al., 1996b). Also, the final code is prepared to cope with all possible exceptions; for instance, if the differentiation of x^{1/2} is required, ADIFOR generates code to compute (1/2) x^{-1/2} and also code for the case x = 0. This overhead does not depend on the dimension of the system and tends to be amortized for increasing n. For large-scale systems, not only will this cost be less significant, but other approaches may also be unaffordable.
APPLICATION AT CONSISTENT INITIALIZATION OF DAEs
Prior to the numerical resolution of a DAE system, one must be concerned with the index of the system and with the consistency of the initial conditions. The index of a DAE is the number of times that all or part of the DAE must be differentiated with respect to time in order to convert it into an explicit set of first-order ODEs. Index 1 systems may be solved with modified ODE methods, while higher index systems (systems with index 2 or greater) require specific methods. Generally, higher index systems are rewritten in an index 1 formulation and solved with standard integration codes (Brenan et al., 1989).
During index reduction, some extra algebraic equations are obtained which generally correspond to derivatives of the original algebraic equations. Those hidden algebraic equations, along with the original DAEs, compose the extended system. Consistent initial values z(t_{0}) and z'(t_{0}) must satisfy not only the DAE system but also the underlying extended system (Gear, 1971).
The consistency equations correspond basically to the extended system coupled with a feasible set of additional algebraic equations, corresponding to any information the user may possess about the initial state or to usercontrolled specifications. Rigorous initialization methods require characterization of the DAE system and differentiation of some of the algebraic equations in order to uncover hidden constraints. The consistency equations are then solved for the consistent initial values z(t_{0}) and z’(t_{0}).
Clearly, two different applications for automatic differentiation can be identified: (1) in the differentiation of some of the equations, aiming to reveal implicit constraints; and (2) in the solution of the consistency system, probably by a derivative-based method at some stage. Hence, a rigorous initialization code needs a differentiation tool that is accurate, robust and efficient.
Even for index one DAE systems, differentiation of the original equations may be necessary to uniquely define the initial conditions. Consider the finite bath model shown in the set of equations (10). As the DAE system possesses more differential variables (with respect to time) than differential equations, the z'(t_{0}) vector could never be assigned uniquely unless equations (10b) and (10c) were differentiated.
(10a) 
(10b) 
(10c) 
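The need for this differentiation step can be illustrated on a toy semi-explicit index-1 DAE (a hypothetical example, not the finite bath model itself): the algebraic constraint fixes z(t_0), but says nothing about z'(t_0) until it is differentiated once with respect to time:

```python
# toy semi-explicit index-1 DAE:
#   y' = f(y, z) = -y * z          (differential equation)
#   0  = g(y, z) = y + z - 2       (algebraic constraint)
# Only y(0) may be assigned freely.  z(0) must satisfy g = 0, and z'(0)
# is hidden: it appears only after differentiating g once,
#   dg/dt = g_y * y' + g_z * z' = 0   ->   z' = -(g_y / g_z) * y'
f = lambda y, z: -y * z

y0 = 1.0
z0 = 2.0 - y0            # from the constraint g(y0, z0) = 0
yd0 = f(y0, z0)          # from the differential equation
gy, gz = 1.0, 1.0        # partials of g (constant for this g)
zd0 = -(gy / gz) * yd0   # from the differentiated (hidden) constraint

assert (z0, yd0, zd0) == (1.0, -1.0, 1.0)
```

For a realistic model the partials g_y and g_z are exactly what an AD tool supplies, which is application (1) identified above.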
A rigorous initialization code, under development at PEQ/COPPE/UFRJ, employs the optimization approach for the resolution of the initialization problem. This approach seems promising because some typical features of the consistency equations, such as overdetermined algebraic systems and inequality constraints, are expected to be easily handled. However, codes based on conventional optimization strategies generally require a good starting point (not always available) in order to achieve convergence. Also, such methods may be computationally intensive, especially if large-scale systems are involved.
In a previous paper (Vieira and Biscaia Jr., 2000), the authors developed a preliminary DAE initialization code using the optimization approach but based solely on a class of stochastic methods known as Evolutionary Programs (Michalewicz, 1996), of which Genetic Algorithms are the best-known subclass. In that paper it was observed that each generation of the algorithm is very simple and cheap to compute, and that the first generations are responsible for a drastic decrease in the objective function, corresponding to most of the total improvement observed in the population. However, this code demands a large number of iterations, making the overall computational time unaffordable for large systems.
At the present stage of development of the code, Evolutionary Computation techniques are being used as first search tools, very appropriate to determine initial conditions for a subsequent optimization method. Such method could afford to compute derivatives and even Hessian matrices, as only a few iterations would be needed to improve the initial guess. As a consequence, convergence problems and the overall cost of the algorithm would be reduced. To further enhance performance, genetic operators have been redefined.
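The two-stage strategy can be caricatured on a scalar root-finding problem: a cheap stochastic search supplies a starting point, and a few derivative-based Newton corrections finish the job. This sketch only mirrors the structure of the approach; the actual code uses Evolutionary Programs with redefined genetic operators and AD-generated derivatives:

```python
import random

def hybrid_solve(F, dF, lo, hi, trials=200, newton_iters=8):
    """Two-stage search in the spirit of the text: a crude stochastic stage
    supplies a starting point, then a derivative-based Newton stage refines
    it.  Illustrative sketch only."""
    random.seed(0)
    # stage 1: cheap global search, no derivatives needed
    x = min((random.uniform(lo, hi) for _ in range(trials)),
            key=lambda v: abs(F(v)))
    # stage 2: a few Newton corrections using the derivative
    for _ in range(newton_iters):
        x -= F(x) / dF(x)
    return x

# solve x**3 - x - 2 = 0 on [0, 3]; the real root is near 1.5214
root = hybrid_solve(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, 0.0, 3.0)
assert abs(root**3 - root - 2) < 1e-10
```

The first stage absorbs the sensitivity to the starting point, while the second needs only a handful of derivative evaluations, which is where an AD tool pays off.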
All the differentiation involved is computed offline through the automatic differentiation code ADIFOR. The code has so far been capable of producing efficient derivative code even for large-scale systems. At the next stage of development, an interface to the software will be created so that the code manipulation is performed automatically.
CONCLUSIONS
Dynamic simulation in general, and especially the simulation of highly nonlinear chemical engineering models, can benefit from the development of robust and accurate differentiation techniques and tools. In this work some key features of the Automatic Differentiation field have been discussed, and the possible achievements resulting from the use of the tool ADIFOR were exemplified in two different but complementary areas of research: numerical integration and consistent initialization of DAEs.
It has been noticed that the computational performance of the code generated with AD is not the most attractive when compared with other available techniques. However, it must be emphasized that numerical differentiation is not suitable for the calculation of higher-order derivatives and incurs both truncation and round-off errors. Symbolic differentiation, as precise as AD, has been shown to be unaffordable for most systems of interest.
The initialization code under development at PEQ/COPPE/UFRJ has been briefly discussed. In its next stage, an interface to the AD tool ADIFOR will be implemented, so that the necessary differentiation is carried out automatically by the code, following only directions given by the user. As this is still a work in progress, other AD tools are under consideration. There is a strong belief that operator overloading implementations of AD will produce the most efficient code, also allowing differentiation to be performed at run time. This direction of research will strongly depend on further improvements of the interface ADOL-F, as the main goal is to create an initialization tool to be used by Fortran programmers.
NOMENCLATURE
Finite Bath Example
a_{i0}  Constants of Langmuir's Equation 
a_{ij}  Constants of Langmuir's Equation 
A_{0}  external superficial area of particles, m^{2} 
Cd_{i}  dimensionless concentration of component i on the bath, Cd_{i}=Xd_{i}/Cd_{i0} 
Cp_{i}  dimensionless concentration of component i on the fluid phase of the particle, Cp_{i}=Xp_{i}/Cd_{i0} 
Cs_{i}  dimensionless concentration of component i on the solid phase of the particle, Cs_{i}=Xs_{i}/Cd_{i0} 
Dp_{i}  effective diffusivity on the fluid phase of the particle, cm^{2}/s 
Kf_{i}  mass transfer coefficient on the film, cm/s 
r  dimensionless radial direction, r=(ξ/R_{P})^{2} 
R_{P}  particle radius, m 
Shp_{i}  Sherwood number, dimensionless 
t  dimensionless time, t=(Dp_{1}θ)/A_{0} 
Xd_{i}  concentration of component i on the bath, mol/m^{3} 
Xp_{i}  concentration of component i on the fluid phase of the particle, mol/m^{3} 
Xs_{i}  concentration of component i on the solid phase of the particle, mol/m^{3} 
Greek Letters
ε_{b}  void fraction in bath, dimensionless 
ε_{p}  void fraction in particles, dimensionless 
θ  dimensional time, s 
ξ  dimensional radial direction, m 
Dryer Example
Cp  heat capacity, J/ kg K 
Gg  gas mass flow, kg / m^{2} s 
ha  volumetric heat transfer coefficient, J / m^{3} s 
K_{s}a  volumetric mass transfer coefficient, kg / m^{3} s 
L_{d}  dryer length, m 
t  time, s 
Tg  fluid phase temperature, K 
Ts  grain temperature, K 
Yg  mass fraction of water in gas phase (%dry basis) 
Yg*  mass fraction of water in gas phase in equilibrium with solid phase (%dry basis) 
Ys  mass fraction of water in solid phase (%dry basis) 
z  spatial direction, dimensionless 
Greek Letters
ε  bed porosity, dimensionless 
λ  latent heat of vaporization of free water, J / kg 
ρ_{s}  solid density, kg / m^{3} 
Subscripts
ar  dry air 
l  liquid water 
s  dry grain 
v  water vapor 
g  gas phase 
0  values at time t=0 
a  values at boundary z = 0 
ACKNOWLEDGEMENTS
The authors would like to thank FAPERJ (grant #E26/150.97099) for supporting this research.
REFERENCES
Bischof, C.H., Carle, A., Corliss, G.F., Griewank, A. and Hovland, P., ADIFOR: Generating Derivative Codes from Fortran Programs, Scientific Programming, vol. 1, pp. 1-29 (1992)
Brenan, K.E., Campbell, S.L. and Petzold, L.R., Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, New York, 1st edition, Elsevier Science Publishing Co. Inc. (1989)
Char, B.W., Geddes, K.O., Gonnet, G.H., Monagan, M.B. and Watt, S.M., MAPLE Reference Manual, Watcom Publications, Waterloo, Ontario, Canada (1988)
Gear, C.W., The Simultaneous Numerical Solution of Differential-Algebraic Equations, IEEE Trans. Circuit Theory, CT-18, pp. 89-95 (1971)
Griewank, A. and Corliss, G.F., Automatic Differentiation of Algorithms: Theory, Implementation and Application, SIAM, Philadelphia, Penn. (1991)
Griewank, A., Corliss, G., Berz, M. and Bischof, C., Computational Differentiation: Techniques, Applications and Tools, SIAM Proceedings Series, Philadelphia, Penn. (1996a)
Griewank, A., Juedes, D., Mitev, H., Utke, J., Vogel, O. and Walther, A., ADOL-C: A Package for the Automatic Differentiation of Algorithms Written in C/C++, ACM TOMS, vol. 22, n. 2, pp. 131-167, Algor. 755 (1996b)
Lohner, R.J., Verified Computing and Programs in Pascal-XSC, with Examples, Habilitationsschrift, Institute for Applied Mathematics, University of Karlsruhe (1992)
Michalewicz, Z., Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag (1996)
Monagan, M.B. and Rodoni, R.R., An Implementation of the Forward and Reverse Modes of Automatic Differentiation in Maple, In: Computational Differentiation: Techniques, Applications and Tools, SIAM Proceedings Series, Philadelphia, Penn., pp. 353-365 (1996)
Petzold, L.R., DASSL code, version 1989, Computing and Mathematics Research Division, Lawrence Livermore National Laboratory, L316, PO Box 808, Livermore, CA 94559 (1989)
Rall, L. and Corliss, G.F., An Introduction to Automatic Differentiation, In: Computational Differentiation: Techniques, Applications and Tools, SIAM Proceedings Series, Philadelphia, Penn., pp. 1-18 (1996)
Rostaing, N., Dalmas, S. and Galligo, A., Automatic Differentiation in Odyssée, Tellus, 45A, pp. 558-568 (1993)
Secchi, A., Biscaia Jr., E. and Morari, M., The Waveform Relaxation Method in the Concurrent Dynamic Process Simulation, Computers and Chemical Engineering, Pergamon Press, vol. 17, n. 7, pp. 683-704 (1993)
Shiriaev, D., Griewank, A. and Utke, J., A User Guide to ADOL-F: Automatic Differentiation of Fortran Codes, Technical Report, Institute of Scientific Computing, TU Dresden (1995)
Vieira, R. and Biscaia Jr., E., Heuristic Optimization in Dynamic Process Simulation, to appear in Computers and Chemical Engineering (2000)
Vieira, R., Ourique, C. and Biscaia Jr., E., Enfoque Algébrico-Diferencial na Simulação de Secadores de Leito Fixo e de Fluxo Cruzado, XXIV Congresso Brasileiro de Sistemas Particulados - XXIV ENEMP (1996)
*To whom correspondence should be addressed