
THERMODYNAMICS AND SEPARATION PROCESSES

Prediction of high-pressure vapor liquid equilibrium of six binary systems, carbon dioxide with six esters, using an artificial neural network model

C. Si-Moussa(I),*; S. Hanini(II); R. Derriche(I); M. Bouhedda(II); A. Bouzidi(III)

*To whom correspondence should be addressed

(I) Département de Génie Chimique, Ecole Nationale Polytechnique, BP 182, El Harrach, Algérie. E-mail: simoussa_cherif@yahoo.fr

(II) LBMPT, Université de Médéa, Quartier Ain D’Heb, 26000, Médéa, Algérie

(III) Centre de Recherche Nucléaire de Birine, Algérie

ABSTRACT

Artificial neural networks are applied to high-pressure vapor liquid equilibrium (VLE) related literature data to develop and validate a model capable of predicting VLE of six CO2-ester binaries (CO2-ethyl caprate, CO2-ethyl caproate, CO2-ethyl caprylate, CO2-diethyl carbonate, CO2-ethyl butyrate and CO2-isopropyl acetate). A feed forward, back propagation network is used with one hidden layer. The model has five inputs (two intensive state variables and three pure ester properties) and two outputs (two intensive state variables). The network is systematically trained with 112 data points in the temperature and pressure ranges (308.2-328.2 K) and (1.665-9.218 MPa), respectively, and is validated with 56 data points in the temperature range (308.2-328.2 K). Different combinations of network architecture and training algorithms are studied. The training and validation strategy is focused on the use of a validation agreement vector, determined from linear regression analysis of the plots of the predicted versus experimental outputs, as an indication of the predictive ability of the neural network model. Statistical analyses of the predictability of the optimised neural network model show excellent agreement with experimental data (a coefficient of correlation equal to 0.9995 and 0.9886, and a root mean square error equal to 0.0595 and 0.00032 for the predicted equilibrium pressure and CO2 vapor phase composition, respectively). Furthermore, the comparison, in terms of average absolute relative deviation, between the predicted results for each binary over the whole temperature range and literature results predicted by cubic equations of state with various mixing rules and excess Gibbs energy models shows that the artificial neural network model gives far better results.

Keywords: Vapor liquid equilibrium; High pressure; Artificial neural networks; Carbon dioxide; Esters.

INTRODUCTION

The interest in high-pressure phase equilibrium is increasing due to its importance for many chemical processes that are conducted at high pressures in various industries (pharmaceutical, cosmetic, food, petroleum, natural gas, etc.), particularly supercritical-fluid extraction processes. Christov and Dohrn (2002) have reviewed the literature published between 1994 and 1999 on high-pressure fluid phase equilibrium in terms of the experimental methods used and the systems investigated. That review shows that, of the 1336 systems investigated, 626 (47%) were carbon dioxide systems (350 binary and 191 ternary systems). Carbon dioxide is the most commonly used supercritical fluid for extraction and material processing owing to its availability, inertness, non-flammability, non-toxicity, low cost and low critical temperature and pressure. Fundamentals and applications of supercritical fluid technology have been described by McHugh and Krukonis (1994) and Brunner (1994).

Information about the phase behavior of fluid mixtures can be obtained from direct measurement of phase equilibrium data or by the use of equation of state and/or activity coefficient based thermodynamic models. Direct measurement of precise experimental data is often difficult and expensive, while the second method, which includes a large number of equations of state and excess Gibbs free energy models, is tedious and involves a certain amount of empiricism: mixture constants must be determined by fitting experimental data with various arbitrary mixing rules, which makes the selection of the appropriate model for a particular case difficult.

On the other hand, artificial neural networks (ANN), which can be viewed as a universal approximation tool with an inherent ability to extract from experimental data the highly nonlinear and complex relationships between the variables of the problem at hand, have gained broad attention within process engineering as a robust and efficient computational tool. They have been successfully used to solve problems in biochemical and chemical engineering (Baughman and Liu, 1995). Such applications include bioreactor control with unstable parameters (Syu and Tsao, 1993), fault detection and predictive modelling (Ferentinos, 2005), modelling and simulation of a fermentation process (Hongwen et al., 2005), dynamic modelling, simulation and control of a fixed-bed reactor (Shahrokhi and Baghmisheh, 2005), modelling of a continuous fluidized bed dryer (Satish and Setty, 2005), polymerization processes (Fernandes and Lona, 2005), modelling of a liquid-liquid extraction column (Chouai et al., 2000), modelling and simulation of CO2-supercritical fluid extraction of black cumin seed oil (Fullana et al., 1999) and of black pepper essential oil (Izadifar and Abdolahi, 2005), and recovery of biological products from fermentation broths (Patnaik, 1999).

As far as thermophysical properties and phase equilibria are concerned, Scalabrin et al. (2002) have used an extended corresponding states NN model to predict residual properties of several pure halocarbons. Chouai et al. (2002) have used an ANN model to estimate the compressibility factor for the liquid and vapor phases as a function of temperature and pressure for several refrigerants. Laugier and Richon (2003) have used two ANN models, one for the vapor phase and the other for the liquid phase, to estimate the compressibility factor and the density of some refrigerants as a function of pressure and temperature. Boozarjomehry et al. (2005) have developed a set of feedforward multilayer neural networks for the prediction of some basic properties of pure substances and petroleum fractions. Khayamian and Esteki (2004) have proposed a wavelet neural network model to predict the solubility of some polycyclic aromatic compounds in supercritical CO2. Tabaraki et al. (2005) have also used a wavelet neural network to predict the solubility of azo dyes in supercritical CO2, and Tabaraki et al. (2006) that of 25 anthraquinone dyes in supercritical CO2 at different conditions of temperature and pressure. Havel et al. (2002) have proposed ANN for the evaluation of chemical equilibria.

Applications of ANN for the prediction of VLE have been reported in a number of papers. Petersen et al. (1994) have used ANN to estimate activity coefficients based on group contribution methods. Guimaraes and McGreavy (1995) have reported the use of ANN to estimate VLE in terms of bubble point conditions of the benzene-toluene binary. Sharma et al. (1999), in a paper which emphasizes the potential advantages of ANN over EOS models for VLE prediction, have reported the use of ANN models which take the equilibrium pressure and temperature as inputs to predict the liquid and vapor phase compositions for low-pressure VLE of the methane-ethane and ammonia-water binaries. In the work of Urata et al. (2002), two multilayer perceptrons have been used to estimate VLE of binary systems containing hydrofluoroethers and polar compounds. Ganguly (2003) has used ANN with radial basis functions to predict VLE of binary and ternary systems. Piowtrowski et al. (2003) have used a feedforward multilayer neural network to simulate complex VLE in an industrial process of urea synthesis from ammonia and carbon dioxide. Bilgin (2004) has also used a feedforward ANN to estimate isobaric VLE in terms of activity coefficients for the methylcyclohexane–toluene and isopropanol–methyl isobutyl ketone systems. More recently, Mohanty (2005) has used a multilayer perceptron ANN with one hidden layer to predict VLE in terms of liquid and vapor phase compositions, given the equilibrium temperature and pressure, for each of three binary systems (CO2-ethyl caprate, CO2-ethyl caproate and CO2-ethyl caprylate). In another paper, Mohanty (2006) has reported the use of ANN to estimate the bubble pressure and the vapor phase composition of the CO2–difluoromethane system. In all the works reported herein regarding phase equilibrium, ANN have been applied to a single binary system at various conditions of equilibrium pressure and temperature.

In this work an attempt was made to estimate high-pressure VLE of six CO2-ester binaries using a single ANN predictive model. Three pure component properties and two intensive state variables have been selected as the NN inputs in order to describe the VLE of the six binaries in one model. The experimental data used for training and validation of the NN are those reported by Hwu et al. (2004) for the CO2-ethyl caprate, CO2-ethyl caproate and CO2-ethyl caprylate systems, and by Cheng and Chen (2005) for the CO2-diethyl carbonate, CO2-ethyl butyrate and CO2-isopropyl acetate systems. As mentioned by these authors, these systems are of importance for supercritical fluid extraction. Isopropyl acetate, ethyl butyrate and ethyl caprylate are used as perfumes or aroma additives in the cosmetic and food industries, whereas diethyl carbonate, ethyl caproate and ethyl caprate are used in organic synthesis, in the production of essential oils and in the production of resins.

NEURAL NETWORK MODELLING

Feedforward Artificial Neural Networks Concept

The idea of artificial neural networks was inspired by the way biological neurons process information. This concept is used to implement software simulations of massively parallel processes involving processing elements interconnected in a network architecture. Learning in the human brain occurs in a network of neurons that are interconnected by axons, synapses and dendrites. A variable synaptic resistance affects the flow of information between two biological neurons. The artificial neuron receives inputs that are analogous to the electrochemical impulses that the dendrites of biological neurons receive from other neurons. Therefore, an ANN can be viewed as a network of neurons, which are processing elements, and weighted connections. The connections and weights are analogous to the axons and synapses of the human brain, respectively. The ANN, simulating the analytical function of the human brain, has an intrinsic ability to learn and recognize highly non-linear and complex relationships by experience.

The artificial neurons are arranged in layers (Fig. 1), wherein the input layer receives inputs (ui) from the real world and each succeeding layer receives weighted outputs (wij.ui) from the preceding layer as its input, resulting in a feedforward ANN in which each input is fed forward to its succeeding layer, where it is processed. The outputs of the last layer constitute the outputs to the real world.


In such a feedforward ANN a neuron in a hidden or an output layer has two tasks:

It sums the weighted inputs from several connections plus a bias value and then applies a transfer function to the sum, as given by (for neuron j of the hidden layer):

$$z_j = f\Big(\sum_i w_{ji}^{I}\, u_i + b_j^{h}\Big) \qquad (1)$$

It propagates the resulting value through outgoing connections to the neurons of the succeeding layer, where it undergoes the same process; for instance, the outputs z_j of the hidden layer fed to neuron k of the output layer give the output v_k:

$$v_k = f\Big(\sum_j w_{kj}^{h}\, z_j + b_k^{o}\Big) \qquad (2)$$

Combining Equations 1 and 2, the relation between the output v_k and the inputs u_i of the NN is obtained:

$$v_k = f\Big(\sum_j w_{kj}^{h}\, f\Big(\sum_i w_{ji}^{I}\, u_i + b_j^{h}\Big) + b_k^{o}\Big) \qquad (3)$$

The output is computed by means of a transfer function, also called an activation function. It is desirable that the activation function exhibit a sort of step behavior. Furthermore, because continuity and differentiability at all points are required by current optimization algorithms, typical activation functions which fulfill these requirements are:

Hyperbolic tangent sigmoid transfer function:

$$f(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$

Logarithmic sigmoid transfer function:

$$f(x) = \frac{1}{1 + e^{-x}}$$

Pure linear transfer function:

$$f(x) = x$$
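The forward pass of Equations 1-3 can be sketched in a few lines; this is a Python/numpy illustration rather than the MATLAB environment used in this work, the 5-20-2 shapes anticipate the optimized network described later, and the weights below are random placeholders, not fitted values:

```python
import numpy as np

def forward(u, wI, bh, wh, bo):
    """Forward pass of a one-hidden-layer feedforward NN (Eqs. 1-3).

    u  : inputs (n_in,)           wI : input-hidden weights (n_hid, n_in)
    bh : hidden biases (n_hid,)   wh : hidden-output weights (n_out, n_hid)
    bo : output biases (n_out,)
    """
    z = np.tanh(wI @ u + bh)  # hidden layer, tanh sigmoid activation
    return wh @ z + bo        # output layer, pure linear activation

# Shapes of the 5-20-2 network used in this work; random placeholder weights.
rng = np.random.default_rng(0)
wI, bh = rng.normal(size=(20, 5)), rng.normal(size=20)
wh, bo = rng.normal(size=(2, 20)), rng.normal(size=2)
v = forward(rng.normal(size=5), wI, bh, wh, bo)
print(v.shape)  # (2,)
```

With the fitted weight matrices and bias vectors (Table 5), such a function would reproduce the model's two outputs from the five scaled inputs.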

The number of neurons in the input and output layers is determined by the number of independent and dependent variables, respectively. The user defines the number of hidden layers and the number of neurons in each hidden layer. Model development is achieved by a process of training in which a set of experimental data of the independent variables is presented to the input layer of the network. The outputs from the output layer comprise a prediction of the dependent variables of the model. The network learns the relationships between the independent and dependent variables by iterative comparison of the predicted outputs and experimental outputs and subsequent adjustment of the weight matrix and bias vector of each layer by a back propagation training algorithm. Hence, the network develops a NN model capable of predicting with acceptable accuracy the output variables lying within the model space defined by the training set. Consequently, the objective of ANN modelling is to minimize the prediction errors of validation data presented to the network after completion of the training step.

Selection of a Neural Network Model

Although there is continuing debate on model selection strategies, it is clear that the successful application of ANN in modelling engineering problems is highly affected by four major factors:

1. Network type (recurrent networks, feedforward backpropagation, wavelet neural network, radial basis functions, etc.),

2. Network structure (number of hidden layers, number of neurons per hidden layer),

3. Activation functions,

4. Training algorithms.

It is well established that the variation of the number of neurons in the hidden layer(s) has a significant effect on the predictive ability of the network. For Plumb et al. (2005), the most common way of optimizing the performance of an ANN is by varying the number of neurons in the hidden layer(s) and selecting the architecture with the highest predictive ability. According to Swingler (1996), a heuristic rule suggests that a multilayer NN with one hidden layer will never require more than twice as many hidden neurons as inputs. However, Curry and Morgan (2006), who have discussed the related difficulties in the selection of a neural network structure, state that heuristic rules have little to offer and that such rules, suggesting that the number of hidden neurons should be directly related to the number of inputs and outputs, are of only historical interest, having been popular immediately after the path-finding work of Rumelhart et al. (1986). With regard to methods of choosing the number of hidden layers and hidden neurons in the design of a NN model, the findings of Curry and Morgan (2006) show that neither heuristics nor statistical concepts can be used conclusively. They concluded that the simplest, and probably still the most popular sensible, strategy (the practitioner's strategy, as they labeled it) is to focus directly on minimizing the root mean square error. The obvious difficulty with this strategy is the excessive computational burden brought about by the time-consuming process of experimenting with different models: the cause is the 'combinatorial explosion' in the number of different NN models which need to be run. Henrique et al. (2000) presented a procedure for model structure determination in feedforward networks, based on network pruning using orthogonal least-squares techniques to determine insignificant or redundant synaptic weights, biases, hidden neurons and network inputs.

As far as training algorithms are concerned, the following classes of algorithms, which are implemented in the MATLAB® Neural Network Toolbox, are the most commonly used: Levenberg-Marquardt backpropagation (Hagan and Menhaj, 1994); Bayesian regularization backpropagation (MacKay, 1992; Foresee and Hagan, 1997); conjugate gradient backpropagation (Moller, 1993; Powell, 1977); gradient descent backpropagation (Hagan et al., 1996); and quasi-Newton (Battiti, 1992).

A significant problem with ANN training is the tendency to over-train, resulting in a lack of ability to accurately predict data excluded from the training set. To overcome this, a test data set may be presented to the network during training in such a way that training terminates at the point where the error of the test set predictions begins to diverge. This method is known as "stopped training" (Bourquin et al., 1997) or "attenuated training" (Plumb et al., 2002; Plumb et al., 2005). Attenuated training is generally applicable to gradient descent, conjugate gradient and quasi-Newton training algorithms. The Bayesian regularization training algorithm uses a modified performance function designed to minimize over-training by smoothing the error surface of the training set. Plumb et al. (2005) have compared these classes of algorithms and three different NN packages applied to problems in the pharmaceutical industry. Amongst their conclusions is that MATLAB/Bayesian regularization models train more successfully than MATLAB models using attenuated training. They also proposed the trial-and-error strategy (practitioner's strategy) for the selection of the best NN structure and training algorithms, focusing on the goodness of fit, R2, determined for validation plots of the predicted versus observed data, as a measure of the predictive ability of the model.

VLE MODELLING WITH NEURAL NETWORK

In order to describe the phase behavior of the six CO2(1)-ester(2) binaries by one ANN model, a total of seven variables have been selected in this work: four intensive state variables (equilibrium temperature, equilibrium pressure and equilibrium CO2 mole fractions in the liquid and vapor phases) and three pure component properties of the ester (critical temperature, critical pressure and acentric factor). The choice of the input and output variables was based on the phase rule, practical considerations (bubble or dew point computation) and the need to describe the six binaries by only one ANN model. Therefore, the equilibrium temperature and the CO2 mole fraction in the liquid phase, together with the pure component properties of the esters, have been selected as input variables and the remaining two as output variables (Fig. 2).
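Concretely, a single input record of the model is just five numbers. The sketch below uses hypothetical placeholder values; the actual ester properties are those listed in Table 1:

```python
# One input record of the NN model: five numbers per data point.
# All values below are hypothetical placeholders, NOT data from Tables 1-2.
record = {
    "T": 308.2,     # equilibrium temperature, K
    "x1": 0.45,     # CO2 mole fraction in the liquid phase
    "Tc": 600.0,    # ester critical temperature, K (placeholder)
    "Pc": 3.0,      # ester critical pressure, MPa (placeholder)
    "omega": 0.5,   # ester acentric factor (placeholder)
}
inputs = [record[k] for k in ("T", "x1", "Tc", "Pc", "omega")]
# The two outputs to be predicted for this record: P (MPa) and y1.
```

Because the three ester properties are part of the input vector, one and the same network can represent all six binaries.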


The experimental data reported by Hwu et al. (2004) for the CO2-ethyl caprate, CO2-ethyl caproate and CO2-ethyl caprylate systems and by Cheng and Chen (2005) for the CO2-diethyl carbonate, CO2-ethyl butyrate and CO2-isopropyl acetate systems have been used for training and validation of the ANN model. The pure component properties of the six esters used in this work are listed in Table 1. The range of the intensive state variables and the number of data points for each binary are listed in Table 2.

The application of ANN modelling of VLE of the six CO2-ester binaries was performed using MATLAB® (version 6.1) and the strategy proposed by Plumb et al. (2005) as follows:

1. The experimental data should be divided into a training set, a test set (when attenuated training is adopted) and a validation set. Each data set should be well distributed throughout the model space.

2. Initially, the model should be trained using the default training algorithm and network architecture. The parameters of the equation of the best fit (the slope and the y intercept of the linear regression) or the goodness of fit (correlation coefficient, R2) are determined for validation plots of the predicted versus the experimental properties of the validation data set. These parameters are used as a measure of the predictive ability of the model. Where the agreement vector values approach the ideal, i.e. [a=1 (slope), b=0 (y intercept), R2=1], little improvement in predictive ability is to be expected. The ANN model with the best agreement vector is retained and the procedure is stopped.

3. Where the values of the parameters of the agreement vector vary greatly from the ideal and the model is poorly predictive, modification of the number of hidden layer neurons is then considered.

4. If model performance remains unsatisfactory a systematic investigation of the effect of varying both the training algorithm and network architecture is required.
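The agreement vector of step 2 above (slope, y-intercept and R2 of predicted versus experimental values) can be computed with a few lines of numpy; this is a hedged sketch, not the MATLAB routine used in this work:

```python
import numpy as np

def agreement_vector(predicted, experimental):
    """Agreement vector [a, b, R2] from the plot of predicted vs. experimental.

    a and b are the slope and y-intercept of the least-squares line;
    R2 is the squared correlation coefficient. Ideal values: [1, 0, 1].
    """
    predicted = np.asarray(predicted, dtype=float)
    experimental = np.asarray(experimental, dtype=float)
    a, b = np.polyfit(experimental, predicted, deg=1)   # least-squares line
    r = np.corrcoef(experimental, predicted)[0, 1]      # Pearson correlation
    return a, b, r ** 2

# Perfect predictions reproduce the ideal agreement vector [1, 0, 1]
a, b, r2 = agreement_vector([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```

The closer [a, b, R2] is to [1, 0, 1] on the validation set, the better the predictive ability of the candidate model.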

Based on this global strategy, details of the different phases of the procedure followed in this work are depicted in Figure 3.


All the input and output data were scaled so as to have a normal distribution with zero mean and unit standard deviation using the following scaling equation:

$$u_{scaled} = \frac{u - \mu}{\sigma}$$

where µ and σ are the mean and standard deviation of the actual data, respectively. The values of µ and σ for the input and output data referred to in Table 1 and Table 2 are listed in Table 3. The MATLAB Neural Network Toolbox contains various pre- and post-processing methods; when using the above scaling method, scaling and de-scaling are carried out by the prestd and poststd MATLAB functions, respectively. In order to have data sets well distributed throughout the model space, the data points referred to in Table 2 were sorted in increasing order of the equilibrium temperature and CO2 mole fraction in the liquid phase; two of every three records were then used for training and the third for validation, resulting in a two-thirds (112 data points) / one-third (56 data points) partition for training and validation, respectively. Whenever attenuated training was used, the initial training set (112 data points) was divided, in the same way, into a training set (75 data points) and a test set (37 data points).
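The scaling and the two-thirds/one-third partitioning just described can be sketched as follows (a Python illustration, not the original MATLAB prestd call; the ddof choice in the standard deviation is an assumption):

```python
import numpy as np

def standardize(data):
    """Column-wise scaling to zero mean and unit standard deviation."""
    mu = data.mean(axis=0)
    sigma = data.std(axis=0, ddof=1)  # sample standard deviation (assumption)
    return (data - mu) / sigma, mu, sigma

def split_two_thirds(data):
    """After sorting, every third record goes to validation, the rest to training."""
    idx = np.arange(len(data))
    return data[idx % 3 != 2], data[idx % 3 == 2]

# 168 sorted data points, as in this work
data = np.arange(168, dtype=float).reshape(168, 1)
scaled, mu, sigma = standardize(data)
train, val = split_two_thirds(scaled)
print(len(train), len(val))  # 112 56
```

Taking every third sorted record for validation keeps both subsets spread across the whole model space, as required by step 1 of the strategy.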

The above strategy was implemented in a MATLAB program for ANN modelling of the VLE of the six CO2(1)-ester(2) binaries as described in Figure 2. Initially, the program starts with the default feedforward backpropagation NN type (newff MATLAB function), the Levenberg-Marquardt backpropagation training algorithm (trainlm MATLAB training function) and one hidden layer. Once the topology is specified, the starting and ending number(s) of neurons in the hidden layer(s) have to be specified. The number of neurons in a hidden layer is then modified by adding neurons one at a time. The procedure begins with the logarithmic sigmoid activation function and then the hyperbolic tangent sigmoid activation function for the hidden layers, with the linear activation function for the output layer. The results of different runs of the program show, as pointed out by Plumb et al. (2005), that Bayesian regularization backpropagation using Levenberg-Marquardt optimization (BRBP) models train more successfully than models using attenuated training. Table 4 shows the structure of the optimized NN model. The weight matrices and bias vectors of the optimized NN model are listed in Table 5, where wI is the input-hidden layer connection weight matrix (20 rows × 5 columns), wh is the hidden layer-output connection weight matrix (2 rows × 20 columns), bh is the hidden neuron bias column vector (20 rows) and bo is the output neuron bias column vector (2 rows).

RESULTS AND DISCUSSION

The assessment of predictive ability requires evaluation on data records excluded from the training set. Accordingly, the validation agreement vector and the validation agreement plot of the predicted versus the experimental outputs for the validation data set were used to evaluate the predictive ability of the NN model. The plot and the parameters of the linear regression are straightforwardly obtained using the postreg MATLAB function. Figure 4 shows the validation agreement plot for the equilibrium pressure, with an agreement vector approaching the ideal, [a, b, R2] = [1.01, -0.0235, 0.999]. Figure 5 shows the same plot for the CO2 mole fraction in the vapor phase, with an agreement vector equal to [0.884, 0.115, 0.971].



Table 6 shows the validation agreement vector calculated per binary for the training and validation data sets. It shows that the least favorable regression parameters are those obtained for the prediction of the vapor phase composition of the CO2 (1)-ethyl caprylate (2) system.

Table 7 reports the commonly used deviations calculated for the two predicted outputs of the NN model (P: equilibrium pressure; y1: CO2 mole fraction in the vapor phase) for the whole data set:

Average Absolute Deviation:

$$\mathrm{AAD} = \frac{1}{N}\sum_{i=1}^{N} \left| X_i^{exp} - X_i^{cal} \right|$$

Average Absolute Relative Deviation:

$$\mathrm{AARD}(\%) = \frac{100}{N}\sum_{i=1}^{N} \frac{\left| X_i^{exp} - X_i^{cal} \right|}{X_i^{exp}}$$

Root Mean Square Error (square root of the average sum of squares):

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left( X_i^{exp} - X_i^{cal} \right)^{2}}$$

The maximum of the Absolute Relative Deviation is equal to 4.95% and 0.19% for P and y1 respectively. Similarly, the maximum of the Absolute Deviation is equal to 0.42 MPa and 0.0019 for P and y1 respectively.
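For reference, the three deviation measures can be computed as below (a sketch with illustrative values, not data from this work):

```python
import numpy as np

def deviations(exp, cal):
    """AAD, AARD (%) and RMSE between experimental and calculated values."""
    exp = np.asarray(exp, dtype=float)
    cal = np.asarray(cal, dtype=float)
    aad = np.abs(exp - cal).mean()                     # Average Absolute Deviation
    aard = 100.0 * (np.abs(exp - cal) / exp).mean()    # AARD, percent
    rmse = np.sqrt(((exp - cal) ** 2).mean())          # Root Mean Square Error
    return aad, aard, rmse

aad, aard, rmse = deviations([2.0, 4.0], [1.5, 4.5])
print(aad, aard, rmse)  # 0.5 18.75 0.5
```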

Figures 6-11 show the bubble pressure and dew pressure curves for the six CO2(1)-ester(2) binaries. They include a comparison between experimental data and NN-predicted results at three temperatures, plus two test temperatures inside the range of experimental data, in order to show the interpolating ability of the ANN model. For the three experimental temperatures (shown as circles, squares and downward triangles), the figures show excellent agreement between experimental literature data (shown as white-face markers) and the NN-predicted results (shown as dark-face markers). For the two test temperatures (shown as dark-face pentagrams and diamonds), the predicted bubble and dew curves closely follow the trend of the experimental data at adjacent temperatures, which suggests a good predictive ability of the NN model for temperatures within the range for which the model has been designed.


The predictions of the CO2 mole fraction in the vapor phase for the three temperatures for which experimental data are available and for the two test temperatures are shown (Fig. 12) in terms of plots of the K-value of CO2 versus the CO2 mole fraction in the liquid phase for the sample case (the CO2(1)-isopropyl acetate(2) system) with the highest root mean square error (0.0005). Here also, the comparison between the experimental data and the NN model predictions shows very good agreement and a good interpolating ability for temperatures within the range for which the model has been designed.


In order to establish the developed NN model as a plausible alternative to cubic EOS and GE models for VLE data prediction of the CO2-ester systems studied herein, the results obtained by the ANN model were compared to the experimental data reported by Hwu et al. (2004) for the CO2-ethyl caprate, CO2-ethyl caproate and CO2-ethyl caprylate systems, and by Cheng and Chen (2005) for the CO2-diethyl carbonate, CO2-ethyl butyrate and CO2-isopropyl acetate systems. The comparison covers the results predicted for each binary over the entire temperature range using the Peng-Robinson (PR) and Soave-Redlich-Kwong (SRK) EOS with the one- and two-parameter Van der Waals (VDW1 and VDW2) mixing rules (MR) and the Panagiotopoulos-Reid (PPR) mixing rules, as well as those predicted using NRTL and UNIQUAC combined with the PR and SRK EOS with Huron-Vidal (HV) mixing rules. The results of the comparison are shown in Table 8: the % deviations of the NN model's predicted bubble pressure and CO2 mole fraction in the vapor phase are lower than those obtained by the EOS and GE models for all the CO2-ester binaries.

CONCLUSIONS

A feedforward artificial neural network model has been used to predict the bubble pressure and the vapor phase composition of six CO2-ester binaries (CO2-ethyl caprate, CO2-ethyl caproate, CO2-ethyl caprylate, CO2-diethyl carbonate, CO2-ethyl butyrate and CO2-isopropyl acetate) given the temperature, the mole fraction of CO2 in the liquid phase, and the critical temperature, critical pressure and acentric factor of the ester, in contrast to previous works where ANN have been used to model only one binary. The optimized NN consisted of five neurons in the input layer, 20 neurons in the hidden layer and two neurons in the output layer. This was obtained by applying a strategy based on assessing the parameters of the best fit of the validation agreement plots (the slope and y-intercept of the equation of the best fit and the correlation coefficient R2) for the validation data set as a measure of the predictive ability of the model. The statistical analysis shows that the model was able to yield quite satisfactory estimates. Furthermore, the deviations in the prediction of both the bubble pressure and the CO2 mole fraction in the vapor phase are lower than those obtained by the PR and SRK EOS with Van der Waals type mixing rules and those obtained by the NRTL and UNIQUAC models. Therefore, the ANN model can be reliably used to estimate the equilibrium pressure and the vapor phase compositions of the CO2-ester binaries within the ranges of temperature considered in this work. This study also shows that ANN models can be developed for high-pressure phase equilibrium of a family of CO2 binaries, provided reliable experimental data are available, to be used in supercritical fluid processes. Hence, at least for a non-expert in selecting the appropriate EOS for the application in hand, alternatives to EOS and activity coefficient models are offered that can be used more reliably and less cumbersomely in process simulators and in processes involving real-time process control.

NOMENCLATURE

AAD      Average Absolute Deviation (-)
AARD     Average Absolute Relative Deviation (-)
ANN      Artificial Neural Networks (-)
b        bias (-)
BRBP     Bayesian Regularization Back Propagation (-)
EOS      Equation Of State (-)
f        activation function (-)
FFBP     Feed Forward Back Propagation (-)
HV       Huron-Vidal (-)
K        K-value (-)
MaxAAD   maximum of the Average Absolute Deviation (-)
MaxAARD  maximum of the Average Absolute Relative Deviation (-)
MR       Mixing Rules (-)
N        number of data points (-)
NN       Neural Networks (-)
P        equilibrium pressure (MPa)
Pc       critical pressure (MPa)
PPR      Panagiotopoulos-Reid (-)
PR       Peng-Robinson (-)
Psat     vapor pressure (MPa)
RMSE     Root Mean Square Error (-)
R2       correlation coefficient (-)
SRK      Soave-Redlich-Kwong (-)
T        equilibrium temperature (K)
Tc       critical temperature (K)
u        neural network input vector (-)
v        neural network output vector (-)
VDW      Van der Waals (-)
w        weights (-)
x        liquid phase mole fraction (-)
y        vapor phase mole fraction (-)
z        hidden layer output vector (-)

Greek Letters

µ        mean (-)
α        slope of the linear regression equation (-)
β        y intercept of the linear regression equation (-)
σ        standard deviation (-)
ω        acentric factor (-)

Subscripts

1        component 1
2        component 2
h        hidden layer
i        component i
ij       connection between the jth input neuron and the ith output neuron
o        output layer

Superscripts

cal      calculated
exp      experimental
h        hidden layer
I        input layer
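The statistical quantities listed above (AARD, RMSE, R2) and the validation agreement vector (slope α, y intercept β and R2 of the predicted-versus-experimental best-fit line) follow standard definitions. A minimal sketch with hypothetical data (the arrays below are illustrative, not the paper's VLE measurements):

```python
import numpy as np

def aard(cal, exp):
    # Average Absolute Relative Deviation: mean of |cal - exp| / exp
    return np.mean(np.abs((cal - exp) / exp))

def rmse(cal, exp):
    # Root Mean Square Error between calculated and experimental values
    return np.sqrt(np.mean((cal - exp) ** 2))

def r2(cal, exp):
    # squared correlation coefficient of calculated vs experimental values
    return np.corrcoef(cal, exp)[0, 1] ** 2

def agreement_vector(cal, exp):
    # slope (alpha) and y intercept (beta) of the best-fit line of the
    # predicted-vs-experimental plot, plus R2: the validation agreement vector
    alpha, beta = np.polyfit(exp, cal, 1)
    return alpha, beta, r2(cal, exp)

# Hypothetical experimental and calculated bubble pressures (MPa)
p_exp = np.array([2.0, 4.0, 6.0, 8.0])
p_cal = np.array([2.1, 3.9, 6.2, 7.9])
```

A perfect model would give an agreement vector of (1, 0, 1); the training strategy described in the conclusion selects the architecture whose validation agreement vector is closest to this ideal.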

(Received: May 4, 2007; Accepted: November 5, 2007)

  • Battiti, R., First- and second-order methods for learning: between steepest descent and Newton's method, Neural Comput., 4, 141 (1992).
  • Baughman, D.R. and Liu, Y.A., Neural Networks in Bio-Processing and Chemical Engineering, Academic Press, New York (1995).
  • Bilgin, M., Isobaric vapour-liquid equilibrium calculations of binary systems using neural network, J. Serb. Chem. Soc., 69, 669 (2004).
  • Boozarjomehry, R.B., Abdolahi, F. and Moosavian, M.A., Characterization of basic properties for pure substances and petroleum fractions by neural network, Fluid Phase Equilib., 231, 188 (2005).
  • Bourquin, J., Schmidli, H., van Hoogvest, P. and Leuenberger, H., Basic concept of artificial neural networks (ANN) modelling in the application of pharmaceutical development, Pharm. Dev. Technol., 2, 95 (1997).
  • Brunner, G., Gas Extraction, Steinkopff-Verlag, Darmstadt (1994).
  • Cheng, C.-H. and Chen, Y.-P., Vapor-liquid equilibrium of carbon dioxide with isopropyl acetate, diethyl carbonate, and ethyl butyrate at elevated pressures, Fluid Phase Equilib., 234, 77 (2005).
  • Chouai, A., Cabassud, M., Le Lann, M.V., Gourdon, C. and Casamatta, G., Use of neural networks for liquid-liquid extraction column modelling: an experimental study, Chem. Engng. Processing, 39, 171 (2000).
  • Chouai, A., Laugier, S. and Richon, D., Modeling of thermodynamic properties using neural networks. Application to refrigerants, Fluid Phase Equilib., 199, 53 (2002).
  • Christov, M. and Dohrn, R., High-pressure fluid phase equilibria. Experimental methods and systems investigated (1994-1999), Fluid Phase Equilib., 202, 153 (2002).
  • Curry, B. and Morgan, P.H., Model selection in neural networks: some difficulties, Eur. J. Operational Research, 170, 567 (2006).
  • Ferentinos, K.P., Biological engineering applications of feedforward neural networks designed and parameterized by genetic algorithms, Neural networks, 18, 934 (2005).
  • Fernandes, F.A.N. and Lona, L.M.F., Neural network applications in polymerization processes, Braz. J. Chem. Eng., 22, No. 03, 401 (2005).
  • Foresee, F.D. and Hagan, M.T., Gauss-Newton approximation to Bayesian regularization, in: Proceedings of the 1997 International Joint Conference on Neural Networks, 1930 (1997).
  • Fullana, M., Trabelsi, F., and Recasens, F., Use of neural net computing for statistical and kinetic modelling and simulation of supercritical fluid extractors, Chem. Engng. Sci., 54, 5845 (1999).
  • Ganguly, S., Prediction of VLE data using artificial radial basis function network, Comput. Chem. Engng., 27, 1445 (2003).
  • Guimaraes, P.R.B. and McGreavy, C., Flow of information through an artificial neural network, Comput. Chem. Engng., 19, (Suppl.) 741 (1995).
  • Hagan, M.T. and Menhaj, M.B., Training feedforward networks with Marquardt algorithm, IEEE Trans. Neural Net., 5, 989 (1994).
  • Hagan, M.T., Demuth, H.B. and Beale, M.H., Neural Network Design, PWS Publishing, Boston, MA (1996).
  • Havel, J., Lubal, P. and Farkova, M., Evaluation of chemical equilibria with the use of artificial neural networks, Polyhedron, 21, 1375 (2002).
  • Henrique, H.M., Lima, E.L. and Seborg, D.E., Model structure determination in neural network models, Chem. Engng. Sci. 55, 5457 (2000).
  • Hongwen, C., Baishan, F. and Zongding, H., Optimization of process parameters for key enzymes accumulation of 1,3-propanediol production from Klebsiella pneumoniae, Biochem. Engng. J., 25, 47 (2005).
  • Huron, M.J., Vidal, J., New mixing rules in simple equation of state for representing vapour-liquid equilibria of strongly non-ideal mixtures, Fluid Phase Equilib., 3, 255 (1979).
  • Hwu, W.-H., Cheng, J.-S., Cheng, K.-W. and Chen, Y.-P., Vapor-liquid equilibrium of carbon dioxide with ethyl caproate, ethyl caprylate and ethyl caprate at elevated pressures, J. Supercritical Fluids, 28, 1 (2004).
  • Izadifar, M. and Abdolahi, F., Comparison between neural network and mathematical modelling of supercritical CO2 extraction of black pepper essential oil, J. Supercritical Fluids, 38, 37 (2006).
  • Khayamian, T. and Esteki, M., Prediction of solubility for polycyclic aromatic hydrocarbons in supercritical carbon dioxide using wavelet neural networks in quantitative structure property relationship, J. Supercritical Fluids, 32, 73 (2004).
  • Laugier, S. and Richon, D., Use of artificial neural networks for calculating derived thermodynamic quantities from volumetric property data, Fluid Phase Equilib., 210, 247 (2003).
  • MacKay, D.J.C., Bayesian interpolation, Neural Comput., 4, 415 (1992).
  • MATLAB 6.1.0.450, Release 12.1, CD-ROM, MathWorks Inc., (2001).
  • McHugh, M.A. and Krukonis, V., Supercritical Fluid Extraction, Elsevier (1994).
  • Mohanty, S., Estimation of vapour liquid equilibria for the system, carbon dioxide-difluormethane using artificial neural networks, Int. J. Refrigeration, 29, 243 (2006).
  • Mohanty, S., Estimation of vapour liquid equilibria of binary systems, carbon dioxide-ethyl caproate, ethyl caprylate and ethyl caprate using artificial neural networks, Fluid Phase Equilib., 235, 92 (2005).
  • Moller, M.F., A scaled conjugate gradient algorithm for fast supervised learning, Neural Netw., 6, 525 (1993).
  • Patnaik, P.R., Applications of neural networks to recovery of biological products, Biotechnology Advances, 17, 477 (1999).
  • Petersen, R., Fredenslund, A. and Rasmussen, P., Artificial neural networks as a predictive tool for vapor-liquid equilibrium, Comput. Chem. Engng., 18 (Suppl.), S63-S67 (1994).
  • Piotrowski, K., Piotrowski, J. and Schlesinger, J., Modelling of complex liquid-vapour equilibria in the urea synthesis process with the use of artificial neural network, Chem. Engng. Processing, 42, 285 (2003).
  • Plumb, A.P., Rowe, R.C., York, P. and Doherty, C., The effect of experimental design on the modelling of a tablet coating formulations using artificial neural networks, Eur. J. Pharm. Sci., 16, 281 (2002).
  • Plumb, A.P., Rowe, R.C. York, P. and Brown, M., Optimisation of the predictive ability of artificial neural network (ANN) models: A comparison of three ANN programs and four classes of training algorithm, Eur. J. Pharm. Sci., 25, 395 (2005).
  • Powell, M.J.D., Restart procedures for the conjugate gradient method, Math. Prog., 12, 241 (1977).
  • Rumelhart, D.E., Williams, R.J., Hinton, G.E., Learning internal representations by error propagation. In: McLelland, D.J., Rumelhart, D.E. (Eds.), Parallel Distributed Processes. MIT Press, Cambridge, MA. (1986).
  • Satish, S. and Pydi Setty, Y., Modeling of a continuous fluidized bed dryer using artificial neural networks, Int. Commun. Heat and Mass Transfer, 32, 539 (2005).
  • Scalabrin, G., Piazza, L., Cristofoli, G., Application of neural networks to a predictive extended corresponding states model for pure halocarbons thermodynamics, Int. J. Thermophys., 23, 57 (2002).
  • Shahrokhi, M. and Baghmisheh, G.R., Modeling, simulation and control of a methanol synthesis fixed-bed reactor, Comput. Chem. Eng., 60, 4275 (2005).
  • Sharma, R., Singhal, D., Ghosh, R. and Dwivedi, A., Potential applications of artificial neural networks to thermodynamics: vapour-liquid equilibrium predictions, Comput. Chem. Engng. 23, 385 (1999).
  • Swingler, K., Applying Neural Networks: A Practical Guide, Academic Press, London, 1996.
  • Syu, M. J., and Tsao, G.T., Neural network modeling of batch cell growth pattern, Biotechnol. Bioeng. 42, 376 (1993).
  • Tabaraki, R., Khayamian, T. and Ensafi, A.A., Solubility prediction of 21 azo dyes in supercritical carbon dioxide using wavelet neural network, Dyes and Pigments, 73, 230 (2006).
  • Tabaraki, R., Khayamian, T. and Ensafi, A.A., Wavelet neural network modeling in QSPR for prediction of solubility of 25 anthraquinone dyes at different temperatures and pressures in supercritical carbon dioxide, J. Molec. Graph. and Model., 25, 46 (2005).
  • Urata, S., Takada, A., Murata, J., Hiaki, T. and Sekiya, A., Prediction of vapour-liquid equilibrium for binary systems containing HFEs by using artificial neural network, Fluid Phase Equilibria, 199, 63 (2002).
  • Publication Dates

    • Publication in this collection: 28 Apr 2008
    • Date of issue: Mar 2008

    Brazilian Society of Chemical Engineering, Rua Líbero Badaró, 152, 11th floor, 01008-903 São Paulo, SP, Brazil. Tel.: +55 11 3107-8747, Fax: +55 11 3104-4649.
    E-mail: rgiudici@usp.br