
Prediction of hydrate formation temperature by both statistical models and artificial neural network approaches

Gholamreza Zahedi *, Zohre Karami, Hamed Yaghoobi

Simulation and Artificial Intelligence Research Center, Department of Chemical Engineering, Razi University, Kermanshah, Iran

* Corresponding author. Fax: +98 831 4274542. E-mail address: [email protected] (G. Zahedi).

Energy Conversion and Management 50 (2009) 2052–2059. doi:10.1016/j.enconman.2009.04.005

Article info

Article history: Received 26 June 2008; received in revised form 18 March 2009; accepted 6 April 2009; available online 6 May 2009.

Keywords:

Hydrate formation temperature

Natural gas hydrate

Artificial neural network

Abstract

In this study, various estimation methods for hydrate formation temperature (HFT) have been reviewed and two procedures are presented. In the first method, two general correlations are proposed for HFT; one of the correlations has 11 parameters and the second has 18. In order to obtain the constants in the proposed equations, 203 experimental data points have been collected from the literature. The Engineering Equation Solver (EES) and Statistical Package for the Social Sciences (SPSS) software packages have been employed for statistical analysis of the data. The accuracy of the obtained correlations is demonstrated by comparison with experimental data and with some recent, commonly used correlations. In the second method, HFT is estimated by an artificial neural network (ANN) approach. In this case, various architectures were checked using 70% of the experimental data for training the ANN. Among the various architectures, a multilayer perceptron (MLP) network with the trainlm training algorithm was found to be the best. Comparing the obtained ANN model results with the 30% of unseen data confirms the excellent estimation performance of the ANN. It was found that the ANN is more accurate than traditional methods and even than our two proposed correlations for HFT estimation.

© 2009 Published by Elsevier Ltd.

1. Introduction

Natural gas hydrates are a curious kind of chemical compound called a "clathrate". Clathrates consist of two dissimilar molecules mechanically intermingled but not truly chemically bonded; instead, one molecule forms a framework that traps the other molecules. Natural gas hydrates can be considered modified ice structures enclosing methane and other hydrocarbons, but they can melt at temperatures well above the melting point of normal ice. This behavior has two important practical implications. First, it is a big problem for gas companies, which have to dehydrate natural gas thoroughly to prevent methane hydrates from forming in high-pressure gas lines. Second, methane hydrates are stable on the sea floor at depths below a few hundred meters and are solid within sea floor sediments. Masses of methane hydrate have been photographed on the sea floor; chunks occasionally break loose and float to the surface, where they are unstable and decompose [1,2]. The hydrate should nucleate and grow from dissolved methane in fluids that migrate toward the sea floor from below. The concentration of gas required to form the hydrate in the sea floor is significantly lower than the concentration needed to form gas bubbles; in fact, the gas concentration required to form the hydrate drops sharply as the temperature decreases toward the sea floor [3]. Natural gas hydrates are also considered a new method of storage and transmission of natural gas [4,5].

There are three types of methane hydrate structure. They all include pentagonal dodecahedra of water molecules enclosing methane. This geometry arises from the happy accident that the bond angle in water is fairly close to the 108° angle of a pentagon. Generally, the dodecahedra are slightly distorted so that three dodecahedra can share an edge. This requires a dihedral (interface) angle of 120°, whereas the dihedral angle of a true dodecahedron is 116.5°. Inside the dodecahedra there are other cages of water molecules with different shapes. In practice, not all cages are occupied by hydrocarbons, but occupancy rates are over 90% [1]. Hydrates were discovered in 1810 by Sir Humphry Davy, yet only in the last half century has their occurrence been of interest to the natural gas industry. In 1934, Hammerschmidt [6] determined that hydrates were the cause of plugged natural gas pipelines, thereby leading to the regulation of gas water content and to the development of improved methods of preventing hydrate plugs, including the injection of methanol and other inhibitors into the gas stream. Recent processing practice, which emphasizes extreme conditions of temperature and pressure, has caused renewed interest in determining hydrate formation conditions. The gas gravity method is a very simple way of predicting gas hydrate conditions [7]. To avoid tedious calculations based on GPSA's hydrate formation curve, a regression analysis was used to fit the curve so that hydrate formation could be predicted for gases of known specific gravity.


The available and currently used correlations for the specific gravity method of calculating hydrate formation conditions are those of Sloan [1], Berge [8], Motiee [9], and Hammerschmidt [6].

Vu et al. [10] applied an electrolyte equation of state, based on Péneloux's non-electrolyte PR-EOS, to the prediction of the temperature associated with the formation of gas hydrate from water–methanol–salt solutions. Original assumptions were developed to allow the calculation of the model ionic parameters from experimental solvation diameters. Their model, when applied to systems with CH4 and CO2 hydrates, deviates by less than 1 K over a wide range of conditions.

Sun and Chen [11] proposed a thermodynamic model for the hydrate formation conditions of sour gases. The model treats the dissolution of gas in water and assumes equilibrium for the hydrolytic reaction.

The study of Taylor et al. [12] describes laboratory experiments and computational modeling addressing several key areas in hydrate research. The laboratory results are used in the computational models, and the results from the computational modeling in turn help direct future laboratory research. The laboratory work is carried out in numerous high-pressure hydrate cells, for example measuring the thermal conductivity of synthetic and natural hydrates at temperature and pressure, while the computational studies investigate the kinetics of hydrate formation and dissociation, model methane hydrate reservoirs, and perform molecular dynamics simulations of hydrate formation, dissociation and thermal properties as well as Monte Carlo simulations of hydrate formation and dissociation.

Ahmadi et al. [13] described one-dimensional and axisymmetric models for natural gas production from the dissociation of methane hydrate in a confined reservoir by a depressurizing well. They accounted for the heat sink from hydrate dissociation and solved the convection–conduction heat transfer in the gas and hydrate zones. Using a finite-difference scheme, they evaluated the gas flow and hydrate dissociation process inside the reservoir. For different well pressures and reservoir temperatures and pressures, they simulated the pressure and temperature conditions in the reservoir. It was shown that the gas production rate was a sensitive function of well pressure. In addition, both heat conduction and convection in the hydrate zone were important. The simulation results were compared with the linearization approach and discussed.

Since 1945, the gas gravity method given by Katz has been an indispensable and simple tool for predicting the gas hydrate stability zone. Despite the development of more sophisticated predictive tools, such as the vapor–solid equilibrium ratio (Ki value) method or the solid solution theory of Van der Waals and Platteeuw (1959), the gas gravity method has kept its popularity among engineers in the petroleum industry. The main advantage of this technique is the availability of the input data (it only requires the specific gravity of the mixture, i.e., the molecular mass of the gas mixture divided by that of air) and the simplicity of the calculation, which can be performed using charts or hand-held calculators [14].
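As a small aside on the only input the gas gravity method needs, a minimal sketch of that specific gravity calculation is given below; the component list and composition are made-up examples, not data from this study.

```python
# Minimal sketch: specific gravity of a gas mixture, i.e. the molecular mass of
# the mixture divided by that of air. The component list and composition below
# are made-up examples, not data from this study.

MW = {"CH4": 16.043, "C2H6": 30.070, "C3H8": 44.097, "N2": 28.014}  # g/mol
AIR_MW = 28.97  # g/mol

def specific_gravity(mole_fractions):
    """Return gamma_g = M_mixture / M_air for a dict of mole fractions."""
    m_mix = sum(y * MW[comp] for comp, y in mole_fractions.items())
    return m_mix / AIR_MW

if __name__ == "__main__":
    composition = {"CH4": 0.90, "C2H6": 0.06, "C3H8": 0.02, "N2": 0.02}  # sums to 1
    print(f"gamma_g = {specific_gravity(composition):.3f}")
```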

Considering all the reviewed articles, a simple, easy-to-use and simultaneously accurate model for estimating HFT is needed. In this paper, four correlations based on statistical analysis are first presented for HFT. Next, the best network is identified among various ANN methods and architectures. Based on our literature survey this approach is new, and it fulfils the aims of simplicity and accuracy. Finally, the estimation capabilities of the statistical correlations, the ANN and the commonly used correlations are compared.

2. Common correlations to estimate HFT 

In this study, different widely used relations for estimating HFT were investigated. These relations are given below.

2.1. Berge correlation [2,8]

This correlation is valid for 0.555 ≤ γ_g < 0.58:

T = -96.03 + 25.37\ln P - 0.64(\ln P)^2 + \frac{\gamma_g - 0.555}{0.025}\left[\frac{80.61P + 1.16\times 10^4}{P + 596.16} - \left(-96.03 + 25.37\ln P - 0.64(\ln P)^2\right)\right] \quad (1)

and for 0.58 ≤ γ_g < 1.0, Eq. (2) provides the estimate:

T = \frac{80.61P - 2.1\times 10^4 - \dfrac{1.22\times 10^3}{\gamma_g - 0.535} - \left(1.23\times 10^4 + \dfrac{1.71\times 10^3}{\gamma_g - 0.509}\right)}{P - \left(260.42 - \dfrac{15.18}{\gamma_g - 0.535}\right)} \quad (2)

2.2. Motiee correlation [2,9,15]

The correlation describes the logarithm of pressure as a function of temperature and gas specific gravity:

\log P = a_1 + a_2 T + a_3 T^2 + a_4\gamma_g + a_5\gamma_g^2 + a_6 T\gamma_g \quad (3)

Correlation (4) provides a relation to estimate HFT as a function of pressure and gas gravity:

T = b_1 + b_2\log P + b_3(\log P)^2 + b_4\gamma_g + b_5\gamma_g^2 + b_6\gamma_g\log P \quad (4)

2.3. Hammerschmidt correlation [2,3]

T = 8.9\,P^{0.285} \quad (5)

This correlation describes HFT as a function of only pressure.
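Because Eq. (5) depends on pressure only, it is straightforward to evaluate; the sketch below does so as a convenience illustration (not code from the paper), assuming pressure in psia and temperature in °F, consistent with the variable ranges listed later in Table 5.

```python
# Convenience illustration (not code from the paper): Hammerschmidt correlation,
# Eq. (5). Pressure is assumed in psia and the result in deg F, consistent with
# the variable ranges listed in Table 5.

def hammerschmidt_hft(pressure_psia: float) -> float:
    """Hydrate formation temperature from Eq. (5): T = 8.9 * P**0.285."""
    return 8.9 * pressure_psia ** 0.285

if __name__ == "__main__":
    for p in (200.0, 500.0, 1000.0, 2000.0):
        print(f"P = {p:7.1f} psia  ->  T = {hammerschmidt_hft(p):5.1f} F")
```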

2.4. Kobayashi and Sloan correlation [2,15,16]

Kobayashi and Sloan presented a correlation in 1978 to predict HFT based on the gas gravity curves [5,7]:

T = 1/\big[A_1 + A_2\ln P + A_3\ln\gamma_g + A_4(\ln P)^2 + A_5(\ln P)(\ln\gamma_g) + A_6(\ln\gamma_g)^2 + A_7(\ln P)^3 + A_8(\ln P)^2(\ln\gamma_g) + A_9(\ln P)(\ln\gamma_g)^2 + A_{10}(\ln\gamma_g)^3 + A_{11}(\ln P)^4 + A_{12}(\ln P)^3(\ln\gamma_g) + A_{13}(\ln P)^2(\ln\gamma_g)^2 + A_{14}(\ln P)(\ln\gamma_g)^3 + A_{15}(\ln\gamma_g)^4\big] \quad (6)

This correlation is one of the most accurate and reliable equations in the gas industry and is widely used to estimate HFT.
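For readers implementing Eq. (6), the sketch below shows one way to evaluate the reciprocal fourth-order polynomial once the fifteen coefficients A1–A15 are available; they are not reproduced in this paper, so the coefficient values must be supplied from the cited sources [15,16].

```python
import math

# Sketch only: evaluating the Kobayashi-Sloan form of Eq. (6).
# The fifteen coefficients A1..A15 are not reproduced in this paper; they must
# be supplied from the cited sources [15,16] before this function is used.

def kobayashi_sloan_hft(pressure, gamma_g, A):
    """T = 1 / (A1 + A2*lnP + A3*ln(g) + ... + A15*(ln g)**4), Eq. (6)."""
    lp, lg = math.log(pressure), math.log(gamma_g)
    terms = [1.0, lp, lg,                       # A1, A2, A3
             lp**2, lp*lg, lg**2,               # A4, A5, A6
             lp**3, lp**2*lg, lp*lg**2, lg**3,  # A7..A10
             lp**4, lp**3*lg, lp**2*lg**2, lp*lg**3, lg**4]  # A11..A15
    return 1.0 / sum(a * t for a, t in zip(A, terms))

# Example call (hypothetical, for illustration only):
# T = kobayashi_sloan_hft(1000.0, 0.7, A=[...15 published values...])
```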

3. Methods

 3.1. Presented models

According to the Kobayashi and Sloan correlation, pressure and specific gravity are the independent variables and temperature is the dependent variable. Correlations (7)–(10) are polynomial forms of the dependent and independent variables which include cross terms. Pressure and specific gravity appear in logarithmic form in correlations (7) and (8). 203 data points have been collected from the gas-gravity curves [15]. In the first step, 136 data points were employed. One of the correlations (Eq. (7)) has 11 unknowns and the other correlation (Eq. (8)) has 18 parameters; Eq. (9) has 11 parameters and Eq. (10) again contains 18 unknowns. In order to find the parameters of correlations (7)–(10), the EES and SPSS software packages have been used [17,18]. EES allows the user to enter any equation of the form Y = F(X, Z) with up to seven unknown coefficients represented as a0, a1, a2, ..., a6.


EES employs a nonlinear least-squares curve-fitting method to determine the unknown coefficients. EES calculates the standard error of each fitted coefficient as well as other information such as the root mean square error (RMSE) and the bias error. In the second step, 68 unseen data points were employed to validate the equations. The R² value for each class of equations and the parameter values of correlations (7)–(10) are given in Tables 1–4.

T = 1/\big[A_0 + A_1\ln P + A_2(\ln P)^2 + A_3(\ln P)^3 + A_4\ln\gamma_g + A_5(\ln\gamma_g)^2 + A_6(\ln\gamma_g)^3 + A_7(\ln P)(\ln\gamma_g) + A_8(\ln P)(\ln\gamma_g)^2 + A_9(\ln P)^2(\ln\gamma_g) + A_{10}(\ln P)^2(\ln\gamma_g)^2\big] \quad (7)

T = 1/\big[A_0 + A_1\ln P + A_2(\ln P)^2 + A_3(\ln P)^3 + A_4(\ln P)^4 + A_5\ln\gamma_g + A_6(\ln\gamma_g)^2 + A_7(\ln\gamma_g)^3 + A_8(\ln\gamma_g)^4 + A_9(\ln P)(\ln\gamma_g) + A_{10}(\ln P)(\ln\gamma_g)^2 + A_{11}(\ln P)(\ln\gamma_g)^3 + A_{12}(\ln P)^2(\ln\gamma_g) + A_{13}(\ln P)^2(\ln\gamma_g)^2 + A_{14}(\ln P)^2(\ln\gamma_g)^3 + A_{15}(\ln P)^3(\ln\gamma_g) + A_{16}(\ln P)^3(\ln\gamma_g)^2 + A_{17}(\ln P)^3(\ln\gamma_g)^3\big] \quad (8)

T = A_0 + A_1 P + A_2 P^2 + A_3 P^3 + A_4\gamma_g + A_5\gamma_g^2 + A_6\gamma_g^3 + A_7 P\gamma_g + A_8 P\gamma_g^2 + A_9 P^2\gamma_g + A_{10} P^2\gamma_g^2 \quad (9)

T = A_0 + A_1 P + A_2 P^2 + A_3 P^3 + A_4 P^4 + A_5\gamma_g + A_6\gamma_g^2 + A_7\gamma_g^3 + A_8\gamma_g^4 + A_9 P\gamma_g + A_{10} P\gamma_g^2 + A_{11} P\gamma_g^3 + A_{12} P^2\gamma_g + A_{13} P^2\gamma_g^2 + A_{14} P^2\gamma_g^3 + A_{15} P^3\gamma_g + A_{16} P^3\gamma_g^2 + A_{17} P^3\gamma_g^3 \quad (10)
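The paper obtains these parameters with EES/SPSS nonlinear least squares. As a rough stand-in for readers without those tools, the sketch below fits the Eq. (7) form; it exploits the fact that 1/T in Eq. (7) is linear in A0–A10, so ordinary linear least squares suffices. The data points are synthetic placeholders, not the 203 points used for Tables 1–4.

```python
import numpy as np

# Sketch only: the paper fits Eq. (7) with EES/SPSS nonlinear least squares.
# Because 1/T in Eq. (7) is linear in A0..A10, ordinary linear least squares
# is used here instead. The data points below are made-up placeholders.

def eq7_design_matrix(P, gamma_g):
    lp, lg = np.log(P), np.log(gamma_g)
    return np.column_stack([np.ones_like(lp), lp, lp**2, lp**3,
                            lg, lg**2, lg**3,
                            lp*lg, lp*lg**2, lp**2*lg, lp**2*lg**2])

# Placeholder data: pressure (psia), specific gravity, HFT (deg F).
P = np.array([300, 500, 800, 1000, 1500, 2000, 300, 500, 800, 1000, 1500, 2000], float)
g = np.array([0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8])
T = np.array([45, 52, 58, 61, 66, 69, 54, 60, 65, 68, 72, 75], float)  # made-up values

X = eq7_design_matrix(P, g)
A, *_ = np.linalg.lstsq(X, 1.0 / T, rcond=None)   # fitted A0..A10
T_fit = 1.0 / (X @ A)
print("max abs deviation (deg F):", np.max(np.abs(T_fit - T)))
```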

3.1.2. Comparison between presented correlations and empirical correlations

Correlations (7) and (8) are illustrated in Figs. 1–6 and compared with other methods, namely the Hammerschmidt, Katz and Berge methods.

At low specific gravity, Katz's diagram, plotted in terms of pressure and temperature, has a linear form. All the correlations and figures presented in terms of the logarithm of pressure and specific gravity have high accuracy for predicting HFT at low specific gravity. The corresponding R² values confirm the high accuracy of correlations (7)–(10). The Berge and Hammerschmidt correlations have low accuracy for predicting HFT at higher pressures and specific gravities (Figs. 4–6).

Finally, correlation (7) has 11 parameters (4 fewer than the Sloan equation) and is recommended for practical use, especially at high specific gravity. The proposed correlations overlap well with the experimental data at higher specific gravities, where the currently used correlations do not have such estimation capability.

 3.2. Artificial neural network

A neural network is a model inspired by the structure and function of neurons. It consists of simple interconnected units which represent the neurons of the body, connected in a structure that functions like axons and dendrites.

One of the well-known types of neural network is the multilayer perceptron (MLP), which is used for classification and estimation problems. An example of a layered network is provided in Fig. 7, which shows the ANN input, hidden and output layers; each pair of connected nodes is linked via a weight. The two important capabilities of a neural network are its swift response and its ability to generalize these responses to unobserved samples. The network must therefore first learn the problem, a process called training.

In this figure there are L input neurons, M hidden neurons and N output neurons. The input to the jth hidden neuron is the following linear combination of the L inputs:

\sum_{i=0}^{L} v_{ij} a_i \quad (11)

In this formula the v_{ij} are weights, i is an index over the input layer and j is an index over the hidden layer. The output of hidden neuron j is then obtained by applying a transfer function f (Eq. (12) below).

Table 1
Statistical results and parameter values for Eq. (7).

Coefficient   Value         Std. error
A0            2.998674E3    1.491795E3
A1            1.615272E3    3.034751E4
A2            1.241929E4    2.061456E5
A3            3.012534E6    4.677437E7
A4            5.788643E3    2.394345E3
A5            1.636478E2    4.324969E3
A6            2.169920E3    1.608001E4
A7            7.114145E4    3.233519E4
A8            2.062154E3    5.745757E4
A9            2.288624E5    1.087459E5
A10           7.044740E5    1.892436E5

No. of points = 136; RMS = 8.9978E6; bias = 1.7566E21; R² = 98.95%.

Table 2
Statistical results and parameter values for Eq. (8).

Coefficient   Value         Std. error
A0            2.396490E2    1.397764E2
A1            5.482741E3    3.829677E3
A2            5.745664E4    3.934606E4
A3            2.747777E5    1.796694E5
A4            4.976688E7    3.076875E7
A5            9.398840E2    4.547328E2
A6            4.276777E1    1.816661E1
A7            4.502661E1    2.047528E1
A8            9.282732E3    6.397091E4
A9            1.956523E2    9.282471E3
A10           9.083137E2    3.674821E2
A11           9.845863E2    4.101932E2
A12           1.352731E3    6.299044E4
A13           6.356978E3    2.471904E3
A14           6.993158E3    2.733194E3
A15           3.110160E5    1.421058E5
A16           1.475054E4    5.529886E5
A17           1.644929E4    6.060549E5

No. of points = 136; RMS = 4.8338E6; bias = 1.6476E20; R² = 99.70%.


b_j = f\left(\sum_{i=0}^{L} v_{ij} a_i\right) \quad (12)

Training of the ANN is an optimization process in which the error function is minimized with respect to the network weights. When a pattern of training input data is introduced to the network, the neural network computes the output and compares it with the real output. The differences are used by the optimization technique to train the neural network. The error function studied here is the mean square error (MSE), denoted E in the following formula:

E = \frac{1}{n}\sum_{i=1}^{n} (C_i - C_{ir})^2 \quad (13)

where C_{ir} is the real output and C_i is the corresponding network output. The training process consists of a forward pass from the input layer to the output layer to compute the outputs, and a backward pass to correct the weights. This process continues until E is minimized; as soon as the error on the tested data is minimized, training terminates.

Besides the MLP, another group of ANNs, called radial basis function (RBF) networks, has recently been recognized. An RBF network also has three layers: an input layer, a hidden layer (with Gaussian functions) and an output layer. The weights of the connections between the input and hidden layers are unity and remain constant. During training, the hidden layer performs a non-linear transformation which, like the MLP, maps the input space into a new space. ANNs have been used in recent years to avoid the problems associated with deterministic approaches and have been shown to approximate non-linear functions to any desired level of accuracy [19–25].
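To make Eqs. (11)–(13) concrete, the following sketch implements the forward pass of a one-hidden-layer network and the MSE of Eq. (13) in NumPy. The weights, inputs and targets are random placeholders and the tanh transfer function is an assumption; the paper's networks were built with the MATLAB neural network toolbox [21].

```python
import numpy as np

# Sketch of Eqs. (11)-(13): forward pass of a one-hidden-layer network and the
# MSE error function. Weights, inputs and targets are random placeholders, and
# tanh is an assumed transfer function f; the paper's networks were built with
# the MATLAB neural network toolbox [21].

rng = np.random.default_rng(0)
L, M, N = 2, 7, 1                 # inputs (P, gamma_g), 7 hidden neurons, 1 output (T)
V = rng.normal(size=(L, M))       # input-to-hidden weights v_ij
W = rng.normal(size=(M, N))       # hidden-to-output weights w_jk

def forward(a):
    """a: (n_samples, L) inputs -> (n_samples, N) network outputs."""
    b = np.tanh(a @ V)            # Eq. (12): b_j = f(sum_i v_ij * a_i)
    return b @ W                  # linear output layer

def mse(C, C_real):
    """Eq. (13): mean square error between network outputs and real outputs."""
    return np.mean((C - C_real) ** 2)

a = rng.normal(size=(5, L))       # five placeholder input samples
C_real = rng.normal(size=(5, N))  # placeholder target outputs
print("MSE =", mse(forward(a), C_real))
```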

Table 3
Statistical results and parameter values for Eq. (9).

Coefficient   Value         Std. error
A0            4.300068E2    5.878018E1
A1            5.776973E2    3.044887E2
A2            2.705239E5    9.703460E6
A3            2.909084E9    3.236679E10
A4            1.677816E3    2.235983E2
A5            2.009781E3    2.816434E2
A6            7.987736E2    1.173947E2
A7            2.229806E2    7.813343E2
A8            2.208720E2    5.013645E2
A9            2.067366E5    2.520617E5
A10           1.939392E5    1.727848E5

No. of points = 136; RMS = 2.9803E0; bias = 1.591E15; R² = 94.58%.

Table 4
Statistical results and parameter values for Eq. (10).

Coefficient   Value          Std. error
A0            2.116379E3     2.894034E2
A1            6.674690E2     2.600775E1
A2            2.004185E5     1.791694E4
A3            3.394457E9     3.635251E8
A4            1.500678E12    2.289350E13
A5            1.077808E4     1.452659E3
A6            2.018584E4     2.761310E3
A7            1.669117E4     2.318908E3
A8            5.135225E3     7.258498E2
A9            5.229394E1     1.028631E0
A10           6.776850E1     1.339959E0
A11           3.167324E1     5.735178E1
A12           2.696084E4     7.311481E4
A13           4.042360E4     9.827757E4
A14           2.350797E4     4.333993E4
A15           5.966861E8     1.549012E7
A16           1.169590E7     2.167171E7
A17           7.854232E8     9.930572E8

No. of points = 136; RMS = 1.8117E00; bias = 6.8040E14; R² = 98.00%.

Fig. 1. Comparison of Eqs. (7) and (8) with experimental data and the Hammerschmidt and Berge methods. (Specific gravity is equal to 0.6.)

Fig. 2. Comparison of Eqs. (7) and (8) with experimental data and the Hammerschmidt and Berge methods. (Specific gravity is equal to 0.65.)


3.3. Applying ANN for HFT estimation

Of the 203 data points, 136 are used to train the network and the remaining 67 are used to test its generalization capability [15]. The neural network variables and their domains are listed in Table 5.

To estimate the hydrate data, two networks, namely MLP and RBF, are used. For the first method, the MLP network, Fig. 8 plots the MSE against the number of hidden-layer neurons; the optimal number of hidden neurons is seven, which gives the least error. Fig. 9 depicts the error percentage of the tested data for the network with seven hidden neurons. The second method uses the RBF neural network. In Fig. 10, the MSE is plotted as a function of the number of hidden-layer neurons; according to this figure, the MSE reaches its minimum with three hidden neurons. The error percentage of the best RBF network is depicted in Fig. 11.
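The split and network size described above can be mirrored loosely outside MATLAB; the sketch below uses scikit-learn with a 136/67 split and a seven-neuron hidden layer. This is an illustration only: the paper's networks were trained with MATLAB's trainlm (Levenberg-Marquardt), which scikit-learn does not provide, and the data here are random placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Sketch only: a 136/67 split and a seven-neuron hidden layer, loosely mirroring
# Section 3.3. The paper used MATLAB's trainlm (Levenberg-Marquardt), which
# scikit-learn does not offer, so 'lbfgs' is used instead; the data below are
# random placeholders standing in for the 203 (P, gamma_g, T) points.

rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(200.0, 2680.44, 203),   # pressure, psi (Table 5 range)
                     rng.uniform(0.555, 1.0, 203)])      # specific gravity
y = rng.uniform(33.7, 75.7, 203)                         # HFT, deg F (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=136, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(7,), solver="lbfgs", max_iter=5000)
model.fit(X_train, y_train)
print("test MSE:", np.mean((model.predict(X_test) - y_test) ** 2))
```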

Fig. 3. Comparison of Eqs. (7) and (8) with experimental data and the Hammerschmidt and Berge methods. (Specific gravity is equal to 0.7.)

Fig. 4. Comparison of Eqs. (7) and (8) with experimental data and the Hammerschmidt and Berge methods. (Specific gravity is equal to 0.8.)

Fig. 5. Comparison of Eqs. (7) and (8) with experimental data and the Hammerschmidt and Berge methods. (Specific gravity is equal to 0.9.)

Fig. 6. Comparison of Eqs. (7) and (8) with experimental data and the Hammerschmidt and Berge methods. (Specific gravity is equal to 1.0.)

Fig. 7. Structure of a neural network: an input layer (I_1, ..., I_L), a hidden layer (b_1, ..., b_m) and an output layer (C_1, ..., C_n), connected by the weights V_ij and W_jk.

Table 5
Neural network variables and domains.

Variable                   Domain
Specific gravity of gas    0.555–1
Pressure (psi)             200–2680.44
Temperature (F)            33.7–75.7
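A common preprocessing step (assumed here, not stated in the paper) is to scale each variable to a comparable range before training; the sketch below applies min-max scaling using the Table 5 domains.

```python
# Minimal sketch (an assumption, not stated in the paper): min-max scaling of
# the network variables to [0, 1] using the domains listed in Table 5.

DOMAINS = {
    "specific_gravity": (0.555, 1.0),
    "pressure_psi": (200.0, 2680.44),
    "temperature_F": (33.7, 75.7),
}

def scale(name, value):
    lo, hi = DOMAINS[name]
    return (value - lo) / (hi - lo)

def unscale(name, value01):
    lo, hi = DOMAINS[name]
    return lo + value01 * (hi - lo)

print(scale("pressure_psi", 1000.0))    # -> ~0.3225
print(unscale("temperature_F", 0.5))    # -> 54.7
```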


3.4. Comparison between MLP and RBF

MLP and RBF are two advanced models for estimating the HFT. As shown in Figs. 8 and 10, the MLP networks have a lower MSE than the RBF networks: the MSE is 0.2248 for the best MLP network and 0.7053 for the best RBF network. The RBF network shows larger errors for some of the data, whereas the MLP network gives an acceptable error percentage for most of the data.

Fig. 8. MSE versus number of hidden layer neurons for MLP.

Fig. 9. Percentage generalization error for the best obtained MLP network.

Fig. 10. MSE versus number of hidden layer neurons for RBF.

Fig. 11. Percentage generalization error for the best obtained RBF network.

Fig. 12. Comparison of experimental data with ANN and present models. (Specific gravity is equal to 0.6.)

Fig. 13. Comparison of experimental data with ANN and present models. (Specific gravity is equal to 0.65.)


The MLP network therefore predicts the HFT better than the RBF network. This model can be generalized to approximately all of the data, because the differences between the predicted and real values are small, which demonstrates the capability of the artificial neural network to predict unobserved data correctly.
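The comparison above rests on two metrics, the MSE and a per-point error percentage. The small helper below (an illustration, not the authors' code; the error-percentage definition is assumed here to be the absolute relative deviation) computes both from predicted and experimental temperatures.

```python
import numpy as np

# Illustration only (not the authors' code): the two metrics behind Figs. 8-11
# and Section 3.4 - mean square error and a per-point error percentage, the
# latter assumed here to be the absolute relative deviation in percent.

def mse(predicted, experimental):
    predicted, experimental = np.asarray(predicted, float), np.asarray(experimental, float)
    return np.mean((predicted - experimental) ** 2)

def error_percentage(predicted, experimental):
    predicted, experimental = np.asarray(predicted, float), np.asarray(experimental, float)
    return 100.0 * np.abs(predicted - experimental) / np.abs(experimental)

T_exp = [45.0, 55.0, 65.0, 70.0]   # placeholder experimental HFT (deg F)
T_ann = [45.3, 54.6, 65.4, 69.8]   # placeholder ANN predictions
print("MSE:", mse(T_ann, T_exp))
print("error %:", error_percentage(T_ann, T_exp))
```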

4. Comparison between ANN and present models in HFT estimation

This research has targeted the prediction of HFT using the four presented models and the ANN. This section deals with the HFT at different pressures and specific gravities as predicted by the MLP neural network and by the presented models. The results of the comparison are illustrated in Figs. 12–17.

Fig. 18 compares the generalization error of the ANN and of Eqs. (7) and (8) against the experimental data. According to this figure, comparing the temperatures predicted by the ANN with those of the best presented models (Eqs. (7) and (8)) shows that the ANN simulation gives better results, with higher accuracy for HFT estimation.

Fig. 14. Comparison of experimental data with ANN and present models. (Specific gravity is equal to 0.7.)

Fig. 15. Comparison of experimental data with ANN and present models. (Specific gravity is equal to 0.8.)

Fig. 16. Comparison of experimental data with ANN and present models. (Specific gravity is equal to 0.9.)

Fig. 17. Comparison of experimental data with ANN and present models. (Specific gravity is equal to 1.0.)

Fig. 18. Comparison of the percentage generalization error of the ANN and of Eqs. (7) and (8) against experimental data.

5. Conclusion

In this work, two correlations based on the Kobayashi and Sloan model and two correlations based on the Berge model have been developed.


The models which estimate HFT as a function of the logarithm of pressure and specific gravity have good accuracy compared to the equations commonly used in industry. The obtained correlations are more accurate than the Kobayashi and Berge correlations, are in good agreement with experimental data, and can be used to predict HFT and the amount of inhibitor that should be injected into the gas stream line. In the next step of the work, an ANN model was built for HFT estimation. Among the various MLP and RBF structures, an MLP with seven hidden neurons was found to be the best predictor of the HFT data. The estimation capability of the obtained ANN was compared with our correlations; the results show that the ANN is more accurate than our correlations. Hence the ANN is recommended for HFT estimation rather than the common correlations and also our correlations.

 Acknowledgement

This work was supported by Razi University research council.

References

[1] Sloan ED. Clathrate hydrates of natural gas. 2nd ed. New York: Marcel Dekker Inc.; 1998. p. 757.
[2] Khaled Ahmed Abdel Fattah. Evaluation of empirical correlations for natural gas hydrate predictions. Oil Gas Bus; 2004. <http://www.ogbus.ru/eng/>.
[3] Buffett BA, Zatsepina OY. Formation of gas hydrate from dissolved gas in natural porous media. Mar Geol 2000;164:69–77.
[4] Hao Wenfeng, Wang Jinqu, Fan Shuanshi, Hao Wenbin. Evaluation and analysis method for natural gas hydrate storage and transportation processes. Energy Convers Manage 2008;49:2546–53.
[5] Sun Z, Wang R, Ma R, Guo K, Fan S. Natural gas storage in hydrates with the presence of promoters. Energy Convers Manage 2003;44:2733–42.
[6] Hammerschmidt EG. Formation of gas hydrates in natural gas transmission lines. Ind Eng Chem Res 1934;26:851.
[7] Katz DL, Lee RL. Natural gas engineering: production and storage. New York: McGraw Hill; 1990.
[8] Berge BK. Hydrate prediction on a microcomputer. Paper SPE 15306, presented at the 1986 symposium on petroleum industry applications of microcomputers.
[9] Motiee M. Estimate possibility of hydrates. Hydrocarbon Processing; July 1991. p. 98.
[10] Vu Vinh Quang, Duchet-Suchaux Pierre, Fürst Walter. Use of a predictive electrolyte equation of state for the calculation of the gas hydrate formation temperature in the case of systems with methanol and salts. Fluid Phase Equilibria 2002;194–197:361–70.
[11] Sun Chang-Yu, Chen Guang-Jin. Modelling the hydrate formation condition for sour gas and mixtures. Chem Eng Sci 2005;60:4879–85.
[12] Taylor CE, Link DD, English N. Methane hydrate research at NETL: research to make methane production from hydrates a reality. J Petrol Sci Eng 2007;56:186–91.
[13] Ahmadi G, Ji C, Smith DH. Natural gas production from hydrate dissociation: an axisymmetric model. J Petrol Sci Eng 2007;58:245–58.
[14] Østergaard Kasper K, Masoudi Rahim, Tohidi Bahman, Danesh Ali, Todd Adrian C. A general correlation for predicting the suppression of hydrate dissociation temperature in the presence of thermodynamic inhibitors. J Petrol Sci Eng 2005;48:70–80.
[15] Kobayashi R, Song KY, Sloan ED. Phase behavior of water/hydrocarbon systems. In: Bradley HB, editor. Petroleum engineers handbook. Richardson: Society of Petroleum Engineers; 1987.
[16] Elgibaly Ahmed A, Elkamel Ali M. A new correlation for predicting hydrate formation conditions for various gas mixtures and inhibitors. Fluid Phase Equilibria 1998;152.
[17] http://www.mhhe.com/engcs/mech/ees/; 2003.
[18] www.spss.com.
[19] Bulsari AB. Neural networks for chemical engineers. Amsterdam: Elsevier Science Press; 1995.
[20] Joseph B, Wang FH, Shieh PS. Exploratory data analysis: a comparison of statistical methods with artificial neural networks. Comput Chem Eng 1992;16:413.
[21] Matlab neural network toolbox; 2008. <www.mathwork.com>.
[22] Qi Xiaoni, Liu Zhenyan, Li Dandan. Numerical simulation of shower cooling tower based on artificial neural network. Energy Convers Manage 2008;49:724–32.
[23] Zahedi G, Mohammadzadeh S, Moradi G. Enhancing gasoline production in an industrial catalytic reforming unit using artificial neural networks. Energy Fuels 2008;22:2671–7.
[24] Zahedi G, Jahanmiri A, Rahimpor MR. A neural network approach for prediction of the CuO–ZnO–Al2O3 catalyst deactivation. Int J Chem Reactor Eng 2005;3 [Article A8].
[25] Zahedi G, Fgaier H, Jahanmiri A, Al-Enezi G. Artificial neural network identification and evaluation of hydrotreater plant. Petrol Sci Technol 2006;24:1447–56.
