
Numerical treatment for nonlinear MHD Jeffery–Hamel problem using neural networks optimized with interior point algorithm

Muhammad Asif Zahoor Raja a,⁎, Raza Samar b

a Department of Electrical Engineering, COMSATS Institute of Information Technology, Attock, Pakistan
b Muhammad Ali Jinnah University, Islamabad, Pakistan

Article info

Article history: Received 13 August 2012; Received in revised form 9 May 2013; Accepted 29 July 2013. Communicated by Bin He.

Keywords: Jeffery–Hamel problem; Neural networks; Radial basis function; Nonlinear ODEs; Boundary value problems; Interior point method

Abstract

In this paper new computational intelligence techniques have been developed for the nonlinear magnetohydrodynamics (MHD) Jeffery–Hamel flow problem using three different feed-forward artificial neural networks trained with an interior point method. The governing equation for the two-dimensional MHD Jeffery–Hamel flow problem is transformed into an equivalent third order nonlinear ordinary differential equation. Three neural network models using log-sigmoid, radial basis and tan-sigmoid activation functions are developed for the transformed equation in an unsupervised manner. The training of weights of each neural network is carried out with an interior point method. The proposed models are evaluated on different variants of the Jeffery–Hamel problem by varying the Reynolds number, angles of the walls and the Hartmann number. The accuracy, convergence and effectiveness of the designed models are validated through statistical analyses based on a sufficiently large number of independent runs. Comparative studies of the proposed solutions with standard numerical results, as well as recently reported solutions of analytic solvers, illustrate the worth of the proposed solvers.

© 2013 Elsevier B.V. All rights reserved.

1. Introduction

Jeffery–Hamel problems are related to incompressible viscous fluid flows between non-parallel walls. Jeffery–Hamel flows have been extensively studied in numerous applied science and engineering applications including fluid mechanics, civil, environmental, mechanical and bio-mechanical engineering. The detailed mathematical description of the problem has been presented by Jeffery [1] and Hamel [2]. Jeffery–Hamel flows fundamentally provide an exact similarity solution of the Navier–Stokes equations in the special case of a two-dimensional flow through a channel with inclined plane walls meeting at a source or sink at the vertex. Historical background, importance and applications of the Jeffery–Hamel flow equations can be seen in references [3–8]. The classical Jeffery–Hamel problems in the presence of an external magnetic field on a conducting fluid were examined in [9] by taking the magnetic field as a control parameter, along with the flow, Reynolds number, angles of the channels, and the Hartmann number.

The MHD Jeffery–Hamel flow problems are inherently nonlinear, like most fluid mechanics problems, and do not have an exact solution. However, numerical and analytical solutions for these kinds of problems have been extensively reported in the literature, including the Homotopy Perturbation Method (HPM) [10–13], Homotopy Analysis Methods (HAM) [14,15], the Adomian decomposition method [16,17], the Differential Transform Method (DTM) [18,19], Variational Iteration Methods (VIM) [20–22], and so on. A few recent publications in which the Jeffery–Hamel flow equations are addressed can be seen in [23–27]. All techniques available in the literature for these problems are based on well known deterministic numerical and analytical procedures; there is therefore a need to explore stochastic numerical methods based on computational intelligence techniques to solve these problems.

Stochastic solvers based on artificial intelligence techniques using neural networks have been applied extensively by the research community to solve a variety of initial and boundary value problems of linear and non-linear differential equations [28–31]. A few recent applications of these solvers are non-linear Van der Pol oscillators [32], Troesch's problems arising in plasma physics [33], solution of the thin plate bending problem [34], non-linear singular systems based on Lane–Emden–Fowler equations [35], tracking problems of a spherical inverted pendulum [36], the first Painlevé transcendent [37], surrogate modeling for the solution of integral equations [38], Bratu's problem in fuel ignition modeling [39], etc. Variants of these methodologies are also formulated to solve linear and nonlinear fractional differential equations, such as the Riccati and Bagley–Torvik fractional order systems [40,41]. There is thus a motivation for the authors to investigate an alternate, accurate and reliable framework based on artificial intelligence and soft-computing techniques to solve the MHD Jeffery–Hamel flow equations.

Contents lists available at ScienceDirect. Neurocomputing, journal homepage: www.elsevier.com/locate/neucom. 0925-2312/$ - see front matter © 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.neucom.2013.07.013

⁎ Corresponding author. Tel.: +92 30 09893 800. E-mail addresses: [email protected], [email protected] (M.A.Z. Raja), [email protected], [email protected] (R. Samar).

Please cite this article as: M.A.Z. Raja, R. Samar, Numerical treatment for nonlinear MHD Jeffery–Hamel problem using neural networks optimized with interior point algorithm, Neurocomputing (2013), http://dx.doi.org/10.1016/j.neucom.2013.07.013

In this article, numerical solvers are proposed to solve the MHD Jeffery–Hamel flow equations based on three different feed-forward artificial neural network models optimized with interior point methods. The neural networks are formulated using log-sigmoid, radial basis and tan-sigmoid functions. The original Jeffery–Hamel flow equation is first transformed into an equivalent non-linear boundary value problem (BVP) of an ordinary differential equation. Three neural network models of the transformed equation are developed in an unsupervised manner. The optimal weights of each model are trained with an interior point method (IPM). The proposed solvers are then applied to a number of cases of MHD Jeffery–Hamel problems by taking different values for the Hartmann number, the angles of the channels, and the Reynolds number. Results of the solvers are compared with reference numerical solutions obtained by MATHEMATICA, as well as with those reported in the literature using different methods, such as VIM [42], DTM [43], HAM [43,44], the Optimal Homotopy Asymptotic Method (OHAM) [45] and HPM [42,43,46].

The organization of the paper is as follows. In Section 2, governing equations for the transformed Jeffery–Hamel problem are derived. In Section 3, three different mathematical models of the transformed problem are developed using unsupervised neural networks with log-sigmoid, radial basis and tan-sigmoid activation functions. In Section 4, the training procedure for the neural network models is presented; an interior point method is employed to train the networks and find the best weights. Section 5 discusses results of detailed simulations carried out for two Jeffery–Hamel flow problems. In Section 6, a comparative study of the proposed solvers is presented based on a detailed statistical analysis. In the last section, our findings and observations are concluded along with some directions for future work.

2. Mathematical formulation

Consider cylindrical polar coordinates (r, θ, z), and a steady two-dimensional flow of an incompressible conducting viscous fluid from a source or sink, with channel walls lying in planes intersecting along the z-axis. The schematic of such a flow is presented in Fig. 1, and the governing mathematical relations are given as [10]:

(ρ/r) ∂/∂r (r u(r, θ)) = 0,  (1)

u(r, θ) ∂u(r, θ)/∂r = −(1/ρ) ∂P/∂r + ν ( ∂²u(r, θ)/∂r² + (1/r) ∂u(r, θ)/∂r + (1/r²) ∂²u(r, θ)/∂θ² − u(r, θ)/r² ) − (σB₀²/(ρr²)) u(r, θ),  (2)

(1/(ρr)) ∂P/∂θ − (2ν/r²) ∂u(r, θ)/∂θ = 0,  (3)

where ρ stands for fluid density, u(r, θ) represents the velocity along the radial direction, P denotes fluid pressure, ν is the coefficient of kinematic viscosity, σ is the conductivity of the fluid, and B₀ represents the electromagnetic induction. Using dimensionless parameters, Eq. (1) can be written as

f(θ) = r u(r, θ),  (4)

f(η) = f(θ)/fmax,  η = θ/α.  (5)

Using (5) in Eqs. (2) and (3), and eliminating P, we get a BVP of a third order ordinary differential equation for the normalized function profile f(η) as [10]:

f‴(η) + 2αRe f(η) f′(η) + (4 − H) α² f′(η) = 0,  (6)

f(0) = 1,  f′(0) = 0,  f(1) = 0.  (7)

Here Re and H are the Reynolds and Hartmann numbers, respectively; these are defined as follows:

Re = fmax α / ν = Umax r α / ν  (divergent channel: α > 0, fmax > 0; convergent channel: α < 0, fmax < 0),  (8)

H = √(σB₀²/(ρν)),  (9)

where Umax represents the velocity at the center of the channel (r = 0).

3. Neural network modeling

Artificial neural networks are well known to be used extensively as universal function approximators. The solution f(η) of the differential equation, along with its nth order derivative f⁽ⁿ⁾(η), can be approximated by the following continuous mappings in neural network methodology [47,48]:

f̂(η) = ∑_{i=1}^{m} δᵢ g(wᵢη + βᵢ),  (10)

f̂⁽ⁿ⁾(η) = ∑_{i=1}^{m} δᵢ g⁽ⁿ⁾(wᵢη + βᵢ),  (11)

where m is the number of neurons, g is called the activation function, and δ, w, and β are real-valued bounded adaptive parameters or weights, written as:

W = (δ₁, δ₂, …, δ_m, w₁, w₂, …, w_m, β₁, β₂, …, β_m).

Mathematical models have been developed using log-sigmoid g_LS, radial basis g_RB and tan-sigmoid g_TS activation functions for the hidden layers of the network. These activation functions can be written as:

g_LS(t) = 1/(1 + e^(−t)),  (12)

Fig. 1. Geometry for MHD Jeffery–Hamel flow in a convergent channel: (a) 2-D view and (b) schematic setup of the problem (MHD device, AC power source, flow direction, channel angle 2α, magnetic field B).


g_RB(t) = e^(−t²),  (13)

g_TS(t) = 2/(1 + e^(−2t)) − 1.  (14)

Differential equation neural networks using g_LS (DENN-LS), g_RB (DENN-RB) and g_TS (DENN-TS) functions have been developed to approximate solutions of Eq. (6) along with the 1st and 3rd order derivatives. In the case of DENN-LS, the solution f(η) along with its first and third derivatives (f′(η) and f‴(η)) can be approximated by the following continuous mapping:

f̂_LS(η) = ∑_{i=1}^{m} δᵢ / (1 + e^(−(wᵢη + βᵢ))),  (15)

f̂′_LS(η) = ∑_{i=1}^{m} δᵢwᵢ e^(−(wᵢη + βᵢ)) / (1 + e^(−(wᵢη + βᵢ)))²,  (16)

f̂‴_LS(η) = ∑_{i=1}^{m} δᵢwᵢ³ ( 6e^(−3(wᵢη + βᵢ)) / (1 + e^(−(wᵢη + βᵢ)))⁴ − 6e^(−2(wᵢη + βᵢ)) / (1 + e^(−(wᵢη + βᵢ)))³ + e^(−(wᵢη + βᵢ)) / (1 + e^(−(wᵢη + βᵢ)))² ).  (17)

In the case of DENN-RB, the solution f(η) and its derivatives can be approximated as

f̂_RB(η) = ∑_{i=1}^{m} δᵢ e^(−(wᵢη + βᵢ)²),  (18)

f̂′_RB(η) = ∑_{i=1}^{m} −2δᵢwᵢ(wᵢη + βᵢ) e^(−(wᵢη + βᵢ)²),  (19)

f̂‴_RB(η) = ∑_{i=1}^{m} δᵢwᵢ³ ( 12(wᵢη + βᵢ) e^(−(wᵢη + βᵢ)²) − 8(wᵢη + βᵢ)³ e^(−(wᵢη + βᵢ)²) ).  (20)

Similarly, for DENN-TS the solution and its derivatives can be approximated by the following continuous mapping:

f̂_TS(η) = ∑_{i=1}^{m} δᵢ ( 2/(1 + e^(−2(wᵢη + βᵢ))) − 1 ),  (21)

f̂′_TS(η) = ∑_{i=1}^{m} 4δᵢwᵢ e^(−2(wᵢη + βᵢ)) / (1 + e^(−2(wᵢη + βᵢ)))²,  (22)

f̂‴_TS(η) = ∑_{i=1}^{m} 2δᵢwᵢ³ ( 48e^(−6(wᵢη + βᵢ)) / (1 + e^(−2(wᵢη + βᵢ)))⁴ − 48e^(−4(wᵢη + βᵢ)) / (1 + e^(−2(wᵢη + βᵢ)))³ + 8e^(−2(wᵢη + βᵢ)) / (1 + e^(−2(wᵢη + βᵢ)))² ).  (23)
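Closed-form derivatives like these are easy to get wrong, so a useful sanity check is a comparison against finite differences. The sketch below is our own Python check with hypothetical names (not from the paper); it verifies the first-derivative formula of Eq. (22) against a central difference of Eq. (21):

```python
import numpy as np

def f_ts(eta, delta, w, beta):
    """Tan-sigmoid ansatz, Eq. (21)."""
    t = w * eta + beta
    return np.sum(delta * (2.0 / (1.0 + np.exp(-2.0 * t)) - 1.0))

def df_ts(eta, delta, w, beta):
    """Closed-form first derivative, Eq. (22)."""
    t = w * eta + beta
    return np.sum(4.0 * delta * w * np.exp(-2.0 * t)
                  / (1.0 + np.exp(-2.0 * t)) ** 2)

rng = np.random.default_rng(0)
delta, w, beta = (rng.uniform(-1.0, 1.0, 10) for _ in range(3))
h = 1e-5
fd = (f_ts(0.3 + h, delta, w, beta) - f_ts(0.3 - h, delta, w, beta)) / (2 * h)
err = abs(fd - df_ts(0.3, delta, w, beta))   # O(h^2) discretization error
```

The same pattern checks Eqs. (16), (19) and (23) with one extra difference per derivative order.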

The fitness function ε has been developed for the transformed equation (6) using the neural network models by defining the unsupervised error as the sum of mean squared errors:

ε = ε₁ + ε₂.  (24)

The error term ε₁ is associated with the differential equation and is given as

ε₁ = (1/(K + 1)) ∑_{k=0}^{K} ( f̂‴_k + 2αRe f̂_k f̂′_k + (4 − H)α² f̂′_k )²,  (25)

where K = 1/h, f̂_k = f̂(η_k), and η_k = kh. The interval η ∈ (0, N) is divided into K steps, i.e., η ∈ (η₀ = 0, η₁, η₂, …, η_K = N) with a step size of h, and f̂_k(η), f̂′_k(η) and f̂‴_k(η) are any of the three DENN models given above.

Similarly, the error term ε₂ is for the initial and boundary conditions, and is given as

ε₂ = (1/3) ( (f̂₀ − 1)² + (f̂′₀)² + (f̂_K)² ).  (26)

It is clear that for weights δ, w, and β for which the error functions ε₁ and ε₂ approach zero, the value of the fitness ε also approaches zero; thus the proposed solution f̂(η) as given in Eqs. (15), (18) and (21) approaches the exact solution f(η). The generic architecture of the differential equation neural network for the MHD Jeffery–Hamel problem is shown in Fig. 2.
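Putting Eqs. (24)–(26) together, the unsupervised fitness for the log-sigmoid model can be sketched as follows. This is our own illustrative Python (the paper's implementation is in MATLAB); the sigmoid-derivative identities g′ = s(1 − s) and g‴ = s(1 − s)(1 − 6s + 6s²) used for vectorization are standard calculus, not quoted from the paper:

```python
import numpy as np

def fitness(W, alpha, Re, H, K=10):
    """Unsupervised fitness eps = eps1 + eps2 of Eqs. (24)-(26),
    evaluated for the log-sigmoid model of Eqs. (15)-(17)."""
    m = len(W) // 3
    delta, w, beta = W[:m], W[m:2 * m], W[2 * m:]
    eta = np.linspace(0.0, 1.0, K + 1)[:, None]        # grid eta_k = k*h
    s = 1.0 / (1.0 + np.exp(-(w * eta + beta)))        # sigmoid outputs, (K+1, m)
    f = s @ delta                                      # f_hat, Eq. (15)
    f1 = (s * (1 - s)) @ (delta * w)                   # f_hat', chain rule
    f3 = (s * (1 - s) * (1 - 6 * s + 6 * s ** 2)) @ (delta * w ** 3)  # f_hat'''
    res = f3 + 2 * alpha * Re * f * f1 + (4 - H) * alpha ** 2 * f1
    eps1 = np.mean(res ** 2)                           # Eq. (25)
    eps2 = ((f[0] - 1.0) ** 2 + f1[0] ** 2 + f[-1] ** 2) / 3.0  # Eq. (26)
    return eps1 + eps2                                 # Eq. (24)
```

With all 30 weights zero the network output is identically zero, so only the f(0) = 1 condition contributes and the fitness is exactly 1/3; this makes a handy unit test for an implementation.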

4. Learning methodology: interior point method

A learning methodology based on the interior point method (IPM) is used for training the weights of the three neural networks for the Jeffery–Hamel flow equations. IPM belongs to a class of algorithms used for solving constrained optimization problems. The method is based on Karmarkar's algorithm, which was developed by Narendra Karmarkar in 1984 for linear programming [49]. Detailed information about the algorithm is available in references [50,51]. IPMs have been used in many optimization problems in applied science and engineering; some recent works include multi-area optimal reactive power flow [52] and the economic dispatch problem [53].

The fundamental elements of interior point methods are self-concordant barrier functions, which are used to encode the convex set. In contrast to the classical simplex method, the search for an optimal solution is made by traversing the interior of the feasible region [43] and solving a sequence of subproblems. The aim of the IPM algorithm is to find a vector of desired weights W to optimize

Fig. 2. Neural network architecture for the transformed problem of nonlinear MHD Jeffery–Hamel flow equation.


the given objective function ε(W) subject to constraints:

min_W ε(W), subject to h₁(W) = 0 and h₂(W) ≤ 0,  (27)

where h₁ and h₂ are vector functions giving the equality and inequality constraints, respectively. For each μ > 0, the approximate problem is written as

min_{W,s} ε_μ(W, s) = min_{W,s} ε(W) − μ ∑ᵢ ln(sᵢ), subject to h₁(W) = 0 and h₂(W) + s = 0.  (28)

Here, the sᵢ are slack variables; their number equals the number of inequality constraints h₂. In order to keep ln(sᵢ) bounded, the sᵢ are restricted to be positive. As μ approaches zero, the minimum of ε_μ approaches the minimum of ε. The addition of the logarithmic term (known as a barrier function) in (28) is the reason such methods are known as barrier methods. The approximate problem given in (28) recasts the original inequality constrained problem given in (27) as a sequence of equality constrained problems that are easier to tackle. The algorithm proceeds by taking one of two main steps at each iteration to solve the approximate problem.

Newton step: This step is based on the Karush–Kuhn–Tucker (KKT) equations; these equations are solved with a linear approximation using an auxiliary Lagrangian function for the approximate problem. This step is also known as the direct step.

Conjugate gradient step: This step is based on a trust region approach.

Typically, the algorithm first attempts a Newton step, and tries a conjugate gradient (CG) step only if the approximate problem does not converge locally. If an attempted step does not provide the desired improvement in the objective function, it is rejected and a new, shorter step is attempted.
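The barrier mechanism can be seen on a one-dimensional toy problem. The sketch below is our own illustration (not the authors' code), with a deliberately naive bisection standing in for the Newton/CG machinery: minimizing ε(x) = (x − 2)² subject to x ≤ 1, the minimizer of the barrier objective ε(x) − μ ln(1 − x) approaches the constrained optimum x* = 1 as μ shrinks.

```python
import math

def barrier_min(mu, lo=-5.0, hi=1.0 - 1e-9, iters=200):
    """Minimize (x - 2)**2 - mu*log(1 - x) on the interior x < 1.
    The derivative 2*(x - 2) + mu/(1 - x) is strictly increasing on
    (lo, 1), so bisection on the stationarity condition suffices."""
    d = lambda x: 2.0 * (x - 2.0) + mu / (1.0 - x)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if d(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Shrinking mu drives the barrier minimizer toward the constrained optimum x* = 1.
path = [barrier_min(mu) for mu in (1.0, 0.1, 0.01, 0.001)]
```

For this toy problem the stationarity condition is a quadratic with interior root x(μ) = (3 − √(1 + 2μ))/2, so the bisection result can be checked analytically.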

In this study, IPM is used for training the weights of each neural network model, i.e., DENN-LS-IPM, DENN-RB-IPM and DENN-TS-IPM. The generic flow diagram of the overall process is shown in Fig. 3(a) and (b).

Necessary detail about the procedural steps for the optimization is outlined below.

Step 1: Initialization: A vector of randomly generated bounded real values, of length equal to the number of weights in each neural network model, acts as the starting point for each solver:

W = {δ, w, β} = {δ₁, δ₂, …, δ_m, w₁, w₂, …, w_m, β₁, β₂, …, β_m},

where m represents the number of neurons. Settings for the MATLAB function 'fmincon' parameter 'optimset' are listed in Table 1.

Fig. 3. Process of finding the desired weights of the neural networks: (a) overall procedure (transformation of the MHD Jeffery–Hamel flow equations into equivalent boundary value problems; modeling with log-sigmoid, radial basis and tan-sigmoid neural networks; optimization with the interior point method to obtain the optimal weights of each model) and (b) optimization with IPM (initialize program parameters and weights with bounds, evaluate fitness via 'fmincon' with 'optimset', apply step increments in weights as per IPM until the termination criterion is achieved, then save the final weights and execution time).


Step 2: Fitness evaluation: The MATLAB built-in function for constrained optimization problems, 'fmincon', is invoked for each model by defining the following:

• Start point W and 'optimset' as initialized in Step 1.
• Fitness function ε, as given in Eqs. (24)–(26), for each model.

Step 3: Termination criteria: Terminate the execution of the solver if any of the following criteria is satisfied:

• The desired level of predefined fitness is achieved, i.e., ε ≤ 10⁻¹⁴.
• The total number of iterations is executed.
• Any defined value in 'optimset' for function tolerance (TolFun), maximum function evaluations (MaxFunEvals), X-tolerance (TolX), or constraint tolerance (TolCon) is reached, as set in Table 1.

Step 4: Storage: Store the final optimal weights along with the fitness values and the computational time taken by the algorithm.

Step 5: Statistical analysis: Repeat Steps 1–4 a sufficiently large number of times to perform an effective and reliable statistical analysis.
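The multistart bookkeeping of Steps 1–5 can be sketched as below. This is our own illustrative Python: a toy random local search stands in for MATLAB's 'fmincon' interior-point solver, and the function and variable names are hypothetical; only the repeat-from-random-starts structure mirrors the procedure.

```python
import numpy as np

def local_refine(fit, W, rng, iters=300, step=0.1):
    """Naive stand-in for the local optimizer (fmincon/IPM in the paper):
    accept random perturbations that reduce the fitness, within bounds."""
    best, f_best = W, fit(W)
    for _ in range(iters):
        cand = np.clip(best + step * rng.standard_normal(W.size), -20.0, 20.0)
        f_cand = fit(cand)
        if f_cand < f_best:
            best, f_best = cand, f_cand
    return best, f_best

def multistart(fit, n_weights, runs=20, seed=0):
    """Steps 1-5: repeat from random bounded starting points and keep
    every run's final fitness for statistical analysis."""
    rng = np.random.default_rng(seed)
    finals = []
    for _ in range(runs):
        W0 = rng.uniform(-2.0, 2.0, n_weights)   # Step 1: random start
        _, f = local_refine(fit, W0, rng)        # Steps 2-4: refine and store
        finals.append(f)
    return min(finals), float(np.mean(finals))   # Step 5: summary statistics

# Demo on a toy quadratic fitness with known minimum 0 at W = 1.
best, mean = multistart(lambda W: float(np.sum((W - 1.0) ** 2)), 6)
```

Keeping every run's final fitness, rather than only the best, is what enables the statistical analysis of Section 6.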

5. Numerical experimentation and results

The proposed neural network models trained with IPM are evaluated on four variants each of two Jeffery–Hamel flow problems by varying the Reynolds number, channel angles and Hartmann number. Results of our proposed schemes are presented here along with a comparative analysis with standard numerical solutions and previously published results.

5.1. Jeffery–Hamel problem 1

In this case four variants of Jeffery–Hamel flows have been taken using different Reynolds numbers and channel angles, but with no change in the Hartmann number. The four cases are: Case 1: Re = 110 and α = 3° [43]; Case 2: Re = 80 and α = −5° [43]; Case 3: Re = 50 and α = 7.5° [46]; and Case 4: Re = 50 and α = 5° [45]. The transformed equation for this problem is written as:

f‴(η) + 2αRe f(η) f′(η) + 4α² f′(η) = 0,  (29)

f(0) = 1,  f′(0) = 0,  f(1) = 0.  (30)

The exact solution for this equation is not available; therefore we calculate the numerical solution using the inbuilt formulation of MATHEMATICA for the four cases for inputs η ∈ [0, 1] with a step size of 0.1. These results will be used as a reference for comparison.
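A reference profile of this kind can also be produced in Python; the sketch below uses SciPy's solve_bvp as a stand-in for the MATHEMATICA solver actually used in the paper (an assumption on our part), shown for Case 1 with Re = 110 and α = 3°.

```python
import numpy as np
from scipy.integrate import solve_bvp

Re, alpha = 110.0, np.deg2rad(3.0)

def odes(eta, y):
    """Eq. (29) as a first-order system y = [f, f', f'']."""
    return np.vstack([y[1], y[2],
                      -2.0 * alpha * Re * y[0] * y[1]
                      - 4.0 * alpha ** 2 * y[1]])

def bcs(ya, yb):
    """Boundary conditions of Eq. (30): f(0)=1, f'(0)=0, f(1)=0."""
    return np.array([ya[0] - 1.0, ya[1], yb[0]])

eta0 = np.linspace(0.0, 1.0, 11)
# Parabolic initial guess satisfying the boundary conditions exactly.
guess = np.vstack([1.0 - eta0 ** 2, -2.0 * eta0, -2.0 * np.ones_like(eta0)])
sol = solve_bvp(odes, bcs, eta0, guess)
f_ref = sol.sol(np.linspace(0.0, 1.0, 11))[0]   # reference profile on the grid
```

Tabulating f_ref at the same 11 grid points used by the paper gives a reference against which absolute errors can be computed.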

We now apply the three neural network models with 10 neurons each to solve problem (29). In each model there are a total of 30 unknown adjustable parameters or weights W(δ, w, β). The fitness function ε for the input span η ∈ [0, 1] with a step size of 0.1 is given by the following relation:

ε = (1/11) ∑_{k=0}^{10} ( f̂‴_k + 2αRe f̂_k f̂′_k + 4α² f̂′_k )² + (1/3) ( (f̂₀ − 1)² + (f̂′₀)² + (f̂₁₀)² ).  (31)

Our objective is to find weights for each network such that the value of the fitness function ε → 0, and hence the approximate solution approaches the exact solution, i.e., f̂ → f. Training of weights for each network is carried out with IPM using the parameter values given in Table 1. The same hardware platform, initial weight vector, and adaptation procedure are applied for each model. These fixed settings for the optimizer are used in order to observe the performance difference of each neural network model in optimizing the fitness function (31). The solutions determined with one particular set of learned weights (given in Appendix Table A1) by DENN-LS-IPM, DENN-RB-IPM and DENN-TS-IPM, with fitness values ε of 4.2628×10⁻¹⁰, 4.6742×10⁻¹¹ and 4.1211×10⁻¹¹ respectively for Case 1, are given in the form of expressions as:

f̂_LS(η) = 2.282/(1 + e^(−(2.552η + 0.795))) − 2.196/(1 + e^(−(1.953η − 0.877))) − 2.036/(1 + e^(−(−0.723η + 1.990))) + 2.255/(1 + e^(−(2.800η + 0.451))) + 2.866/(1 + e^(−(0.751η − 1.232))) − 0.426/(1 + e^(−(1.611η + 0.612))) + 1.050/(1 + e^(−(−2.079η + 3.153))) + 1.775/(1 + e^(−(0.533η − 0.687))) − 0.054/(1 + e^(−(0.934η + 2.197))) − 3.817/(1 + e^(−(2.629η − 0.505))),  (32)
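A quick consistency check can be run on the printed expression: even with the weights rounded to three decimals, Eq. (32) should still satisfy the boundary conditions f(0) = 1 and f(1) = 0 of Eq. (30) approximately. The Python below is our own check, not part of the paper:

```python
import numpy as np

# Trained DENN-LS weights as printed in Eq. (32), rounded to 3 decimals.
delta = np.array([2.282, -2.196, -2.036, 2.255, 2.866,
                  -0.426, 1.050, 1.775, -0.054, -3.817])
w = np.array([2.552, 1.953, -0.723, 2.800, 0.751,
              1.611, -2.079, 0.533, 0.934, 2.629])
beta = np.array([0.795, -0.877, 1.990, 0.451, -1.232,
                 0.612, 3.153, -0.687, 2.197, -0.505])

def f_ls(eta):
    """Eq. (32): sum of ten log-sigmoid terms."""
    return float(np.sum(delta / (1.0 + np.exp(-(w * eta + beta)))))

bc_err0 = abs(f_ls(0.0) - 1.0)   # residual of f(0) = 1 after rounding
bc_err1 = abs(f_ls(1.0))         # residual of f(1) = 0 after rounding
```

Both residuals come out far smaller than the rounding level of the printed coefficients, consistent with the reported fitness.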

f̂_RB(η) = −0.158 e^(−(−1.970η − 0.683)²) − 0.344 e^(−(0.459η − 1.165)²) − 0.210 e^(−(1.212η − 0.955)²) + 0.819 e^(−(0.265η + 0.332)²) + 0.802 e^(−(−0.332η + 1.920)²) + 2.762 e^(−(−0.549η + 2.189)²) + 1.072 e^(−(1.319η + 0.326)²) − 1.496 e^(−(0.330η − 1.092)²) + 0.573 e^(−(0.890η − 0.237)²) − 1.113 e^(−(−1.080η − 0.833)²),  (33)

f̂_TS(η) = −7.31 + 1.806/(1 + e^(−2(1.637η + 0.203))) + 4.674/(1 + e^(−2(−0.753η − 0.254))) + 1.824/(1 + e^(−2(−1.366η + 0.364))) − 0.746/(1 + e^(−2(…))) + 3.366/(1 + e^(−2(0.811η − 2.418))) + 0.120/(1 + e^(−2(1.366η − 1.413))) − 1.942/(1 + e^(−2(−1.688η − 0.715))) + 1.176/(1 + e^(−2(−0.425η + 1.0633))) + 0.101/(1 + e^(−2(−1.383η + 0.457))) + 4.244/(1 + e^(−2(1.205η + 1.686))).  (34)

Similarly, for Case 2 of this problem, the proposed solutions f̂_LS, f̂_RB and f̂_TS are obtained using a set of trained weights (given in Appendix Table A2), with fitness ε achieved as 1.7607×10⁻⁰⁹, 1.0298×10⁻⁰⁸ and 6.8890×10⁻¹⁰, respectively. These solutions are given as

f̂_LS(η) = −4.323/(1 + e^(−(−3.153η + 4.974))) + 3.835/(1 + e^(−(−5.077η + 6.890))) + 2.362/(1 + e^(−(−0.319η − 3.603))) − 3.515/(1 + e^(−(−3.968η − 4.910))) − 0.923/(1 + e^(−(−3.842η + 4.440))) − 2.612/(1 + e^(−(2.700η − 2.778))) − 1.213/(1 + e^(−(1.464η − 2.737))) − 0.134/(1 + e^(−(−1.867η − 0.235))) + 2.507/(1 + e^(−(−4.087η + 4.754))) + 1.517/(1 + e^(−(2.554η − 2.305))),  (35)

Table 1. Parameter settings for the function 'fmincon' in MATLAB simulations.

'FinDiffType': 'Central'
Hessian: BFGS
Minimum perturbation: 10⁻⁰⁸
'TolX': 10⁻¹⁸
'MaxIter': 1500
'MaxFunEvals': 200,000
'TolFun': 10⁻³⁰
'TolCon': 10⁻³⁰
'UseParallel': 'Always'
Start point generation: randomly between (−2, 2)
Start point size: 30
Total start points: 100
Bounds: δᵢ, wᵢ, βᵢ ∈ (−20, 20) for i = 1 to m
Sub-problem algorithm: LDL factorization
Scaling: objective and constraints
Maximum perturbation: 0.1
Derivative: approximated by solver
Other: as defaults

f̂_RB(η) = 0.063 e^(−(−0.863η + 0.378)²) − 2.196 e^(−(−2.143η + 3.490)²) + 3.974 e^(−(0.180η + 1.095)²) − 1.835 e^(−(0.055η + 1.922)²) + 0.081 e^(−(2.881η − 3.901)²) − 0.694 e^(−(1.765η − 2.414)²) + 0.292 e^(−(−0.550η − 1.862)²) − 1.525 e^(−(−1.530η − 2.223)²) − 0.468 e^(−(−1.064η − 0.913)²) + 0.126 e^(−(−0.941η − 2.138)²),  (36)

f̂_TS(η) = 1.380 + 0.674/(1 + e^(−2(−1.713η + 1.733))) + 2.022/(1 + e^(−2(−0.223η − 0.222))) − 2.484/(1 + e^(−2(0.050η + 1.933))) − 1.226/(1 + e^(−2(2.271η − 2.603))) + 0.660/(1 + e^(−2(0.508η + 0.473))) − 2.128/(1 + e^(−2(2.810η − 3.705))) + 3.132/(1 + e^(−2(−0.461η − 2.642))) − 4.146/(1 + e^(−2(−1.968η − 2.533))) + 1.218/(1 + e^(−2(0.072η − 0.184))) − 0.484/(1 + e^(−2(−0.306η + 0.467))).  (37)

Fig. 4. A particular set of weights (values of weights versus number of neurons) of neural network models trained using IPM for different cases of the Jeffery–Hamel flow equation, problem 1. (a) Case III: Re = 50 and α = 7.5° for DENN-LS-IPM, (b) Case III: Re = 50 and α = 7.5° for DENN-RB-IPM, (c) Case III: Re = 50 and α = 7.5° for DENN-TS-IPM, (d) Case IV: Re = 50 and α = 5° for DENN-LS-IPM, (e) Case IV: Re = 50 and α = 5° for DENN-RB-IPM and (f) Case IV: Re = 50 and α = 5° for DENN-TS-IPM.

Fig. 5. Solutions f_LS(η), f_RB(η) and f_TS(η) versus inputs η of the four variants of Jeffery–Hamel flow as given in problem 1.

One set of trained weights for DENN-LS-IPM, DENN-RB-IPM and DENN-TS-IPM, with respective fitness values 3.4719×10⁻¹⁰, 2.1106×10⁻¹⁰ and 1.1910×10⁻¹⁰ for Case 3, and 2.1429×10⁻¹¹, 2.6831×10⁻¹⁰ and 3.7522×10⁻¹¹ for Case 4 of the problem, is shown in Fig. 4.

Solutions for Cases 1 and 2 of the problem have been calculated for the three algorithms using Eqs. (32)–(34) and (35)–(37), respectively, while for Cases 3 and 4 the solutions f̂_LS, f̂_RB and f̂_TS are obtained using the weights of Fig. 4 in Eqs. (15), (18) and (21), respectively. The proposed results, along with the numerical solution calculated using the fully explicit Runge–Kutta method, are presented in Fig. 5 for inputs η ∈ [0, 1] with a step size of 0.1. It is seen from Fig. 5(a)–(c) that the solutions obtained by the neural network models consistently overlap the reference numerical solutions; hardly any difference in the results is observed. In order to bring out the small differences, values of the absolute error (AE) are calculated, and the results are tabulated for Cases 1 and 2 in Table 2, while for Cases 3 and 4 the results are given in Table 3. Results published in the literature, such as DTM [43], HPM [43] and HAM [43], are also given in Table 2 for Cases 1 and 2. For Case 3, the solution with HPM [46], and for Case 4, solutions with VIM [42], OHAM [45], and HPM [46] are also provided in Table 3.

The mean absolute error (MAE) from the reference solution for DTM, HPM, HAM, DENN-LS-IPM, DENN-RB-IPM, and DENN-TS-IPM is found to be 4.8285×10⁻⁰³, 2.7141×10⁻⁰⁴, 7.0544×10⁻⁰⁸, 2.5924×10⁻⁰⁸, 2.2580×10⁻⁰⁸ and 1.2584×10⁻⁰⁸ respectively for Case 1, while for Case 2 the values of MAE are 9.4919×10⁻⁰⁶, 8.0723×10⁻⁰⁴, 2.1014×10⁻⁰⁷, 4.1212×10⁻⁰⁸, 1.0259×10⁻⁰⁷ and 2.2288×10⁻⁰⁸ respectively. For Case 3 of the problem, the values of MAE calculated for HPM, DENN-LS-IPM, DENN-RB-IPM and DENN-TS-IPM are 1.1875×10⁻⁰³, 2.0855×10⁻⁰⁸, 3.7580×10⁻⁰⁸ and 1.2968×10⁻⁰⁸ respectively. Values of MAE for the published results of OHAM, HPM and VIM are 2.5402×10⁻⁰³, 2.7294×10⁻⁰⁴ and 3.4754×10⁻⁰⁶ respectively, while for our proposed solutions DENN-LS-IPM, DENN-RB-IPM, and DENN-TS-IPM the MAE values are 2.2594×10⁻⁰⁸, 9.1674×10⁻⁰⁹ and 3.4351×10⁻⁰⁹ respectively for Case 4 of the problem. In general, results of the proposed neural network models are found to be in good agreement with the standard numerical solutions for all four cases. The solutions for f̂_TS most closely match the reference numerical solutions for all four variants of the problem.

5.2. Jeffery–Hamel flow problem 2

In this problem, four variants of another form of the MHD Jeffery–Hamel flow are taken by changing the Hartmann number and channel angles, while the Reynolds number is fixed at Re = 50.

Table 2
Comparison of results for cases 1 and 2 of Jeffery–Hamel problem 1.
Case 1 (Re = 110, α = 3°): reported |f − f̂_DTM|, |f − f̂_HPM|, |f − f̂_HAM| and proposed |f − f̂_LS|, |f − f̂_RB|, |f − f̂_TS|; Case 2 (Re = 80, α = −5°): same column order.

η    Case 1: DTM, HPM, HAM, LS, RB, TS | Case 2: DTM, HPM, HAM, LS, RB, TS
0.0  0.000000 0.000000 0.000000 6.75E−08 1.52E−08 3.38E−08 | 0.000000 0.000000 0.000000 2.60E−08 4.60E−08 2.39E−08
0.1  2.59E−04 5.95E−05 3.25E−09 4.36E−08 2.22E−08 2.48E−08 | 2.34E−07 1.07E−04 1.48E−09 1.41E−09 5.84E−08 1.32E−08
0.2  1.01E−03 2.23E−04 2.03E−09 2.77E−08 3.14E−08 1.70E−08 | 9.82E−07 4.20E−04 4.22E−09 8.57E−09 1.08E−07 1.16E−08
0.3  2.17E−03 4.40E−04 1.85E−10 1.05E−08 3.70E−08 1.03E−08 | 2.35E−06 8.96E−04 9.65E−09 9.31E−09 1.47E−07 1.24E−08
0.4  3.64E−03 6.17E−04 3.85E−09 8.99E−10 3.47E−08 7.46E−10 | 4.51E−06 1.40E−03 2.55E−09 3.73E−08 1.35E−07 1.66E−09
0.5  5.35E−03 6.64E−04 1.50E−08 2.58E−09 3.39E−08 5.96E−09 | 7.73E−06 1.74E−03 3.60E−08 4.98E−08 1.33E−07 3.48E−09
0.6  7.22E−03 5.45E−04 3.07E−08 7.89E−09 2.83E−08 1.12E−08 | 1.24E−05 1.74E−03 1.09E−07 4.54E−08 1.17E−07 1.36E−08
0.7  9.25E−03 3.21E−04 2.95E−08 4.04E−09 1.95E−08 1.16E−08 | 1.88E−05 1.39E−03 1.36E−07 5.53E−08 5.87E−08 1.93E−08
0.8  1.14E−02 1.04E−04 2.77E−08 2.10E−08 1.44E−08 1.00E−08 | 2.67E−05 8.46E−04 1.07E−07 4.99E−08 2.67E−08 4.32E−08
0.9  1.28E−02 1.17E−05 1.94E−07 3.67E−08 8.21E−09 8.49E−09 | 3.08E−05 3.35E−04 7.57E−07 4.54E−08 8.19E−08 5.95E−08
1.0  1.50E−09 7.00E−10 4.70E−07 6.29E−08 3.49E−09 4.60E−09 | 0.00E+00 1.00E−10 1.15E−06 1.25E−07 2.16E−07 4.33E−08

Table 3
Comparison of results for cases 3 and 4 of Jeffery–Hamel problem 1.
Case 3 (Re = 50, α = 7.5°): reported |f − f̂_HPM| and proposed |f − f̂_LS|, |f − f̂_RB|, |f − f̂_TS|; Case 4 (Re = 50, α = 5°): reported |f − f̂_OHAM|, |f − f̂_HPM|, |f − f̂_VIM| and proposed |f − f̂_LS|, |f − f̂_RB|, |f − f̂_TS|.

η    Case 3: HPM, LS, RB, TS | Case 4: OHAM, HPM, VIM, LS, RB, TS
0.0  0.000000 3.84E−08 4.84E−08 4.04E−08 | 0.000000 0.000000 0.000000 4.50E−08 5.67E−09 2.71E−09
0.1  7.15E−05 3.47E−08 4.84E−08 2.87E−08 | 8.68E−05 1.82E−04 2.36E−07 2.17E−08 4.62E−09 2.01E−10
0.2  2.77E−04 2.75E−08 5.15E−08 2.03E−08 | 3.40E−04 5.63E−04 1.04E−06 4.81E−09 9.00E−09 1.44E−09
0.3  5.94E−04 2.54E−08 4.84E−08 1.11E−08 | 7.71E−04 7.95E−04 2.38E−06 6.15E−09 1.14E−08 3.94E−11
0.4  9.92E−04 1.91E−08 4.28E−08 3.10E−09 | 1.47E−03 6.60E−04 3.20E−06 1.23E−08 5.48E−09 6.72E−10
0.5  1.45E−03 9.04E−09 3.90E−08 2.96E−09 | 2.59E−03 2.72E−04 4.83E−06 1.49E−08 4.76E−09 3.86E−09
0.6  1.94E−03 1.11E−08 2.90E−08 8.24E−09 | 4.19E−03 3.34E−05 6.55E−06 8.52E−09 4.28E−09 3.98E−09
0.7  2.45E−03 1.12E−08 2.67E−08 3.83E−09 | 5.97E−03 6.03E−05 7.66E−06 3.37E−09 6.50E−09 5.49E−09
0.8  2.84E−03 1.07E−08 2.74E−08 1.87E−09 | 6.96E−03 1.09E−04 7.25E−06 1.93E−08 1.19E−08 6.74E−09
0.9  2.45E−03 1.99E−08 2.49E−08 6.51E−09 | 5.56E−03 3.27E−04 5.07E−06 4.33E−08 1.32E−08 4.97E−09
1.0  0.000000 2.23E−08 2.69E−08 1.56E−08 | 1.00E−09 0.000000 0.000000 6.93E−08 2.40E−08 7.68E−09

M.A.Z. Raja, R. Samar / Neurocomputing ∎ (∎∎∎∎) ∎∎∎–∎∎∎ 7

Please cite this article as: M.A.Z. Raja, R. Samar, Numerical treatment for nonlinear MHD Jeffery–Hamel problem using neural networksoptimized with interior point algorithm, Neurocomputing (2013), http://dx.doi.org/10.1016/j.neucom.2013.07.013i

Fig. 6. A set of weights of neural network models trained with IPM for different cases of the Jeffery–Hamel flow as given in problem 2. (a) Case 3: H = 1000 and α = −5° for DENN-LS-IPM, (b) Case 3: H = 1000 and α = −5° for DENN-RB-IPM, (c) Case 3: H = 1000 and α = −5° for DENN-TS-IPM, (d) Case 4: H = 1000 and α = 5° for DENN-LS-IPM, (e) Case 4: H = 1000 and α = 5° for DENN-RB-IPM, and (f) Case 4: H = 1000 and α = 5° for DENN-TS-IPM.


Fig. 7. Solutions of four variants of Jeffery–Hamel flow as given in problem 2.


These variants are given in the form of cases as: Case 1: H = 250, α = 7.5° [46]; Case 2: H = 500, α = 7.5° [46]; Case 3: H = 1000, α = −5° [44]; and Case 4: H = 1000, α = 5° [44]. The governing mathematical expression for the transformed equation of the MHD Jeffery–Hamel flow is

f‴(η) + 100α f(η) f′(η) + (4 − H)α² f′(η) = 0,   (38)

f(0) = 1, f′(0) = 0, f(1) = 0.   (39)

Exact solutions of the differential equation (38) are not known for these cases; therefore we calculate the reference numerical solution for each case using the built-in MATHEMATICA solvers.
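A reference solution for the two-point boundary value problem (38)–(39) can also be generated with a simple shooting approach: guess f″(0), integrate the initial value problem with classical RK4, and adjust the guess with a secant update until f(1) = 0. The sketch below is our illustration (not the paper's MATHEMATICA routine); the function names and the initial guesses for f″(0) are assumptions, and the angle is taken in degrees and converted to radians.

```python
import numpy as np

def _integrate(s, H, a, n=200):
    """RK4 integration of y = (f, f', f'') on eta in [0, 1] with
    f(0) = 1, f'(0) = 0, f''(0) = s; returns f on the grid."""
    def rhs(y):
        f, fp, fpp = y
        # Eq. (38): f''' = -100*a*f*f' - (4 - H)*a**2*f'
        return np.array([fp, fpp, -100.0 * a * f * fp - (4.0 - H) * a**2 * fp])
    h = 1.0 / n
    y = np.array([1.0, 0.0, s])
    fs = [y[0]]
    for _ in range(n):
        k1 = rhs(y); k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2); k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        fs.append(y[0])
    return np.array(fs)

def reference_solution(H, alpha_deg, n=200):
    """Shooting solution of Eqs. (38)-(39): secant iteration on f''(0)
    so that the terminal residual f(1) vanishes."""
    a = np.deg2rad(alpha_deg)
    s0, s1 = -2.0, -3.0            # illustrative initial guesses for f''(0)
    r0 = _integrate(s0, H, a, n)[-1]
    for _ in range(50):
        r1 = _integrate(s1, H, a, n)[-1]
        if abs(r1) < 1e-12 or r1 == r0:
            break
        s0, s1, r0 = s1, s1 - r1 * (s1 - s0) / (r1 - r0), r1
    return _integrate(s1, H, a, n)
```

For H = 4 or α = 0 the equation degenerates to f‴ = 0, whose solution under (39) is f(η) = 1 − η², which makes a convenient sanity check.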

We now apply the three proposed neural network models to this problem; the fitness function ε is developed for (38) based on inputs of the training set taken between η ∈ [0, 1] with a step size of 0.1; this is given as

ε = (1/11) ∑_{k=0}^{10} ( f̂‴_k + 100α f̂_k f̂′_k + (4 − H)α² f̂′_k )² + (1/3) ( (f̂_0 − 1)² + (f̂′_0)² + (f̂_10)² )   (40)
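Eq. (40) can be evaluated for any candidate solution supplied as a callable. The paper differentiates the networks analytically; the sketch below instead approximates f̂′ and f̂‴ by central finite differences, so it is only an illustration of the fitness structure (the function name, step size and differencing scheme are our assumptions).

```python
import numpy as np

def fitness(f_hat, H, alpha_deg, h=1e-3):
    """Unsupervised fitness of Eq. (40) for a candidate solution f_hat
    (callable eta -> f), with finite-difference derivatives."""
    a = np.deg2rad(alpha_deg)
    eta = np.arange(0.0, 1.0 + 1e-12, 0.1)   # the 11 training inputs

    def d1(x):   # central first derivative, O(h^2)
        return (f_hat(x + h) - f_hat(x - h)) / (2.0 * h)

    def d3(x):   # central third derivative, O(h^2)
        return (f_hat(x + 2*h) - 2*f_hat(x + h)
                + 2*f_hat(x - h) - f_hat(x - 2*h)) / (2.0 * h**3)

    residual = d3(eta) + 100.0 * a * f_hat(eta) * d1(eta) \
               + (4.0 - H) * a**2 * d1(eta)
    ode_term = np.mean(residual**2)           # (1/11) * sum over the grid
    bc_term = ((f_hat(0.0) - 1.0)**2 + d1(0.0)**2 + f_hat(1.0)**2) / 3.0
    return float(ode_term + bc_term)
```

With H = 4 (so the Hartmann term vanishes) and α = 0, the exact solution f(η) = 1 − η² gives ε ≈ 0, while f(η) = 1 − η violates only f′(0) = 0 and yields ε ≈ 1/3.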

Our goal is to search for the optimal weights in (40) for each neural network model to approximate the solutions of the Jeffery–Hamel equation. Learning of the weights for each model is done with IPM using the fixed parameter settings given in Table 1. The proposed solutions f̂_LS, f̂_RB and f̂_TS, obtained with one particular set of trained weights (given in Appendix Table A3) using DENN-LS-IPM, DENN-RB-IPM and DENN-TS-IPM, yield fitness values of 3.9034×10⁻¹⁰, 9.1579×10⁻¹⁰ and 3.2072×10⁻¹¹ respectively for case 1 of this problem. These solutions are given as

f̂_LS(η) = 1.053/(1 + e^−(1.294η+1.973)) + 2.281/(1 + e^−(2.720η+0.678)) − 1.509/(1 + e^−(−2.153η+0.023)) − 2.145/(1 + e^−(2.357η−4.094)) − 4.347/(1 + e^−(5.536η−10.477)) + 0.101/(1 + e^−(−3.258η+2.328)) − 1.184/(1 + e^−(2.198η−0.002)) − 3.027/(1 + e^−(2.538η−0.660)) + 0.475/(1 + e^−(3.384η+4.239)) + 0.485/(1 + e^−(2.179η+1.916))   (41)

f̂_RB(η) = −5.731 e^−(−1.649η+3.858)² − 0.641 e^−(1.027η+0.918)² + 1.942 e^−(1.042η+0.385)² − 1.222 e^−(0.211η−0.579)² − 1.943 e^−(0.719η−2.399)² − 0.395 e^−(−1.622η−0.716)² + 0.871 e^−(−2.118η+4.175)² + 1.098 e^−(0.289η−1.110)² − 0.876 e^−(−1.246η+0.462)² + 1.362 e^−(−1.108η+0.456)²   (42)

f̂_TS(η) = 2.181 − 10.222/(1 + e^−2(3.022η−6.238)) + 2.070/(1 + e^−2(0.250η−3.678)) − 5.030/(1 + e^−2(−0.023η+1.371)) + 4.132/(1 + e^−2(−1.195η+0.220)) − 4.136/(1 + e^−2(0.080η+1.017)) + 4.932/(1 + e^−2(1.232η+0.270)) + 6.072/(1 + e^−2(−0.361η−0.746)) − 0.636/(1 + e^−2(−0.457η−0.160)) + 0.904/(1 + e^−2(1.805η+1.552)) − 2.292/(1 + e^−2(1.541η−2.823))   (43)
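Each closed-form solution above is a weighted sum of one kind of basis term, so all three families can be evaluated with short helpers. The sketch below follows the apparent convention of Eqs. (41)–(43) and Table A3, where w is the input weight, β the bias and δ the output weight; the function names are ours, and the constant offsets that appear in the tan-sigmoid forms (e.g. the leading 2.181 in (43)) are left to the caller.

```python
import numpy as np

def f_ls(eta, w, delta, beta):
    """Log-sigmoid model, Eq. (41) form: sum_i delta_i / (1 + exp(-(w_i*eta + beta_i)))."""
    eta = np.asarray(eta, dtype=float)[..., None]
    w, delta, beta = (np.asarray(v, dtype=float) for v in (w, delta, beta))
    return np.sum(delta / (1.0 + np.exp(-(w * eta + beta))), axis=-1)

def f_rb(eta, w, delta, beta):
    """Radial-basis model, Eq. (42) form: sum_i delta_i * exp(-(w_i*eta + beta_i)**2)."""
    eta = np.asarray(eta, dtype=float)[..., None]
    w, delta, beta = (np.asarray(v, dtype=float) for v in (w, delta, beta))
    return np.sum(delta * np.exp(-(w * eta + beta)**2), axis=-1)

def f_ts(eta, w, delta, beta):
    """Tan-sigmoid model, Eq. (43) form (without constant offset):
    sum_i delta_i / (1 + exp(-2*(w_i*eta + beta_i)))."""
    eta = np.asarray(eta, dtype=float)[..., None]
    w, delta, beta = (np.asarray(v, dtype=float) for v in (w, delta, beta))
    return np.sum(delta / (1.0 + np.exp(-2.0 * (w * eta + beta))), axis=-1)
```

Feeding in the ten weight triples of any row of Tables A1–A3 reproduces the corresponding closed-form solution on an arbitrary grid of η values.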

Accordingly, for case 2 of this problem, the approximate solutions f̂_LS, f̂_RB and f̂_TS are obtained using a set of trained weights

Table 5
Comparison of results for cases 3 and 4 of Jeffery–Hamel problem 2.
Case 3 (H = 1000, α = −5°): reported |f − f̂_HAM| and proposed |f − f̂_LS|, |f − f̂_RB|, |f − f̂_TS|; Case 4 (H = 1000, α = 5°): reported |f − f̂_HPM| and proposed |f − f̂_LS|, |f − f̂_RB|, |f − f̂_TS|.

η    Case 3: HAM, LS, RB, TS | Case 4: HPM, LS, RB, TS
0.0  0.000000 1.89E−07 5.28E−07 9.51E−08 | 0.000000 1.34E−08 1.01E−07 8.54E−08
0.1  4.51E−09 1.69E−07 2.87E−07 1.05E−07 | 3.45E−07 1.62E−08 1.81E−07 1.09E−07
0.2  6.89E−09 1.66E−07 5.65E−08 1.01E−07 | 1.33E−06 3.46E−08 2.46E−07 1.11E−07
0.3  1.44E−08 1.49E−07 3.09E−08 9.04E−08 | 2.76E−06 4.99E−08 2.65E−07 1.15E−07
0.4  1.74E−08 1.24E−07 2.84E−08 9.29E−08 | 4.25E−06 6.13E−08 3.09E−07 1.01E−07
0.5  2.00E−08 1.27E−07 1.45E−07 9.37E−08 | 5.19E−06 9.98E−08 3.17E−07 5.59E−08
0.6  3.10E−08 1.02E−07 1.21E−07 7.22E−08 | 5.19E−06 1.24E−07 3.02E−07 3.20E−08
0.7  2.96E−08 7.77E−08 3.57E−08 6.76E−08 | 4.15E−06 1.62E−07 3.09E−07 1.23E−08
0.8  6.14E−08 7.27E−08 2.69E−08 7.55E−08 | 2.58E−06 2.43E−07 2.57E−07 9.58E−08
0.9  3.98E−08 2.92E−08 1.48E−07 6.02E−08 | 9.70E−07 3.24E−07 2.30E−07 1.76E−07
1.0  0.000000 1.13E−07 3.77E−07 4.39E−08 | 0.000000 3.22E−07 3.09E−07 1.78E−07

Table 4
Comparison of results for cases 1 and 2 of Jeffery–Hamel problem 2.
Case 1 (H = 250, α = 7.5°) and Case 2 (H = 500, α = 7.5°): reported |f − f̂_HPM| and proposed |f − f̂_LS|, |f − f̂_RB|, |f − f̂_TS| for each case.

η    Case 1: HPM, LS, RB, TS | Case 2: HPM, LS, RB, TS
0.0  0.000000 1.09E−08 2.29E−08 1.74E−08 | 0.000000 2.53E−08 3.93E−07 4.19E−10
0.1  2.78E−06 6.09E−09 1.08E−08 2.40E−08 | 3.28E−08 1.18E−08 3.97E−07 1.16E−08
0.2  1.10E−05 2.00E−08 2.65E−08 3.06E−08 | 2.52E−07 3.49E−08 4.39E−07 1.23E−08
0.3  2.40E−05 3.36E−08 5.23E−08 3.48E−08 | 4.56E−07 6.69E−08 4.71E−07 1.60E−08
0.4  4.10E−05 4.21E−08 7.43E−08 3.19E−08 | 8.56E−07 8.59E−08 4.61E−07 5.56E−09
0.5  6.12E−05 5.70E−08 8.19E−08 2.71E−08 | 1.36E−06 1.06E−07 5.39E−07 2.55E−08
0.6  8.46E−05 6.57E−08 9.89E−08 2.09E−08 | 1.94E−06 1.34E−07 6.30E−07 4.56E−08
0.7  1.10E−04 7.05E−08 1.09E−07 9.59E−09 | 2.61E−06 1.48E−07 6.95E−07 8.04E−08
0.8  1.27E−04 8.26E−08 1.14E−07 7.00E−10 | 3.59E−06 1.81E−07 8.68E−07 1.37E−07
0.9  9.83E−05 8.51E−08 1.23E−07 1.04E−08 | 4.58E−06 2.05E−07 1.03E−06 1.91E−07
1.0  0.000000 1.00E−07 1.33E−07 2.66E−08 | 0.000000 1.28E−07 1.13E−06 2.48E−07


(given in Appendix Table A3) for each model, yielding fitness values of 1.2235×10⁻⁹, 1.9435×10⁻⁸ and 1.2175×10⁻⁹ respectively; these solutions are given as

f̂_LS(η) = −6.752/(1 + e^−(5.098η−8.154)) − 1.036/(1 + e^−(−2.698η+2.222)) − 1.492/(1 + e^−(0.804η−2.523)) + 0.507/(1 + e^−(−3.044η+3.041)) + 1.890/(1 + e^−(3.517η+4.239)) − 3.816/(1 + e^−(1.496η−0.569)) − 3.688/(1 + e^−(−2.275η−0.757)) + 1.658/(1 + e^−(2.955η−4.732)) + 0.515/(1 + e^−(6.857η−9.100)) + 3.149/(1 + e^−(−1.130η+0.906))   (44)

f̂_RB(η) = 0.496 e^−(0.214η−0.973)² − 1.308 e^−(−1.510η−2.670)² − 0.832 e^−(−0.194η+3.770)² − 4.703 e^−(−1.006η+2.527)² − 1.086 e^−(0.068η+3.974)² − 1.808 e^−(−2.142η+4.229)² + 0.817 e^−(1.137η+0.026)² − 0.194 e^−(4.085η−6.544)² + 1.461 e^−(−3.484η−4.920)² + 3.730 e^−(−0.617η−4.284)²   (45)

f̂_TS(η) = 5.767 + 4.990/(1 + e^−2(−0.515η−0.373)) − 7.002/(1 + e^−2(−0.022η+2.085)) + 5.400/(1 + e^−2(0.996η+0.299)) − 5.054/(1 + e^−2(−0.959η+3.245)) − 8.418/(1 + e^−2(3.850η−6.773)) + 2.460/(1 + e^−2(−0.137η−1.556)) − 2.544/(1 + e^−2(1.922η−3.221)) − 0.988/(1 + e^−2(−1.924η−1.821)) − 2.570/(1 + e^−2(1.103η−0.237)) + 1.988/(1 + e^−2(−0.682η+1.584))   (46)

A set of weights learned using DENN-LS-IPM, DENN-RB-IPM and DENN-TS-IPM with respective fitness values of 3.8182×10⁻⁹, 9.7695×10⁻⁸ and 1.8809×10⁻⁹ for case 3, and 3.5651×10⁻⁹, 7.4276×10⁻⁹ and 3.4403×10⁻⁹ for case 4 of the problem, is plotted in Fig. 6.

Solutions for cases 1 and 2 of the problem are obtained for the three models using Eqs. (41)–(43) and (44)–(46) respectively, while for cases 3 and 4 the solutions f̂_LS, f̂_RB and f̂_TS are determined using the weights of Fig. 6 in Eqs. (15), (18) and (21), respectively. The proposed and reference solutions are plotted in Fig. 7 for inputs η ∈ [0, 1] with a step size of 0.1. It is seen for this case also that the proposed solutions closely match the reference results. In order to analyze the accuracy, values of the absolute error (AE) are tabulated for cases 1 and 2 in Table 4, and for cases 3 and 4 in Table 5. Values of AE for previously published solutions using the HPM solver [46] are also given in Table 4 for cases 1 and 2. Values of AE for

Fig. 8. Values of fitness against the number of independent executions of the proposed neural network models: (a), (b), (c) and (d) for problem 1, and (e), (f), (g) and (h) for problem 2.


solutions using HAM [44] are also tabulated in Table 5 for cases 3 and 4 of the problem.

MAE values calculated for HPM, DENN-LS-IPM, DENN-RB-IPM and DENN-TS-IPM are 5.0887×10⁻⁵, 5.2157×10⁻⁸, 7.6968×10⁻⁸ and 2.1275×10⁻⁸ respectively for case 1, while for case 2 the respective MAE values are 1.4249×10⁻⁶, 1.0244×10⁻⁷, 6.4108×10⁻⁷ and 7.0234×10⁻⁸. For case 3 the calculated MAE values are 2.0462×10⁻⁸, 1.1978×10⁻⁷, 1.6229×10⁻⁷ and 8.1628×10⁻⁸ for HAM, DENN-LS-IPM, DENN-RB-IPM and DENN-TS-IPM respectively, while for case 4 the calculated MAE values are 2.4330×10⁻⁶,

Fig. 9. Values of MAE against the number of independent executions of the proposed neural network models: (a), (b), (c) and (d) for problem 1, and (e), (f), (g) and (h) for problem 2.

Table 6
Convergence analysis of the results.
Entries are the percentage of runs meeting each criterion: fitness thresholds 10⁻⁴ / 10⁻⁶ / 10⁻⁸ and MAE thresholds 10⁻³ / 10⁻⁵ / 10⁻⁷.

Case  Method        Problem 1 fitness | Problem 1 MAE | Problem 2 fitness | Problem 2 MAE
1     DENN-LS-IPM   100 99 81 | 99 98 62 | 100 99 45 | 100 98 31
      DENN-RB-IPM   68 63 45 | 68 59 33 | 69 56 14 | 69 53 13
      DENN-TS-IPM   98 97 77 | 98 96 61 | 99 94 42 | 99 95 32
2     DENN-LS-IPM   100 97 34 | 100 93 16 | 100 97 18 | 100 95 17
      DENN-RB-IPM   78 55 21 | 79 52 11 | 64 52 5 | 64 48 4
      DENN-TS-IPM   97 91 48 | 97 89 30 | 99 93 29 | 99 92 22
3     DENN-LS-IPM   100 100 85 | 100 100 69 | 99 93 31 | 99 86 20
      DENN-RB-IPM   72 59 39 | 71 59 31 | 72 47 10 | 72 47 8
      DENN-TS-IPM   100 97 82 | 100 96 66 | 100 96 36 | 100 95 29
4     DENN-LS-IPM   100 100 90 | 100 99 63 | 100 96 37 | 100 96 26
      DENN-RB-IPM   76 74 50 | 76 73 31 | 77 53 9 | 77 55 10
      DENN-TS-IPM   97 94 80 | 97 94 62 | 98 84 42 | 98 87 28


1.3177×10⁻⁷, 2.5694×10⁻⁷ and 9.7500×10⁻⁸ respectively for the four algorithms. It is found that the results of the three proposed neural networks optimized with IPM match closely with the reference solutions for all four cases. Solutions employing f̂_TS are relatively more accurate for each case of this problem as well.

6. Statistical analysis of the solvers

Statistical analysis based on a large number of independent runs of each neural network model is performed for all cases of both Jeffery–Hamel problems in order to investigate the reliability and effectiveness of the schemes.

Accuracy and convergence of the proposed neural network models are examined on the basis of one hundred independent runs of each solver. These runs are executed for each model optimized with IPM for all cases of both Jeffery–Hamel flow equations. The values of fitness ε achieved and the MAE are plotted against the number of independent runs (rearranged on the basis of fitness or MAE values) in Figs. 8 and 9 respectively, for each of the four variants of the two problems. Results are presented in the figures on a semi-log scale in order to highlight the small variations. It can be inferred from Figs. 8 and 9 that the lower the value of fitness achieved, the lower is the MAE, i.e., the precision of the results is better, and vice versa. It is also seen that the lowest values of fitness and MAE obtained lie in the ranges 10⁻¹⁰ to 10⁻¹² and 10⁻⁵ to 10⁻⁸ respectively for all three proposed models, but

DENN-TS-IPM generally achieves the lowest values. Secondly, DENN-LS-IPM consistently provides low values of fitness and MAE for each independent run; a few independent runs of DENN-TS-IPM also show divergence, however this trend is seen more frequently for DENN-RB-IPM.

Reliability and effectiveness of the proposed models are judged on the basis of the percentage of convergent runs, i.e., runs achieving pre-defined criteria based on fitness and MAE values. Results of the convergence analysis are tabulated in Table 6. It is seen that, based on the criterion fitness ε ≤ 10⁻⁶, the average convergence rates for DENN-LS-IPM, DENN-RB-IPM and DENN-TS-IPM are 99.00%, 62.75% and 94.75% for problem 1, and 96.25%, 52.00% and 91.75% for problem 2, respectively. On the other hand, for the criterion MAE ≤ 10⁻⁵, the average convergence rates for DENN-LS-IPM, DENN-RB-IPM and DENN-TS-IPM are 97.50%, 60.75% and 93.75% for problem 1, and 93.75%, 50.75% and 92.25% for problem 2, respectively.
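The convergence-rate entries of Table 6 are simply the fraction of the 100 independent runs whose final fitness (or MAE) falls within a given tolerance. A one-line sketch of that computation (our helper name):

```python
import numpy as np

def percent_convergent(values, tol):
    """Percentage of independent runs whose fitness (or MAE) is within
    the tolerance tol, as in the convergence criteria of Table 6."""
    values = np.asarray(values, dtype=float)
    return 100.0 * float(np.mean(values <= tol))
```

Applying it once per (case, method, threshold) combination over the 100 stored per-run values reproduces one cell of the table.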

Accuracy and convergence of the proposed neural network models are investigated further on the basis of the statistical parameters of mean, standard deviation (STD) and minimum (MIN) values of AE. Results are obtained for the three proposed models optimized with IPM over one hundred independent runs for each case of both Jeffery–Hamel flow equations. Results of the statistical parameters for the runs with fitness ε ≤ 10⁻⁴ are provided in Table 7 for inputs between 0.1 and 0.9 with a step size of 0.2. It is found that the MIN, mean and STD values generally lie in the ranges 10⁻⁸ to 10⁻¹⁰, 10⁻⁵ to 10⁻⁶ and 10⁻⁴ to 10⁻⁵ respectively, for

Table 7
Results of statistical analysis for MHD Jeffery–Hamel problems.
Values of AE at inputs η = 0.1, 0.3, 0.5, 0.7, 0.9 (problem I | problem II).

Case  Method        Mode  Problem I: 0.1, 0.3, 0.5, 0.7, 0.9 | Problem II: 0.1, 0.3, 0.5, 0.7, 0.9
I     DENN-LS-IPM   MIN   1.07E−08 1.09E−09 6.52E−10 1.51E−09 4.85E−09 | 6.09E−09 1.19E−08 1.69E−08 2.33E−09 2.81E−08
                    Mean  6.09E−05 5.51E−05 5.43E−05 6.49E−05 9.21E−05 | 5.11E−06 4.22E−06 3.47E−06 5.13E−06 9.24E−06
                    STD   5.60E−04 5.07E−04 5.01E−04 6.00E−04 8.53E−04 | 1.21E−05 1.08E−05 1.10E−05 1.61E−05 2.56E−05
      DENN-RB-IPM   MIN   4.27E−09 6.16E−09 1.89E−08 1.54E−08 5.51E−10 | 3.60E−09 1.61E−08 2.94E−09 5.80E−08 2.85E−08
                    Mean  5.36E−05 4.62E−05 3.69E−05 3.59E−05 5.26E−05 | 6.82E−05 7.33E−05 8.63E−05 1.14E−04 1.69E−04
                    STD   1.65E−04 1.40E−04 1.07E−04 1.06E−04 1.48E−04 | 1.83E−04 2.18E−04 2.53E−04 3.55E−04 5.74E−04
      DENN-TS-IPM   MIN   3.49E−09 8.55E−10 3.51E−09 3.48E−09 1.25E−09 | 2.40E−08 1.61E−08 2.91E−09 5.76E−09 5.13E−10
                    Mean  2.77E−05 2.34E−05 2.29E−05 2.89E−05 4.34E−05 | 1.02E−05 8.80E−06 6.83E−06 9.78E−06 1.89E−05
                    STD   2.43E−04 2.03E−04 2.01E−04 2.58E−04 3.92E−04 | 3.28E−05 2.77E−05 1.85E−05 2.46E−05 4.57E−05
II    DENN-LS-IPM   MIN   1.41E−09 9.31E−09 2.07E−08 4.46E−08 3.04E−09 | 1.18E−08 1.61E−08 1.83E−08 8.38E−09 4.67E−08
                    Mean  1.76E−05 1.85E−05 1.75E−05 1.25E−05 8.05E−06 | 3.06E−05 2.70E−05 1.93E−05 1.41E−05 1.44E−05
                    STD   4.26E−05 4.50E−05 4.46E−05 3.87E−05 2.57E−05 | 2.18E−04 2.01E−04 1.56E−04 1.03E−04 6.34E−05
      DENN-RB-IPM   MIN   5.84E−08 4.53E−08 9.68E−08 5.87E−08 2.95E−08 | 1.24E−08 7.59E−08 1.94E−07 6.17E−08 7.50E−07
                    Mean  1.22E−04 1.22E−04 1.15E−04 9.11E−05 3.09E−05 | 1.04E−04 8.97E−05 8.42E−05 1.24E−04 2.13E−04
                    STD   2.95E−04 2.80E−04 2.63E−04 2.18E−04 1.02E−04 | 3.73E−04 3.68E−04 3.30E−04 5.17E−04 7.69E−04
      DENN-TS-IPM   MIN   2.81E−09 2.63E−09 3.48E−09 8.71E−09 3.29E−10 | 1.16E−08 1.60E−08 2.21E−08 7.33E−10 1.22E−07
                    Mean  2.54E−05 4.17E−05 4.40E−05 3.11E−05 1.48E−05 | 1.96E−05 1.66E−05 1.20E−05 8.37E−06 1.44E−05
                    STD   9.67E−05 2.33E−04 2.64E−04 1.86E−04 6.92E−05 | 6.74E−05 5.83E−05 4.00E−05 2.47E−05 4.24E−05
III   DENN-LS-IPM   MIN   1.67E−09 1.76E−09 3.79E−09 3.96E−09 1.38E−09 | 1.65E−09 3.87E−08 2.41E−08 5.41E−09 5.14E−09
                    Mean  1.39E−06 1.06E−06 6.97E−07 8.99E−07 1.36E−06 | 2.71E−05 2.87E−05 2.67E−05 1.90E−05 1.19E−05
                    STD   4.21E−06 3.22E−06 1.90E−06 2.31E−06 3.74E−06 | 7.39E−05 7.25E−05 6.42E−05 4.34E−05 3.20E−05
      DENN-RB-IPM   MIN   2.38E−09 1.17E−08 1.15E−08 2.04E−08 5.99E−10 | 2.06E−07 3.09E−08 9.05E−08 3.57E−08 3.50E−08
                    Mean  1.05E−04 2.23E−04 1.23E−04 2.34E−04 3.47E−04 | 9.37E−05 1.02E−04 9.82E−05 7.50E−05 2.90E−05
                    STD   2.83E−04 1.32E−03 6.64E−04 1.39E−03 2.00E−03 | 1.55E−04 1.69E−04 1.70E−04 1.39E−04 5.46E−05
      DENN-TS-IPM   MIN   4.70E−09 4.17E−09 2.96E−09 1.73E−09 1.84E−09 | 4.22E−08 3.39E−08 6.34E−09 2.48E−08 1.79E−08
                    Mean  1.20E−05 9.93E−06 7.57E−06 8.57E−06 1.29E−05 | 5.45E−05 5.85E−05 5.53E−05 4.07E−05 1.28E−05
                    STD   5.49E−05 4.60E−05 3.80E−05 4.47E−05 6.89E−05 | 3.26E−04 3.49E−04 3.31E−04 2.49E−04 6.98E−05
IV    DENN-LS-IPM   MIN   4.98E−09 6.15E−09 1.99E−09 3.37E−09 2.69E−09 | 1.05E−08 4.99E−08 8.67E−09 1.86E−09 1.91E−08
                    Mean  2.26E−06 1.73E−06 1.52E−06 1.91E−06 3.03E−06 | 1.10E−05 9.61E−06 6.85E−06 4.09E−06 7.21E−06
                    STD   8.91E−06 6.55E−06 5.84E−06 8.79E−06 1.43E−05 | 2.11E−05 1.86E−05 1.33E−05 6.32E−06 1.15E−05
      DENN-RB-IPM   MIN   2.53E−09 2.04E−10 4.76E−09 6.50E−09 1.32E−08 | 1.27E−07 6.42E−08 1.12E−07 2.36E−08 8.84E−08
                    Mean  8.29E−06 1.21E−05 1.38E−05 1.36E−05 1.27E−05 | 1.00E−04 7.51E−05 5.66E−05 8.56E−05 1.63E−04
                    STD   2.66E−05 6.11E−05 7.13E−05 5.83E−05 3.91E−05 | 2.70E−04 2.11E−04 1.54E−04 2.47E−04 5.04E−04
      DENN-TS-IPM   MIN   3.99E−11 3.94E−11 1.83E−10 1.86E−09 7.11E−10 | 1.47E−08 6.05E−09 3.45E−09 4.29E−10 3.32E−08
                    Mean  8.84E−06 8.32E−06 8.51E−06 1.02E−05 1.37E−05 | 2.89E−05 2.60E−05 2.00E−05 1.31E−05 1.66E−05
                    STD   4.74E−05 4.42E−05 4.51E−05 5.66E−05 8.02E−05 | 7.57E−05 6.90E−05 5.61E−05 4.41E−05 4.97E−05


the three proposed algorithms for both problems. It is observed that the best results of the statistical parameters are obtained for the DENN-LS-IPM method.

Our analysis of the proposed solvers continues with the global mean absolute error (GMAE) and mean fitness (MF), given by the following expressions:

GMAE = (1/R) ∑_{r=1}^{R} [ (1/K) ∑_{k=1}^{K} | f_k − f̂_k^r | ],   MF = (1/R) ∑_{r=1}^{R} ε_r   (47)

where K is the total number of inputs, R is the total number of independent runs of the solver, f_k is the numerical solution for the kth input, f̂_k^r is the approximate solution for the kth input of the rth independent run, and ε_r represents the fitness value for the rth run of the models. In our study we take η ∈ [0, 1] with a step size of 0.1, i.e., K = 11, and 100 independent runs, i.e., R = 100. Values of GMAE and MF, along with the respective STD, are calculated for 100 independent runs of each proposed model, and the results are presented in Table 8 for all cases of both Jeffery–Hamel problems. It is seen that the values of GMAE and MF are best for the DENN-LS-IPM algorithm. Values of GMAE and MF along with STD are also calculated for the three models over the runs with fitness ε ≤ 10⁻⁴, and the results are included in Table 8 for all the cases. The values of GMAE and MF are now much improved, since runs with inferior fitness values are not considered.
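Given the per-run solution vectors and fitness values, Eq. (47) amounts to two averages. A minimal sketch (our function name; f_hat_runs stacks one row per independent run):

```python
import numpy as np

def gmae_and_mf(f_ref, f_hat_runs, fitness_runs):
    """Global MAE and mean fitness of Eq. (47): f_ref has shape (K,),
    f_hat_runs has shape (R, K), fitness_runs has shape (R,)."""
    f_ref = np.asarray(f_ref, dtype=float)
    f_hat_runs = np.asarray(f_hat_runs, dtype=float)
    # Mean over runs of the per-run MAE equals the overall mean of |errors|.
    gmae = float(np.mean(np.abs(f_hat_runs - f_ref[None, :])))
    mf = float(np.mean(fitness_runs))
    return gmae, mf
```

Filtering the rows of f_hat_runs and fitness_runs by a fitness threshold before the call reproduces the "runs with fitness ε ≤ 10⁻⁴" columns of Table 8.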

Computational complexity of the proposed models is examined based on the time taken for execution by each solver. Results in terms of mean execution time (MET) along with STD, based on 100 independent runs as well as on the independent runs with fitness ε ≤ 10⁻⁴, are provided in Table 8 for each proposed solver. No significant difference in the values of MET is observed between DENN-LS-IPM and DENN-TS-IPM for either problem. The value of MET is found to be lowest for DENN-RB-IPM compared with the other two approaches. DENN-RB-IPM thus has the advantage of lower computational complexity, but this aspect is overshadowed by its inferior accuracy and convergence compared to the other two solvers. The numerical experimentation presented in this article was performed on a Dell Latitude D630 laptop computer, with an Intel(R) Core(TM) 2 Duo CPU [email protected] GHz processor and 2.00 GB RAM, running MATLAB version 2011a.
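The MET statistic is a plain wall-clock average over repeated solver invocations. A sketch of how such timings could be collected (our helper; the paper's timings were taken in MATLAB, not with this code):

```python
import time

def mean_execution_time(solver, runs=5):
    """Mean wall-clock time and its standard deviation over repeated
    independent executions of a solver callable, as in the MET column."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        solver()                       # one independent run of the solver
        times.append(time.perf_counter() - t0)
    mean = sum(times) / len(times)
    var = sum((t - mean) ** 2 for t in times) / len(times)
    return mean, var ** 0.5
```

time.perf_counter is preferred over time.time here because it is monotonic and has the highest available resolution for interval measurement.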

7. Conclusions

Solvers based on neural network models using log-sigmoid, radial basis and tan-sigmoid activation functions, optimized with an interior point method, can provide reliable solutions for the nonlinear transformed problem of the MHD Jeffery–Hamel flow equations.

A comparative study of the results of the three proposed models shows that the solutions in the case of DENN-LS-IPM, DENN-RB-IPM and DENN-TS-IPM match up to 7–9 decimal places of accuracy. Generally, the results presented here are better in precision than those reported in the literature for well-known solvers, including DTM, HPM, HAM, OHAM and VIM, for all the cases studied here.

Statistical analysis based on 100 independent runs of each proposed model shows that the values of MIN, mean, STD, MAE, GMAE and MF have no significant difference for DENN-LS-IPM and DENN-TS-IPM; results for DENN-RB-IPM are however slightly inferior. Generally, the most accurate results are achieved by DENN-TS-IPM, the most convergent solutions are obtained with DENN-LS-IPM, and the most computationally efficient method is DENN-RB-IPM.

The proposed solvers have some advantages over other numerical techniques: solutions are readily available for any continuous input within the entire trained interval, whereas other numerical solvers give results only on a predefined grid of discrete inputs. Also, state-of-the-art analytical solvers like ADM, VIM, HPM and HAM give accurate results only in a close vicinity of the initial guess; as the input range expands, they start to accumulate error. The proposed neural network models, on the other hand, are less prone to these effects. Simplicity of concept, ease of

Table 8
Comparative analysis of the results.
For each method, columns give MF, GMAE and MET (s), each with STD, first over all runs and then over the runs with fitness ε ≤ 10⁻⁴.

Problem:case  Method   All runs: MF, STD, GMAE, STD, MET, STD | Runs with fitness ε ≤ 10⁻⁴: MF, STD, GMAE, STD, MET, STD
1:1  DENN-LS  6.82E−05 6.49E−04 9.62E−07 8.78E−06 108.0 25.1 | 6.82E−05 6.49E−04 9.62E−07 8.78E−06 108.0 25.1
     DENN-RB  1.08E−01 2.40E−01 6.06E−02 1.18E−01 88.7 36.0 | 4.05E−05 1.27E−04 2.44E−06 1.02E−05 74.4 35.3
     DENN-TS  6.30E−03 4.91E−02 3.32E−03 2.33E−02 104.0 20.1 | 3.08E−05 2.87E−04 1.10E−06 1.04E−05 103.3 19.6
1:2  DENN-LS  1.51E−05 3.95E−05 8.61E−07 1.77E−06 106.8 25.4 | 1.51E−05 3.95E−05 8.61E−07 1.77E−06 106.8 25.4
     DENN-RB  9.65E−02 2.43E−01 4.27E−02 1.01E−01 104.0 78.7 | 9.39E−05 2.42E−04 5.68E−06 1.40E−05 77.2 33.9
     DENN-TS  4.84E−03 4.06E−02 2.05E−03 1.70E−02 103.1 23.2 | 3.26E−05 1.95E−04 2.17E−06 1.14E−05 103.3 23.1
1:3  DENN-LS  1.14E−06 3.34E−06 5.56E−08 2.29E−07 105.8 18.8 | 1.14E−06 3.34E−06 5.56E−08 2.29E−07 105.8 18.8
     DENN-RB  9.29E−02 2.20E−01 5.30E−02 1.10E−01 84.4 38.0 | 1.75E−04 1.13E−03 1.01E−05 4.66E−05 70.7 36.4
     DENN-TS  1.06E−05 5.45E−05 4.19E−07 1.82E−06 108.7 25.1 | 1.06E−05 5.45E−05 4.19E−07 1.82E−06 108.7 25.1
1:4  DENN-LS  2.21E−06 1.01E−05 7.18E−08 3.62E−07 101.4 24.3 | 2.21E−06 1.01E−05 7.18E−08 3.62E−07 101.4 24.3
     DENN-RB  8.36E−02 2.16E−01 4.51E−02 1.04E−01 88.4 35.2 | 1.08E−05 5.07E−05 5.43E−07 1.70E−06 78.8 35.1
     DENN-TS  3.78E−03 3.52E−02 1.69E−03 1.66E−02 103.1 22.2 | 1.01E−05 5.79E−05 3.47E−07 1.86E−06 102.0 21.7
2:1  DENN-LS  5.81E−06 1.75E−05 3.30E−07 8.66E−07 106.6 15.8 | 5.81E−06 1.75E−05 3.30E−07 8.66E−07 106.6 15.8
     DENN-RB  1.02E−01 2.36E−01 5.17E−02 1.12E−01 82.3 40.0 | 9.01E−05 3.10E−04 7.60E−06 2.50E−05 66.8 39.0
     DENN-TS  1.62E−03 1.70E−02 3.76E−04 3.76E−03 107.0 19.0 | 1.16E−05 3.40E−05 9.07E−07 2.24E−06 107.0 18.2
2:2  DENN-LS  2.14E−05 1.58E−04 1.50E−06 9.77E−06 105.3 16.2 | 2.14E−05 1.58E−04 1.50E−06 9.77E−06 105.3 16.2
     DENN-RB  1.34E−01 2.61E−01 7.38E−02 1.18E−01 79.8 40.5 | 1.08E−04 4.51E−04 1.20E−05 3.95E−05 59.6 37.4
     DENN-TS  8.55E−05 7.22E−04 7.15E−06 5.66E−05 106.4 18.3 | 1.49E−05 5.11E−05 1.52E−06 5.65E−06 106.1 18.2
2:3  DENN-LS  1.10E−04 9.61E−04 1.56E−05 1.44E−04 105.4 15.2 | 2.31E−05 6.24E−05 1.26E−06 3.30E−06 105.2 15.1
     DENN-RB  1.35E−01 2.90E−01 5.82E−02 1.17E−01 85.7 36.8 | 7.69E−05 1.44E−04 7.22E−06 1.34E−05 72.9 35.9
     DENN-TS  4.50E−05 2.82E−04 3.43E−06 2.25E−05 107.0 17.0 | 4.50E−05 2.82E−04 3.43E−06 2.25E−05 107.0 17.0
2:4  DENN-LS  8.15E−06 1.60E−05 7.24E−07 1.48E−06 105.2 15.9 | 8.15E−06 1.60E−05 7.24E−07 1.48E−06 105.2 15.9
     DENN-RB  9.99E−02 2.40E−01 5.00E−02 1.07E−01 89.6 34.2 | 8.75E−05 2.69E−04 1.14E−05 3.00E−05 80.3 33.7
     DENN-TS  2.43E−03 2.62E−02 1.11E−03 1.10E−02 108.8 16.8 | 2.16E−05 6.16E−05 1.79E−06 3.40E−06 107.7 16.7


implementation, and broader applicability domains are other perks of the proposed scheme.

In the future, one may investigate other computational intelligence techniques based on neural network models optimized with global and local search algorithms, such as ant/bee colony optimization, genetic programming, differential evolution, the active-set method and so on. Moreover, one may explore extending these methodologies to solve stiff, highly nonlinear differential equations with singularities that require convergent solutions on larger domains.

Appendix A

A set of optimal weights obtained by the three proposed neural network models is listed for cases 1 and 2 of both Jeffery–Hamel

Table A3
A set of trained weights by the proposed algorithms for problem 2, case 1.

Param  DENN-LS-IPM        DENN-RB-IPM        DENN-TS-IPM        | Param  DENN-LS-IPM        DENN-RB-IPM        DENN-TS-IPM
w1     1.29383580548973   −1.64904229307321  3.02200537410934   | w6     −3.25858805400103  −1.62097927961229  1.23217772794365
w2     2.72040907012993   1.02701744726601   0.25049488639868   | w7     2.19805276505382   −2.11836269574770  −0.36131419775844
w3     −2.15291324536464  1.04171211529597   −0.02262512960569  | w8     2.53780488317914   0.28975395519700   −0.45703046925109
w4     2.35663171077017   0.21136638382281   −1.19547270514934  | w9     3.38379295852069   −1.24646591403150  1.80481184032346
w5     5.53562366054690   0.71939030996116   0.07993332036844   | w10    2.17939546804700   −1.10782953086663  1.54051302593860
δ1     1.05397353624091   −5.73148001916567  −5.11127362425922  | δ6     0.10062520354633   −0.39544588284876  2.45617759173960
δ2     2.28094854631283   −0.64186665967026  1.03502657453309   | δ7     −1.18384592410068  0.87145556692597   3.03613442149736
δ3     −1.50860618373457  1.94221371467523   −2.51473160428504  | δ8     −3.02681759496829  1.09812382758286   −0.31877368878262
δ4     −2.14531352133647  −1.22190256386126  2.06564433140607   | δ9     0.47489808996629   −0.87625336268256  0.45228729228854
δ5     −4.34724989201021  −1.94290007762016  −2.13560460554486  | δ10    0.48507488358599   1.36204436163391   −1.14607726948600
β1     1.97269048002822   3.85790609308771   −6.23834091465446  | β6     2.32792186723425   −0.71589554332693  0.26983771448967
β2     0.67802598175066   0.91846193840404   −3.67805240346944  | β7     −0.00172370936942  4.17547093783193   −0.74621066784095
β3     0.02348040726167   0.38518120297877   1.37075564923670   | β8     −0.66044540903235  −1.11037560558669  −0.15950293388720
β4     −4.09428379609072  −0.57881246180996  0.21988518218906   | β9     4.23918469140570   0.46186612111985   1.55218181744405
β5     −10.4765953197995  −2.39937017319116  1.01750504238210   | β10    1.91632315646121   0.45568408670640   −2.82330809189935

Table A1
A set of optimal weights by the proposed algorithms for problem 1, case 1.

Param  DENN-LS-IPM        DENN-RB-IPM        DENN-TS-IPM        | Param  DENN-LS-IPM        DENN-RB-IPM        DENN-TS-IPM
w1     2.55247178479533   −1.96990137079370  1.63725980333545   | w6     1.61087039532160   −0.54909774163472  1.36605969202283
w2     1.95269037533690   0.45935460938668   −0.75266935463272  | w7     −2.07857075111229  1.31940967495697   −1.68775154826458
w3     −0.72311279044445  1.21213981435795   −1.36636376564675  | w8     0.53316125775486   0.32957901102342   −0.42450255009321
w4     2.80011004801197   0.26461685204889   −0.38995502552010  | w9     0.93399500528796   0.89040084464335   −1.38276355776529
w5     0.75129663124973   −0.33186575859682  0.81113119044954   | w10    2.62880289538368   −1.07962526800956  1.20453774349475
δ1     2.28162314347278   −0.15772619505266  0.90264607777444   | δ6     −0.42630430285335  2.76199176846687   0.05959230307379
δ2     −2.19607728627685  −0.34403128366462  2.33664076156910   | δ7     1.04967040302896   1.07241540927952   −0.97065078501098
δ3     −2.03575753535678  −0.20981592493232  0.91176895906154   | δ8     1.77456474904275   −1.49634486299951  0.58753907389682
δ4     2.25511354234274   0.81864313224521   −0.37322124803457  | δ9     −0.05406007682392  0.57267374121454   0.05140356388757
δ5     2.86648002029577   0.80196467181288   1.68285169484214   | δ10    −3.81740310893333  −1.11271088417097  2.12281295679561
β1     0.79542279582408   −0.68328022459476  0.20334283009714   | β6     0.61167848423068   2.18867773140803   −1.41309902866579
β2     −0.87723405399637  −1.16481894128103  −0.25437743322152  | β7     3.15335994009475   0.32621813795507   −0.71485530147053
β3     1.98976961276162   −0.95515371533760  0.36409046213356   | β8     −0.68726178202950  −1.09226802368107  1.06330755773466
β4     0.45135865379181   0.33193049649563   0.91446718171858   | β9     2.19706776802048   −0.23672241918686  0.45742171429313
β5     −1.23248574597516  1.91967499416614   −2.41817071283921  | β10    −0.50537750820965  −0.83254099761828  1.68611473129906

Table A2A set of trained weights by the proposed algorithms for problem 1, case II.

DENN-LS-IPM DENN-RB-IPM DENN-TS-IPM DENN-LS-IPM DENN-RB-IPM DENN-TS-IPM

      DENN-LS-IPM        DENN-RB-IPM        DENN-TS-IPM              DENN-LS-IPM        DENN-RB-IPM        DENN-TS-IPM
w1   −3.15330010911646  −0.86327753850623  −1.71316860149002   w6   2.69962634333704   1.76502247435229   2.80953779741088
w2   −5.07709905889277  −2.14329777029704  −0.22288219316737   w7   1.46436560483411  −0.55035726655734  −0.46124740751935
w3   −0.31949058135594   0.18043782611443   0.05022218505261   w8  −1.86678648775949  −1.53007084786391  −1.96762540216169
w4   −3.96793469110788   0.05494120600751   2.27114701660815   w9  −4.08725171143307  −1.06442178141776   0.07187770790212
w5   −3.84172445849521   2.88095281565948   0.50836296081341   w10  2.55439832847400  −0.94067832911326  −0.30644755548092
δ1   −4.32323745519787   0.06281601514287   0.33726436428287   δ6  −2.61177355122537  −0.69397435511448  −1.06390605896331
δ2    3.83507904114042  −2.19636267870008   1.01120511554572   δ7  −1.21346188665441   0.29238533433434   1.56633459375693
δ3    2.36232071678325   3.97448204409600  −1.24203628217266   δ8  −0.13353201191577  −1.52517567751001  −2.07326305081451
δ4   −3.51475742139208  −1.83539555254923  −0.61305072089672   δ9   2.50672711820647  −0.46817887662457   0.60927378082432
δ5   −0.92251330142251   0.08122678946944   0.33034447748880   δ10  1.51675936109168   0.12575943565878  −0.24261288094475
β1    4.97384509149379   0.37755289492621   1.73284439713291   β6  −2.77773242846086  −2.41427327406041  −3.70521075317477
β2    6.88957130342878   3.49009322756889  −0.22192666109191   β7  −2.73658063101938  −1.86215142827093  −2.64162122366604
β3   −3.60346520424723   1.09547231359956   1.93270895791433   β8  −0.23467715750552  −2.22253293967023  −2.53305169930440
β4   −4.91037949711531   1.92241036754666  −2.60310976424415   β9   4.75373106062793  −0.91314347529376  −0.18367282172499
β5    4.43957376602549  −3.90111355394994   0.47329900720386   β10 −2.30485049461983  −2.13795181646492   0.46681252916344

M.A.Z. Raja, R. Samar / Neurocomputing ∎ (∎∎∎∎) ∎∎∎–∎∎∎ 14

Please cite this article as: M.A.Z. Raja, R. Samar, Numerical treatment for nonlinear MHD Jeffery–Hamel problem using neural networks optimized with interior point algorithm, Neurocomputing (2013), http://dx.doi.org/10.1016/j.neucom.2013.07.013

Problems. These weights are used in the first equation of the set (15) to generate results of the proposed approaches based on the DENN-LS-IPM, DENN-RB-IPM and DENN-TS-IPM algorithms.

See Tables A1–A4.
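For a reader who wants to reproduce the reported curves from the tabulated weights, the mapping from one weight set to the approximate solution can be sketched as follows. This is a minimal illustration assuming the standard feed-forward form used by the log-sigmoid network (DENN-LS-IPM), i.e. f̂(η) = Σᵢ βᵢ σ(wᵢη + δᵢ) with σ the log-sigmoid; the function names and the two-neuron demo weights below are hypothetical placeholders, not values taken from the tables.

```python
import math

def logsig(x):
    """Log-sigmoid activation: sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def f_hat(eta, w, delta, beta):
    """Approximate solution f^(eta) = sum_i beta[i] * logsig(w[i]*eta + delta[i]).

    w, delta and beta are the input weights, biases and output weights of one
    trained network, i.e. one column triple (w_i, delta_i, beta_i) from the
    appendix tables.
    """
    return sum(b * logsig(wi * eta + d) for wi, d, b in zip(w, delta, beta))

# Hypothetical two-neuron weights, purely to show the call shape; a real
# reconstruction would use the ten (w_i, delta_i, beta_i) triples of one
# column in Tables A1-A4.
w, delta, beta = [1.0, -0.5], [0.0, 0.3], [0.8, 0.2]
profile = [f_hat(eta / 10.0, w, delta, beta) for eta in range(11)]
```

The radial-basis and tan-sigmoid variants differ only in the activation function substituted for `logsig`.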

References

[1] G.B. Jeffery, The two-dimensional steady motion of a viscous fluid, Philosophical Magazine 6 (29) (1915) 455–465.

[2] G. Hamel, Spiralförmige Bewegungen zäher Flüssigkeiten, Jahresbericht der DMV – Deutsche Mathematiker-Vereinigung 25 (1916) 34–60.

[3] G.K. Batchelor, An Introduction to Fluid Dynamics, Cambridge University Press, 1967.

[4] R.M. Sadri, Channel Entrance Flow, Ph.D. Thesis, Department of Mechanical Engineering, The University of Western Ontario, 1997.

[5] M. Hamadiche, J. Scott, D. Jeandel, Temporal stability of Jeffery–Hamel flow, Journal of Fluid Mechanics 268 (1994) 71–88.

[6] L.E. Fraenkel, Laminar flow in symmetrical channels with slightly curved walls. I: On the Jeffery–Hamel solutions for flow between plane walls, Proceedings of the Royal Society of London A 267 (1962) 119–138.

[7] O.D. Makinde, P.Y. Mhone, Hermite–Padé approximation approach to MHD Jeffery–Hamel flows, Applied Mathematics and Computation 181 (2006) 966–972.

[8] H. Schlichting, Boundary-Layer Theory, McGraw-Hill, New York, 2000.

[9] W.I. Axford, The magnetohydrodynamic Jeffery–Hamel problem for a weakly conducting fluid, Quarterly Journal of Mechanics & Applied Mathematics 14 (1961) 335–351.

[10] M. Jalaal, D.D. Ganji, G. Ahmadi, Analytical investigation on acceleration motion of a vertically falling spherical particle in incompressible Newtonian media, Advanced Powder Technology 21 (3) (2010) 298–304.

[11] M. Jalaal, D.D. Ganji, On unsteady rolling motion of spheres in inclined tubes filled with incompressible Newtonian fluids, Advanced Powder Technology 22 (1) (2011) 58–67, http://dx.doi.org/10.1016/j.apt.2010.03.011.

[12] M. Jalaal, D.D. Ganji, An analytical study on motion of a sphere rolling down an inclined plane submerged in a Newtonian fluid, Powder Technology 198 (1) (2010) 82–92.

[13] D.D. Ganji, A. Sadighi, Application of He's homotopy-perturbation method to nonlinear coupled systems of reaction–diffusion equations, International Journal of Nonlinear Sciences and Numerical Simulation 7 (4) (2006) 411–418, http://dx.doi.org/10.1515/IJNSNS.2006.7.4.411.

[14] G. Domairry, A. Mohsenzadeh, M. Famouri, The application of homotopy analysis method to solve nonlinear differential equation governing Jeffery–Hamel flow, Communications in Nonlinear Science and Numerical Simulation 14 (2008) 85–95.

[15] S.J. Liao, The Proposed Homotopy Analysis Technique for the Solution of Nonlinear Problems, Ph.D. Thesis, Shanghai Jiao Tong University, 1992.

[16] Q. Esmaili, A. Ramiar, E. Alizadeh, D.D. Ganji, An approximation of the analytical solution of the Jeffery–Hamel flow by decomposition method, Physics Letters A 372 (2008) 3434–3439.

[17] O.D. Makinde, Effect of arbitrary magnetic Reynolds number on MHD flows in convergent–divergent channels, International Journal of Numerical Methods for Heat & Fluid Flow 18 (6) (2008) 697–707.

[18] R. Hosseini, S. Poozesh, S. Dinarvand, MHD flow of an incompressible viscous fluid through convergent or divergent channels in presence of a high magnetic field, Journal of Applied Mathematics 2012 (2012), http://dx.doi.org/10.1155/2012/157067 (Article ID 157067, 12 pages).

[19] S. Dinarvand, Reliable treatments of differential transform method for two-dimensional incompressible viscous flow through slowly expanding or contracting porous walls with small-to-moderate permeability, International Journal of Physical Sciences 7 (8) (2012) 1166–1174.

[20] J.H. He, A coupling method of a homotopy technique and a perturbation technique for non-linear problems, International Journal of Non-Linear Mechanics 35 (2000) 37–43.

[21] D.D. Ganji, A semi-analytical technique for non-linear settling particle equation of motion, Journal of Hydro-Environment Research 6 (4) (2012) 323–327.

[22] M. Jalaal, D.D. Ganji, G. Ahmadi, An analytical study on settling of non-spherical particles, Asia-Pacific Journal of Chemical Engineering 7 (1) (2012) 63–72, http://dx.doi.org/10.1002/apj.492.

[23] M. Sheikholeslami, D.D. Ganji, H.R. Ashorynejad, H.B. Rokni, Analytical investigation of Jeffery–Hamel flow with high magnetic field and nanoparticle by Adomian decomposition method, Applied Mathematics and Mechanics – English Edition 33 (2012) 25–36, http://dx.doi.org/10.1007/s10483-012-1531-7.

[24] H. Bararnia, Z.Z. Ganji, D.D. Ganji, S.M. Moghimi, Numerical and analytical approaches to MHD Jeffery–Hamel flow in a porous channel, International Journal of Numerical Methods for Heat & Fluid Flow 22 (4) (2012) 491–502.

[25] S. Abbasbandy, E. Shivanian, Exact analytical solution of the MHD Jeffery–Hamel flow problem, Meccanica 47 (6) (2012) 1379–1389.

[26] I. Mustafa, A. Akgül, A. Kılıçman, A new application of the reproducing kernel Hilbert space method to solve MHD Jeffery–Hamel flows problem in nonparallel walls, Abstract and Applied Analysis 2013 (2013), Hindawi Publishing Corporation.

[27] V. Marinca, N. Herişanu, An optimal homotopy asymptotic approach applied to nonlinear MHD Jeffery–Hamel flow, Mathematical Problems in Engineering 2011 (2011), http://dx.doi.org/10.1155/2011/169056 (Article ID 169056, 16 pages).

[28] K.S. McFall, Automated design parameter selection for neural networks solving coupled partial differential equations with discontinuities, Journal of the Franklin Institute 350 (2) (2013) 300–317.

[29] E. García-Garaluz, M. Atencia, G. Joya, F. García-Lagos, F. Sandoval, Hopfield networks for identification of delay differential equations with an application to dengue fever epidemics in Cuba, Neurocomputing 74 (16) (2011) 2691–2697.

[30] I.G. Tsoulos, D. Gavrilis, E. Glavas, Solving differential equations with constructed neural networks, Neurocomputing 72 (10–12) (2009) 2385–2391.

[31] B. Choi, J.-H. Lee, Comparison of generalization ability on solving differential equations using backpropagation and reformulated radial basis function networks, Neurocomputing 73 (1–3) (2009) 115–118.

[32] J.A. Khan, M.A.Z. Raja, I.M. Qureshi, Novel approach for van der Pol oscillator on the continuous time domain, Chinese Physics Letters 28 (11) (2011) 110205, http://dx.doi.org/10.1088/0256-307X/28/11/110205.

[33] M.A.Z. Raja, Stochastic numerical techniques for solving Troesch's problem, Information Sciences, submitted for publication.

[34] X. Li, J. Ouyang, Integration modified wavelet neural networks for solving thin plate bending problem, Applied Mathematical Modelling 37 (5) (2013) 2983–2994.

[35] J.A. Khan, M.A.Z. Raja, I.M. Qureshi, Numerical treatment of nonlinear Emden–Fowler equation using stochastic technique, Annals of Mathematics and Artificial Intelligence 63 (2) (2011) 185–207.

[36] Z. Ping, Tracking problems of a spherical inverted pendulum via neural network enhanced design, Neurocomputing 106 (2013) 137–147.

[37] M.A.Z. Raja, J.A. Khan, S.I. Ahmad, I.M. Qureshi, Numerical treatment of Painlevé equation I using neural networks and stochastic solvers, in: Innovations in Intelligent Machines-3, Studies in Computational Intelligence, vol. 442, Springer, 2013 (Chapter 7).

[38] V. Kůrková, Surrogate modelling of solutions of integral equations by neural networks, in: Artificial Intelligence Applications and Innovations, Springer, Berlin Heidelberg, 2012, pp. 88–96.

[39] M.A.Z. Raja, S.I. Ahmad, Numerical treatment for solving one-dimensional Bratu problem using neural networks, Neural Computing and Applications, online first (2012), http://dx.doi.org/10.1007/s00521-012-1261-2.

Table A4. A set of trained weights by the proposed algorithms for problem 2, case 2.

      DENN-LS-IPM        DENN-RB-IPM        DENN-TS-IPM              DENN-LS-IPM        DENN-RB-IPM        DENN-TS-IPM
w1    5.09838447500151   0.21429429245080  −0.51519491447245   w6   1.49645177559464  −2.14181010977321  −0.13689940285011
w2   −2.69834860702489  −1.50984036245208  −0.02165626609865   w7  −2.27518767726236   1.13686500550773   1.92234916228121
w3    0.80363197741053  −0.19422806153597   0.99646623247016   w8   2.95458524754286   4.08504622530791  −1.92366783755987
w4   −3.04440617362637  −1.00642874279858  −0.95868325957684   w9   6.85674362064194  −3.48416456593457   1.10263972687186
w5    3.51709064654431   0.06775099422983   3.85030217361548   w10 −1.12963522457368  −0.61725448996257  −0.68150005293589
δ1   −6.75184748145987   0.49584410936490   2.49561621779126   δ6  −3.81591385066950  −1.80780490952934   1.23017365162800
δ2   −1.03572650166897  −1.30832182028110  −3.50084726937484   δ7  −3.68809357294026   0.81714955754782  −1.77184817466041
δ3   −1.49169860206614  −0.83241519550093   2.80047875765557   δ8   1.65838744746003  −0.19420889445361  −0.49393269038617
δ4    0.50674128607655  −4.70325921431201  −2.52674137033777   δ9   0.51518619526281   1.46119364296426  −1.28516705181458
δ5    1.89028584647623  −1.08620357962823  −4.20891570078906   δ10  3.14920031282176   3.72992570600106   1.49416665244129
β1   −8.15417116918829  −0.97298398762164  −0.37328804111905   β6  −0.56879002700536   4.22870780617226  −1.55611286670918
β2    2.22190933820309  −2.67016963136987   2.08482566162890   β7  −0.75664288004764   0.02599646188539  −3.22107868140892
β3   −2.52272620674244   3.77019201254146   0.29880598414971   β8  −4.73159625773005  −6.54376249544596  −1.82114712104418
β4    3.04148301555851   2.52652412809024   3.24485635725504   β9  −9.10049922896103  −4.91999157657899  −0.23716222408213
β5    4.23876569911785   3.97417628940227  −6.77281223646083   β10  0.90628997148983  −4.28380982170327   1.58408305438130


[40] M.A.Z. Raja, J.A. Khan, I.M. Qureshi, A new stochastic approach for solution of Riccati differential equation of fractional order, Annals of Mathematics and Artificial Intelligence 60 (3–4) (2010) 229–250.

[41] M.A.Z. Raja, J.A. Khan, I.M. Qureshi, Solution of fractional order system of Bagley–Torvik equation using evolutionary computational intelligence, Mathematical Problems in Engineering 2011 (2011) 1–18 (Article ID 675075).

[42] Z.Z. Ganji, D.D. Ganji, M. Esmaeilpour, Study on nonlinear Jeffery–Hamel flow by He's semi-analytical methods and comparison with numerical results, Computers and Mathematics with Applications 58 (2009) 2107–2116.

[43] A.A. Joneidi, G. Domairry, M. Babaelahi, Three analytical methods applied to Jeffery–Hamel flow, Communications in Nonlinear Science and Numerical Simulation 15 (2010) 3423–3434.

[44] S.M. Moghimi, G. Domairry, E. Soheil Soleimani, H. Ghasemi, H. Bararnia, Application of homotopy analysis method to solve MHD Jeffery–Hamel flows in non-parallel walls, Advances in Engineering Software 42 (2011) 108–113.

[45] M. Esmaeilpour, D.D. Ganji, Solution of the Jeffery–Hamel flow problem by optimal homotopy asymptotic method, Computers and Mathematics with Applications 59 (2010) 3405–3411.

[46] S.M. Moghimi, D.D. Ganji, H. Bararnia, M. Hosseini, M. Jalaal, Homotopy perturbation method for nonlinear MHD Jeffery–Hamel problem, Computers and Mathematics with Applications 61 (2011) 2213–2216.

[47] R.S. Beidokhti, A. Malek, Solving initial-boundary value problems for systems of partial differential equations using neural networks and optimization techniques, Journal of the Franklin Institute 346 (9) (2009) 898–913.

[48] D.R. Parisi, M.C. Mariani, M.A. Laborde, Solving differential equations with unsupervised neural networks, Chemical Engineering and Processing 42 (8–9) (2003) 715–721.

[49] N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica 4 (1984) 373–395.

[50] S.J. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, PA, 1997 (ISBN 0-89871-382-X).

[51] M.H. Wright, The interior-point revolution in optimization: history, recent developments, and lasting consequences, Bulletin of the American Mathematical Society (N.S.) 42 (2005) 39–56.

[52] W. Yan, L. Wen, W. Li, C.Y. Chung, K.P. Wong, Decomposition–coordination interior point method and its application to multi-area optimal reactive power flow, International Journal of Electrical Power & Energy Systems 33 (1) (2011) 55–60.

[53] N. Duvvuru, K.S. Swarup, A hybrid interior point assisted differential evolution algorithm for economic dispatch, IEEE Transactions on Power Systems 26 (2) (2011) 541–549.

Muhammad Asif Zahoor Raja was born in 1973 in Rawalpindi, Pakistan. He received his MSc in Mathematics from Forman Christian College, Lahore, Pakistan in 1996, his MSc in Nuclear Engineering from Quaid-e-Azam University, Islamabad, Pakistan in 1999, and his PhD in Electronic Engineering from International Islamic University, Islamabad, Pakistan in 2011.

He was engaged in research and development assignments of the Engineering and Scientific Commission of Pakistan from 1999 to 2012. Presently, he is working as an assistant professor in the Department of Electrical Engineering, COMSATS Institute of Information Technology, Attock Campus, Attock, Pakistan.

During his PhD studies, Dr. Raja developed the fractional least mean square algorithm, and a computational platform was formulated for the first time for solving fractional differential equations using artificial intelligence techniques. Dr. Raja has authored more than 24 publications, of which 18 are in reputed journals. He acts as a resource person and gives invited talks at many workshops and conferences held at the national level. His areas of interest are solving linear and nonlinear differential equations of arbitrary order, active noise control systems, fractional signal processing, nonlinear system identification, the direction-of-arrival problem, and the synthesis of microstrip antennas.

Raza Samar received his BSc degree from the University of Engineering and Technology, Lahore, Pakistan, and his MS degree from Stanford University, USA, both in Electrical Engineering. He received his PhD degree in Control Systems Engineering from the University of Leicester in 1995. He is currently with the Engineering and Scientific Commission of Pakistan, where he is head of control and instrumentation research. He is an Adjunct Professor at the Mohammad Ali Jinnah University, Islamabad, Pakistan. His research interests include optimal and robust control applications, linear estimation, intelligent control, and the application of optimization to industrial and aerospace problems. He is a member of the IEEE, and a lifetime and senior member of the AIAA.
