Research Article
Research and Application of Improved AGP Algorithm for Structural Optimization Based on Feedforward Neural Networks
Ruliang Wang, Huanlong Sun, Benbo Zha, and Lei Wang
Computer and Information Engineering College, Guangxi Teachers Education University, Nanning 530023, China
Correspondence should be addressed to Ruliang Wang; [email protected]
Received 31 May 2014; Revised 18 September 2014; Accepted 7 October 2014
Academic Editor: Yiu-ming Cheung
Copyright © 2015 Ruliang Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The adaptive growing and pruning algorithm (AGP) is improved as follows: network pruning is based on the sigmoidal activation value of each node and all the weights of its outgoing connections, nodes are pruned directly while nodes that have internal relations are not removed, network growing is based on the idea of variance, and nodes with high correlation are copied directly. The resulting improved AGP algorithm (IAGP) improves network performance and efficiency. The simulation results show that, compared with the AGP algorithm, the improved method (IAGP) can quickly and accurately predict traffic capacity.
1. Introduction
Artificial neural networks have been widely applied in data mining, web mining, multimedia data processing, and bioinformatics [1]. The success of an artificial neural network is largely determined by its structure. The optimization of network structure is usually a trial-and-error process using growing or pruning methods. However, many algorithms, such as AGP, employ a hybrid approach to optimize network structure [2].
Generally speaking, the methods for optimizing neural network structure comprise growing methods, pruning methods, and hybrids of the two strategies. The first is also known as the constructive method: starting from a minimal network, new hidden units are added while the network is trained on data [3]. For example, the grow-when-required (GWR) algorithm of Marsland adds hidden nodes based on the network performance requirements [4]. The disadvantages of growing methods are that the initial small network can easily overfit and become trapped in local minima, and growing may also increase the training time [1].
The second is called the destructive method, which deletes unimportant nodes or weights in an originally large network [5]. Lauret et al. put forward the extended Fourier amplitude sensitivity algorithm to prune the hidden neurons. This algorithm quantifies the correlation of the neurons in the hidden layer and sorts them; it then retains the most favorable neurons using this quantitative information and prunes the neurons that rank last. With this method, however, the outputs and inputs of the hidden neurons are assumed to be independent [6]; when there are dependencies between them, the method is invalid. Xu and Ho describe a UB-OBS pruning algorithm that prunes the hidden units of a feedforward neural network. It uses an orthogonal decomposition method to determine the hidden nodes that need pruning and then recalculates the weights of the remaining nodes to maintain the network performance. The biggest drawback of pruning methods, however, is the need to determine the size of the initial network [7].
Using only growing or only pruning algorithms raises more problems, so hybrid growing-and-pruning algorithms have been proposed. A hybrid does not need a predetermined initial network and does not overfit [8], and it lets the two kinds of algorithms complement each other, enlarging their respective advantages and narrowing their disadvantages [1]. AGP is such a growing-pruning hybrid algorithm. In the structural design, the algorithm uses the sigmoidal activation value of each node to adjust the neural network by pruning low-value neurons, merging similar neurons, and adding the corresponding neurons, so it can adjust
Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2015, Article ID 481919, 6 pages, http://dx.doi.org/10.1155/2015/481919
the structure of the network self-adaptively [9]. In recent decades, structure optimization algorithms for neural networks have received extensive attention [10–17]. The AGP algorithm can be applied to nonlinear function approximation problems, but it requires many iterations and complex calculations and needs thresholds to be set and parameters to be adjusted frequently.
Therefore, feedforward neural network structure optimization still has much room for improvement, so IAGP is presented in this paper. Network pruning is based on the sigmoidal activation value of each node and all the weights of its outgoing connections. Network growing is based on the idea of variance: we directly copy those nodes with high correlation. The algorithm can rapidly, accurately, and self-adaptively optimize the network structure.
Finally, it is applied to nonlinear function approximation and the prediction of traffic capacity, and simulation results show the effectiveness of the improved AGP algorithm.
2. IAGP
2.1. AGP. This algorithm can adjust the structure of the network self-adaptively. First, it creates an initial feedforward neural network and then trains the network using the BP algorithm until it reaches the target error. Otherwise, it calculates the sigmoidal activation value of each node to prune all the insignificant neurons and combines a large number of neurons to simplify the network. If, after a certain amount of training, the network still does not reach the target accuracy, a node is added based on the idea of cell division. This ensures that the grown node is the best and, at the same time, preserves the correlation between the two nodes. Then the network is retrained. If classification accuracy of the network falls below an acceptable level, training stops; otherwise, it continues [9].
2.2. IAGP. In order to improve network performance and efficiency, IAGP is presented in this paper. First, the algorithm creates an initial network based on the actual problem. Here we assume that the initial network is a fully connected multilayer feedforward neural network with L layers, as shown in Figure 1.
In each l-th layer, let m_l be the number of neurons, where 0 ≤ l ≤ L. Let the first layer 0 be the input layer, the layers between 0 and L be the hidden layers, and the last layer L be the output layer. The i-th input neuron of the 0th layer is N_{i_0}, 0 ≤ i_0 ≤ m_0, and the m_0-th input neuron's bias value is always equal to 1. Let n_p be the number of patterns in a dataset, and let the value of the i-th input neuron for the p-th pattern be x_{ip}. Among the L layers of the network, the j-th neuron of the l-th hidden layer is N_{j_l}, where 0 < l < L and 1 ≤ j_l ≤ m_l. The weight between input neuron N_{i_0} and hidden neuron N_{j_1} is ω_{i j_1}, j_l ∈ {1, 2, ..., m_l}. The weight between a neuron N_{j_l} and a neuron N_{k_{l+1}} is v_{j_l k_{l+1}}, k ∈ {1, 2, ..., m_{l+1}}, and the initial weights generally take random values between −1 and 1.
The activation value of neuron N_{j_1} is h_{j_1}, and the activation value of neuron N_{j_l} is h_{j_l}. Here let o_k be the output
Figure 1: Multilayer feedforward neural network.
of the N_{k_L}-th neuron in the output layer L, where 1 ≤ k ≤ m_L. The BP algorithm is adopted here, and h_{j_1} and h_{j_l} can be written as

h_{j_1} = f( \sum_{i=0}^{m_0} x_{ip} ω_{i j_1} ),

h_{j_l} = f( \sum_{j_{l-1}=1}^{m_{l-1}} h_{j_{l-1}} v_{j_{l-1} j_l} ),   (1)

where f(x) = 1/(1 + e^{−x}) and 1 < l < L. Based on the above, we can get the output o_k:

o_k = f( \sum_{j_{L-1}=1}^{m_{L-1}} h_{j_{L-1}} v_{j_{L-1} k_L} ),   (2)
where f(x) = 1/(1 + e^{−x}). Here the mean squared error is

E = (1/n_p) \sum_{p=1}^{n_p} \sum_{k=1}^{n} (1/2) (o_k − d_k)^2,

where the dataset has K objects with known desired target values d_k, and we can use the BP algorithm to train on the dataset. The total net value of the neuron N_{j_l} can be written as
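To make Eqs. (1) and (2) and the error E concrete, the following sketch implements the forward pass and the mean squared error with NumPy. The function names and the layer-list representation of the weights are illustrative, not part of the paper:

```python
import numpy as np

def sigmoid(x):
    # f(x) = 1 / (1 + e^(-x)), the activation used in Eqs. (1) and (2)
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights):
    """Propagate one input pattern through a fully connected network.

    `weights` is a list of matrices; weights[l][i, j] connects neuron i of
    layer l to neuron j of layer l+1.  The bias is assumed folded into the
    input, as with the paper's m0-th input neuron fixed at 1.
    """
    h = x
    for W in weights:
        h = sigmoid(h @ W)   # Eq. (1) for hidden layers, Eq. (2) for output
    return h

def mse(outputs, targets):
    # E = (1/n_p) * sum_p sum_k (1/2) (o_k - d_k)^2
    return np.mean(0.5 * np.sum((outputs - targets) ** 2, axis=1))
```

With all-zero weights every sigmoid evaluates at 0, so each layer outputs 0.5, which gives a quick sanity check on the shapes.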
tnet_{j_l} =
  \sum_{p=1}^{n_p} \sum_{i=0}^{m_0} x_{ip} ω_{i j_1},        if l = 1,
  \sum_{j_{l-1}} f(tnet_{j_{l-1}}) v_{j_{l-1} j_l},          if 1 < l < L.   (3)
Then the significance measure s_{j_l} can be expressed as

s_{j_l} = \sum_{k_{l+1}=1}^{m_{l+1}} ( f(tnet_{j_l}) + b_l ) v_{j_l k_{l+1}},   (4)

where f(tnet_{j_l}) = 1/(1 + e^{−tnet_{j_l}}), \sum_{l=0}^{L} b_l = 1, and 0 ≤ l ≤ L.
According to the above formula, the significance measure s_{j_l} of a neuron N_{j_l} is computed by combining its activation value, aggregated over all the patterns, with all of its outgoing connection weights.
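A minimal sketch of the significance measure of Eq. (4), assuming tnet has already been aggregated over all patterns as in Eq. (3); the names and the vectorized layout are this sketch's own:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def significance(tnet, V, b):
    """Significance measure of Eq. (4) for every neuron in one hidden layer.

    tnet : total net input of each neuron, aggregated over all patterns (Eq. (3))
    V    : outgoing weight matrix, V[j, k] = weight to neuron k of the next layer
    b    : the layer's share b_l of the bias budget (the b_l sum to 1 over layers)
    """
    # s_{j_l} = sum_k ( f(tnet_{j_l}) + b_l ) * v_{j_l k}
    return np.sum((sigmoid(tnet)[:, None] + b) * V, axis=1)
```

For a single neuron with tnet = 0, f(tnet) = 0.5, so with two unit outgoing weights and b = 0 the measure is 1.0.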
Figure 2: Identifying insignificant hidden neurons N_{j_l}: the sigmoidal activation value of each hidden neuron is plotted against the neuron index; neurons whose value falls below the threshold β are insignificant, and those above it are significant.
In order to prune the neural network, we should combine similar neurons; the weight of the merged neuron can be expressed as
ω_new = P ω_1 + (1 − P) ω_2,   (5)

where ω_1 and ω_2 are the two initial neuron weights and P is their similarity,

P = ( (ω_1 ω_2)^2 − ((ω_1 + ω_2)/2)^2 ) / | ω_1^2 − ω_2^2 |.
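Eq. (5) and the similarity P can be sketched as follows; the epsilon guard against division by zero when the two weights have equal magnitude is an assumption of this sketch, not something the paper specifies:

```python
def merge_weights(w1, w2):
    """Merge two similar neurons' weights as in Eq. (5).

    P     = ((w1*w2)^2 - ((w1 + w2)/2)^2) / |w1^2 - w2^2|   (similarity)
    w_new = P*w1 + (1 - P)*w2

    A small epsilon guards the division when |w1| == |w2| (assumption of
    this sketch; the paper does not address that case).
    """
    eps = 1e-12
    P = ((w1 * w2) ** 2 - ((w1 + w2) / 2.0) ** 2) / (abs(w1 ** 2 - w2 ** 2) + eps)
    return P * w1 + (1.0 - P) * w2
```

For example, with w1 = 1 and w2 = 0 the similarity is P = −0.25, giving a merged weight of −0.25.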
When the neural network needs pruning, the hidden-layer neurons are adjusted according to the following rule:
N_{j_l} = { insignificant,  if s_{j_l} ≤ β;  significant,  otherwise },   where β = (1/m_l) \sum_{j_l=1}^{m_l} s_{j_l},   (6)
where β is the threshold value, 0 ≤ l ≤ L, and s_{j_l} is the neuron's contribution value; if it is less than the threshold, the neuron is insignificant, and if it is more than the threshold, it is significant.

The process of identifying insignificant hidden neurons is shown in Figure 2.

Similarly, we can get the rule for pruning the input layer as follows:
s_i = \sum_{j_1=1}^{m_1} ( f(tx_{ip}) + b_i ) ω_{i j_1},   (7)

where 0 ≤ i < m_0, \sum_{i=0}^{m_0} b_i = 1, and tx_{ip} = \sum_{p=1}^{n_p} x_{ip}.
So

N_{i_0} = { insignificant,  if s_i ≤ α;  significant,  otherwise },   where α = (1/m_0) \sum_{i=0}^{m_0} s_i.   (8)
Here the threshold is obtained by averaging all the contribution values, which are based on the sigmoidal activation value of each node and all the weights of its outgoing connections. The method only eliminates neurons below the threshold and needs fewer iterations. Because it inherits the weights of the previous network, it reduces the number of pruning steps and does not require any complicated calculations, manually set thresholds, or frequent parameter adjustment.
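The pruning rules of Eqs. (6) and (8) share the same shape: the threshold is the mean significance of the layer. A minimal sketch (names illustrative):

```python
import numpy as np

def prune_mask(s):
    """Pruning rule of Eqs. (6) and (8): a neuron is kept iff its
    significance measure exceeds the layer average, which serves as the
    threshold (beta for a hidden layer, alpha for the input layer)."""
    threshold = np.mean(s)
    return s > threshold   # True = significant, keep; False = prune
```

For significances [1, 2, 3, 10] the threshold is 4, so only the last neuron survives.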
After the above steps, if the network still cannot reach the target, we assume that the algorithm cannot fully learn the sample. So we add nodes using the ideas of inheritance and the variance of the significance measure. We directly copy those nodes with high correlation g (select the nodes with broad intensity and then average them):
c = (1/n) \sum_{i=1}^{n} (S_i − \bar{S})^2,   a = c_min,

b_i = a ± d,   g = (1/n) \sum_{i=1}^{n} B_i,   (9)

where i = 1, 2, ..., n and d ∈ (0, 0.01). Here c is the variance of the significance measure, S_i is the significance measure of the i-th neuron, \bar{S} is the average significance measure of all neurons from 1 to n, a is the smallest variance, b_i is an intensity variance near a, and B_i is a node whose density is wide. Let the hidden neuron g be a parent node and copy it into R parts. The input weight of each new node is ω_{i,new} = ω_{i,old} and the output weight is ω_{o,new} = h_n ω_{o,old}, with \sum_{n=1}^{R} h_n = 1, n = 1, 2, ..., R.
ω_{i,new} and ω_{i,old} are, respectively, the input weights of the new and old neuron, and ω_{o,new} and ω_{o,old} are, respectively, their output weights. The direct "copy" idea for adding new nodes retains the relevance between nodes, greatly reduces the error, prevents overfitting, converges quickly, and requires fewer iterations.
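The growing step's "copy" of a parent node can be sketched as below. The paper fixes only the constraint that the h_n sum to 1; drawing the h_n at random is an assumption of this sketch:

```python
import numpy as np

def split_neuron(w_in, w_out, R, seed=None):
    """Copy a parent hidden neuron into R children (the growing step).

    Each child inherits the parent's input weights unchanged, and the
    parent's output weights are shared out by factors h_n with sum h_n = 1,
    so the network's overall function is preserved at the moment of the
    split.  The h_n are drawn at random here; the paper does not say how
    to choose them.
    """
    rng = np.random.default_rng(seed)
    h = rng.random(R)
    h = h / h.sum()   # enforce sum_n h_n = 1
    children_in = [w_in.copy() for _ in range(R)]
    children_out = [h_n * w_out for h_n in h]
    return children_in, children_out
```

Summing the children's output weights recovers the parent's output weights exactly, which is what makes the split function-preserving.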
2.3. The Algorithm. IAGP is based on the sigmoidal activation value of each node and all the weights of its outgoing connections. We optimize the neural network structure by increasing or decreasing the neurons, and we use the BP algorithm to train the network until it reaches the target error. It can quickly and effectively achieve the target error.
Compared with the AGP algorithm, the improved AGP algorithm has the following advantages.

(1) Because both the growing method and the pruning method are adopted, the training time is greatly reduced and the number of training steps is relatively small.

(2) Although the structure of the neural network optimized by the IAGP algorithm is simpler, it keeps the overall performance of the original network.

(3) It does not need parameters to be set in advance; these parameters are obtained directly by calculation.

(4) IAGP has better fitting accuracy and generalization ability than the original algorithm.

(5) It can achieve the network performance requirements faster and better.
The pseudocode of IAGP is as follows.

Step 1. Create a small initial network, and then use the BP algorithm to train it.

Step 2. If classification accuracy of the network falls below an acceptable level, then stop pruning and go to Step 6; otherwise, go to Step 3.

Step 3. Calculate the sigmoidal activation value of each node and combine a large number of neurons to simplify the network.

Step 4. Use the improved pruning method to train on the dataset; if the network performance requirement is met, go to Step 6; otherwise, go to Step 5.

Step 5. If the network still does not reach the target accuracy after the above steps, use the improved growing method to train on the dataset; if the network performance requirement is met, go to Step 6; otherwise, go to Step 2.

Step 6. End the neural network training.
Research indicates that IAGP can quickly, accurately, and efficiently adjust the network structure, reduce a large number of steps, and improve the efficiency.
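The control flow of Steps 1–6 can be sketched as a driver loop; the four callables stand in for the problem-specific training, pruning, and growing routines and are assumptions of this sketch:

```python
def iagp(train_step, accuracy, prune, grow, target, max_epochs=1000):
    """Skeleton of the IAGP control flow (Steps 1-6).

    train_step() runs one round of BP training, accuracy() reports the
    current performance, prune() tries the improved pruning step and
    returns True if it simplified the network, and grow() applies the
    improved growing step.  All four are problem-specific placeholders;
    only the loop structure is encoded here.
    """
    for _ in range(max_epochs):
        train_step()                 # Step 1 / retraining
        if accuracy() >= target:     # Step 6: target reached, stop
            return accuracy()
        if not prune():              # Steps 3-4: simplify when possible
            grow()                   # Step 5: otherwise add neurons
    return accuracy()
```

The driver returns as soon as the accuracy callback reaches the target, mirroring the "go to Step 6" exits in the pseudocode.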
3. Simulation Experiments
To verify the effectiveness of IAGP, it is applied in this paper to nonlinear function approximation and to the prediction of transportation capacity. The algorithm is shown to be effective by the simulation results.
3.1. Approximation of the Nonlinear Function. Consider the following nonlinear function:

f(x) = 1 − e^{−0.2x} cos(0.8x) + cos(0.2x),   (10)
where x ∈ [1, 6π]. There are 70 groups of experimental data as the training samples and 30 groups as the test samples. There are 15 initial hidden neurons, and we use the improved AGP algorithm to train the network; 7 hidden nodes remain afterwards.
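The experimental setup above can be reproduced in outline as follows; the paper does not state how the 100 sample points are drawn from [1, 6π], so uniform random sampling is an assumption of this sketch:

```python
import numpy as np

def target_fn(x):
    # f(x) = 1 - e^(-0.2x) * cos(0.8x) + cos(0.2x), Eq. (10)
    return 1.0 - np.exp(-0.2 * x) * np.cos(0.8 * x) + np.cos(0.2 * x)

def make_samples(n_train=70, n_test=30, seed=0):
    """Draw the 70 training / 30 test samples on [1, 6*pi] of Section 3.1.
    Uniform sampling is an assumption of this sketch, not stated in the paper."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(1.0, 6.0 * np.pi, n_train + n_test)
    y = target_fn(x)
    return (x[:n_train], y[:n_train]), (x[n_train:], y[n_train:])
```

A quick check: f(0) = 1 − 1·1 + 1 = 1, and all sampled abscissae stay inside [1, 6π].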
Figure 3 shows the neural network's approximation of the nonlinear function. Compared with AGP, we easily find that IAGP can approximate the function faster and more accurately. Figure 4 shows the training error.
3.2. Application for Transport Capacity. We all know that transportation has nonlinear complexity and randomness [18]. This paper adopts the IAGP algorithm to predict the transportation capacity. In order to handle the transport demand and predict the transportation capacity well, we need some parameters based on an analysis of the factors influencing the freight volume. These parameters may include GDP, industrial output, the length of railway line, the proportion of double-track mileage, the length of road transport routes, the proportion of grade highway, the number of railway trains, and the number of
Figure 3: Approximation of the nonlinear function by IAGP and AGP, plotted against the true function.
Figure 4: The training error (MSE) curve of IAGP over the training steps.
laden civilian vehicles. These parameters can be used as the input vectors of the artificial neural network, and the output vectors are the total volume of cargo transportation, rail freight, and highway freight. The neural network structure of the experiment is 8-24-3, the model is shown in Figure 5, and the experimental data come from the China yearbook.
The statistical data from 2002 to 2009 are selected as the training sample of the experiment and the statistical data from 2009 to 2013 as the test sample. Let the initial number of hidden-layer neurons of both IAGP and AGP be 10, and let the network training error be 0.01.
Table 1 compares the performance of the two optimization algorithms. With IAGP, the number of hidden-layer neurons is 6, the training error is 0.031, the training takes 246 steps, and the training time is 23.8. Compared with AGP, the improved algorithm has the corresponding
Figure 5: Feedforward neural network structure (8-24-3). The input neurons carry GDP, industrial output, the length of railway line, the proportion of double-track mileage, the length of road transport routes, the proportion of grade highway, the number of trains, and the number of laden civilian vehicles; the output neurons y1, y2, and y3 are the total volume of cargo transportation, rail freight, and highway freight.
Figure 6: The two algorithms in contrast for traffic prediction: total volume of cargo transportation (million tons) versus time in months (June 2009–August 2013) for the actual values, IAGP, and AGP.
Table 1: Comparison of the two optimization algorithms' performance.

Algorithm   Neuron number   Training error   Training steps   Training time
IAGP        6               0.031            246              23.8
AGP         8               0.047            953              26.6
improvement in the four aspects, and the improved AGP algorithm does not change the overall structure of the neural network, so this method is very practical.
Figure 6 shows the performance of the improved AGP algorithm and the AGP algorithm in traffic prediction. It can be seen that the improved AGP algorithm's results are basically
Figure 7: The training error curve of IAGP (MSE, ×10^2, versus training steps, ×10^1).
Figure 8: The training error curve of AGP (MSE versus training steps, ×10^1).
consistent with the actual situation. Although the AGP algorithm can generally keep up with the actual traffic, its forecast shows a small lag error or gap. The network training of IAGP is shown in Figure 7: at roughly iteration 250 the network gradually stabilizes, whereas in Figure 8 the AGP network becomes stable only after about 610 iterations. So the IAGP algorithm is faster than the AGP algorithm; the training error of AGP is shown in Figure 8.
Simulation results show that IAGP can predict the transportation capacity well, follows the actual output closely, and has little error, whereas the approximation of AGP is slower and may have a bigger error.

As can be seen from Figure 6, China's traffic freight volume is increasing every year. This algorithm plays an important role in forecasting transport capacity for our economy and can help to reasonably optimize the related traffic resources.
4. Conclusions
This paper studies and improves AGP. First of all, we use the BP algorithm to train the network. Then neurons are pruned based on the sigmoidal activation value of each node and all the weights of its outgoing connections, and neurons are grown based on the correlation of the significance measure's variance. The neural network is then trained until it reaches the target accuracy. With IAGP, the network structure changes little, training takes few steps and a short time, and the resulting network structure is simpler. The experimental results show that the method improves the efficiency and accuracy of the traffic prediction.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was jointly supported by the Guangxi Key Laboratory Foundation of University and the Guangxi Department of Education Foundation.
References
[1] Z. Zhang, The Study of Self-Organization Modular Neural Network Architecture Design, Beijing University of Technology, 2013.

[2] Z. Zhang, J. Qiao, and G. Yang, "An adaptive algorithm for designing optimal feed-forward neural network architecture," CAAI Transactions on Intelligent Systems, vol. 6, no. 4, 2011.

[3] X. Yu, The structural optimization research for FNN controller based on the combination of pruning method and growth method [Master thesis], Southwest Jiaotong University, 2009.

[4] S. Marsland, J. Shapiro, and U. Nehmzow, "A self-organising network that grows when required," Neural Networks, vol. 15, no. 8-9, pp. 1041–1058, 2002.

[5] J.-F. Qiao, M. Li, and J. Liu, "A fast pruning algorithm for neural network," Acta Electronica Sinica, vol. 38, no. 4, pp. 830–834, 2010.

[6] P. Lauret, E. Fock, and T. A. Mara, "A node pruning algorithm based on a Fourier amplitude sensitivity test method," IEEE Transactions on Neural Networks, vol. 17, no. 2, pp. 273–293, 2006.

[7] J. Xu and D. W. C. Ho, "A new training and pruning algorithm based on node dependence and Jacobian rank deficiency," Neurocomputing, vol. 70, no. 1–3, pp. 544–558, 2006.

[8] H.-Z. Yang, W.-N. Wang, and F. Ding, "Two structure optimization algorithms for neural networks," Information and Control, vol. 35, no. 6, pp. 700–704, 2006.

[9] M.-N. Zhang, H. Han, and J. Qiao, "Research on dynamic feed-forward neural network structure based on growing and pruning methods," CAAI Transactions on Intelligent Systems, vol. 6, no. 2, 2011.

[10] M. Gethsiyal Augasta and T. Kathirvalavakumar, "A novel pruning algorithm for optimizing feedforward neural network of classification problems," Neural Processing Letters, vol. 34, no. 3, pp. 241–258, 2011.

[11] J.-J. Tu, Y.-Z. Zhan, and F. Han, "Neural network correlation pruning optimization based on improved PSO algorithm," Application Research of Computers, no. 9, pp. 3253–3255, 2010.

[12] Y. Wang and C. Dang, "An evolutionary algorithm for global optimization based on level-set evolution and Latin squares," IEEE Transactions on Evolutionary Computation, vol. 11, no. 5, pp. 579–595, 2007.

[13] J. Qiao and Y. Zhang, "Fast unit pruning algorithm for multilayer feed-forward network design," CAAI Transactions on Intelligent Systems, vol. 3, no. 2, 2008.

[14] H.-R. Yan and L.-J. Ma, "Design and realization of intelligent prediction model based on fuzzy neural network," Modern Electronic Technique, no. 2, pp. 84–88, 2008.

[15] W. Wang and H. Yang, "Pruning algorithm for neural networks based on pseudo-entropy of weights," Computer Simulation, vol. 23, no. 3, 2006.

[16] Y. Li, Y. Wang, P. Jiang, and Z. Zhang, "Multi-objective optimization integration of query interfaces for the Deep Web based on attribute constraints," Data and Knowledge Engineering, vol. 86, pp. 38–60, 2013.

[17] Q.-K. Song and M. Hao, "Structural optimization of BP neural network based on correlation pruning algorithm," Control Theory and Applications, vol. 12, 2006.

[18] X. Xu, "A forecast model of freight capacity based on RBF network," Aeronautical Computing Technique, vol. 37, no. 5, 2007.