7/29/2019 On the Grid Implementation of a Quantum Reactive Scattering Program
On the Grid Implementation of a Quantum
Reactive Scattering Program
Alessandro Costantini1, Dimitrios Skouteris1, Osvaldo Gervasi2
and Antonio Lagana1
1 Department of Chemistry, University of Perugia, Perugia, Italy2 Department of Mathematics and Informatics, University of Perugia,
Perugia, Italy
Lecture given at the
Joint EU-IndiaGrid/CompChem Grid Tutorial on
Chemical and Material Science Applications
Trieste, 15-18 September 2008
LNS0924009
Abstract
Quantum reactive scattering codes are a family of calculations believed to be less suited for the exploitation of the increasingly available Grid computer power due to their large memory request. This is not true under proper circumstances and here the case of the ABC program, which has been efficiently implemented on the segment of the EGEE Grid available to the COMPCHEM Virtual Organization, is discussed. The implementation was carried out in collaboration with the Application Porting Support groups of MTA SZTAKI and CESGA making use of the P-GRADE portal and of some Java-based visualization tools.
Contents
1 Introduction 149
2 ABC program description 150
3 A distributed quantum study of the F+HD reaction 151
4 The P-GRADE Grid implementation of ABC 153
5 A performance analysis 156
6 The improved rendering of ABC results 158
7 Conclusions 159
8 Acknowledgments 160
On the Grid Implementation of a Quantum Reactive Scattering Program 149
1 Introduction
The possibility of exploiting Grid technologies to perform massive calcula-
tions has encouraged several researchers of the molecular science domain to
implement their programs on the Enabled Grid for E-sciencE (EGEE) envi-
ronment [1]. A family of calculations believed not to be completely suited to
exploit the increasing quantity of computer power being made available on
Grid platforms is the one using quantum reactive scattering methods. The
large memory request of quantum reactive scattering calculations, in fact,
makes the related programs quite unsuitable for implementation on the Grid.
Overcoming this difficulty is one of the missions of the Virtual Organization (VO) COMPCHEM [2] and is also the primary goal of QDYN, the
working group of the COST Action D37 [3] coordinated by the Computa-
tional Dynamics and Kinetics Research Group of the University of Perugia.
In this lecture, in order to illustrate the process of porting on the Grid
a quantum mechanical reactive scattering code, we consider the ABC atom-
diatom reactive program [4] that carries out accurate time independent cal-
culations of the quantum S matrix elements from which reaction probabilities
as well as state-to-state integral and differential cross sections can be eval-
uated. The ABC code, in addition to a significant memory request, has a
quite large CPU demand. Moreover, it is seldom used for just one set of
parameters. In a typical use case, the ABC program must be executed several
times for different sets of input parameters (like initial states and collision
energies), consuming a large amount of CPU time. This feature, indeed, makes
it suitable for parameter study runs provided that the machines chosen for
the calculation have a memory sufficiently large to host the matrices used
by the program.
For this reason ABC has been considered for gridification. To this end
both low and high level gridification tools were used. At low level some
procedures specifically implemented to run ABC on the Grid were used.
At upper level the P-GRADE Grid Portal [5,6] was used in collaboration
with the Grid Application Support Centre (GASuC) of the MTA SZTAKI
[7]. P-GRADE is available on all the major Globus, LCG and gLite based
production Grids [8]. To make the Grid application more user friendly the
gridified version of ABC was also dressed with some graphic utilities.
Accordingly, in section 2 we describe the ABC program, in section 3 we
present a typical calculation at a low level of gridification, in section 4 we
discuss the P-GRADE high level gridification, in section 5 we analyse related
150 A. Costantini, D. Skouteris, O. Gervasi and A. Lagana
performances, and in section 6 we discuss the implementation of a more user friendly rendering of the results.
2 ABC program description
The ABC program integrates the atom-diatom Schrödinger equation for re-
active scattering problems using Delves hyperspherical coordinates [9] and
a coupled channel method. The program integrates the time independent
atom-diatom Schrödinger equation of the nuclei:
[T + V − E] Ψ = 0 (1)
with T being the kinetic operator, V the potential and E the total energy.
In eq. 1 Ψ is the nuclear wavefunction (depending on nuclear coordinates
only) that in ABC is expanded in terms of the hyperspherical arrangement
channel basis functions. The channel basis functions B^{JM}_{τνjk} are
labeled after J (the total angular momentum quantum number), M and k
(the space- and body-fixed projections of J), and ν and j (the vibrational
and rotational quantum numbers of the asymptotic arrangement channel τ).
They also depend on the three Euler angles, the Jacobi orientation angle
and the internal Delves hyperspherical angle θ_D. In order to carry out the
propagation of the solution from the strong interaction region to the
asymptotes, one needs to integrate, through the sectors of the various
regions in which the reaction coordinate ρ has been segmented, the equations
d²g/dρ² = O⁻¹ U g (2)

which relates the second derivative of the matrix g of the coefficients of the
already mentioned expansion, taken with respect to the hyperspherical radius
ρ, to the matrix g itself via the overlap matrix O, whose elements are
formulated as
O_{τνjk,τ′ν′j′k′} = ⟨B^{JM}_{τνjk} | B^{JM}_{τ′ν′j′k′}⟩ (3)
and the coupling matrix U whose elements are formulated as
U_{τνjk,τ′ν′j′k′} = ⟨B^{JM}_{τνjk} | (2μ/ℏ²)(H̄ − E) − 1/(4ρ²) | B^{JM}_{τ′ν′j′k′}⟩ . (4)
In eq. 4 μ is the reduced mass of the system and H̄ is the part of the
Hamiltonian not containing derivatives with respect to ρ.
To integrate the set of coupled differential equations given in eq. 2, the
interval of ρ is segmented in sectors whose local bound state functions are
calculated by diagonalizing a Hamiltonian which describes the related
θ_D-dependent motions using a carefully chosen reference potential. The
coupled-channel equations are integrated starting from small values of ρ (by
propagating the solution first within the sector and then chaining its value
at the end of the sector with that at the beginning of the next one) to the
asymptotes. At the asymptotes the solutions are matched to both reactant
and product states and the related S matrix is evaluated. Finally, from the
calculated S matrix elements reactive probabilities can be worked out and
some observable properties are determined.
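The sector-chained propagation just described can be sketched numerically. The Python toy model below is not the actual log derivative propagator used by ABC: it integrates a matrix equation of the form of eq. 2 across chained sectors under the simplifying assumption that the overlap matrix O is the identity, and the function name and interface are illustrative only.

```python
import numpy as np

def propagate_sectors(coupling, n_channels, rho_start, rho_end,
                      n_sectors, steps_per_sector=50):
    """Toy propagation of d^2g/drho^2 = W(rho) g across sectors.

    coupling(rho) returns the (effectively O^-1 U) coupling matrix W at rho.
    The solution is propagated inside each sector and its value at the end
    of a sector is chained to the beginning of the next one.
    """
    g = np.eye(n_channels)                    # expansion coefficient matrix
    dg = np.zeros((n_channels, n_channels))   # its derivative in rho
    edges = np.linspace(rho_start, rho_end, n_sectors + 1)
    for a, b in zip(edges[:-1], edges[1:]):   # loop over sectors
        h = (b - a) / steps_per_sector
        rho = a
        for _ in range(steps_per_sector):     # velocity-Verlet-like steps
            acc = coupling(rho) @ g
            g = g + h * dg + 0.5 * h * h * acc
            dg = dg + 0.5 * h * (acc + coupling(rho + h) @ g)
            rho += h
    return g, dg
```

For a single channel with W = 1 the equation g'' = g is recovered, whose solution starting from g = 1, g' = 0 is cosh(ρ), which the sketch reproduces to a few parts in a thousand.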
3 A distributed quantum study of the F+HD
reaction
The key features of the ABC program relevant for gridification are easily
illustrated by considering the F + HD bench reaction. The computational
investigation of the F + HD reaction can be performed by carrying out the
calculation of its reactive properties on an extremely fine grid of energy
values (so as to easily single out even the width of narrow resonances) for
the zero total angular momentum case (J = 0, given as jtot in Table 1, in which the namelist of all input data is listed).
This case study is actually ideal to show the particular importance of
distributed concurrent computing for the characterization of atom-diatom
reactive resonances in the threshold region that is ideally suited for a pa-
rameter sweeping study. A resonance falling in this energy region when using
a potential energy surface called SW might, in fact, survive the total an-
gular momentum averaging and show up in the plot of the integral cross
section as a function of energy for a comparison with the experiment [10].
A Grid based parameter sweeping study is, therefore, of paramount impor-
tance to enable the calculation of the probability on a very large set of fine-grained energy values. In this way the problem is transformed from that
of having enough computing time to the one of having enough computing
elements. In the low level distribution of the calculation on the Grid a set
of scripts has been created in order to automatically manage the submission
and the retrieval of the submitted jobs.
In the first step two files are created, namely grid server (which contains
the name of the queues and the machines in which the job is going to be executed) and job mask (which contains all the needed information about
the jobs and the associated submission status).
In the second step all the jobs are submitted to the Grid and their status
is periodically checked making use of the crontab command. If a job is
abnormally ended or aborted, the script resubmits it updating the job mask
file. In this step the machine queue to which the job is to be submitted is
randomly chosen from the list stored in the grid server file.
In the third step, after the job is correctly finished, a retrieve command
is launched in order to collect all the output files produced by the calculation
saving them in a specific directory. The procedure also includes a check of the information retrieved to make sure that the ABC program has ended
correctly.
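The three-step procedure described above can be condensed into a small driver. The Python sketch below reproduces only the bookkeeping logic (random queue selection from the grid server list, job mask updates, resubmission of aborted jobs); the submit and status callables stand in for the real Grid submission and monitoring commands, whose names and options are deliberately left unspecified here:

```python
import random

def manage_jobs(jobs, queues, submit, status, max_rounds=10):
    """Toy version of the submission/check/resubmit loop.

    job_mask plays the role of the job mask file and `queues` that of the
    grid server file; submit(job, queue) and status(job) are stand-ins for
    the actual Grid commands.
    """
    job_mask = {job: {"queue": None, "state": "NEW"} for job in jobs}
    for _ in range(max_rounds):                       # crontab-like pass
        for job, rec in job_mask.items():
            if rec["state"] in ("NEW", "ABORTED"):
                rec["queue"] = random.choice(queues)  # random queue choice
                submit(job, rec["queue"])
                rec["state"] = "SUBMITTED"
            elif rec["state"] == "SUBMITTED":
                rec["state"] = status(job)            # "DONE" or "ABORTED"
        if all(rec["state"] == "DONE" for rec in job_mask.values()):
            break
    return job_mask
```

An aborted job is simply resubmitted on the next pass, to a newly drawn queue, exactly as the script described in the second step does.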
In Table 1 the maximum value of ρ (rmax), the number of sectors (mtr),
the initial value of total energy (enrg) and the energy increment (dnrg)
chosen for the calculations are also given. The atomic masses (mass) given

Table 1: Input parameters for the test calculation on the F + HD(ν=0, j=0) reaction.

Parameter        Explanation
mass = 19,1,2    Masses of the three atoms in atomic mass units.
jtot = 0         Total angular momentum quantum number J.
ipar = 1         Triatomic parity eigenvalue P.
jpar = 0         Diatomic parity eigenvalue p.
emax = 1.7       Maximum internal energy in any channel (in eV).
jmax = 15        Maximum rotational quantum number of any channel.
kmax = 4         Helicity truncation parameter.
rmax = 12.0      Maximum hyperradius ρ_max (in bohr).
mtr = 150        Number of log derivative propagation sectors.
enrg = 0.233     Initial scattering energy (in eV).
dnrg = 0.001     Scattering energy increment (in eV).
nnrg = 48        Total number of scattering energies.
nout = 0         Maximum value of ν for which output is required.
jout = 0         Maximum value of j for which output is required.
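Since the text describes the input as a namelist, the parameters of Table 1 can be pictured as a Fortran namelist file. The group name and exact layout below are assumptions for illustration; the authoritative format is defined by the ABC code itself [4]:

```fortran
&input
  mass = 19.0, 1.0, 2.0   ! F, H, D masses (atomic mass units)
  jtot = 0                ! total angular momentum J
  ipar = 1                ! triatomic parity eigenvalue
  jpar = 0                ! diatomic parity eigenvalue
  emax = 1.7              ! channel internal energy cutoff (eV)
  jmax = 15               ! maximum rotational quantum number
  kmax = 4                ! helicity truncation parameter
  rmax = 12.0             ! maximum hyperradius (bohr)
  mtr  = 150              ! log derivative propagation sectors
  enrg = 0.233            ! initial scattering energy (eV)
  dnrg = 0.001            ! scattering energy increment (eV)
  nnrg = 48               ! number of scattering energies
  nout = 0                ! max v for which output is required
  jout = 0                ! max j for which output is required
/
```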
in the first line of the Table can be integer (the program adds more significant
figures automatically) and if no PES is specified a default one (the SW
surface of [11]) is chosen for the calculations. The parameter jpar is not
used for the F + HD reaction, since it has a meaning only for the symmetric
A + B2 reactions. Similarly, the parameter ipar, which has a meaning only
for the totally symmetric reaction A + A2, is not used.
parameter kmax is also not used when J=0 since it has a meaning only for
J > 0. The value of kmax given in the table is the one necessary to calculate
converged integral and differential cross sections for the F + HD (ν = 0,
j = 0) reaction at collision energies slightly higher than those considered
here [12], whereas emax and jmax represent the upper limits considered for
the total energy and the diatomic rotational quantum number. Due to the
high number of values of the scattering energy (nnrg) allowed for the calculation
by the exploitation of concurrent computing on the Grid, we obtain a rather
long output file whose key information is summarized in Figs. 1 and 2.
An important aspect extracted from the many details of the calculations is
the evidence that, as already pointed out in ref. [10], at an energy of 0.254
eV a pronounced quantum mechanical resonance takes place when F + HD
reacts (see the upper panel of Fig. 1 where state-to-state probabilities for
the two possible products are plotted as a function of the total energy E).
The figure shows a high resonance peak (more than 0.5) only for the ν = 0
to ν′ = 2 transition of the process leading to HF (see left-hand side panels of
Fig. 1). On the contrary, no resonant peaks are shown by the state-to-state
probabilities of the process leading to DF (right-hand side panels of Fig. 1).
The product rotational distribution associated with the resonant transition is
remarkably broad, as shown in Fig. 2.
4 The P-GRADE Grid implementation of ABC
As a next step ABC was implemented on the COMPCHEM User Interface
(ui.grid.unipg.it) using the P-GRADE Grid Portal [5]. P-GRADE (release
2.7) provides graphical tools and services supporting Grid application de-
velopers in porting legacy programs onto the Grid without necessarily re-engineering or modifying the code. It enables the user, in fact, to define a parameter
study application structure in a graphical environment and, based on this de-
scription, it generates the Grid specific scripts and commands which actually
carry out the execution on the distributed Grid platform. The generic struc-
ture of the P-GRADE Grid Portal application is a workflow that integrates
batch programs into a directed acyclic graph by connecting them together
Figure 1: State-to-state reaction probabilities for the F + HD(ν = j = 0) → HF(ν′) + D (left-hand side panels) and DF(ν′) + H (right-hand side panels) reactions calculated on the SW potential energy surface at J = 0.
with file channels. A batch component can be any executable code which is
binary compatible with the underlying Grid resources (typically with Globus and EGEE clusters). A file channel defines directed data flows between two
batch components and specifies that the output file of the source program
must be used as the input file of the target program. The workflow manager
subsystem of P-GRADE resolves such dependencies during its execution by
transferring and renaming files. A screen-shot of the P-GRADE Grid Portal
implemented on the COMPCHEM user interface and accessible to the user after
the login procedure, is shown in Fig. 3 for four testing workflows.
The workflow developed for that purpose is shown in the left-hand side
panel of Fig. 4. As shown by the figure the workflow is articulated in three
different components represented as large boxes named Generator, Executor
and Collector. For the case study discussed here the convergence
checks with the increase of the number of rotational states and with the size
of the hyperradius are analyzed. In our study various copies of the ABC
program having different maximum rotational quantum number (jmax) and
maximum hyperradius (rmax) values (see the right-hand side panel of Fig. 4)
were executed in a typical parameter study fashion to check for convergence.
In this case the Generator produces all the necessary permutations of the jmax and rmax values
and stores them in the input files. These files are then used as input data
when they are staged to Grid resources and the sequential ABC code (implemented as a single Fortran 90 executable) runs. In the left-hand side region
of Figure 4 the small boxes placed beside the components represent the
input/output files prepared for the Fortran program by the workflow
manager of P-GRADE which are used/produced by the Fortran code and
are then transferred to the EGEE Computing Element. This makes the ex-
ecutable need neither to know anything about the Grid configuration nor to
be modified. Finally, the third component, the Collector, is responsible for
collecting the results of the parameter study, analyzing them and creating a
typical user friendly filtered result (to be considered as the final result of the
simulation) without carrying out any post-processing. It simply collects the results from the ABC jobs and compresses the files into a single archive file
that can be downloaded by the user through the Portal web interface. The
purpose of this step is, in fact, to make the access to results more convenient
for the end users.
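The Generator and Collector stages lend themselves to a compact illustration. The Python sketch below writes one input file per (jmax, rmax) permutation from a template and packs the corresponding files into a single archive; the file names, template placeholders and directory layout are all assumptions for illustration, not the actual P-GRADE staging conventions:

```python
import itertools
import pathlib
import tarfile

def generate_inputs(template, jmax_values, rmax_values, workdir="inputs"):
    """Toy Generator stage: one input file per (jmax, rmax) permutation,
    obtained by substituting placeholders in a template string."""
    out = pathlib.Path(workdir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for jmax, rmax in itertools.product(jmax_values, rmax_values):
        path = out / f"abc_jmax{jmax}_rmax{rmax}.d"
        path.write_text(template.format(jmax=jmax, rmax=rmax))
        paths.append(path)
    return paths

def collect_outputs(paths, archive="results.tar.gz"):
    """Toy Collector stage: compress all result files into one archive
    that the user can download in a single operation."""
    with tarfile.open(archive, "w:gz") as tar:
        for path in paths:
            tar.add(path)
    return archive
```

The Executor stage sits between the two: each generated file is staged to a Grid resource and consumed by an unmodified ABC run, which is exactly what frees the executable from any knowledge of the Grid configuration.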
In our case, once the workflow structure was defined and the executable and
input components for the workflow nodes were produced, the ABC code was
run with multiple input data sets on the section of the production EGEE
Grid infrastructure available to the COMPCHEM VO. Using the Workflow
Manager window of P-GRADE (Fig. 5), the user was able to perform all
the actions related to the chosen workflow (submission, abortion, resuming)
and monitor the status of the job with the possibility of checking the log file
produced at every step in case of error. From the same window the user is
also able to directly download the results coming from the calculations by
pressing the green button located under the Output field.
Figure 3: A screen-shot of the fully functional P-GRADE Grid Portal 2.7 installed onthe COMPCHEM user interface.
5 A performance analysis
To provide the students of the school with a case study to evaluate the
performance of the P-GRADE Grid procedure implemented for ABC, the input generation, the concurrent run of the code on several Grid resources,
and the output collection were executed for the already mentioned bench
system. Since the execution time of both the generator and the collector
stages proved to be negligible compared with that of the ABC code, the
discussion will be confined to the latter. The execution of the ABC program
on a single Intel Pentium 4 machine with 3.4 GHz CPU and 1 Gbyte memory
Figure 4: A screen-shot of the workflow prototype components generated by the auto-matic Generator of P-GRADE.
Figure 5: A screen-shot of the Workflow Manager window of the P-GRADE Grid Portal.
takes from 3 to 6 hours, depending on the chosen values of jmax and rmax.
When using the EGEE Grid platform one has to add to it an approximately
even amount of time for the job to queue on the Grid resource. This means
that the average execution time of an ABC job on the Grid is about twice
as long as the one on a dedicated local machine, which is equivalent to saying
that the Grid execution of a job totals an average of about 5 hours per
job. This also means that the break even between a local single machine
and a concurrent Grid execution is reached as soon as there are at least 2
ABC jobs in a simulation, and that the throughput gain rapidly
approaches the number of available machines provided that the variability
of the parameter on which the distribution is carried out is large enough to
accommodate an equal number of jobs.
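The break even argument of this paragraph can be made explicit with a back-of-the-envelope model. The figures below are the rough ones quoted in the text (about 2.5 hours of computation per job, doubled by queueing to the quoted 5 hours on the Grid); the model is a sketch, not a measured performance curve:

```python
import math

def grid_speedup(n_jobs, n_machines, t_local=2.5, queue_factor=2.0):
    """Rough throughput model for the parameter sweep.

    Each Grid job is assumed to take queue_factor * t_local hours in total
    (computation plus a comparable queueing time), with up to n_machines
    jobs running concurrently; t_local is the purely local execution time.
    """
    t_serial = n_jobs * t_local                # one dedicated local machine
    waves = math.ceil(n_jobs / n_machines)     # batches of concurrent jobs
    t_grid = waves * queue_factor * t_local    # Grid wall-clock time
    return t_serial / t_grid
```

With these assumptions two jobs already reach break even (speedup 1), and for a sweep as large as the number of available machines the gain grows linearly with the machine count.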
As a matter of fact it has to be pointed out here that the selection of the
DO LOOP to distribute is a critical choice for the exploitation of concur-
rency. It is therefore apparent that the choice made in the above discussed
bench run does not fully exploit the potential of the ABC concurrency
since it is clearly targeted to convergence studies (usually performed once
and for all at the beginning of an investigation) and not to production runs and
related massive computational campaigns. In production runs, in fact, it is
more appropriate to exploit, for example, the concurrency achieved when it-
erating on the collision energy enrg values. For this purpose it is, therefore,
important to wrap ABC in a way that the concurrency on this parameter is
exploited.
6 The improved rendering of ABC results
In a Grid approach in which most of the computational effort is distributed
on the net it is appropriate to equip the user machine for a more user
friendly rendering of the results. For this purpose, as already mentioned, in
collaboration with CESGA [13] additional efforts were spent to improve the
rendering of the ABC results. The efforts concentrated on the construction
of a prototype web portal able to analyze the ABC output files which are
arranged so as to be supported by the P-GRADE system. For this purpose
a demo portlet prepared for GridSphere (the same technology on which P-GRADE is based) was adapted in order to enable the drawing of 2D-Graph
rendering of the Reaction Probabilities for each individual output file and
was deployed on P-GRADE. The assembled prototype provides the users
with the option of plotting the reaction probabilities as a function of both
the energy and the quantum states. A typical output of this type for
the bench system is given in Fig. 6. There, the F + HD reactive probability
is plotted as a function of the collision energy for a given rotational state. The implemented portlet has been written entirely in Java and makes use
of external bash scripts in order to:
- list the workflows present on the user's home
- unpack and list the packed file which contains all the output files
for a single workflow
- extract all the needed data from the selected output file and process
them in order to obtain a data format usable for the visualization.
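The data extraction step of the last bullet can be illustrated with a short parser. The line format assumed below ("E= … v= … j= … P= …") is purely hypothetical, since the actual ABC output layout is fixed by the Fortran code; only the filtering logic is the point:

```python
import re

# Hypothetical line layout: "E= <energy>  v= <v>  j= <j>  P= <probability>"
LINE = re.compile(r"E=\s*(\S+)\s+v=\s*(\d+)\s+j=\s*(\d+)\s+P=\s*(\S+)")

def extract_probabilities(text, v_out, j_out):
    """Pull the (energy, probability) pairs of one product state out of an
    ABC-like output, producing data ready for a 2D plot."""
    pairs = []
    for line in text.splitlines():
        m = LINE.search(line)
        if m and int(m.group(2)) == v_out and int(m.group(3)) == j_out:
            pairs.append((float(m.group(1)), float(m.group(4))))
    return pairs
```

Processing the file server-side in this way is what lets the portlet render the 2D-Graph without the user downloading the full output archive.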
The External Portlet is shown in Fig. 6. After a single workflow has been
completed, the user can go to the External Portlets, upload the workflow
list with the related button and copy and paste the selected one into the
first blank space before pressing the Job list button. After that the job
list appears on the screen and the user needs to repeat the copy-and-paste
procedure, choosing a job output and putting it into the second blank space. At
this point the workflow and the related job have been selected and the 2D-
Graph can be rendered. In this way the final user can compare the Reaction
Probabilities for a selected atom-diatom reaction (with no need to download
all the output files which remain on the server) and evaluate the possible
strategies for a new calculation.
7 Conclusions
The study has tackled the problem of implementing on the EGEE Grid
environment a time independent quantum reactive scattering application
for the study of the reactive properties of the F + HD reaction. This has
been carried out by porting the ABC atom-diatom time independent quantum
reactive scattering code onto the Grid and by benchmarking it through massive
calculations. This has been made possible by a collaboration of our De-
partment with both the Application Porting group of EGEE and CESGA.
As a result, extended measurements of the performance of the P-GRADE
portal when carrying out massive calculations of the detailed probabilities
of atom-diatom reactions were made and the friendliness of the rendering
of the calculated results was improved. This has allowed us to show that
although the competition for job slots on the EGEE Grid can make the ex-
ecution of a single ABC run twice as slow with respect to that measured on
Figure 6: A screen-shot of the new Extra Portlet window of the P-GRADE Grid Portal in which the Reaction Probabilities 2D-Graph is rendered.
a local machine, the overall execution time rapidly approaches the number
of the computing elements used. At the same time it has been shown that the
consequent relief of the user machine can be usefully exploited to improve
the friendliness of the results rendered.
8 Acknowledgments
This work makes use of results produced by the EGEE-III project (contract
number IST-2003-508833). The work exploits also the results of research
projects funded by ESA (contract number 21790/08/NL/HE), MIUR, CNR
and COST in Chemistry.
References
[1] EGEE website: http://public.eu-egee.org
[2] COMPCHEM website: http://compchem.unipg.it
[3] QDYN is the working group n. 2 of the CMST COST Action D37:
http://www.cost.esf.org/index.php?id=189&action number=D37
[4] D. Skouteris, J.F. Castillo and D.E. Manolopoulos, (2000). ABC: a quan-
tum reactive scattering program. Comp. Phys. Comm. 133 128-135
[5] P-GRADE Grid Portal: http://portal.p-grade.hu
[6] G. Sipos and P. Kacsuk, (2006). Multi-Grid, Multi-User Workflows in
the P-GRADE Portal. Journal of Grid Computing, Vol. 3, No. 3-4,
Kluwer Academic Publishers, pp. 221-238
[7] GASuC (Grid Application Support Centre):
http://www.lpds.sztaki.hu/gasuc
[8] gLite website: http://glite.web.cern.ch/glite
[9] G.C. Schatz, (1988). Quantum reactive scattering using hyperspherical
coordinates: results for H + H2 and Cl + HCl. Chem. Phys. Lett. 150 92-98
[10] R.T. Skodje, D. Skouteris, D.E. Manolopoulos, S.-H. Lee, F. Dong and
K. Liu, (2000). Resonance-mediated chemical reaction: F + HD → HF
+ D. J. Chem. Phys. 112 4536-4552
[11] K. Stark and H.-J. Werner, (1996). An accurate multireference config-
uration interaction (MRCI) calculation of the potential energy surface
for the F + H2 → HF + H reaction. J. Chem. Phys. 104 6515
[12] J.F. Castillo and D.E. Manolopoulos, (1998). Quantum mechanical angu-
lar distributions for the F + HD reaction. Faraday Discuss. Chem. Soc. 110 119-138
[13] http://www.cesga.es