8/10/2019 Datagrid Management Portal (Synopsis)
DATA GRID MANAGEMENT PORTAL
ABSTRACT
Aim of the Project
The goal of this project is to design a portal for managing access to several databases on a grid through a single sign-on facility. The user, typically a scientist interested in computations that depend on data from several data sources, will access these databases from a web browser. The system should use grid computing standards to fetch data from the different data sources.
Description
The portal should be able to support multiple users, including an administrator who can add or remove data sources. Each user will have associated privileges that determine the set of data sources that are accessible to that user.
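The per-user privilege model described above can be sketched as a simple access-control map. This is an illustrative sketch only; the class and method names (`DataSourceAcl`, `grant`, `mayAccess`) are invented for the example and are not from the project code.

```java
import java.util.*;

// Hypothetical sketch: each user is mapped to the set of data-source
// identifiers they may query, and every access is checked against that set.
public class DataSourceAcl {
    private final Map<String, Set<String>> grants = new HashMap<>();

    // Called by the administrator when granting a user access to a source.
    public void grant(String user, String dataSource) {
        grants.computeIfAbsent(user, u -> new HashSet<>()).add(dataSource);
    }

    public void revoke(String user, String dataSource) {
        grants.getOrDefault(user, Collections.emptySet()).remove(dataSource);
    }

    // A query is dispatched to a data source only if this returns true.
    public boolean mayAccess(String user, String dataSource) {
        return grants.getOrDefault(user, Collections.emptySet()).contains(dataSource);
    }
}
```

In a real deployment the grant table would live behind the single sign-on service rather than in memory.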
Data Grids are being built across the world as the next generation of data handling systems to manage terabytes of inter-organizational data and storage space. A data grid (datagrid) is a logical name space consisting of storage resources and digital entities that is created by the cooperation of autonomous organizations and their users, based on the coordination of local and global policies. Data Grid Management Systems (DGMSs) provide services for the confluence of organizations and the management of inter-organizational data and resources in the datagrid.
The objective of the portal is to provide an introduction to the opportunities and challenges of this emerging technology. Novices and experts alike would benefit from this portal.
The portal would cover an introduction, use cases, design philosophies, architecture, research issues, existing technologies and demonstrations. Hands-on sessions for the participants to use and get a feel for the existing technologies could be provided, subject to the availability of internet connections.
Currently, Grid Computing is applied to solve distributed, large-scale, and collaborative problems in many domains such as High Energy Physics, Bio-Informatics, Virtual Observation, Environmental Science, etc. Weather forecasting is another such domain where Grid Computing's advantages show at their best.
TECHNOLOGIES
Software Requirements
Web Technologies: ASP.NET 2.0
Languages: C#.NET, ASP.NET
Database: SQL Server 2005
Operating System: Windows XP
Scripting: JavaScript
GUI Tools: VS.NET 2005
Hardware Requirements
Processor: Intel Pentium, 400 MHz or above
RAM: SDRAM
Hard Disk: 40 GB or above
DATA GRID MANAGEMENT PORTAL
Introduction
The EU DataGrid project is now in its third and final year. Within the data management work package we have developed a second generation of data management services that will be deployed in EDG release 2. Our first generation replication tools (GDMP, edg-replica-manager etc.) provided a very good base and input. The experience we gained with the first generation of tools (mainly written in C++) is used directly in the second generation of data management services, which are based on web service technologies and mainly implemented in Java.
The basic design concepts in the second generation services are as follows:
Modularity: The design needs to be modular and allow for easy plug-ins and future extensions. In addition, we should use generally agreed standards and not rely on vendor-specific solutions.
Evolution: Since OGSA is an upcoming standard that is most likely to be adopted by several Grid services in the future, the design should allow for an evolution towards OGSA, and it is also advisable to use a similar technology.
In addition, the design should be independent of the underlying operating system as well as of the relational database management system used by our services.
Having implemented the first generation tools mainly in C++, the technology choices for the second generation services presented in this article are as follows:
Java based servers are used that host web services.
Interface definitions are written in WSDL.
Client stubs are provided for several programming languages.
Persistent service data is stored in a relational database management system; we mainly use MySQL for general services that require open source technology and Oracle for more robust services.
The entire set of data management services consists of the following parts:
Replication Service Framework: This service framework is the main part of our data management services. It basically consists of an overall replica management system that uses several other services such as the Replica Location Service, the Replica Optimization Service etc.
SQL Database Service (Spitfire): Spitfire provides a means to access relational databases from the Grid.
Java Security Package: All of our services have very strict security requirements. The Java security package provides tools that can be used in Grid services such as our replication services.
All these components are discussed in detail in the following sections, which thus also outline the organization of the paper.
Replication Service Framework "Reptor"
In the following section we first give an architectural overview of the entire replication framework and then discuss the individual services (Replica Location Service, Replica Optimization Service etc.) in more detail.
General Overview: Figure 1 presents the user's perspective of the main components of a replica management system, for which we have given the code-name 'Reptor'. This design represents an evolution of the original design. Several of the components have already been implemented and tested in EDG (see shaded components) whereas others (in white) are still in the design phase and might be implemented in the future. Reptor has been realized as a modular system that allows easy plugging-in of third party components. Reptor defines the minimal interface third party components have to provide. According to this design, the entire framework is fronted by the Replica Management Service, which acts as a logical single entry point to the system and interacts with the other components of the system as follows:
The Core module provides the main functionality of replica management, namely replica creation, deletion, and cataloging, by interacting with third party modules such as transport and replica and metadata catalog services.
The goal of the Optimization component (implemented as a service) is to minimize file access times by pointing access requests to appropriate replicas and pro-actively replicating frequently used files based on gathered access statistics.
The Security module manages the required user authentication and authorization; in particular, issues pertaining to whether a user is allowed to create, delete, read, and write a file.
Collections are defined as sets of logical filenames and other collections.
The Consistency module maintains consistency between all replicas of a given file, as well as between the meta information stored in the various catalogs.
The Session component provides generic checkpointing, restart, and rollback mechanisms to add fault tolerance to the system.
The Subscription service allows for a publish-subscribe model for replica creation.
We decided to implement the Replica Management Service and the core module functionality on the client side in the Replica Manager Client, henceforth referred to as the Replica Manager. The other sub-services and APIs are modules and services in their own right, allowing for a multitude of deployment scenarios in a distributed environment. One advantage of such a design is that if a sub-service is unavailable, the Replica Manager can still provide all the functionality that does not make use of that particular service. Also, critical service components may have more than one instance to provide a higher level of availability and to avoid service bottlenecks. A detailed description of the implemented components and services can be found in the following subsections as well as in the original design.
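The graceful-degradation idea above can be illustrated with a small sketch: if an optional sub-service (here, replica optimization) is unreachable, the Replica Manager falls back to functionality that does not depend on it. All names are hypothetical and not the actual EDG interfaces.

```java
// Illustrative sketch of fault tolerance toward unavailable sub-services:
// the Replica Manager still answers when the optimization service is down,
// falling back to the first known replica.
public class ReplicaManagerSketch {
    interface OptimizationService {
        // May throw at call time if the remote service is unreachable.
        String bestReplica(java.util.List<String> replicas);
    }

    public static String selectReplica(java.util.List<String> replicas,
                                       OptimizationService opt) {
        try {
            return opt.bestReplica(replicas);
        } catch (RuntimeException serviceUnavailable) {
            // Degraded mode: replica access keeps working without optimization.
            return replicas.get(0);
        }
    }
}
```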
Interaction with Services
The Replica Manager needs to interact with many external services as well as internal ones, such as the Information Service and transport mechanisms like GridFTP servers. Most of the components required by the Replica Manager are independent services, hence appropriate client stubs satisfying the interface need to be provided by the service. These stubs can be specified, and Java dynamic class loading features are exploited for making them available at execution time.
Figure 1: Reptor's main design components.
To date, the Replica Manager has been tested using the following components:
Replica Location Service (RLS): used for locating replicas in the Grid and assigning physical file names.
Replica Metadata Catalog (RMC): used for querying and assigning logical file names.
Replica Optimization Service (ROS): used for locating the best replica to access.
R-GMA: an information service provided by EDG. The Replica Manager uses R-GMA to obtain information about Storage and Computing Elements.
Globus based libraries as well as CoG, providing GridFTP transport functionality.
The EDG network monitoring services: EDG (in particular WP7) provides these services to obtain statistics and network characteristics.
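The dynamic, configuration-driven loading of such components can be sketched with plain Java class loading. The `PluginLoader` name is invented for the example; the mechanism (resolving a component by class name at execution time) is the one the text describes.

```java
// Minimal sketch of a plug-in mechanism based on Java dynamic class loading:
// a component implementation is named in configuration and instantiated at
// run time, so third-party implementations can be swapped in without
// recompiling the framework.
public class PluginLoader {
    public static Object load(String className) {
        try {
            return Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("cannot load component " + className, e);
        }
    }
}
```

A framework would additionally check that the loaded class implements the minimal interface Reptor defines for third-party components.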
The implementation is mainly done using the Java J2EE framework and associated web service technologies (the Apache Tomcat servlet container, Jakarta Axis, etc.).
The basic component interaction is given in Figure 2 and will also be explained in more detail in the following subsections. For the user, the main entry point to the Replication Services is the client interface, which is provided via a Java API as well as a command line interface, the edg-replica-manager module. For each of the main components in Figure 1, the Reptor framework provides the necessary interface. For instance, the functionality of the core module mainly covers the file copy and cataloging process and is handled in the client library with the respective calls to the Transport and Replica Catalog modules.
Replica Location Service (RLS)
The Replica Location Service (RLS) is the service responsible for maintaining a (possibly distributed) catalog of files registered in the Grid infrastructure. For each file there may exist several replicas. This is due to the need for geographically distributed copies of the same file, so that accesses from different points of the globe may be optimized (see the section on the Replica Optimization Service). Obviously, one needs to keep track of the scattered replicas, so that they can be located and consistently updated. As such, the RLS is designed to store one-to-many relationships between Grid Unique Identifiers (GUIDs) and Physical File Names (PFNs). Since many replicas of the same file may coexist (with different PFNs), we identify them as being replicas of the same file by assigning them the same unique identifier (the GUID).
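The one-to-many GUID-to-PFN relationship can be pictured as a simple multimap. This is a toy sketch, not the RLS API; the names `register` and `lookup` are invented.

```java
import java.util.*;

// Sketch of the one-to-many GUID -> PFN mapping the RLS maintains:
// one logical file (GUID) has many physical replicas (PFNs).
public class RlsCatalogSketch {
    private final Map<String, Set<String>> guidToPfns = new HashMap<>();

    public void register(String guid, String pfn) {
        guidToPfns.computeIfAbsent(guid, g -> new LinkedHashSet<>()).add(pfn);
    }

    // Returns all known replicas of the file identified by this GUID.
    public Set<String> lookup(String guid) {
        return guidToPfns.getOrDefault(guid, Collections.emptySet());
    }
}
```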
Figure 2: Interaction of the Replica Manager with other Grid components (Replica Optimization Service, Storage Element, Replica Location Service, Storage Element Monitor, Network Monitor, Replica Manager Client, Resource Broker, Information Service, User Interface).
The RLS architecture encompasses two logical components: the LRC (Local Replica Catalog) and the RLI (Replica Location Index). The LRC stores the mappings between GUIDs and PFNs on a per-site basis, whereas the RLI stores information on where mappings exist for a given GUID. In this way, it is possible to split the search for replicas of a given file in two steps: in the first, the RLI is consulted in order to determine which LRCs contain mappings for a given GUID; in the second, the specific LRCs are consulted in order to find the PFNs one is interested in. It is however worth mentioning that the LRC is implemented to work in standalone mode, meaning that it can act as a full RLS on its own if such a deployment architecture is necessary. When working in conjunction with one (or several) RLIs, the LRC provides periodic updates of the GUIDs it holds mappings for. These updates consist of Bloom filter objects, which are a very compact form of representing a set in order to support membership queries. The RLS currently has two possible database backends: MySQL and Oracle9i.
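The Bloom filter updates mentioned above rely on a simple property: membership tests may yield false positives but never false negatives, so the RLI can cheaply narrow down which LRCs to ask. Below is a toy sketch; the bit-array size and hash choices are arbitrary, not those used by the RLS.

```java
import java.util.BitSet;

// Toy Bloom filter illustrating the compact set representation an LRC sends
// to the RLI: "mightContain" can err toward true, never toward false.
public class BloomSketch {
    private final BitSet bits = new BitSet(1024);

    // Two cheap hash positions per GUID (illustrative only).
    private int[] positions(String guid) {
        int h1 = Math.abs(guid.hashCode() % 1024);
        int h2 = Math.abs((guid + "#salt").hashCode() % 1024);
        return new int[] { h1, h2 };
    }

    public void add(String guid) {
        for (int p : positions(guid)) bits.set(p);
    }

    // True means "this LRC may hold a mapping for the GUID"; the RLI then
    // consults that LRC to confirm and obtain the PFNs.
    public boolean mightContain(String guid) {
        for (int p : positions(guid)) if (!bits.get(p)) return false;
        return true;
    }
}
```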
Replica Metadata Catalog Service (RMC)
Despite the fact that the RLS already provides the necessary functionality for application clients, the GUID unique identifiers are difficult to read and remember. The Replica Metadata Catalog (RMC) can be considered as another layer of indirection on top of the RLS that provides mappings between Logical File Names (LFNs) and GUIDs. The LFNs are user-defined aliases for GUIDs; many LFNs may exist for one GUID. Furthermore, the RMC is also capable of holding metadata about the original physical file represented by the GUID (e.g. size, date of creation, owner). It is also possible for the user to define specific metadata and attach it to a GUID or to an LFN. The purpose of this mechanism is to provide to users and applications a way of querying the file catalog based on a wide range of attributes. The possibility of gathering LFNs into collections and manipulating these collections as a whole has already been envisaged, but is not yet implemented. As for the RLS, the RMC supports MySQL and Oracle9i as database backends.
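The extra layer of indirection the RMC adds can be sketched as follows: many LFN aliases map to one GUID, and attributes attached to a GUID support attribute-based queries. All names here are invented for the illustration, not the real RMC interface.

```java
import java.util.*;

// Sketch of the RMC idea: LFN -> GUID aliasing plus per-GUID metadata,
// queryable by attribute.
public class RmcSketch {
    private final Map<String, String> lfnToGuid = new HashMap<>();
    private final Map<String, Map<String, String>> metadata = new HashMap<>();

    public void addAlias(String lfn, String guid) { lfnToGuid.put(lfn, guid); }

    public String guidOf(String lfn) { return lfnToGuid.get(lfn); }

    public void setAttribute(String guid, String key, String value) {
        metadata.computeIfAbsent(guid, g -> new HashMap<>()).put(key, value);
    }

    // Attribute-based catalog query: every LFN whose GUID carries key=value.
    public List<String> findByAttribute(String key, String value) {
        List<String> hits = new ArrayList<>();
        for (Map.Entry<String, String> e : lfnToGuid.entrySet()) {
            Map<String, String> attrs =
                metadata.getOrDefault(e.getValue(), Collections.emptyMap());
            if (value.equals(attrs.get(key))) hits.add(e.getKey());
        }
        return hits;
    }
}
```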
Replica Optimization Service (ROS)
The goal of the optimization service is to select the best replica with respect to network and storage access latencies. It is implemented as a light-weight web service that gathers information from the EDG network monitoring service and the EDG storage element service about the respective data access latencies. We defined the APIs getNetworkCosts and getSECosts for the interactions of the Replica Manager with the Network Monitoring service and the Storage Element Monitor. These two components monitor the network traffic and the access traffic to the storage device respectively, and calculate the expected transfer time of a given file with a specific size.
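The cost model just described can be reduced to a sketch: the expected transfer time is the file size divided by the measured bandwidth toward each replica's site, and the replica with the lowest expected time wins. The method name and the single-bandwidth simplification of getNetworkCosts/getSECosts are assumptions made for the example; all numbers are invented.

```java
import java.util.*;

// Sketch of replica selection by expected transfer time:
// expectedSeconds = fileSizeMb / bandwidthMbPerSec (per candidate site).
public class RosSketch {
    public static String bestReplica(Map<String, String> replicaToSite,
                                     Map<String, Double> siteBandwidthMbs,
                                     double fileSizeMb) {
        String best = null;
        double bestTime = Double.MAX_VALUE;
        for (Map.Entry<String, String> e : replicaToSite.entrySet()) {
            // Unknown sites get a pessimistic default bandwidth.
            double bw = siteBandwidthMbs.getOrDefault(e.getValue(), 0.01);
            double expectedSeconds = fileSizeMb / bw;
            if (expectedSeconds < bestTime) {
                bestTime = expectedSeconds;
                best = e.getKey();
            }
        }
        return best;
    }
}
```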
In the EU DataGrid Project, Grid resources are managed by the meta-scheduler of WP1, the Resource Broker. One of the goals of the Resource Broker is to decide on which Computing Element the jobs should be run such that the throughput of all jobs is maximized. Assuming highly data intensive jobs, a typical optimization strategy could be to select the least loaded resource with the maximum amount of locally available data. We introduced the Replica Manager API getAccessCost, which returns the access costs of a specific job for each candidate Computing Element. The Resource Broker can then use this information provided by the Replica Manager to schedule each job to its optimal resources. The interaction of the Replica Manager with the Resource Broker, the Network Monitor and the Storage Element Monitor is depicted in Figure 2.
Rep%ic S")scription Ser&ice
The 9eplica &ubscription &ervice #9&&$ provides automatic replication
based on a subscription model. The basic design is based on our )rst
generation replication tool GD% #Grid Data %irroring acBage$
S(L Dt)se Ser&ice# Spit+re
&pit)re provides a means to access relational databases from the Grid.
This service has been provided by our worB pacBage for some time and
was our )rst service that used the web service paradigm. Thus, we
give more details about its implementation in since many of the
technology choices for the replication services eplained in the
previous section are based on choices also made for &pit)re.
Spitfire Overview
The SQL Database service (named Spitfire) permits convenient and secure storage, retrieval and querying of data held in any local or remote RDBMS. The service is optimized for metadata storage. The primary SQL Database service has been re-architected into a standard web service. This provides a platform and language independent way of accessing the information held by the service. The service exposes a standard interface in WSDL format, from which client stubs can be built in most common programming languages, allowing a user application to invoke the remote service directly. The interface provides the common SQL operations to work with the data. Pre-built client stubs exist for the Java, C and C++ programming languages. The service itself has been tested with the MySQL and Oracle databases. The earlier SQL Database service was primarily accessed via a web browser (or command line) using pre-defined server-side templates. This functionality, while less flexible than the full web services interface, was found to be very useful for web portals, providing a standardized view of the data. It has therefore been retained and re-factored into a separate SQL Database browser module.
Component Description and Details about the Web Service Design
There are three main components to the SQL Database service: the primary server component, the client component(s), and the browser component. Applications that have been linked to the SQL Database client library communicate with a remote instance of the server. This server is put in front of an RDBMS (e.g. MySQL), and securely mediates all Grid access to that database.
The browser is a standalone web portal that is also placed in front of an RDBMS. The server is a fully compliant web service implemented in Java. It runs on Apache Axis inside a Java servlet engine (currently we use the Java reference servlet engine, Tomcat, from the Apache Jakarta project). The service mediates the access to an RDBMS that must be installed independently from the service. The service is reasonably non-intrusive, and can be installed in front of a pre-existing RDBMS. The local database administrator retains full control of the database back-end, with only limited administration rights being exposed to properly authorized grid users. The web services client, at its most basic, consists of a WSDL service description that fully describes the interface. Using this WSDL description, client stubs can be generated automatically in the programming language of choice. We provide pre-built client stubs for the Java, C and C++ programming languages. These are packaged as Java JAR files and as static libraries for Java and C.
The browser component provides the functionality of the previous version of the SQL Database service. It does not depend on the other components and can be used from any web browser. The browser component is implemented as a Java servlet. In the case where it is installed together with the primary service, it is envisaged that both services will be installed inside the same servlet engine.
The design of the primary service is similar to that of the prototype Remote Procedure Call Grid Data Service standard and indeed influenced the design of that standard. It is expected that the SQL Database service will eventually evolve into a prototype implementation of the RPC part of this GGF standard. However, to maximize the usability and portability of the service, we chose to implement it as a plain web service, rather than just an OGSA service.
The architecture of the service has been designed so that it will be trivial to implement the OGSA specification at a later date. The communication between the client and server components is over the HTTP(S) protocol. This maximizes the portability of the service, since this protocol has many pre-existing implementations that have been heavily tested and are now very robust.
The data format is XML, with the request being wrapped as a standard SOAP Remote Procedure Call. The interface is designed around the SQL query language. The communication between the user's web browser and the SQL Database Browser service is also over HTTP(S). The server and browser components (and parts of the Java client stub) make use of the common Java Security module. The secure connection is made over HTTPS (HTTP with SSL or TLS). Both the server and the browser have a service certificate (they can optionally make use of the system's host certificate), signed by an appropriate CA, which they can use to authenticate themselves to the client. The client uses their GSI proxy to authenticate themselves to the service. The user of the browser service should load their GSI certificate into the web browser, which will then use it to authenticate the user to the browser. A basic authorization scheme is defined by default for the SQL Database service, providing administrative and standard user functionality. The authorization is performed using the subject name of the user's certificate (or a regular expression matching it). The service administrator can define a more complex authorization scheme if necessary, as described in the security module documentation.
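To make the wire format above concrete, the sketch below builds the rough shape of an XML request wrapping an SQL query as a SOAP-style RPC. The element names (`executeQuery`, `sql`) are invented for the illustration and are not Spitfire's actual schema; only the envelope namespace is the standard SOAP 1.1 one.

```java
// Illustrative only: an XML/SOAP RPC request carrying an SQL query.
public class SoapEnvelopeSketch {
    public static String sqlRequest(String sql) {
        // Escape '&' before '<' so already-escaped text is not double-mangled.
        String escaped = sql.replace("&", "&amp;").replace("<", "&lt;");
        return "<?xml version=\"1.0\"?>"
             + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body><executeQuery>"
             + "<sql>" + escaped + "</sql>"
             + "</executeQuery></soap:Body></soap:Envelope>";
    }
}
```

In practice the client never assembles such envelopes by hand; the stubs generated from the WSDL do it.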
Sec"rit.
The EDG ?ava security pacBage covers two main security areas,
authentication authori!ation. "uthentication assures that the entity
#user, service or server$ at the other end of the connection is who it
claims to be. "uthori!ation decides what the entity is allowed to do.
The aim in the security pacBage is always to maBe the software as
'eible as possible and to taBe into account the needs of both EDG and
industry to maBe the software usable everywhere. To this end there has
been some research into similarities and possibilities for cooperation
with for eample 6iberty "lliance, which is a consortium developing
standards and solutions for federated identity for web based
authentication, authori!ation and payment.
A"thentiction
The authentication mechanism is an etension of the normal ?ava &&6
authentication mechanism. The mutual authentication in &&6 happens
by echanging public certi)cates that are signed by trusted certi)cate
authorities #+"$. The user and the server prove that they are the
owners of the certi)cate by proving in cryptographic means that they
have the private Bey that matches with the certi)cate. /n Grids the
8/10/2019 Datagrid Mganagement Portal (Synopsis)
16/26
authentication is done using G&/ proy certi)cates that are derived
from the user certi)cate. This proy certi)cate comes close to ful)lling
the reuirement for valid certi)cate chain, but does not fully follow the
standard. This causes the &&6 handshaBe to fail in the conforming
mechanisms. Ior the G&/ proy authentication to worB the &&6
implementation has to be nonstandard or needs to be changed to
accept them. The EDG ?ava security pacBage etends the ?ava &&6
pacBage.
/t accepts the G&/ proies as the authentication method supports G&/
proy loading with periodical reloading
supports 1pen&&6 certi)cate-private Bey pair loading
supports +96s with periodical reloading
integrates with Tomcat
integrates with ?aBarta "is &1" frameworB
GSI proxy support is done by finding the user certificate and making special allowances and restrictions for the proxy certificates that follow it. The allowance is that the proxy certificate does not have to be signed by a CA. The restriction is that the distinguished name (DN) of the proxy certificate has to start with the DN of the user certificate (e.g. 'C=CH, O=cern, CN=John Doe'). This way the user cannot pretend to be someone else by making a proxy with the DN 'C=CH, O=cern, CN=Jane Doe'. Proxies are short lived, so the program using the SSL connection may still be running while the proxy is updated. For this reason the user credentials (for example the proxy certificate) can be made to reload periodically. OpenSSL saves the user credentials in two files, one for the user certificate and the other for the private key. With the EDG Java security package these credentials can be loaded easily. The CAs periodically release lists of revoked certificates in a certificate revocation list (CRL). The EDG Java security package supports this CRL mechanism: even while the program using the package is running, these lists can be periodically and automatically reloaded by setting the reload interval.
The integration with Jakarta Tomcat (a Java webserver and servlet container) is done with an interface class; to use it, only the Jakarta Tomcat configuration file has to be set up accordingly.
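The DN-prefix restriction described above reduces to a simple string check, sketched below. This is only the core rule in isolation; a real implementation validates the whole certificate chain cryptographically as well.

```java
// Sketch of the GSI proxy DN restriction: a proxy is acceptable only if its
// distinguished name starts with the DN of the user certificate it extends.
public class ProxyDnCheck {
    public static boolean proxyMatchesUser(String proxyDn, String userDn) {
        // e.g. userDn  = "C=CH, O=cern, CN=John Doe"
        //      proxyDn = "C=CH, O=cern, CN=John Doe, CN=proxy"
        return proxyDn.startsWith(userDn);
    }
}
```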
The ?aBarta "is &1" frameworB provides an easy way to change the
underlying &&6 socBet implementation on the client side. 1nly a simple
interface class was needed and to turn it on a system variable has to
be set while calling the ?ava program. /n the server side the integration
was even simpler as "is runs on top of Tomcat and Tomcat can be set
up as above. Due to issues of performance, many of the services
described in this document have euivalent clients written in +CC. To
this end, there are several +CC &1" clients that have been written
based on the &1" library. /n order to provide the same authentication
and authori!ation functionality as in the corresponding ?ava &1"
clients, an accompanying + library is being developed for &1". 2hen
ready, it is to provide support for mutual authentication between &1"
clients and &1" servers, support for the coarse-grained authori!ation
as implemented in the server end by the "uthori!ation %anager
#described below$ and veri)cation of both standard ;:5K and G&/ style
server and server proy certi)cates.
Coarse grained authorization
The EDG Java security package implements only coarse grained authorization. The coarse grained authorization decision is made in the server before the actual call to the service, and it can make decisions such as 'what kind of access does this user have to that database table' or 'what kind of access does this user have to the file system'. Fine grained authorization, which answers the question 'what kind of access does this user have to this file', can only be handled inside the service, because the actual file to access is only known during the execution of the service. The authorization mechanism is positioned in the server before the service. In the EDG Java security package the authorization is implemented as role based authorization.
Currently the authorization is done at the server end and the server authorizes the user, but there are plans for mutual authorization, where the client end also checks that the server end is authorized to perform the service or to store the data. Mutual authorization is especially important in the medical field, where medical data may only be stored on trusted servers. Role based authorization happens in two stages: first, the system checks that the user can play the role he requested (or whether there is a default role defined for him). The role the user is authorized to play is then mapped to a service specific attribute. The role definitions can be the same in all the services in the (virtual) organization, but the mapping from the role to the attribute is service specific. The service specific attribute can be, for example, a user id for file system access or a database connection id with preconfigured access rights. If either step fails, the user is not authorized to access the service using the role he requested. There are two modules to interface to the information flow between the client and the service: one for normal HTTP web traffic and the other for SOAP web services. The authorization mechanism can attach to other information flows by writing a simple interface module for them. In a similar fashion, the authorization information that is used to make the authorization decisions can be stored in several ways. For simple and small installations and for testing purposes the information can be a simple XML file.
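The two-stage decision described above can be sketched compactly: stage one checks that the user may play the requested role, stage two maps the role to a service-specific attribute. The class and method names are invented for the illustration; a null result stands for a refused request.

```java
import java.util.*;

// Sketch of two-stage role based authorization:
// (1) may this user play the requested role?
// (2) map the role to a service-specific attribute (e.g. a db connection id).
public class RoleAuthzSketch {
    private final Map<String, Set<String>> userRoles = new HashMap<>();
    private final Map<String, String> roleToAttribute = new HashMap<>();

    public void allowRole(String userDn, String role) {
        userRoles.computeIfAbsent(userDn, u -> new HashSet<>()).add(role);
    }

    public void mapRole(String role, String serviceAttribute) {
        roleToAttribute.put(role, serviceAttribute);
    }

    // Returns the service-specific attribute, or null if either stage fails.
    public String authorize(String userDn, String requestedRole) {
        if (!userRoles.getOrDefault(userDn, Collections.emptySet())
                      .contains(requestedRole)) {
            return null; // stage one failed: user may not play this role
        }
        return roleToAttribute.get(requestedRole); // stage two: mapping
    }
}
```

Note that the role grants could be shared across the whole virtual organization while each service keeps its own role-to-attribute mapping, exactly as the text describes.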
For larger installations the information can be stored in a database, and when the Globus tools are used to distribute the authorization information, the data is stored in a text file called the gridmap file. For each of these stores there is a module that handles the specifics of that store; to add a new way of storing the authorization information, only an interface module needs to be written. When the Virtual Organization Membership Service (VOMS) is used, the information provided by the VOMS server can be used for the authorization decisions; all the information from the VOMS is parsed and forwarded to the service.
Administration web interface
The authorization information usually ends up being rather complex, and maintaining it manually would be difficult, so a web based administration interface was created. This helps to understand the authorization configuration, eases remote management and, by making management easier, improves security.
Conc%"sions
The second generation of our data management services has been
designed and implemented based on the web service paradigm. /n this
way, we have a 'eible and etensible service frameworB and are thus
prepared to follow the general trend of the upcoming 1G&" standard
that is based on web service technology. &ince interoperability of
services seems to be a Bey feature in the upcoming years, we believe
that our approach used in the second generation of data management
is compatible with the need for service interoperability in a rapidly
changing Grid environment.
1ur design choices have been as follows we aim for supporting robust,
highly available commercial products as well as standard open source
technology #%y&86, Tomcat, etc.$.
MODULES:
Administrator
Organization Manager
User
Administrator:
An administrator has all the privileges of a guest as well as of a normal registered user. Along with these common features, an administrator has administrator-specific features such as creating new users and granting roles to those newly created users. The roles granted by the administrator cannot be changed by the user. An administrator can create a new user as a guest, as a user or as an administrator. The access levels are as per the grants made by the administrator.
An administrator can also be part of a team and could lead a project team; this is possible only if the administrator, when building a team, includes himself in the team section. If included as a manager, he is not a part of the team but a supervisor of the team.
The register option on the homepage of the application is provided only to register a new user as a guest.
ACCESS CONTROL FOR DATA WHICH REQUIRES USER AUTHENTICATION
The following commands specify access control identifiers and are typically used to authorize and authenticate the user (command codes are shown in parentheses).
USER NAME (USER)
The user identification is that which is required by the server for access to its file system. This command will normally be the first command transmitted by the user after the control connections are made (some servers may require this).
PASSWORD (PASS)
This command must be immediately preceded by the user name command and, for some sites, completes the user's identification for access control. Since password information is quite sensitive, it is desirable in general to "mask" it or suppress the typeout.
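The USER/PASS ordering rule above is a small state machine: PASS is only meaningful immediately after USER, and everything else requires a completed login. The sketch below uses FTP-style reply codes for flavor; the `LoginSequence` class and the stand-in password check are invented for the illustration.

```java
// Sketch of the access-control command sequence: PASS must immediately
// follow USER; other commands require an authenticated session.
public class LoginSequence {
    private String pendingUser = null;
    private boolean authenticated = false;

    public String command(String code, String arg) {
        if (code.equals("USER")) {
            pendingUser = arg;
            return "331 Password required";
        }
        if (code.equals("PASS")) {
            if (pendingUser == null) return "503 Bad sequence of commands";
            authenticated = checkPassword(pendingUser, arg);
            pendingUser = null; // PASS consumed the pending USER either way
            return authenticated ? "230 User logged in" : "530 Login incorrect";
        }
        return authenticated ? "200 OK" : "530 Please log in";
    }

    // Stand-in credential check for the sketch only.
    private boolean checkPassword(String user, String pass) {
        return "secret".equals(pass);
    }
}
```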
Users:
The user must be a registered user; unregistered users are not allowed access.
Application Architecture:
The base UI of the portal is created using ASP.NET web pages (.aspx files). Reusable UI widgets, such as the navigation menus, are implemented as ASP.NET user controls. User controls are also used to create dynamic page content, such as the list of most popular items.
The data for the portal resides in a SQL Server database and is accessed via stored procedures. The ADO.NET database code to access these stored procedures is then encapsulated in a component layer. The remote order-tracking facilities are implemented as ASP.NET web services.
0et"res
1nline administration of site layout, content and security.
8/10/2019 Datagrid Mganagement Portal (Synopsis)
22/26
9oles-based security for viewing content, editing content, and
administering the site.
This includes built-in "dministration pages for setting up your
portal, adding content, and setting security options.