Page 1: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

VIPBG LINUX CLUSTER

By

Helen Wang

March 29th, 2013

Page 2: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

Basic Beowulf Cluster Structure

Page 3: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

A brief look at our cluster

Page 4: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

VIPBG Beowulf Cluster

• Server name: light.vipbg.vcu.edu
• IP: 128.172.85.6
• 2nd server as failover: group.vipbg.vcu.edu
• IP: 128.172.85.5 (not visible during normal operation)
• For how to access and use the server, check the wiki page:
  https://wiki.vcu.edu/display/vipbgit/VIPBG+Cluster+System

Page 5: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

Access Cluster
What do you need to do to access the server?

• Get a username and password.
• Go to webvpn.vcu.edu to install the VCU WebVPN on your PC so you can access the cluster from anywhere.
• Change your password to one that meets the password requirements:
  $passwd
• Set up the necessary variables to customize your personal console templates:
  ~/.cshrc  ~/.login
  echo $PATH   - add search paths to your .cshrc file
• Make temp and bin directories under your home directory (see the sketch after this list):
  $mkdir tmp
  $mkdir bin
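
A minimal first-login sketch stringing these steps together, assuming a csh/tcsh login shell as implied by ~/.cshrc; the extra PATH entry for ~/bin is an example, not a required setting.

  # Change to a password that meets the requirements (prompts interactively)
  $passwd

  # Create personal temp and bin directories under the home directory
  $mkdir -p ~/tmp ~/bin

  # Check the current search path
  $echo $PATH

  # Add ~/bin to the search path in ~/.cshrc (csh/tcsh syntax), then reload it
  $echo 'setenv PATH ${PATH}:${HOME}/bin' >> ~/.cshrc
  $source ~/.cshrc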

Page 6: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

Access Cluster
Server and nodes

• Master node: light.vipbg.vcu.edu
• Running CentOS (Red Hat kernel), version 5.6, x86-64
• For open-source or other software downloads, choose 64-bit CentOS or RHEL 5 builds if possible

Purposes and policy: front-end user interface. Do not run jobs directly on the master; they will be terminated without contacting the user. Accessible from outside with permission and WebVPN.

Slave nodes (nodes):
• node22.cl.vcu.edu – node31.cl.vcu.edu (8-core Xeon processors with 32 GB RAM)
• Nodes 2-19: fast nodes (12-core Xeon processors with 98 GB RAM on each node)

Purposes and policy: computation. Not intended for direct interactive use; accessible via the master and managed by the Portable Batch System (PBS). Fast internal network (10.0.0.X), not accessible directly from outside.

Page 7: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

Access Cluster

• Nodes and queue configuration
  $qstat -q    ## gives you all the queues and their current running status

  server: light

  Queue            Memory  CPU Time  Walltime  Node  Run  Que  Lm  State
  ---------------- ------  --------  --------  ----  ---  ---  --  -----
  workq              --       --        --      --    37    0  --  E R
  serial             --       --        --      --    58    0  --  E R
  mxq                --       --        --      --     1    0  --  E R
  express            --       --        --      --     0    0  --  E R
  openmx             --       --        --      --     0    0  --  E R
  slowq              --       --        --      --     0    0  --  E R
                                                      ---  ---
                                                       96    0

  $pbsnodes -a | more    ## gives you detailed information on all queues and nodes, page by page (see the sketch below)
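
A small follow-up sketch for sizing up the cluster before submitting, assuming TORQUE/PBS-style pbsnodes output where each node reports a "state = ..." line; the grep patterns below are assumptions, not commands from the slides.

  # Count nodes currently reporting themselves as free
  $pbsnodes -a | grep -c "state = free"

  # List nodes that are down, offline, or unknown
  $pbsnodes -l

  # Show the summary line for a single queue (queue name is an example)
  $qstat -q | grep "^serial"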

Page 8: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

Access Cluster
• Nodes and queues

serial (default) – nodes assigned:
  Nodes 2-3: 8 cores, 24 GB RAM
  Nodes 19-15: 23 cores, 64 GB RAM

workq (dedicated to the converge project):
  Nodes 14-12: 12 cores, 64 GB RAM

openmx (dedicated to R OpenMx and parallel jobs):
  Nodes 11-9: 23 cores, 64 GB RAM

mxq (dedicated to traditional Mx jobs or other open-source jobs, such as PLINK):
  Nodes 6-5

Floating nodes: node4, node7, node8 – currently assigned to workq

Page 9: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

Accessing Cluster
Software available on master and nodes
• R 2.15.2 with CRAN and Bioconductor libraries
• C/C++ compilers (gcc/g++), Fortran compilers (f77/f90)
• Python with Biopython
• Open-source software needed by users, installed upon request
• SAS 9.3 on all nodes
• PLINK
• OpenMx
• IMPUTE2, SAMtools, GTOOL, and other open-source tools as requested
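
A quick hedged check of what is actually installed on the node you land on; the binary names below are the usual defaults and may differ on this cluster.

  $R --version                # R release
  $which plink sas            # locations of PLINK and SAS, if they are on the PATH
  $gfortran --version         # or f77/f90, depending on the installed Fortran compiler
  $python -c "import Bio; print Bio.__version__"   # Biopython version (Python 2 syntax)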

Page 10: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

Commands to be used on cluster

• Submitting R jobs on the normal queue:
  $qR MYSCRIPT    (if the script name is MYSCRIPT.R, submit it without the .R extension)
  Each user is allowed to run 50 jobs simultaneously.

• Submitting jobs on the large-memory queue (see the sketch after this list):
  The large-memory queue is on node1 for memory-intensive jobs (limited to 8 in total).
  $qRL MYSCRIPT
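
qR and qRL are local wrapper scripts; their internals are not shown in these slides. A minimal sketch of what such a wrapper typically does, assuming it wraps R CMD BATCH in a generated PBS job; the queue name and file layout are assumptions.

  #!/bin/bash
  # Hypothetical qR-style wrapper: submit an R script to the serial queue
  # Usage: qR MYSCRIPT   (expects MYSCRIPT.R in the current directory)
  SCRIPT=$1
  qsub <<EOF
  #PBS -q serial
  #PBS -N ${SCRIPT}
  cd \$PBS_O_WORKDIR
  R CMD BATCH --no-save ${SCRIPT}.R ${SCRIPT}.Rout
  EOF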

Page 11: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

Template used on cluster

• Modify the template to create your own PBS script for running programs:

  #!/bin/bash
  #PBS -q QUEUENAME        ## serial, sasq, workq
  #PBS -N MYSCRIPT
  #PBS -V
  #
  echo "******STARTING****************************"
  #
  # cd to the work directory; otherwise the job executes in my home directory.
  #
  WORKDIR=~/YOURWORKDIR
  cd $WORKDIR
  #
  echo "PBS batch job id is $PBS_JOBID"
  echo "Working directory of this job is: $WORKDIR"
  #
  echo "Beginning to run job"
  # Command line you need to execute the job, e.g. /home/huan/bin/calculate -PARAMETERS

• Save it in a file named MYSCRIPT and submit it (a filled-in example follows):
  $qsub MYSCRIPT
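
A filled-in example of the template above, assuming a hypothetical PLINK association run in ~/projects/gwas; the queue, directory, and file names are placeholders, not values from the slides.

  #!/bin/bash
  #PBS -q mxq
  #PBS -N gwas01
  #PBS -V
  #
  echo "******STARTING****************************"
  WORKDIR=~/projects/gwas
  cd $WORKDIR
  echo "PBS batch job id is $PBS_JOBID"
  echo "Working directory of this job is: $WORKDIR"
  #
  echo "Beginning to run job"
  # PLINK association test on hypothetical binary files mydata.bed/.bim/.fam
  plink --bfile mydata --assoc --out gwas01

Saved as gwas01 (well under the 10-character script-name limit noted later), it would be submitted with $qsub gwas01 and monitored with $qstat -n.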

Page 12: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

Commands used on cluster

• Submitting an interactive job
  Use this when there is no submission command/script for the new application (see the session sketch after this list).
  $qsub -I     ## to get onto a node, e.g. NODE7
  $plink --script PLKSCRIPT

• Checking job status
  "R" Running; "E" Exiting; "H" Holding; "Q" Queued
  $qstat
  $qstat -n    ## shows which node your job is on
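
A hedged sketch of such an interactive session; the queue name, node, and PLINK script file are hypothetical.

  # On the master (light): request an interactive session on a compute node
  $qsub -I -q mxq
  # ...PBS allocates a node, e.g. node7, and opens a shell there...

  # Run the application by hand, then exit to release the node
  $plink --script PLKSCRIPT
  $exit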

Page 13: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

Use cluster wisely
• Quit or cancel a job submission:
  $qstat            ## to get the job ID
  $qdel YOURJOBID

• To kill all of your jobs if you have too many (see the note after this list):
  $qstat -u YOURNAME | tail --lines=+6 | awk '{print "qdel ", $1}' | /bin/sh

• Limitations on the name of the SCRIPT: no more than 10 characters, no spaces, no special characters. Use a temporary name if necessary and change it back when the job is done.

• Maximum jobs for each user: 30
• No more than 50 jobs per submission
• No ssh connections directly to the nodes
• Send a request to the admin if you need to run large jobs
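
A cautious way to use the kill-all pipeline above: preview the generated qdel commands before piping them to /bin/sh. This dry-run variant is a suggestion, not a command from the slides.

  # Dry run: print the qdel commands without executing them
  $qstat -u YOURNAME | tail --lines=+6 | awk '{print "qdel ", $1}'
  # When the list looks right, append | /bin/sh to actually delete the jobs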

Page 14: VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013.

New policies

• User quotas will be enabled on the cluster; each user will have 1 TB. A special request is needed for more space.

• Six months after you leave VIPBG, your account will be deactivated.

• Always check ~/tmp and remove the temp files your programs generated (see the sketch below).
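
A short housekeeping sketch for staying under quota; the 30-day threshold is an arbitrary example, and the quota command may or may not be enabled on this cluster.

  # See how much space your home directory and ~/tmp are using
  $du -sh ~ ~/tmp

  # If quotas are enabled, report your usage and limits
  $quota -s

  # Remove temp files older than 30 days (preview with -print before deleting)
  $find ~/tmp -type f -mtime +30 -print
  $find ~/tmp -type f -mtime +30 -delete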

