Installations
Introduction to NPACI Rocks
• NPACI Rocks Cluster Distribution
• A project by:
  – National Partnership for Advanced Computational Infrastructure (NPACI)
  – San Diego Supercomputer Center (SDSC)
• http://rocks.npaci.edu/ or http://www.rocksclusters.org/
• Latest release version is 4.0.0
NPACI Rocks Cluster Distribution
• Goal: make clusters easy to build and manage.
• Uses an SQL database to store the cluster's global configuration.
• Supported hardware (processors):
  – x86 (IA-32: Intel, AMD, etc.)
  – IA-64 (Itanium, McKinley, etc.)
  – x86_64 (AMD Opteron)
• Rocks is based on CentOS 4.0 (which tracks RHEL 4.0).
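The global configuration mentioned above lives in a MySQL database on the frontend. As a hedged sketch of how to inspect it (the `cluster` database name and the `nodes` table with `Name`, `Rack`, and `Rank` columns are assumptions based on the stock Rocks 4.x schema; verify against your own frontend):

```
# mysql cluster -e "SELECT Name, Rack, Rank FROM nodes;"
```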
Physical Assembly
• Every machine in the cluster is called a “node”.
• Frontend
  – Nodes of this type are exposed to the outside world.
  – Many services (NFS, NIS, DHCP, NTP, MySQL, HTTP, ...) run on these nodes.
• Compute
  – These are the workhorse nodes.
• Ethernet Network
  – All compute nodes are connected with Ethernet on the private network.
The Rocks cluster architecture
The Rocks & Rolls
• The Rocks & Rolls contain:
  – Boot Roll: 1 disc (required)
  – OS Roll: 4 discs (discs 1 and 2 required)
• Roll sets by configuration:
  – jumbo: area51, base, ganglia, grid, hpc, java, kernel, myrinet, os, sge, viz
  – viz: area51, base, ganglia, hpc, java, kernel, viz
  – grid: area51, base, ganglia, grid, hpc, java, kernel, sge
  – compute [recommended]: area51, base, ganglia, hpc, java, kernel, sge
  – bare bones: base, hpc, kernel
The Rocks & Rolls (cont’d)
• Rocks Base
  – The Linux system and standard cluster tools.
• HPC Roll (High-Performance Computing)
  – Contains libraries for high-performance computing, such as MPI (Message Passing Interface) libraries for parallel programming.
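As a sketch of what the HPC Roll provides: an MPI program is typically built with a wrapper compiler and launched across several processes. The mpicc/mpirun names and the hello.c source here are assumptions for illustration; check which MPI implementation your roll actually installed:

```
# mpicc -o hello hello.c
# mpirun -np 4 ./hello
```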
The Rocks & Rolls (cont’d)
• SGE Roll (Sun Grid Engine)
  – SGE is distributed resource management (batch scheduling) software.
• Grid Roll
  – Uses the NSF Middleware Initiative (NMI) Release 3.1 to provide Globus connectivity.
The Rocks & Rolls (cont’d)
• Intel Roll
  – Installs and configures the Intel C compiler (version 8.0) and the Intel Fortran compiler (version 8.0) for x86 or IA-64 machines.
• PBS Roll
  – Installs and configures the Open Portable Batch System scheduler.
Minimum Hardware Requirements
• Frontend Node
  – Disk capacity: 16 GB
  – Memory capacity: 512 MB
  – Ethernet: 2 physical ports (e.g., "eth0" and "eth1")
• Compute Node
  – Disk capacity: 16 GB
  – Memory capacity: 512 MB
  – Ethernet: 1 physical port (e.g., "eth0")
Installation Preparation
• In this workshop, we will install a Rocks set consisting of:
  – Boot Roll (compute roll)
  – OS Roll Disc 1
  – OS Roll Disc 2
  – Grid Roll
• Check your cluster hardware yourself before starting.
Rocks Installation
• Pick up the discs: Rocks Base disc 1 and the HPC Roll.
• Insert the Rocks Base CD into your frontend machine and reset the frontend machine.
• After the frontend boots off the CD, you will see the boot screen:
Rocks installation 1
When you see the boot screen, type: frontend
You'll then see a screen that looks like this:
Rocks installation 2
After the CD/DVD drive ejects, insert the OS Roll Disc 1 CD and select 'Ok'.
Rocks installation 3
If you have no other rolls to add, choose “No”.
Rocks installation 4
Fill in your cluster's information.
Rocks installation 5
Select the partitioning method; “Automatic” is the default.
Automatic partitioning
Partition Name | Size
/ | 6 GB
swap | 1 GB
/export (symbolically linked to /state/partition1) | remainder of root disk
Rocks installation 6: eth0 for the private network
It is recommended that you accept the defaults.
Rocks installation 7: eth1 for the public network
Set up the networking parameters for the connection to the outside network.
Rocks installation 8
Configure the Gateway and DNS.
Rocks installation 9
Configure the time settings.
Rocks installation 10
Enter the root password.
Rocks installation 11
As in the example screen above, insert the roll CD into the drive and select 'Ok'.
Rocks installation 12
The packages will then be installed.
Rocks installation 13
• Then the installer will ask for each of the roll CDs you added at the beginning of the frontend installation.
• Put the appropriate roll CD in the drive when prompted and hit 'Ok'.
• After the last roll CD is installed, the machine will reboot.
Rocks installation 14
• The first time you log in to the frontend, the system will ask you for an SSH passphrase.
• Important: just press Enter to leave the passphrase empty. Do not type anything!
Install compute node
• Compute node installation:
  1. Log in to the frontend node as root.
  2. Run a program which captures compute-node DHCP requests and puts their information into the Rocks MySQL database:
     # insert-ethers
• You will then see the next screen:
Install compute node (cont’d)
• Take the Rocks base disc 1 and put it in your first compute node
• If you don't have a CD drive in your compute nodes, you can use PXE (Network Boot).
Install compute node (cont’d)
• When the frontend machine receives the DHCP request from the compute node, a message acknowledging the newly discovered node will be displayed for a few seconds, and then you'll see the following:
Install compute node (cont’d)
• Press F1 to exit insert-ethers.
• The default name of a compute node is compute-X-X.
• The first digit is called the cabinet number; it is the same as the cluster group number.
• To install into another cabinet, restart insert-ethers with the --cabinet option:
  – # insert-ethers --cabinet=1
• With the command above, new compute nodes will be named compute-1-X.
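The compute-<cabinet>-<rank> naming scheme can be illustrated with a small local shell loop (the names below are generated purely for demonstration, not queried from a real cluster):

```shell
# Generate the names insert-ethers would assign in cabinet 1
# (compute-<cabinet>-<rank>, with ranks assigned in discovery order).
cabinet=1
for rank in 0 1 2; do
  echo "compute-${cabinet}-${rank}"
done
```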
Remove a compute node from the cluster
• Command:
  – # insert-ethers --remove="[your compute node name]"
• Example: to remove compute-0-1:
  – # insert-ethers --remove="compute-0-1"
The cluster-fork Command
• cluster-fork is a command that distributes another command to all nodes in the cluster.
• Example:
  – # cluster-fork poweroff
• Here the poweroff command is distributed to every compute node in the cluster; after a few minutes, all compute nodes will be shut down.
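cluster-fork works with any command, not just poweroff. As a sketch of two less destructive uses (run as root on the frontend; these assume a working Rocks installation, so they are illustrative rather than something to run outside one):

```
# cluster-fork uptime
# cluster-fork 'df -h /'
```

The first line reports the load on every compute node; the second checks root-disk usage cluster-wide.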
The End