Computer Science, University of Warwick
Condor daemons
condor_master
Responsible for keeping the rest of the Condor daemons running in a pool
The master spawns the other daemons
Runs on every machine in your Condor pool
condor_startd
Any machine that wants to execute jobs needs to have this daemon running
It advertises a machine ClassAd
Responsible for enforcing the policy under which jobs are started, suspended, resumed, vacated, or killed
condor_starter
Spawned by condor_startd
Sets up the execution environment, creates a process to run the user job, and monitors the running job
Upon job completion, sends status information back to the submitting machine and exits
condor_schedd
Any machine that allows users to submit jobs needs to have this daemon running
Users submit jobs to the condor_schedd, where they are stored in the job queue.
condor_submit, condor_q, and condor_rm connect to the condor_schedd to view and manipulate the job queue
condor_shadow
Created by the condor_schedd
Runs on the machine where the job was submitted
Any system call performed by the job on the remote execute machine is sent over the network to this daemon; the shadow performs the system call (such as file I/O) on the submit machine and sends the result back over the network to the remote job
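This remote-I/O mechanism can be pictured as a small request/response loop. The sketch below is purely illustrative (the classes and the in-process "network" are hypothetical, not Condor's actual shadow protocol):

```python
# Illustrative sketch of shadow-style remote I/O. A dict stands in for the
# submit machine's filesystem, and a direct method call stands in for the
# network link. Names here are hypothetical, not Condor APIs.

class Shadow:
    """Runs on the submit machine; performs I/O on behalf of the job."""
    def __init__(self, files):
        self.files = files  # stand-in for the submit machine's filesystem

    def handle(self, request):
        op, *args = request
        if op == "read":
            path, = args
            return self.files[path]       # I/O happens on the submit side
        if op == "write":
            path, data = args
            self.files[path] = data
            return len(data)
        raise ValueError(f"unsupported op: {op}")

class RemoteJob:
    """Runs on the execute machine; ships each 'system call' to the shadow."""
    def __init__(self, shadow):
        self.shadow = shadow  # stands in for the network connection

    def read(self, path):
        return self.shadow.handle(("read", path))

    def write(self, path, data):
        return self.shadow.handle(("write", path, data))

shadow = Shadow({"input.txt": "hello"})
job = RemoteJob(shadow)
data = job.read("input.txt")          # resolved on the submit machine
job.write("output.txt", data.upper()) # result lands on the submit machine
```

The point of the design: the job's code never touches the execute machine's filesystem, so it runs unmodified even where its input files do not exist.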
condor_collector
Collects all the information about the status of a Condor pool
All other daemons periodically send updates to the collector
These updates describe the state of the daemons, the resources they represent, and the resource requirements of submitted jobs
The condor_status command connects to this daemon for information
condor_negotiator
Responsible for all the matchmaking within the Condor system
Responsible for enforcing user priorities in the system
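The matchmaking idea can be sketched in a few lines. This is a deliberate simplification: plain dicts stand in for ClassAds (real ClassAds are a full expression language), and the priority rule here is only loosely modelled on Condor's:

```python
# Toy negotiator sketch: machine ads and job ads are plain dicts,
# and "user priority" is a single number (higher = served first).
machines = [
    {"Name": "node1", "Memory": 4096, "State": "Unclaimed"},
    {"Name": "node2", "Memory": 1024, "State": "Unclaimed"},
]

jobs = [
    {"Owner": "alice", "Prio": 10, "RequestMemory": 2048},
    {"Owner": "bob",   "Prio": 5,  "RequestMemory": 512},
]

def negotiate(jobs, machines):
    """Match each job to the first acceptable unclaimed machine,
    serving higher-priority users first."""
    matches = []
    for job in sorted(jobs, key=lambda j: -j["Prio"]):
        for m in machines:
            if m["State"] == "Unclaimed" and m["Memory"] >= job["RequestMemory"]:
                m["State"] = "Claimed"          # resource is now taken
                matches.append((job["Owner"], m["Name"]))
                break
    return matches

pairings = negotiate(jobs, machines)
```

Here alice's higher priority means she is matched first and gets the larger machine, leaving node2 for bob.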
Interactions among Condor daemons
Condor daemons in operation
Resource Management System: Condor
Most jobs are not independent: dependencies exist between jobs.
The second stage cannot start until the first stage has completed.
Condor uses DAGMan (Directed Acyclic Graph Manager).
DAGMan allows you to specify dependencies between your Condor jobs; it then runs the jobs automatically in a sequence that satisfies those dependencies.
DAGs are the data structures used by DAGMan to represent these dependencies.
Each job is a “node” in the DAG.
Each node can have any number of “parent” or “children” nodes – as long as there are no loops.
(example from Condor tutorial).
Resource Management System: Condor
A DAG is defined by a text file, separate from the Condor job description files, listing each of the nodes and their dependencies:
# diamond.dag
Job A a.sub
Job B b.sub
Job C c.sub
Job D d.sub
Parent A Child B C
Parent B C Child D
DAGMan will examine the DAG file, locate the submit file for each job, and run the jobs in the right sequence.
The diamond DAG:

    A
   / \
  B   C
   \ /
    D
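The ordering DAGMan enforces for diamond.dag amounts to a topological sort. The sketch below uses Kahn's algorithm on the diamond dependencies; it is an illustration of the ordering, not DAGMan's actual implementation:

```python
from collections import deque

# Diamond DAG from diamond.dag: Parent A Child B C; Parent B C Child D
children = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def run_order(children):
    """Return a job ordering that satisfies all parent/child dependencies
    (Kahn's algorithm)."""
    indegree = {n: 0 for n in children}
    for kids in children.values():
        for k in kids:
            indegree[k] += 1
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)        # here DAGMan would condor_submit the node
        for k in children[node]:
            indegree[k] -= 1
            if indegree[k] == 0:  # all parents done: node becomes runnable
                ready.append(k)
    return order

order = run_order(children)  # A runs first, D last; B and C in between
```

Note that B and C have no ordering between them, so DAGMan is free to run them concurrently; only A-before-{B,C} and {B,C}-before-D are enforced.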
Example Clusters
BlueGene/L
Source: IBM
No. 1 in Top500 list from 2005-2007
BlueGene/L – networking
The BlueGene system employs various network types.
Central is the torus interconnection network:
3D torus with wrap-around.
Each node connects to six neighbours (bidirectional links).
Routing is achieved in hardware.
Each link runs at 1.4 Gbit/s.
Aggregate bandwidth per node: 1.4 × 6 × 2 = 16.8 Gbit/s
BlueGene/L
The other three networks:
Binary combining tree
• Used for collective/global operations: reductions, sums, products, barriers, etc.
• Low latency (2 μs)
Gigabit Ethernet I/O network
• Supports file I/O
• An I/O node is responsible for performing I/O operations for 128 processors
Diagnostic & control network
• Booting nodes, monitoring processors
Each chip has all four network interfaces (torus, tree, I/O, diagnostics)
Note that specialised networks are used for different purposes; this is quite different from many other HPC cluster architectures.
BlueGene/L
Message Passing:
The BlueGene team devoted considerable effort to developing an efficient MPI implementation that reduces latency in the software stack.
Using the MPICH code base as a starting point:
• The MPI library was tuned to the machine architecture.
• For example, the combining tree is used for reductions and broadcasts.
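The advantage of a combining-tree reduction is that N values combine in ceil(log2 N) rounds rather than N - 1 sequential steps. A minimal sketch of the idea in plain Python (not the BlueGene MPI code; the pairing scheme here is just one way to form the tree):

```python
# Combining-tree reduction sketch: in each round, every node at an odd
# multiple of `stride` sends its partial result to its partner, so the
# number of active nodes halves each round.

def tree_reduce(values, op=lambda a, b: a + b):
    vals = list(values)
    rounds = 0
    stride = 1
    while stride < len(vals):
        for i in range(0, len(vals), 2 * stride):
            if i + stride < len(vals):
                vals[i] = op(vals[i], vals[i + stride])  # partner -> i
        stride *= 2
        rounds += 1
    return vals[0], rounds

total, rounds = tree_reduce(range(8))  # 0+1+...+7 = 28, in 3 rounds
```

With 8 contributors the reduction completes in 3 combining rounds; a linear chain would need 7.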
Reading paper:
“Filtering Failure Logs for a BlueGene/L Prototype”
ASCI Q
The Q supercomputing system at Los Alamos National Laboratory (LANL)
Product of Advanced Simulation and Computing (ASCI) program
Used for simulation and computational modelling
No. 2 in 2002 in Top500 supercomputer list
ASCI Q
“Classical” cluster architecture.
1024 SMPs (AlphaServer ES45s from HP) are put in one segment
• Each with four EV-68 1.25 GHz CPUs with 16 MB cache
The whole system has 3 segments
• The three segments can operate independently or as a single system
• Aggregate 60 TeraFLOPS capability.
• 33 Terabytes of memory
664 TB of global storage
Interconnection:
• Quadrics switch interconnect (QsNet)
• High bandwidth (250 MB/s) and low latency (5 μs)
Top500 list: http://www.top500.org/system/6071
Earth Simulator
Built by NEC, located in the Earth Simulator Centre in Japan
Used for running global climate models to evaluate the effects of global warming
No.1 from 2002-04
Earth Simulator
640 nodes, each with 8 vector processors and 16 GB of memory
Two nodes are installed in one cabinet
In total: 5120 processors (NEC SX-5)
10 TeraByte of memory
700 TeraByte of disk storage and 1.6 PetaByte of tape storage
Computing capacity: 36 TFlop/s
Networking: crossbar interconnection (very expensive)
Bandwidth: 16 GB/s between any two nodes
Latency: 5 μs
Dual-level parallelism: OpenMP within a node, MPI between nodes
Physical installation: the machine resides on the 3rd floor; cables on the 2nd; power generation & cooling on the 1st and ground floors.
UK systems – Cambridge PowerEdge
576 Dell PowerEdge 1950 compute servers
Computing capability: 28TFlop/s
Each server has two dual-core Intel Xeon 5160 processors at 3 GHz, and 8 GB of memory
InfiniBand network
Bandwidth: 10 Gbit/s
Latency: 7 μs
60 TeraByte of disk storage
Cluster Networks
Introduction
Communication has a significant impact on application performance.
Interconnection networks therefore play a vital role in cluster systems.
As usual, the driver is performance: an increase in compute power typically demands a proportional improvement in communication, i.e. lower latency and higher bandwidth.
Cluster Networks
Issues with cluster interconnections are similar to those with normal networks:
Latency & Bandwidth
• Latency = sender overhead + switching overhead + (message size / bandwidth) + receiver overhead
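The latency model above translates directly into code. The figures used below are illustrative only, not measurements of any particular network:

```python
# Simple end-to-end latency model:
#   total = sender overhead + switching overhead
#         + message size / bandwidth + receiver overhead

def message_latency(size_bytes, bandwidth_bps, sender_ovh_s,
                    switch_ovh_s, recv_ovh_s):
    """Seconds to deliver one message under the additive latency model."""
    transmission = size_bytes * 8 / bandwidth_bps   # bits over the link
    return sender_ovh_s + switch_ovh_s + transmission + recv_ovh_s

# Example (illustrative numbers): a 1 MB message over a 10 Gbit/s link
# with 5 us of combined fixed overheads.
t = message_latency(1_000_000, 10e9, 2e-6, 1e-6, 2e-6)
# For large messages the size/bandwidth term dominates (0.8 ms here);
# for small messages the fixed overheads dominate.
```

This is why both bandwidth and the fixed per-message overheads matter: halving bandwidth barely affects tiny messages, while overheads barely affect huge ones.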
Topology type (bus, ring, torus, hypercube etc).
Routing, switching.
Direct connections (point-to-point) or indirect connections.
NIC (Network Interface Card) capabilities.
Physical media (wiring density, reliability)
Balance performance and cost
Interconnection Topologies
In standard LANs we have two general structures:
Shared network (bus)
• As used by “classic” Ethernet networks.
• All messages are broadcast… each processor listens to every message.
• Requires complex access control (e.g. CSMA/CD).
• Collisions can occur: requires back-off policies and retransmissions.
• Suitable when the offered load is low - inappropriate for high performance applications.
• Very little reason to use this form of network today.
Switched network
• Permits point-to-point communications between sender & receiver.
• Fast internal transport provides high aggregate bandwidth.
• Multiple messages are sent simultaneously.
Metrics to evaluate network topology
Useful metrics for switched network topologies:
Scalability: how the number of switches grows with the number of nodes.
Degree: number of links to / from a node.
Diameter: the shortest-path distance between the two most distant nodes.
Bisection width: the minimum number of links that must be cut in order to divide the topology into two independent networks of the same size (+/- one node). Essentially a measure of bottleneck bandwidth: the higher it is, the better the network performs under load.
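Degree and diameter are easy to measure on a concrete topology. A small sketch for an 8-node ring, with diameter found by breadth-first search (bisection width is hard to compute in general; for a ring it is simply 2, since cutting any two links splits it):

```python
from collections import deque

def ring(n):
    """Adjacency list for an n-node ring (1D torus)."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def diameter(adj):
    """Longest shortest path, via BFS from every node."""
    def eccentricity(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(eccentricity(s) for s in adj)

g = ring(8)
deg = len(g[0])     # every ring node has degree 2
diam = diameter(g)  # N/2 = 4 hops for an 8-node ring
```

The same BFS-based diameter function works unchanged for any of the topologies on the following slides, given an adjacency list.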
Interconnection Topologies
Crossbar switch:
Low latency and high throughput.
Switch scalability is poor: O(N^2)
Lots of wiring…
Interconnection Topologies
Linear Arrays and Rings
Consider networks with switch scaling costs better than O(N^2).
In one dimension, we have simple linear arrays.
O(N) switches.
These can wrap around to make a ring (a 1D torus).
Good overall bandwidth, but latency is high.
So 2D/3D Cartesian applications will perform poorly on this network.
Interconnection Topologies
2D Meshes
Can wrap around to form a 2D torus.
Switch scaling: O(N)
Average degree: 4
Diameter: O(2√N)
Bisection width: O(√N)
Interconnection Topologies
Hypercubes:
K dimensions; N = 2^K switches.
Diameter: O(K).
Good bisection width: O(2^(K-1)) = N/2.
Interconnection Topologies
Binary Tree:
Scaling:
• n = 2^d processor nodes (where d = depth)
• 2^(d+1) - 1 switches
Degree: 3
Diameter: O(2d)
Bisection width: O(1)
Interconnection Topologies
Fat trees:
Similar in diameter to a binary tree.
Bisection width (which equates to the bottleneck) is greatly improved: links nearer the root are "fatter" (replicated or higher capacity), so the root is no longer a single bottleneck link.
Interconnection Topologies
Summary of topologies:
Topology    Degree       Diameter   Bisection width
1D Array    2            N - 1      1
1D Ring     2            N/2        2
2D Mesh     4            2√N        √N
2D Torus    4            √N         2√N
Hypercube   n = log(N)   n          N/2
There are others - we saw a 3D torus in the BlueGene/L section for instance.
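The table entries can be spot-checked by building the topologies as adjacency lists and measuring with breadth-first search. A sketch for a 4-dimensional hypercube (N = 16) and a 4×4 torus (also N = 16):

```python
from collections import deque

def bfs_diameter(adj):
    """Longest shortest path over all node pairs, via repeated BFS."""
    def eccentricity(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(eccentricity(s) for s in adj)

def hypercube(k):
    """k-dimensional hypercube: neighbours differ in exactly one bit."""
    return {u: [u ^ (1 << b) for b in range(k)] for u in range(2 ** k)}

def torus2d(n):
    """n x n 2D torus with wrap-around in both dimensions."""
    return {(x, y): [((x + 1) % n, y), ((x - 1) % n, y),
                     (x, (y + 1) % n), (x, (y - 1) % n)]
            for x in range(n) for y in range(n)}

hc, t = hypercube(4), torus2d(4)
hc_degree = len(hc[0])          # = k = log2(N) = 4, as in the table
hc_diam = bfs_diameter(hc)      # = k = 4
torus_diam = bfs_diameter(t)    # = 4 for a 4x4 torus, matching ~sqrt(N)
```

The measured values agree with the table: the hypercube's degree and diameter both equal log2(N), and the torus diameter matches the √N-scale entry.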