Our Objectives
• Explore the applicability of Microsoft technologies to real-world scientific domains, with a focus on data-intensive applications
  o We expect the data deluge will demand multicore-enabled data analysis/mining
  o Detailed objectives were modified based on input from Microsoft, such as interest in CCR, Dryad, and TPL
• Evaluate and apply these technologies in demonstration systems
  o Threading: CCR, TPL
  o Service model and workflow: DSS and the Robotics toolkit
  o MapReduce: Dryad/DryadLINQ compared to Hadoop and Azure
  o Classical parallelism: Windows HPCS and MPI.NET
  o XNA graphics-based visualization
• Work performed using C#
• Provide feedback to Microsoft
• Broader impact
  o Papers, presentations, tutorials, classes, workshops, and conferences
  o Provide our research work as services to collaborators and the general science community
Approach
• Use interesting applications (working with domain experts) as benchmarks, including emerging areas like the life sciences and classical applications such as particle physics
  o Bioinformatics - CAP3, Alu, Metagenomics, PhyloD
  o Cheminformatics - PubChem
  o Particle Physics - LHC Monte Carlo
  o Data Mining kernels - K-means, Deterministic Annealing Clustering, MDS, GTM, Smith-Waterman Gotoh
• Evaluation criteria for usability and developer productivity
  o Initial learning curve
  o Effectiveness of continuing development
  o Comparison with other technologies
• Performance on both single systems and clusters
Major Achievements
• Analysis of CCR and DSS within the SALSA paradigm, with very detailed performance work on CCR
• Detailed analysis of Dryad and comparison with Hadoop and MPI; initial comparison with Azure
• Comparison of TPL and CCR approaches to parallel threading
• Applications to several areas, including particle physics and especially the life sciences
• Demonstration that Windows HPC clusters can efficiently run large-scale data-intensive applications
• Development of high-performance Windows 3D visualization of points produced by dimension reduction of high-dimensional datasets to 3D; these visualizations are used as Cheminformatics and Bioinformatics dataset browsers
• Proposed extensions of MapReduce to perform data mining efficiently
• Identification of data mining as an important application area, with new parallel algorithms for Multi-Dimensional Scaling (MDS), Generative Topographic Mapping (GTM), and clustering, covering cases where vectors are defined as well as cases where only pairwise dissimilarities between dataset points are known
• Extension of robust, fast deterministic annealing to clustering (vector and pairwise), MDS, and GTM
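For reference, deterministic annealing clustering is conventionally formulated (following Rose's approach, which we assume these extensions build on) as minimizing a temperature-dependent free energy rather than the raw distortion:

F = -T \sum_{x} \log \sum_{k=1}^{K} \exp\!\left( - d(x, c_k) / T \right)

where d(x, c_k) is the distance from point x to cluster center c_k and the temperature T is lowered gradually; as T approaches 0 the soft assignments harden and the method approaches conventional (K-means-like) clustering.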
[Figure: Typical CCR Comparison with TPL. Concurrent threading on the CCR or TPL runtime (clustering by deterministic annealing for the ALU 35339-point dataset). X-axis: parallel patterns (threads x processes x nodes), on up to 32 nodes; Y-axis: parallel overhead (0 to 1); series: CCR and TPL.]
• A hybrid model, with internal threading within a node and MPI between nodes, works well on a Windows HPC cluster
• Within a single node, TPL or CCR outperforms MPI for computation-intensive applications such as clustering of Alu sequences (the "all pairs" problem)
• TPL outperforms CCR in major applications
Efficiency = 1 / (1 + Overhead)
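As a concrete reading of the two formulas used in these charts (Parallel Overhead = [P * T(P) - T(1)] / T(1), and Efficiency = 1 / (1 + Overhead)), a minimal C# helper is sketched below; the timing values are placeholders, not measured numbers.

using System;

class ParallelMetrics
{
    // Parallel Overhead f = [P * T(P) - T(1)] / T(1), where T(1) is the sequential
    // run time and T(P) the run time on P parallel units (threads x processes x nodes).
    static double Overhead(double t1, double tP, int p) => (p * tP - t1) / t1;

    // Efficiency = 1 / (1 + Overhead), equivalently T(1) / (P * T(P)).
    static double Efficiency(double overhead) => 1.0 / (1.0 + overhead);

    static void Main()
    {
        double t1 = 1000.0;  // sequential run time in seconds (placeholder)
        double tP = 50.0;    // parallel run time on P units (placeholder)
        int p = 24;          // number of parallel units (placeholder)

        double f = Overhead(t1, tP, p);
        Console.WriteLine($"Overhead = {f:F3}, Efficiency = {Efficiency(f):F3}");
    }
}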
[Figure: Clustering by Deterministic Annealing. Parallel overhead, defined as [P * T(P) - T(1)] / T(1) where T is run time and P the number of parallel units, plotted against parallel patterns (threads x processes x nodes) from 1x1x1 upward. Patterns are annotated as thread-based or MPI-based: threading versus MPI is varied within a node, while MPI is always used between nodes. Y-axis: parallel overhead (0 to 5).]
• Note that MPI is best at low levels of parallelism
• Threading is best at the highest levels of parallelism (break-even at 64-way parallelism)
• Uses MPI.NET as a wrapper over MS-MPI (see the hybrid sketch below)
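A minimal sketch of the hybrid pattern described above, with TPL threading inside a node and MPI.NET (wrapping MS-MPI) between nodes. It assumes MPI.NET's Communicator.world / Allreduce / Operation<T> surface; the data loading and per-point work are placeholders, not our actual clustering kernel.

using System;
using System.Threading.Tasks;
using MPI;

class HybridSketch
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;

            // Each MPI process owns a slice of the data points (placeholder data).
            double[] localPoints = LoadLocalSlice(comm.Rank, comm.Size);

            // Threading (TPL) inside the node: accumulate a partial sum in parallel.
            double localSum = 0;
            object gate = new object();
            Parallel.For(0, localPoints.Length,
                () => 0.0,
                (i, _, partial) => partial + Cost(localPoints[i]),   // placeholder per-point work
                partial => { lock (gate) localSum += partial; });

            // MPI between nodes: combine the per-process results.
            double globalSum = comm.Allreduce(localSum, Operation<double>.Add);

            if (comm.Rank == 0)
                Console.WriteLine($"Global cost = {globalSum}");
        }
    }

    static double[] LoadLocalSlice(int rank, int size) => new double[1000]; // placeholder
    static double Cost(double x) => x * x;                                  // placeholder
}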
Biology MDS and Clustering Results
Alu Families
This visualizes Alu repeats from the chimpanzee and human genomes. Young families (green, yellow) appear as tight clusters. The plot is an MDS dimension reduction to 3D of 35339 repeats, each with about 400 base pairs.
Metagenomics
This visualizes a dimension reduction to 3D of 30,000 gene sequences from an environmental sample. The many different genes are classified by a clustering algorithm and visualized by MDS dimension reduction.
High Performance Data Visualization
• Developed parallel MDS and GTM algorithms to visualize large, high-dimensional data (the MDS objective is sketched below)
• Processed 0.1 million PubChem data points, each with 166 dimensions
• Parallel interpolation can process up to 2M PubChem points
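For context, MDS of this kind typically minimizes a STRESS objective over the 3D embedding; a sketch of the standard weighted form is below (the exact variant used in our parallel implementation may differ):

\sigma(X) = \sum_{i<j} w_{ij}\,\bigl(d_{ij}(X) - \delta_{ij}\bigr)^{2}

where \delta_{ij} is the observed dissimilarity between points i and j, d_{ij}(X) is their Euclidean distance in the 3D embedding X, and w_{ij} is an optional weight; the pairwise terms are what the parallel decomposition distributes.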
MDS for 100k PubChem data: 100k PubChem data points with 166 dimensions are visualized in 3D space. Colors represent two clusters separated by their structural proximity.
GTM for 930k genes and diseases: genes (green) and diseases (other colors) are plotted in 3D space, with the aim of finding cause-and-effect relationships.
GTM with interpolation for 2M PubChem data: 2M PubChem data points are plotted in 3D with the GTM interpolation approach. Red points are the 100k sampled data and blue points are the 4M interpolated points.
[3] PubChem project, http://pubchem.ncbi.nlm.nih.gov/
Applications using Dryad & DryadLINQ
• Performed using DryadLINQ and Apache Hadoop implementations
• A single "Select" operation in DryadLINQ
• A "map only" operation in Hadoop
CAP3 [4] - Expressed Sequence Tag assembly to reconstruct full-length mRNA
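A hedged sketch of that single-Select pattern, assuming DryadLINQ's PartitionedTable/LineRecord surface and a hypothetical RunCap3 wrapper that shells out to the CAP3 executable; the namespace, URIs, and field names are illustrative and not verified against a specific DryadLINQ release.

using System.Linq;
using Microsoft.Research.DryadLinq;  // assumed namespace for the DryadLINQ release used

class Cap3Dryad
{
    public static void Run()
    {
        // Each input record is assumed to name one FASTA file in the distributed store.
        var inputFiles = PartitionedTable.Get<LineRecord>("file://cap3/fasta_file_list.pt");

        // The single "Select": every vertex runs CAP3 independently on its partition.
        var outputs = inputFiles.Select(record => RunCap3(record.Line));

        outputs.ToPartitionedTable("file://cap3/output_file_list.pt");  // assumed sink API
    }

    // Hypothetical wrapper around the CAP3 command-line assembler.
    static string RunCap3(string fastaPath)
    {
        var psi = new System.Diagnostics.ProcessStartInfo("cap3.exe", fastaPath)
        {
            UseShellExecute = false
        };
        using (var proc = System.Diagnostics.Process.Start(psi))
        {
            proc.WaitForExit();
        }
        return fastaPath + ".cap.contigs";  // CAP3 writes its contigs next to the input file
    }
}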
[Diagram: CAP3 pipeline. Input FASTA files are processed by independent CAP3 instances to produce the output files.]
[Figure: Time to process 1280 files, each with ~375 sequences. Y-axis: average time (seconds); series: Hadoop and DryadLINQ.]
[4] X. Huang, A. Madan, “CAP3: A DNA Sequence Assembly Program,” Genome Research, vol. 9, no. 9, pp. 868-877, 1999.
All-Pairs Using DryadLINQ
[Figure: Calculate Pairwise Distances (Smith-Waterman Gotoh). Run times for DryadLINQ and MPI at data sizes 35339 and 500000; annotation: 125 million distances computed in 4 hours and 46 minutes.]
• Calculate pairwise distances for a collection of genes (used for clustering and MDS); see the block-decomposition sketch below
• Fine-grained tasks in MPI
• Coarse-grained tasks in DryadLINQ
• Performed on 768 cores (Tempest cluster)
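A minimal sketch of the coarse-grained decomposition behind the all-pairs run: the N x N distance matrix is split into blocks and each block is computed as one independent task (here with TPL locally; in the actual runs the blocks are handed to DryadLINQ vertices or MPI ranks). SmithWatermanGotoh below is a placeholder for the real alignment-based distance, not our implementation.

using System;
using System.Threading.Tasks;

class AllPairsSketch
{
    // Compute a symmetric pairwise distance matrix in coarse-grained blocks.
    static double[,] PairwiseDistances(string[] seqs, int blockSize)
    {
        int n = seqs.Length;
        var dist = new double[n, n];
        int blocks = (n + blockSize - 1) / blockSize;

        // One task per (rowBlock, colBlock) pair in the upper triangle.
        Parallel.For(0, blocks * blocks, b =>
        {
            int rb = b / blocks, cb = b % blocks;
            if (cb < rb) return;  // matrix is symmetric; skip lower-triangle blocks

            int rowEnd = Math.Min((rb + 1) * blockSize, n);
            int colEnd = Math.Min((cb + 1) * blockSize, n);
            for (int i = rb * blockSize; i < rowEnd; i++)
                for (int j = Math.Max(cb * blockSize, i + 1); j < colEnd; j++)
                {
                    double d = SmithWatermanGotoh(seqs[i], seqs[j]);  // placeholder scorer
                    dist[i, j] = d;
                    dist[j, i] = d;  // fill the mirrored entry
                }
        });
        return dist;
    }

    // Placeholder: the real code computes a Smith-Waterman Gotoh alignment distance.
    static double SmithWatermanGotoh(string a, string b) => Math.Abs(a.Length - b.Length);
}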
[5] Moretti, C., Bui, H., Hollingsworth, K., Rich, B., Flynn, P., & Thain, D. (2009). All-Pairs: An Abstraction for Data Intensive Computing on Campus Grids. IEEE Transactions on Parallel and Distributed Systems, 21, 21-36.
Hadoop/Dryad Comparison: Inhomogeneous Data I
[Figure: Randomly distributed inhomogeneous data (mean sequence length 400, dataset size 10000). Time (s) versus standard deviation of sequence length for DryadLINQ SWG, Hadoop SWG, and Hadoop SWG on VM.]
Inhomogeneity of the data does not have a significant effect when the sequence lengths are randomly distributed. Dryad with Windows HPCS is compared to Hadoop with Linux RHEL on IDataPlex (32 nodes).
Hadoop/Dryad Comparison: Inhomogeneous Data II
[Figure: Skewed distributed inhomogeneous data (mean sequence length 400, dataset size 10000). Total time (s) versus standard deviation of sequence length for DryadLINQ SWG, Hadoop SWG, and Hadoop SWG on VM.]
This shows the natural load balancing of Hadoop MapReduce's dynamic task assignment using a global pipeline, in contrast to DryadLINQ's static assignment. Dryad with Windows HPCS is compared to Hadoop with Linux RHEL on IDataPlex (32 nodes).
CAP3 Efficiency
• Ease of use: Dryad and Hadoop are easier than EC2 and Azure because they are higher-level models
• Lines of code, including file copy:
  Azure: ~300   Hadoop: ~400   Dryad: ~450   EC2: ~700
Usability and Performance of Different Cloud Approaches
• Efficiency = absolute sequential run time / (number of cores x parallel run time)
• Hadoop, DryadLINQ: 32 nodes (256 cores, IDataPlex)
• EC2: 16 High CPU Extra Large instances (128 cores)
• Azure: 128 small instances (128 cores)
CAP3 Performance
Instance Type               | Memory  | EC2 compute units | Actual CPU cores | Cost per hour | Cost per core per hour
Large (L)                   | 7.5 GB  | 4                 | 2 x (~2 GHz)     | $0.34         | $0.17
Extra Large (XL)            | 15 GB   | 8                 | 4 x (~2 GHz)     | $0.68         | $0.17
High CPU Extra Large (HCXL) | 7 GB    | 20                | 8 x (~2.5 GHz)   | $0.68         | $0.09
High Memory 4XL (HM4XL)     | 68.4 GB | 26                | 8 x (~3.25 GHz)  | $2.40         | $0.30
Tempest@IU                  | 48 GB   | n/a               | 24               | $1.62         | $0.07

Table 1: Selected EC2 Instance Types
Twister (MapReduce++)
• Streaming-based communication
• Intermediate results are transferred directly from the map tasks to the reduce tasks, eliminating local files
• Cacheable map/reduce tasks
  o Static data remains in memory
• Combine phase to combine reductions
• The user program is the composer of MapReduce computations
• Extends the MapReduce model to iterative computations (a simple control-flow sketch follows the diagram below)
[Diagram: Twister architecture. The user program drives an MRDriver that coordinates map (M) and reduce (R) workers on the worker nodes through a pub/sub broker network; the data split and static data are read from the file system and cached by the MRDaemon on each node, while the variable (delta) data flows over the broker network. The programming interface exposes Configure(), Map(Key, Value), Reduce(Key, List<Value>), Combine(Key, List<Value>), an Iterate loop, and Close().]
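To make the iterate / map / reduce / combine loop above concrete, here is a small single-process C# sketch of the control flow using K-means as the example. It mimics the Twister programming model (configure once with cacheable static data, then map, reduce, and combine per iteration) but is not the Twister API itself, which is exposed by the Twister runtime.

using System;
using System.Collections.Generic;
using System.Linq;

class IterativeMapReduceSketch
{
    static void Main()
    {
        // "Configure()": the static data (points) is loaded once and stays in memory.
        double[][] points =
        {
            new[] { 1.0, 1.0 }, new[] { 1.2, 0.8 },
            new[] { 8.0, 9.0 }, new[] { 8.2, 9.1 }
        };
        double[][] centers = { points[0], points[2] };  // variable (delta) data, re-sent each iteration

        for (int iter = 0; iter < 10; iter++)  // the "Iterate" loop
        {
            // Map(Key, Value): assign each point to its nearest center.
            var mapped = points.Select(p => (Key: Nearest(p, centers), Value: p));

            // Reduce(Key, List<Value>): average the points assigned to each center.
            var reduced = mapped.GroupBy(kv => kv.Key)
                                .Select(g => (g.Key, Mean(g.Select(kv => kv.Value).ToList())));

            // Combine(Key, List<Value>): the user program assembles the new centers.
            foreach (var (key, mean) in reduced)
                centers[key] = mean;
        }

        // "Close()": report the final centers.
        Console.WriteLine(string.Join(" | ", centers.Select(c => $"({c[0]:F2}, {c[1]:F2})")));
    }

    static int Nearest(double[] p, double[][] centers) =>
        Enumerable.Range(0, centers.Length)
                  .OrderBy(k => centers[k].Zip(p, (a, b) => (a - b) * (a - b)).Sum())
                  .First();

    static double[] Mean(List<double[]> pts) =>
        Enumerable.Range(0, pts[0].Length)
                  .Select(d => pts.Average(p => p[d]))
                  .ToArray();
}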
Different synchronization and intercommunication mechanisms used by the parallel runtimes
Iterative Computations
[Figures: performance of K-means and parallel overhead of matrix multiplication.]