nci.org.au
@NCInews
InGeneoS: Intercontinental Genetic Sequencing over Trans-Pacific Networks and Supercomputers
• Jakub Chrzeszczyk, NCI Cloud Team
• Andrew Howard, NCI HPC Team
InGeneoS Components
• NCI and A*Star collaboration
• Goals:
  • Utilise trans-Pacific extended InfiniBand and shared supercomputer resources to accelerate DNA analysis
  • Transfer large (~300 GB) genetic sequence data sets generated in Canberra from NCI to A*Star Singapore for analysis on the A*Star Aurora large-memory system, with the results visualised in New Orleans (SC14)
  • Utilise NCI high-performance InfiniBand cloud HPC systems for visualisation of the genetic data results produced by Aurora
Genetic Sequence Workflow for SC14
[Diagram: genetic sequence (GATACGGAGTTTA………A) workflow — a 381 GB sequence data set moves from NCI (AU) to A*Star (SG), and 1,143 GB of results data returns.]
Network
• Singapore to Canberra: 10 Gb/s (~30,000 km)
• Canberra to Singapore: 1 Gb/s (~6,000 km)
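As a back-of-envelope sketch (derived arithmetic, not a figure from the slides; assumes GB = 10^9 bytes, full line rate, and no protocol or latency overhead), these link speeds set the following lower bounds on transfer time for a ~298 GB data set:

```python
# Back-of-envelope sketch (assumes GB = 1e9 bytes, full line rate,
# no protocol or latency overhead): ideal transfer time for a data set.
def min_transfer_time_s(size_gb: float, link_gbps: float) -> float:
    """Ideal seconds to move size_gb gigabytes over a link_gbps link."""
    return size_gb * 8 / link_gbps

print(round(min_transfer_time_s(298, 10) / 60, 1))   # minutes at 10 Gb/s -> 4.0
print(round(min_transfer_time_s(298, 1) / 3600, 2))  # hours at 1 Gb/s   -> 0.66
```

Real transfers over a ~30,000 km path will be slower: long round-trip times and protocol overhead dominate unless the transfer tool keeps the pipe full.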
Extended InfiniBand Fabric
[Diagram: extended InfiniBand fabric — Obsidian E100 InfiniBand range extenders at NCI (AU), TITech (JP), GaTech (US), SC14 (US) and A*Star (SG), interconnected through an Obsidian Crossbow InfiniBand switch.]
Experiment 1: AU-to-SG large data transfer, processing in SG on Aurora, results data returned to AU
[Diagram: a ~301 GB data set moves from the NCI Lustre file system in Canberra (AU) over AARNet, PNWGP (US) and SingAREN (SG), via Obsidian E100 InfiniBand range extenders and a Mellanox 6036 InfiniBand switch, to the A*Star Aurora system (CX250 nodes), which processes the data and returns the results to NCI.]
NCI InfiniCloud: HPC InfiniBand performance in a Cloud
[Diagram: NCI InfiniCloud — an OpenStack cloud on an InfiniBand fabric with Lustre file system access.]
NCI InfiniCloud: HPC InfiniBand performance in the Cloud
• OpenStack cloud supported by a 56 Gb/s InfiniBand fabric
• High-performance InfiniBand MPI
• High-performance Lustre file system access
• Built using Mellanox Neutron modules
• Flexible, top-performance computational resources ‘on demand’
• Flexible OS and application stack, supported by an I/O architecture which includes local solid-state disk and high-performance large-file storage using Lustre
Experiment 2: SG to AU data transfer, process on InfiniCloud, view results at SC14
[Diagram: data moves from A*Star (SG) over SingAREN, PNWGP (US), ESnet (US) and AARNet (AU), via Obsidian E100 InfiniBand range extenders and a Mellanox 6036 InfiniBand switch, to NCI InfiniCloud (CX250 nodes), which processes the data; results are displayed at SC14 (US).]
AU to SG data transfer speed (298 GB data set, average time)

Method                                   Average time
rsync, NCI to A*Star, R&E path           4h 7m
rsync, NCI to A*Star, 10 Gb/s IPoIB      3h
DSYNC, 1 Gb/s IB path                    1h 21m
DSYNC, 10 Gb/s IB path (~26,000 km)      7m
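The effective throughput implied by these measured times can be checked with a short sketch (derived arithmetic, assuming GB = 10^9 bytes; not additional measurements):

```python
# Sketch: effective throughput in MB/s implied by the measured times
# for the 298 GB AU-to-SG data set (assumes GB = 1e9 bytes).
def throughput_mb_s(size_gb: float, hours: int = 0, minutes: int = 0) -> float:
    """Average MB/s to move size_gb gigabytes in the given wall-clock time."""
    seconds = hours * 3600 + minutes * 60
    return size_gb * 1000 / seconds

print(round(throughput_mb_s(298, minutes=7)))           # DSYNC, 10 Gb/s IB path -> 710
print(round(throughput_mb_s(298, hours=4, minutes=7)))  # rsync, R&E path        -> 20
```

The ~35x gap reflects how badly per-file, latency-sensitive tools fare on a long trans-Pacific path compared with a transfer that keeps the link full.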
SG to AU data transfer speed (1,143 GB data set, average time)

Method                                   Average time
rsync, A*Star to NCI, R&E path           12h 33m
rsync, A*Star to NCI, 10 Gb/s IPoIB      9h
DSYNC, 1 Gb/s IB path                    4h 5m
DSYNC, 10 Gb/s IB path (~26,000 km)      24m
SG to AU observed data transfer rate at 10 Gb/s using Obsidian DSYNC

Interval      Data transferred
1 second      900 MB
1 minute      54 GB
1 hour        3.24 TB
1 day         77 TB
1 week        539 TB
1 month       2.3 PB
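The table scales directly from the observed ~900 MB/s sustained rate; a minimal sketch of that scaling (assumes MB = 10^6 bytes and a 30-day month):

```python
# Sketch: scale the observed ~900 MB/s DSYNC rate over longer intervals
# (assumes MB = 1e6 bytes and a 30-day month).
RATE_MB_S = 900

def volume(seconds: float, unit_bytes: float) -> float:
    """Data moved in `seconds` at RATE_MB_S, expressed in units of `unit_bytes`."""
    return RATE_MB_S * 1e6 * seconds / unit_bytes

print(round(volume(3600, 1e12), 2))        # TB per hour  -> 3.24
print(round(volume(30 * 86400, 1e15), 2))  # PB per month -> 2.33
```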
Thanks
This project made possible with the kind assistance of
• A*Star
• ANU John Curtin School of Medical Research
• Obsidian Strategics
• SingAREN
• AARNet
• Pacific Northwest GigaPOP
• ESnet
and anyone else we’ve forgotten to thank