Desktop Introduction

Transcript

Desktop Introduction

MASSIVE is …
- A national facility
- $8M of investment over 3 years
- Two high performance computing facilities, located at the Australian Synchrotron and Monash University, designed for data processing and visualisation
- Specialised imaging and visualisation software and databases
- Expertise in visualisation, image processing, image analysis, HPC and GPU computing
- An NCI Specialised Facility for Imaging and Visualisation

MASSIVE Team
- Dr Wojtek James Goscinski, Coordinator
- Dr Paul McIntosh, Senior HPC Consultant / Technical Project Leader
- Dr Chris Hines, Senior HPC Consultant
- Dr Kai Xi, HPC Consultant
- Damien Leong, Senior HPC Consultant
- Dr Wendy Mason, eResearch Engagement Specialist
- Jupiter Hu, Software Specialist, Characterisation Virtual Laboratory
- plus Monash eSolutions

Facilities
- MASSIVE1 (m1): real-time computed tomography at the Imaging Beamline at the Australian Synchrotron
- MASSIVE2 (m2): general facility for image processing, data processing, simulation and analysis, and GPU computing, with specialised fat nodes for visualisation

[Map slide showing the locations of M1, M2 and the CVL (NeCTAR) nodes. Source: Google Maps]

M1
- 42 nodes, each with 2 x 6-core X5650 CPUs, 48 GB of RAM and 2 x NVIDIA M2070 GPUs
- 58 TB GPFS file system capable of 2 GB/s+ sustained write
- 4X QDR Mellanox IS5200 InfiniBand switch (~32 Gb/s)
- Stage 2: +95 TB

M2
- 32 compute nodes, each with 2 x 6-core X5650 CPUs, 48 GB of RAM and 2 x NVIDIA M2070 GPUs
- 10 visualisation nodes, each with 192 GB of RAM and 2 x NVIDIA M2070Q GPUs
- 250 TB GPFS file system capable of 3 GB/s+ sustained write
- 4X QDR Mellanox IS5200 InfiniBand switch (~32 Gb/s)
- Stage 2: additional high-memory nodes, +86 NVIDIA Tesla K20s, +20 Intel Xeon Phis

MASSIVE Resources
- Total 2224 CPU cores
  - 74 nodes with 48 GB of RAM
  - 56 nodes with 64 GB of RAM
  - 20 nodes with 128 GB of RAM
  - 10 nodes with 196 GB of RAM
- 244 GPUs (total ~250,000 CUDA cores)
  - 76 NVIDIA K20 GPU coprocessors
  - 20 NVIDIA M2070Qs (visualisation)
  - 148 NVIDIA M2070 GPU coprocessors
- 20 Intel Phis (1200 cores)
- File systems: M2 ~350 TB, M1 ~150 TB

[MASSIVE photo. Photo: Steve Morton]

MASSIVE home directory layout
    /home/researcher/
    |-- myProject001 -> /home/projects/myProject001
    |-- myProject001_scratch -> /scratch/myProject001
    |-- Mx ->

Software
- Slurm scheduler
- Linux CentOS 6
- module list; documentation: /userguide, /software-instructions, /installed-software
- Strudel
(A loose sketch of submitting a job through Slurm appears after this transcript.)

Science Research Outcomes
Publications accepted or published:
    Year  Publications
    2012  74
    2013  160
    2014  218
https://www.massive.org.au/news
https://www.massive.org.au/about/acknowledgement

Desktop Success
- July-Dec 2012 the MASSIVE Desktop was used by 70+ MASSIVE users, with an average use of 90 times
- 85% of those users have used the desktop more than 10 times
- 55% of those users have used the desktop more than 50 times

Help Desk
For any issues with using MASSIVE or the documentation on this site, please contact the Help Desk.
Phone: 03

Consulting
For general enquiries, and enquiries about value-added services such as help with porting code to GPUs or using MASSIVE for Imaging and Visualisation, use the following:
Phone: 03

Other
For other enquiries please contact the MASSIVE Coordinator:

Exercises
Training accounts:
- username: train[XY]
- password: MTrain[[X+5]Y]
- accounts train01 - train54, passwords MTrain51 - MTrain104
(The password rule is sketched in code below.)
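The password rule on the training-accounts slide is plain arithmetic on the digits of the username: for an account trainXY, the password is "MTrain" followed by X+5 and then Y. A minimal sketch of that rule in Python (the helper function is illustrative only, not a MASSIVE tool):

    # Sketch of the training-account password rule: for "trainXY",
    # the password is "MTrain" + (X + 5) + Y. Illustrative only.
    def training_password(username: str) -> str:
        digits = username[len("train"):]          # e.g. "01" or "54"
        x, y = int(digits[0]), int(digits[1])
        return f"MTrain{x + 5}{y}"

    # Worked examples from the slide:
    assert training_password("train01") == "MTrain51"
    assert training_password("train54") == "MTrain104"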

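The Software slide names Slurm as the scheduler and environment modules for software access. As a loose illustration of how work is typically handed to Slurm, the sketch below writes a small batch script and submits it with sbatch; the job directives, module name and workload are placeholders, not values from the MASSIVE user guide, and it assumes it is run on a login node where sbatch is available.

    # Loose sketch of submitting a job through Slurm, as named on the
    # Software slide. Module name and workload are placeholders; see
    # /userguide and /installed-software for the real options.
    import subprocess
    import textwrap
    from pathlib import Path

    job_script = textwrap.dedent("""\
        #!/bin/bash
        #SBATCH --job-name=demo
        #SBATCH --ntasks=1
        #SBATCH --time=00:10:00
        #SBATCH --mem=4G

        module load python          # placeholder module name
        python my_analysis.py       # placeholder workload
        """)

    Path("demo.sbatch").write_text(job_script)

    # sbatch is Slurm's standard submission command; this assumes it is on PATH.
    subprocess.run(["sbatch", "demo.sbatch"], check=True)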
