Shared services – the future of HPC & big data facilities for UK research?
Martin Hamilton, Jisc
David Fergusson & Bruno Silva, Francis Crick Institute
Andreas Biternas, King's College London
Thomas King, Queen Mary University of London
Photo credit: CC-BY-NC-ND Jisc
HPC & Big Data 2016
Shared services for HPC & big data
1. About Jisc
– Who, why and what?
– Success stories
2. Recent developments
3. Personal perspectives & panel discussion
– David Fergusson & Bruno Silva, Francis Crick Institute
– Andreas Biternas, King's College London
– Thomas King, Queen Mary University of London
HPC & Big Data 2016
1. About Jisc
Jisc is the UK higher education, further education and skills sectors' not-for-profit organisation for digital services and solutions. This is what we do:
› Operate shared digital infrastructure and services for universities and colleges
›Negotiate sector-wide deals, e.g. with IT vendors and commercial publishers
›Provide trusted advice and practical assistance
1. About Jisc
Janet network [Image credit: Dan Perry]
1. About Jisc
[Diagram: Janet network external connectivity – peerings and interconnects at 1, 10 and 100 Gbit/s with, among others, GÉANT/GÉANT+, LINX, IXManchester, IXLeeds, global transit (Tata, Level3), CDNs (Akamai, Limelight), Amazon, Microsoft EU, Netflix, the BBC (including HD/4K pilots and Pacific Quay), NHS N3, HEAnet, schools and public sector networks, via exchange points in London (Telehouse, Telecity, Harbour Exchange), Manchester, Leeds, Glasgow and Edinburgh. Total external connectivity ≈ 1 Tbit/s.]
1. About Jisc
Doing more, for less
1. About Jisc
www.jisc.ac.uk/about/vat-cost-sharing-group
VAT Cost Sharing Group
› Largest in UK (we believe)
› 93% of HEIs
› 256 institutions participating
HPC & Big Data 2016
2. Recent developments
www.jisc.ac.uk/financial-x-ray
Financial X-Ray
› Easily understand and compare overall costs for services
› Develop business cases for changes to IT infrastructure
› Mechanism for dialogue between finance and IT departments
› Highlight comparative cost of shared and commercial third party services
2. Recent developments
Assent (formerly Project Moonshot)
› Single, unifying technology that enables you to effectively manage and control access to a wide range of web and non-web services and applications.
› These include cloud infrastructures, High Performance Computing, Grid Computing and commonly deployed services such as email, file store, remote access and instant messaging
www.jisc.ac.uk/assent
2. Recent developments
Equipment sharing
› Brokered industry access to £60m public investment in HPC
› Piloting the Kit-Catalogue software, helping institutions to share details of high value equipment
› Newcastle University alone is sharing £16m+ of >£20K value equipment via Kit-Catalogue
Photo credit: HPC Midlands
http://bit.ly/jiscsharing
2. Recent developments
http://bit.ly/jiscsharing
Equipment sharing
› Working with EPSRC and University of Southampton to operationalise equipment.data as a national service
› 45 organisations sharing details of over 12,000 items of equipment
› Conservative estimate: £240m value
› Evidencing utilisation & sharing?
2. Recent developments
Janet Reach
› £4M funding from BIS to work towards a Janet which is "open and accessible" to industry
› Provides industry access to university e-infrastructure facilities to facilitate further investment in science, engineering and technology with the active participation of business and industry
› Modelled on Innovate UK competition process
bit.ly/janetreach · bit.ly/jisc-hpc
2. Recent developments
Research Data Management Shared Service
› Procurement under way
› Aiming to pilot for 24 months starting this summer
› 13 pilot institutions
› Research Data Network
› Find out more: researchdata.jiscinvolve.org
2. Recent developments
Research Data Discovery Service
› Alpha!
› Uses CKAN to aggregate research data from institutions
› Test system has 16.7K datasets from 14 organisations so far
› Search and browse: ckan.data.alpha.jisc.ac.uk (see the query sketch below)
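Because the discovery service is built on CKAN, its catalogue can also be queried programmatically through CKAN's standard action API. A minimal sketch, assuming the alpha endpoint above exposes the usual CKAN package_search action (the /api/3/action path and field names are CKAN defaults, not confirmed for this particular instance):

import requests

# CKAN action API: search the aggregated research-data catalogue.
# Base URL is taken from the slide; the API path is the CKAN default.
BASE = "https://ckan.data.alpha.jisc.ac.uk"

def search_datasets(query, rows=10):
    resp = requests.get(
        f"{BASE}/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    body = resp.json()
    if not body.get("success"):
        raise RuntimeError(body.get("error"))
    return body["result"]

if __name__ == "__main__":
    result = search_datasets("genomics")
    print(f"{result['count']} matching datasets")
    for pkg in result["results"]:
        print("-", pkg["title"])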
HPC & Big Data 2016
3. Personal perspectives & panel discussion
3. Personal perspectives
www.jisc.ac.uk/shared-data-centre
3. Personal perspectives
› David Fergusson, Head of Scientific Computing
› Bruno Silva, HPC Lead
› Francis Crick Institute
eMedLab: Merging HPC and Cloud for Biomedical Research
Dr Bruno Silva
eMedLab Service Operations Manager / HPC Lead – The Francis Crick Institute
[email protected]
December 2015
Institutional Collaboration
Research Data
Multidisciplinary research
DIY...
Federated Institutional support
[Diagram: eMedLab Ops team supported by institutional support teams at each partner institution – no funding available for dedicated staff!]
Winning bid
• 6048 cores (E5-2695v2)
• 252 IBM Flex servers, each with:
  • 24 cores
  • 512GB RAM per compute server
  • 240GB SSD (2x120GB RAID0)
  • 2x10Gb Ethernet
• 3:1 Mellanox Ethernet fabric
• IBM GSS26 – Scratch 1.2PB
• IBM GSS24 – General Purpose (Bulk) 4.3PB
• Cloud OS – OpenStack
Benchmark results (preliminary)
• Aggregate HPL (one run per server – embarrassingly parallel)
  • Peak: 460 Gflops × 252 = 116 Tflops (see the arithmetic sketch below)
  • Max – 94%
  • Min – 84%
• VM ≈ Bare metal HPL runs (16 core)
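A minimal sketch of the aggregate-HPL arithmetic implied above; the per-node peak and the 84–94% efficiency range are taken from the slide, everything else is straightforward arithmetic rather than an eMedLab measurement:

# Aggregate HPL estimate: one independent HPL run per server,
# so cluster throughput is roughly the sum of per-node results.
nodes = 252                  # IBM Flex servers
peak_per_node_gflops = 460   # per-node HPL peak from the slide

aggregate_peak_tflops = nodes * peak_per_node_gflops / 1000
print(f"Aggregate peak: {aggregate_peak_tflops:.0f} Tflops")  # ~116 Tflops

# Observed per-node results ranged from 84% to 94% of that peak.
for label, eff in [("best node", 0.94), ("worst node", 0.84)]:
    print(f"{label}: {peak_per_node_gflops * eff:.0f} Gflops")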
Benchmark results (preliminary – bare metal only)
• Storage throughput (gpfsperf, GB/s):

Bulk File System:
  Create – Sequential 16M: 100
  Read – Sequential 16M: 88, 512K: 86; Random 16M: 131, 512K: 22
  Write – Sequential 16M: 96, 512K: 97; Random 16M: 89, 512K: 60
Scratch File System:
  Create – Sequential 16M: 141
  Read – Sequential 16M: 84, 512K: 83; Random 16M: 107, 512K: 20
  Write – Sequential 16M: 137, 512K: 137; Random 16M: 125, 512K: 28
eMedLab Service
[Diagram: eMedLab service overview – OpenStack cloud with virtual clusters provisioned via elasticluster]
eMedLab Governance & Support Model
Projects
• Principal Investigator / Project lead
  • Reports to eMedLab governance
  • Controls who has access to project resources
• Project Systems Administrator
  • Institutional resource and/or specialised research team member(s)
  • Works closely with eMedLab support
• Researchers
  • Those who utilise the software and data available in eMedLab for the project
Governance
• MRC eMedLab Project Board (Board)
• Executive Committee (Exec)
• Resource Allocation Committee (RAC)
• Technical Governance Group (TGG)
• Research Projects / Operations
Federated Institutional support
• Operations Team Support (support to facilitators and Systems Administrators)
• Institutional Support (direct support to research)
• Tickets, Training, Documentation
Pilot Projects
• Spiros Denaxas – Integrating EHR into i2b2 data marts
• Taane Clark – Biobank data analysis – evaluation of analysis tools
• Michael Barnes – TranSMART
• Chela James – Gene discovery, rapid genome sequencing, somatic mutation analysis and high-definition phenotyping
[Diagram: OpenStack VM lifecycle – a VM image (installed OS) plus a "flavour" (CPU, RAM, disk) produces VM instances 1…N on a network; instances can be started, stopped, held or checkpointed and are accessed via the Horizon console, SSH (external IP or tunnel), web interfaces, etc. See the sketch below.]
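To make the image/flavour/instance flow above concrete, here is a minimal sketch using the Python openstacksdk; the cloud name, image, flavour and network names are illustrative placeholders, not eMedLab's actual configuration:

import openstack

# Connect using credentials from clouds.yaml / environment variables.
# "emedlab" is a placeholder cloud name.
conn = openstack.connect(cloud="emedlab")

# Pick an image (the installed OS) and a flavour (CPU/RAM/disk shape).
image = conn.get_image("ubuntu-14.04")          # placeholder image name
flavor = conn.get_flavor("m1.medium")           # placeholder flavour name
network = conn.get_network("project-network")   # placeholder tenant network

# Boot an instance on the project/tenant network.
server = conn.create_server(
    name="analysis-node-1",
    image=image,
    flavor=flavor,
    network=network,
    wait=True,
)
print(server.status, conn.get_server(server.id).addresses)

# Instances can later be stopped, restarted or deleted, e.g.:
# conn.compute.stop_server(server)
# conn.delete_server(server.id)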
Pilot Projects
• Peter Van Loo – Scalable, Collaborative Cancer Genomics Cluster (elasticluster)
• Javier Herrero – Collaborative Medical Genomics Analysis Using Arvados
Challenges
[Overview: Support · Integration · Presentation · Performance · Security · Allocation]
Challenges – Support
• High barrier to entry
  • Provide environments that resemble HPC or desktop, or more intuitive interfaces
  • Engender new thinking about workflows
  • Promote planning and resource management
  • Train support staff as well as researchers
• Resource-intensive support
  • Promote community-based support and documentation
  • Provide basic common tools and templates
  • Upskill and mobilise local IT staff in departments
  • Move IT support closer to the research project – Research Technologist
Integration
Challenges – Integration
• Suitability of POSIX parallel file systems for cloud storage
  • Working closely with IBM
  • Copy-on-write feature of SS (GPFS) is quite useful for fast instance creation
  • SS actually has quite a lot of the scaffolding required for a good object store
• Presentation of SS or NAS to VMs requires an additional AAAI layer
  • Working closely with Red Hat and OCF to deliver IdM
  • Presentation of SS to VMs introduces stability problems that could be worked around with additional SS licenses and some bespoke scripting
• Non-standard network and storage architecture
  • Additional effort by vendors to ensure a stable, performant and up-to-date infrastructure – great efforts by everyone involved!
  • Network re-design
Performance
Challenges – Performance
• File system block re-mapping
  • SS performs extremely well with 16MB blocks – we want to leverage this
• Hypervisor overhead (not all cores used for compute)
  • Minimise number of cores "wasted" on cloud management
  • On the other hand, fewer cores means more memory bandwidth
• VM IO performance potentially affected by virtual network stack
  • Leverage features available in the Mellanox NICs such as RoCE, SR-IOV, and offload capabilities
Challenges – Performance: Block Re-Mapping
• SS (GPFS) is very good at handling many small files – by design
• VMs perform random IO reads and a few writes with their storage
• VM storage (and Cinder storage pools) are very large files on top of GPFS
• VM block size does not match SS (GPFS) block size
(Storage throughput figures as in the benchmark table above.)
Challenges – Performance: Block Re-Mapping
• Idea: turn random into sequential IO (see the sketch below)
• Have a GPFS standing
(Storage throughput figures as in the benchmark table above.)
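A minimal sketch of why the block-size mismatch hurts, and what coalescing random guest IO into sequential file-system IO buys: a 512K random read landing on a 16MB GPFS block still touches the whole block, so only a small fraction of each file-system IO is useful. The block sizes come from the slides; the arithmetic is purely illustrative, not an eMedLab measurement:

# Illustrative read-amplification arithmetic for the block re-mapping problem.
GPFS_BLOCK = 16 * 1024 * 1024   # SS (GPFS) block size: 16MB, as on the slides
VM_IO_SIZE = 512 * 1024         # typical random guest IO: 512K

# Random case: every guest IO can land in a different GPFS block,
# so the file system moves a full 16MB block to satisfy 512K of data.
useful_fraction_random = VM_IO_SIZE / GPFS_BLOCK
print(f"Useful data per GPFS block (random IO): {useful_fraction_random:.1%}")  # ~3.1%

# Re-mapped/sequential case: if consecutive guest IOs are coalesced so a
# whole GPFS block is consumed before moving on, the block is fully used.
ios_per_block = GPFS_BLOCK // VM_IO_SIZE
print(f"Guest IOs satisfied per block when coalesced: {ios_per_block}")        # 32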
Presentation
Challenges – Presentation
• Access to eMedLab through VPN only
  • Increases security
  • Limits upload throughput
• Rigid, non-standard networking
  • Immediately provides a secure environment with complete separation
  • Projects only need to add VMs to the existing network
  • Very inflexible, limits the possibility of a shared ecosystem of "public" services
  • Introduces great administration overheads when creating new projects – room for improvement
[Diagram: VM instances 1…N on project/tenant networks, reached via https://vpn.emedlab.ac.uk, with a public network / DMZ in front; clusters provisioned with elasticluster]
Security
Challenges - Security
• Presentation of SS shared storage to VMs raises security concerns
  • VMs will have root access – even with squash, users can sidestep identity
  • Re-export SS with a server-side authentication NAS protocol
  • Alternatively, abstract shared storage with another service such as iRODS
• Ability of OpenStack users to maintain security of VMs
  • Particularly problematic when deploying "from scratch" systems
  • A competent, dedicated PSA mitigates this
Allocation
Challenges – Allocation
• Politics and economics of "unscheduled" cloud
  • Resource allocation in rigid portions of infrastructure (large, medium, small)
  • Onus of resource utilisation is with the project team
• A charging model may have to be introduced to promote good behaviour
  • The infrastructure supplier does not care about efficiency, as long as cost is recovered
• Scheduling over unallocated portions of infrastructure may help maximise utilisation
  • Benefits applications that function as directed acyclic graphs (DAGs) – see the sketch below
• Private cloud is finite and limited
  • Once it is fully allocated, projects will be on a waiting list, rather than a queue
  • Cloud bursting can "de-limit" the cloud, if funding permits it
  • This would be a talk on its own.
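A minimal sketch of why DAG-shaped workloads suit scheduling over spare capacity: tasks only need resources once their dependencies finish, so independent stages can be drip-fed into whatever unallocated capacity exists at the time. The pipeline stages below are hypothetical, and Python's standard-library TopologicalSorter stands in for a real scheduler:

from graphlib import TopologicalSorter

# Hypothetical analysis pipeline expressed as a DAG:
# each task maps to the set of tasks it depends on.
pipeline = {
    "align_sample_A": set(),
    "align_sample_B": set(),
    "call_variants_A": {"align_sample_A"},
    "call_variants_B": {"align_sample_B"},
    "joint_analysis": {"call_variants_A", "call_variants_B"},
}

ts = TopologicalSorter(pipeline)
ts.prepare()
while ts.is_active():
    # Everything returned by get_ready() is independent of everything else
    # still pending, so it can be launched opportunistically on free capacity.
    ready = list(ts.get_ready())
    print("launch on spare capacity:", ready)
    ts.done(*ready)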
Future Developments
• VM and storage performance analysis
  • Create optimal settings recommendations for Project Systems Administrators and Ops team
• Revisit network configuration
  • Provide a simpler, more standard OpenStack environment
  • Simplify service delivery, account creation and other administrative tasks
• Research Data Management for shared data
  • Could be a service within the VM services ecosystem
  • iRODS is a possibility
  • Explore potential of Scratch
• Integration with Assent (Moonshot tech)
  • Access to infrastructure through remote credentials and local authorisation
  • First step to securely sharing data across sites (Safe Share project)
Conclusions
• eMedLab is ground-breaking in terms of:
  • Institutional collaboration around a shared infrastructure
  • Federated support model
  • Large scale High Performance Computing cloud (it can be done!)
  • Enabling large scale, highly customisable workloads for biomedical research
• Linux cluster still required (POSIX legacy applications)
  • SS guarantees this flexibility at very high performance
  • We can introduce bare metal (Ironic) if needed for a highly versatile platform
• Automated scheduling of granular workloads
  • Can be done inside the cloud
• True partnership – OCF, Red Hat, IBM, Lenovo, and Mellanox
  • Partnership working very well
  • All vendors highly invested in eMedLab's success
The Technical Design Group
• Mike Atkins – UCL (Project Manager)
• Andy Cafferkey – EBI
• Richard Christie – QMUL (Chair)
• Pete Clapham – Sanger
• David Fergusson – the Crick
• Thomas King – QMUL
• Richard Passey – UCL
• Bruno Silva – the Crick
Institutional Support Teams
UCL: Facilitator: David Wong; PSA: Faruque Sarker
Crick: Facilitator: David Fergusson/Bruno Silva; PSA: Adam Huffman, Luke Raimbach, John Bouquiere
LSHTM: Facilitator: Jackie Stewart; PSA: Steve Whitbread, Kuba Purebski
Sanger: Facilitator: Tim Cutts, Josh Randall; PSA: Peter Clapham, James Beal
EMBL-EBI: Facilitator: Steven Newhouse/Andy Cafferkey; PSA: Gianni Dalla Torre
QMUL: Tom King
Operations Team
Thomas Jones (UCL), William Hay (UCL), Luke Sudbery (UCL)
Pete Clapham (Sanger), James Beale (Sanger)
Tom King (QMUL)
Bruno Silva (Ops Manager, Crick), Adam Huffman (Crick), Luke Raimbach (Crick), Stefan Boeing (Data Manager, Crick)
Andy Cafferkey (EMBL-EBI), Rich Boyce (EMBL-EBI), David Ocana (EMBL-EBI)
I’ll stop here…
Thank You!
[Diagram: VM instances 1…N on a tenant network with OpenStack Cinder block storage (single VM access) and a route to the Internet; project/tenant networks reached via https://vpn.emedlab.ac.uk, clusters provisioned with elasticluster. See the volume sketch below.]
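Cinder volumes like the one in this diagram are block devices attached to a single VM at a time. A minimal, illustrative sketch using the Python openstacksdk; the cloud name, volume size and server name are placeholders:

import openstack

conn = openstack.connect(cloud="emedlab")  # placeholder cloud name

# Create a 100GB block volume and attach it to one instance.
# Cinder block storage is single-attach: one VM sees the device at a time.
volume = conn.create_volume(size=100, name="project-scratch", wait=True)
server = conn.get_server("analysis-node-1")  # placeholder server name
conn.attach_volume(server, volume, wait=True)

# Inside the guest the volume appears as a new block device (e.g. /dev/vdb)
# which can be formatted and mounted like any local disk.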
Winning bid
• Standard compute cluster
• Ethernet network fabric
• Spectrum Scale storage
• Cloud OS
Initial requirements
• Hardware geared towards very high data throughput work – capable of running an HPC cluster and a cloud based on VMs
• Cloud OS (open source and commercial option)
• Tiered storage system for:
  • High performance data processing
  • Data sharing
  • Project storage
  • VM storage
Bid responses – interesting facts
• Majority proposed OpenStack as the Cloud OS
• Half included both an HPC and a cloud environment
• One provided a VMware-based solution
• One provided an OpenStack-only solution
• Half of the tender responses offered Lustre
• One provided Ceph for VM storage
3. Personal perspectives
› Andreas Biternas, HPC & Linux Lead
› King’s College London
CHALLENGES OF HAVING A SERVER FARM IN THE CENTRE OF LONDON
ANDREAS BITERNAS
FACULTY OF NATURAL AND MATHEMATICAL SCIENCES
King's College HPC infrastructure in JISC DC

Problems and costs of having a server farm on the Strand campus
• Cost of space: roughly £25k per square metre in Strand
• Power:
  • Expensive switches and UPS which require annual maintenance
  • Unreliable power supply due to high demand in the centre of London
• Cooling:
  • Expensive cooling system similar to the one in the Virtus DC
  • High cost of running and maintaining the system
• Weight: due to the age of the building there are strict weight restrictions, as an auditorium is below the server farm(!)
• Noise pollution: there is strong noise from the server farm up to 2 floors below
King's College infrastructure in Virtus DC
• Total 25 cabinets with ~200 racks in Data Hall 1:
  • 16 cabinets for the HPC cluster ADA + Rosalind
  • The rest for King's central IT infrastructure: fileservers, firewalls etc.
• Rosalind is a consortium between the Faculty of Natural and Mathematical Sciences, the South London and Maudsley NHS Foundation Trust BRC (Biomedical Research Centre) and the Guy's and St Thomas' NHS Foundation Trust BRC
• Rosalind has around 5000 cores, ~150 Teraflops, with HPC and Cloud parts using OpenStack
Features of Virtus Datacentre
• Power:
  • Two redundant central power connections
  • UPS & onsite power generator
  • Two redundant PSUs in each rack
• Cooling:
  • Chilled water system cooled via fresh air
  • Configured as hot and cold aisles
• Services:
  • Remote hands
  • Installation and maintenance
  • Office, storage space and wifi
  • Secure access control environment
• Better internet connection:
  • No "single" connections
  • Fully resilient network
  • The bandwidth requirements of large data sets were being met
Connectivity with Virtus Datacentre

Costs of Virtus Datacentre
• Due to the contract with JISC, tenants (Francis Crick Institute, Queen Mary University, King's College etc.) have special rates
• Costs:
  • Standard fee for each rack, which includes costs of space, cooling, connectivity etc.
  • Power consumed from each rack at normal market (education) prices
3. Personal perspectives
› Thomas King, Head of Research Infrastructure
› Queen Mary University of London
Queen Mary University of London
Tom King, Head of Research Infrastructure, IT Services
Who are we?
20,000 students and 4,000 staff
5 campuses in London
3 faculties:
Humanities & Social Sciences
Science & Engineering
Barts & the London School of Medicine and Dentistry
Copyright Tim Peake, ESA, NASA
Old World IT
Small central provision
Lots of independent teams offering a lot of overlap in services and bespoke solutions
21 machine rooms
IT Transformation Programme 2012-15
Centralisation of staff and services – ~200 people
Consolidation into two data centres
On-site ~20 racks
Off-site facility within fibre channel latency distances
Highly virtualised environment
Enterprise services run in active-active
JISC Janet6 upgrades
Research IT
Services we support – HPC, Research Storage, Hardware hosting, Clinical and secure systems
Enterprise virtualisation is not what we're after
Five nines is not our issue – bang for buck
No room at the inn
Build our own on-site? The OAP home
Benefits of shared data centre
Buying power and tenant's association
Better PUE than smaller on-site DC – contribution to sustainability commitment
Transparent costing for power use
Network redundancy – L2 and L3 of JISC network
Collaboration – it's all about the data
Cloudier projects
Emotional detachment from blinking LEDs
Direction of funding – GridPP, Environmental omics Cloud
HPC & Big Data 2016
That’s all folks…
Except where otherwise noted, this work is licensed under CC-BY
Martin Hamilton, Futurist, Jisc, London