
HPC-SIG REPORT 2010

UK High Performance Computing Special Interest Group


Members are drawn primarily from Computing Services in the Higher Education sector, with representation from related organisations such as the National Grid Service and funding bodies like EPSRC, STFC and NERC. Several non-academic institutions, for example GCHQ, are also affiliated to the HPC-SIG. The main Terms of Reference of the HPC-SIG are:

• To act as a lobby stressing the value of mid-range campus HPC provision.
• To ensure that HPC provision and research methodologies are closely aligned, promoting the academic agenda in addition to system management and support.
• To collect, disseminate and promote best practice in HPC provision, management and support.
• To coordinate and publicise training opportunities in the areas of HPC system support and usage within the UK.
• To act as a link between national HPC provision and local university-level provision.
• To act as an outreach vehicle promoting the use of HPC across all academic sectors.
• To facilitate communication between academic and industrial/commercial HPC providers/users.
• To secure the role of HPC as a vital research service across all academic disciplines.
• To demonstrate the value of HPC facilities in Higher Education and to ensure that these facilities can be delivered with best possible value for money.

This report summarises the results from what we believe is the first comprehensive survey undertaken to describe the activities, measure the range and evaluate the impact of HPC facilities in UK universities. The results clearly illustrate why campus HPC, so often in the past ignored or misunderstood, is crucial to many areas of research undertaken in UK universities – providing the underpinning infrastructure for leading-edge research not only in the traditional science and engineering disciplines, but in the increasingly important fields of health, energy and the life sciences, and also in the emerging computational research areas in the social sciences and humanities.

This report will also ably demonstrate the wealth of talent and expertise that staff within the university HPC sector possess in designing, procuring, installing and running efficient, productive and cost-effective advanced high performance computing and storage facilities.

We expect that the findings contained in this report will be of value to HPC service providers in HEIs, senior management in UK HEIs such as CTOs, CIOs and Pro-VCs for Research, the major research funding bodies, and those who have a general interest in the activities of a major contributor to the UK HPC ecosystem.

WELCOME

Editorial Board

Dr Paul Calleja, Director, HPC, University of Cambridge

Caroline Gardiner, Academic Research Facilitator, Advanced Computing Research Centre, University of Bristol

Clare Gryce, Manager, Research Computing Services Group, UCL

Prof Martyn Guest, Director, Advanced Research Computing@Cardiff, Cardiff University

Dr Jon Lockley, Manager, Oxford Supercomputer Centre, University of Oxford

Dr Oz Parchment, Infrastructure Services Manager, iSolutions, University of Southampton

Dr Ian Stewart (Chair), Director, Advanced Computing Research Centre, University of Bristol

Design, print and production cw-design.co.uk

For queries about this report: Caroline Gardiner, Academic Research Facilitator, University of Bristol. Email [email protected], +44 (0) 117 331 4375.

If you need part or all of this publication in an alternative format, please contact Caroline Gardiner.

Extracts may be reproduced with the permission of the HPC-SIG.

Cover images: NGF and its TrkA receptor, courtesy of Deborah Shoemark; BlueCrystal, University of Bristol, courtesy of Timo Kunkel.

1 www.hpc-sig.org

Welcome to this inaugural issue of the report of the UK High Performance Computing Special Interest Group1 (HPC-SIG). The HPC-SIG was formed in 2005 in response to the significant investment in university-level High Performance Computing (HPC) supplied primarily by the HEFCE SRIF3 funding round.


At the same time, the ‘data deluge’ is demanding that new models for data and information management are developed, supported by novel architectures for storing and handling digital material.

21st century research is characterised by increasing levels of collaboration, across disciplines and between continents. HPC and large-scale data storage are critical enablers of this virtual laboratory and library, supported by high-speed global networks. This ‘internationalisation’ of HPC has seen the rapid rise of new entrants into the HPC field, with India, Brazil and China – among others – investing heavily in national programmes. Indeed, the number one spot in the Supercomputing ‘Top 500’ has just been taken by China, ending several years of US occupation of this prestigious position.

In these exciting times, the role of university-based HPC is more critical than ever in providing the foundation for a healthy HPC ‘ecosystem’ for the UK, where computational scientists and HPC-service providers work together in a highly collaborative community. Through their proximity to today’s research base, and to the students who will become our next generation of computational scientists, universities are uniquely positioned to deliver excellent return on investment in HPC as a platform for future economic growth.

Professor David Price, UCL Vice-Provost (Research)

1 INTRODUCTION


Contents

1 Introduction
2 Why HPC?
3 Campus HPC: the present and the future
4 Large Scale Storage
5 Collaborations and Internationalisation
6 Sustainability
   Staffing
   HPC Funding and Cost Recovery
7 Survey Analysis
   Contributing Institutions
   HPC Assets
   Impact
   Teaching/training
   Green IT
8 Conclusions
   SWOT Analysis
   Final Remarks
9 Case studies
Appendix A: Survey of the HPC-SIG
Appendix B: Members of the HPC-SIG
Appendix C: Glossary

Over the last ten years, computational science has emerged as a major driver for innovation across the research spectrum. In fields as diverse as pharmaceuticals and nanotechnology, High Performance Computing (HPC) has dramatically accelerated the delivery of scientific insights, with simulation and modelling supplanting traditional experiment-based methods through their cost efficiency.


Furthermore, the advent of SRIF3 in 2004 allowed major investment into the university HPC sector, resulting in a dramatic increase in the deployment and use of commodity clusters within HEIs. Today this commodity university HPC sector is by far the dominant provider of HPC resources to academics in the UK.

HPC is now acknowledged to play a key role in academia, commerce and industry. In the US, the Council on Competitiveness3 is a group of corporate CEOs, university presidents and labour leaders working to ensure US prosperity and enhanced competitiveness in the global economy through the creation of high-value economic activity. This Council has undertaken surveys on HPC usage and uptake in US industry and has produced several detailed and influential papers confirming that HPC is becoming indispensable for ensuring future US productivity, innovation and competitiveness. They advocate that HPC is a proven transforming and game-changing technology.

It is perhaps worth comparing and contrasting HPC and cloud computing. There are indeed many areas where the two overlap: both typically involve using a remote resource, for example. However, in this report HPC refers to the use of a co-located resource optimised for parallel and/or mixed-workload computing (e.g. simulation of large-scale problems), rather than a distributed resource which may be used in a ‘Software as a Service’ regime (such as web hosting). HPC is the more mature of the two approaches, but its heritage is also based around a more specialised user community.

The UK’s competitiveness in simulation, modelling and analysis of massive data will be critically dependent on continuing access to innovative and agile local and national HPC systems. Work in several industrial and academic disciplines is no longer viable without access to HPC. It has become an indispensable, cost-effective and proven tool for many of the UK’s research staff and is increasingly held to be on an equal footing with laboratory and other experimental work. As a consequence, we need to be able to supply appropriate local HPC capability quickly to researchers who have novel or speculative ideas. National and international HPC facilities are, in the main, intended for established research programmes at the top end of user requirements and do not typically provide researchers with the required development platforms. As such, these top-level services need to be ‘fed’ researchers and projects from the bottom up.

If the UK has the ambition that its research universities be counted amongst the top institutions worldwide, and that they will be able to collaborate with international partners in the US, Europe, the ‘BRICs’ and other newly emerging economies, it must be able to bring world-class resources and infrastructure to the table – it must engage in this transformational technology or, for some research areas, risk stagnating and being left behind by our competitors. Well-founded, world-class local university HPC facilities, complemented by appropriate national facilities, are the key to realising this ambition.

This unique review of the UK university HPC ecosystem has been conducted by the HPC Special Interest Group.

2 WHY HPC?


Over the past decade there has been a revolution in High Performance Computing, spearheaded by a movement away from using expensive traditional proprietary supercomputers to systems based on relatively inexpensive commodity off-the-shelf systems. A direct result of this transition is that the UK academic community has increasingly engaged in research using commodity HPC systems.

‘In this new age, it’s clear that whoever out-computes will out-compete.’ 2

2 Matthew Faraci, ‘American Business's Secret Competitive Weapon: HPC’, www.forbes.com

3 www.compete.org/hpc

User profile: Professor Jonathan Tennyson FRS

Department of Physics and Astronomy, UCL
www.ucl.ac.uk/phys/amopp/people/jonathan_tennyson

I have used HPC facilities at UCL for a variety of projects. The two largest projects are to compute an ultra-high-accuracy dipole surface for the water molecule and to compute a high-temperature line list for ammonia.

Water vapour is the main absorber of sunlight in the earth’s atmosphere and the main greenhouse gas. However, it is very difficult to measure the probability of water vapour absorbing light of a particular wavelength to an accuracy better than a few per cent. We have developed first-principles quantum mechanical methods which can calculate this probability for the majority of water absorption features to 1% or better. Doing this required performing some 12,000 highly sophisticated electronic structure calculations to give the water dipole moment as a function of geometry. These calculations would take some 20 years of serial processing time, but were accomplished in three months using the capabilities of the Legion cluster.

Ammonia is a trace species in the Earth’s atmosphere, but is a significant component of gas giants such as Jupiter. Theory suggests that absorption by hot ammonia will provide the spectral signature of yet-to-be-detected Y-dwarf stars, the coolest brown dwarfs predicted and similar in character to the hot gas giants and planets now observed orbiting other stars. The spectrum of hot ammonia has been simulated theoretically by calculating over 1 billion transition frequencies and probabilities. These are huge calculations which have required use of both the MPI Legion cluster and the OpenMP Unity cluster at UCL, and would have been impossible without them.
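As a rough, illustrative calculation (the profile does not state the core counts actually used, and the figures below are only the round numbers quoted above), the parallel speedup implied by the water-dipole project can be estimated as follows:

```python
# Back-of-envelope estimate of the parallel speedup implied by the
# water-dipole project described in the profile above. The inputs are the
# round figures quoted in the text; the inferred concurrency is purely
# illustrative and assumes near-ideal scaling.

serial_months = 20 * 12   # ~20 years of serial processing time
elapsed_months = 3        # achieved wall-clock time on the Legion cluster

speedup = serial_months / elapsed_months
print(f"Implied speedup: ~{speedup:.0f}x")  # ~80x

# If the ~12,000 electronic-structure calculations are independent of one
# another (one per geometry), the work parallelises naturally, and each
# 'slot' of concurrent work handles only a modest share of the total.
calculations = 12_000
print(f"Calculations per concurrent slot: ~{calculations / speedup:.0f}")  # ~150
```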


The HPC-SIG was formed in 2005 in response to the significant funding for university-level HPC provided primarily by SRIF3. Members are drawn, in the main, from Computing Services in the Higher Education sector, with representation from organisations such as the National Grid Service and funding bodies. The HPC-SIG also has strong links with the HPC Forum, a similar organisation that caters for non-academic institutions such as AWE, GCHQ and the Met Office. Appendix B lists the 35 HEIs, plus affiliates, that form the membership. The review provides a snapshot of HPC activities throughout the HE institutions that responded to the survey request. As this is the first year of what we hope will be a regular review, some institutions had difficulty in collating all the required information within the requested timescale. We hope in future that more institutions will be able to provide input. The HPC-SIG conducted this survey in recognition of the increasing importance of having a better understanding of HPC usage and requirements, and of being able to show the impact of campus HPC facilities on UK research and their importance to both the UK HPC and e-Infrastructure ecosystems.

The benefit of local university HPC to the UK knowledge economy cannot be overstated – it provides the foundation of the HPC ecosystem within the UK and encourages usage within new disciplines and industry which contribute to the wider UK knowledge economy.

One of the interesting developments in 2010 has been the emergence of High Performance Computing (HPC) Wales, an ambitious 10-year development programme to build an HPC Institute and capability across Wales. The programme has been funded from a variety of sources to deliver significant benefits across the Principality and beyond, through an enabling technology and the building of a skills base to support research and development projects with a range of both academic and private sector partners.

There are two distinct features of ‘HPC Wales’ that are worth emphasising. While funded at the level of a national HPC Facility, the project has focused not on a traditional, monolithic single supercomputing facility, but on a distributed Hub-and-Spoke model. The universities of Cardiff and Swansea will provide Hubs linking to Tier 1 Spokes in Aberystwyth, Bangor and Glamorgan Higher Education Institutes, who all form part of the Saint David’s Day Alliance Group of research-focused universities. There will be further Spokes in the University of Wales Alliance Group of Universities and in the Welsh Technium facilities, providing a pan-Wales hub-and-spoke network. This network will provide a new, enhanced, over-arching regional structure.

The second distinctive feature is the focus on high-impact, user-valued research outputs that have significant economic impacts. HPC capability will not in itself deliver sustainable value to the economy, and therefore the establishment and development of an HPC Institute is a key objective, designed to deliver advanced research. Focused on the digital, low-carbon, health, bio-science, engineering and advanced manufacturing sectors, the purpose of the Institute will be to develop educational provision and training at all levels, building the capacity and skills base to operate and take forward the optimal usage and development of HPC and supercomputing technology and solutions in the public and private sectors. HPC Wales will offer a range of opportunities, including skills-development activities, training packages, consultancy services and short-term internships, to support two-way knowledge transfer and create stronger links between the initiative and the industrial community.


Staff profile: Research Facilitator
Caroline Gardiner

Academic Research Facilitator, University of Bristol
www.acrc.bris.ac.uk

Caroline has been Academic Research Facilitator in the Advanced Computing Research Centre (ACRC) since 2006, when the Bristol HPC facility was established. The Bristol HPC system, BlueCrystal, is currently composed of two clusters comprising over 3700 cores and running IBM’s General Parallel File System (GPFS). A resilient petascale research data storage facility, BluePeta, has just been installed. Caroline previously worked at the University of Bath, organising IT courses and marketing for the Lifelong Learning programme.

Caroline’s role is to promote both HPC and research data storage across the university and beyond. This involves organising symposia and seminars to present research being undertaken on BlueCrystal, and giving presentations to academic staff and postgraduates to explain how BlueCrystal and BluePeta could benefit their research. She is the first point of contact for users and potential users and thus has a good understanding of the issues which are of concern to them. Through her membership of the University HPC Executive, which oversees the management of the HPC and research data storage facilities, she is able to contribute the user perspective to the decision-making process.

Caroline also organises a programme of half-day workshops for staff and postgraduates on HPC-related topics and maintains the ACRC website. An important part of her role is the gathering of metrics which demonstrate the impact BlueCrystal is having on research at Bristol, such as the number of publications where the underpinning research was undertaken on BlueCrystal, and key research achievements such as the award of prizes, successful industrial collaborations and use of national and international HPC facilities.


The profile of the UK as a centre of HPC activity has been raised worldwide. The university academic HPC community is vibrant and productive, and many collaborations have been formed, for example the HPC Consortia that have arisen from the Collaborative Computational Projects5. However, such is the progress of technology that any HPC facility needs an environment of continual investment in order to provide cost-effective, performant and competitive resources for its HPC research base. After four years a computer is significantly slower than a new system, which means the user of an older system may take years to achieve what a competitor with a new system can manage in months or even weeks.

There is an independent list which ranks the top 500 computers in the world6 according to their performance on a popular scientific benchmark, and several UK university systems feature in it. The rate of progress in supercomputing and computational science can be gauged by the trend lines in Figure 2 [7]. The figure demonstrates that historically there is an approximate 10-fold increase in performance every 4 years. The first Petaflop system (1 Petaflop = 1000 trillion calculations per second) was built in 2008 at Los Alamos National Laboratory. The first system to achieve 100 Petaflops is expected to be built in 2016, and the first Exaflop system (a quintillion or 1,000,000 trillion calculations per second) is expected circa 2019-2020. From the UK perspective, for a current Top500 university to remain competitive and remain within striking distance of the Top500 worldwide, it will need a Petaflop-scale system by around 2016-2018. To maintain competitiveness internationally, a typical major research-led UK university should expect in future to be able to provide aggregate HPC performance of the order superimposed in red on Figure 2.
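The 10-fold-every-4-years trend quoted above can be turned into a simple extrapolation. The sketch below is illustrative only: it anchors on the report’s own figures (the first 1 Petaflop/s system in 2008) and simply assumes the historical growth rate continues.

```python
# Illustrative extrapolation of the Top500 trend described above:
# roughly a 10-fold increase in performance every 4 years, anchored at the
# first 1 Petaflop/s system in 2008. Assumes the historical rate continues.

PFLOPS_2008 = 1.0            # ~1 Pflop/s leading-edge system in 2008
GROWTH_PER_4_YEARS = 10.0    # approximate 10x every 4 years

def projected_pflops(year: int) -> float:
    """Projected leading-edge performance (Pflop/s) for a given year."""
    return PFLOPS_2008 * GROWTH_PER_4_YEARS ** ((year - 2008) / 4)

for year in (2008, 2012, 2016, 2020):
    print(year, f"~{projected_pflops(year):,.0f} Pflop/s")
# 2016 -> ~100 Pflop/s and 2020 -> ~1,000 Pflop/s (1 Exaflop/s), consistent
# with the 2016 and circa 2019-2020 expectations quoted in the text.
```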

A similar pro-rata increase in performance will be required for all universities with ambitions to be involved with HPC-based research. Without such systems it is unlikely that UK universities will be able to remain internationally competitive in strategic areas such as climate change, energy, health and engineering, as:

• The computers that exist or are in planning at national HPC facilities such as HECToR8 and its possible successor ARCHER, or systems at the proposed Hartree Centre9, are or will be in the main operated as production systems. Generally it is accepted that it is more cost-effective to develop software on local resources prior to scaling up onto larger national systems. In addition, with current network bandwidth, transporting vast volumes of data to and from remote compute facilities can be an issue.

• The performance of petascale and, in future, exascale systems will only be achieved by parallel algorithms that can scale efficiently to 100,000s or even millions of processors. It is unlikely that such software would be fully developed on desktop systems, and tests to ensure scalability and verify accuracy would require some form of large parallel machine. Furthermore, without competitive facilities, the UK will be unable to develop a community with the highly technical skill-set required to undertake this work and will lose influence over the direction of international research which uses HPC.

• Virtualisation, cloud computing and Green IT will likely deprecate usage of the traditional desktop computer, a major tool of many researchers, as general IT moves towards mobile and thin-client models.

The UK must maintain its HPC competitiveness in industry as well as academia. Emerging economies such as China are investing heavily in HPC: Figure 1 compares the deliverable computation performance of UK and Chinese systems from the Top500 list over the last two years. Furthermore, in October 2010 China took the top spot in the Top500 list, the first time in six years that the list has not been led by a US system. Such investment needs to be underpinned by skilled users, and universities must play the central role in developing those skills.

3 CAMPUS HPC: THE PRESENT AND THE FUTURE


‘In the face of serious global competition and a sobering economic climate, U.S. leadership in HPC - in hardware, software, and expertise - stands out as a true national strategic asset.’ 4

The UK has several truly world-class supercomputers in academic institutions, and the significant investment by many UK universities through both SRIF3 and other funding streams is already starting to pay dividends in the form of new and novel research outcomes and the increasing diversification and growth of the computational science community.

Figure 1: The emergence of China as a leading HPC user is shown by a comparison of summed peak computing power (Gflop/s) on the Top500 list over the last two years (generated for June 2010 results)10


This facility will be funded by the University of Bristol at an appropriate and sustainable scale to keep pace with researchers’ data storage demands. Similar projects are in hand at UCL and the universities of Southampton, Oxford and Edinburgh.

In recent years there have been a significant number of computer technology developments. Several generations of processor design have come and gone, raising the compute power per CPU at least six-fold – from single core in 2004 to six cores or more by 2010 – and processors that will greatly increase this performance are in the pipeline. A similar story can be told for network and storage technologies. As a consequence, there has been a vast increase in the amount of data that can be, and is being, created and manipulated by researchers. To put this into context, it is estimated that all the printed material in the world amounts to 200 PBytes [12] (or 200,000,000 GBytes). One experiment, the Large Hadron Collider at CERN – with which many UK university Particle Physics groups are involved – will produce an estimated 15 PBytes of data each year, equivalent to 7.5% of the world’s printed material. Furthermore, the investment in campus HPC facilities has made university researchers more ambitious in their research aspirations.
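The comparison quoted above is a straightforward ratio; the short check below simply reproduces it from the figures given in the text.

```python
# Quick check of the data-volume comparison quoted above, using the
# estimates given in the text (and its footnote 12).

printed_material_pb = 200.0   # estimated total of all printed material, PBytes
lhc_per_year_pb = 15.0        # estimated LHC data production per year, PBytes

share = 100 * lhc_per_year_pb / printed_material_pb
print(f"LHC output per year as a share of all printed material: {share:.1f}%")  # 7.5%
```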

The awareness of the problems of large-scale data creation, manipulation and curation is now more widespread, with funding bodies such as the Research Councils and JISC taking interest. One response to the issue is the creation of the national Digital Curation Centre13. Another example is provided by one of the conclusions of the 2006 report ‘A Strategic Framework for High End Computing’14, produced by the High End Computing Strategy Committee that advised the Research Councils. This stated that in order to realise the proposed strategy ‘the e-Infrastructure required must be a properly balanced one; that is, it must also include provision of capacity resources, data management, data storage and data mining facilities.’

4 LARGE SCALE STORAGE


‘HPC is a proven innovation accelerator, shrinking time to insight and time to discovery.’ 11

Many HPC facilities not only provide processing cycles for research, but are also charged with providing large-scale storage for research data. For example, the University of Bristol has just installed a 1 PByte resilient, scalable, enterprise-grade facility to store its institutional research data assets.

4 ‘High Performance Computing To Enable Next-Generation Manufacturing’, White paper, U.S. Council on Competitiveness, 2009, www.compete.org/hpc

5 www.ccp.ac.uk
6 www.top500.org
7 www.top500.org/lists/2008/11/performance_development
8 www.hector.ac.uk
9 www.stfc.ac.uk/About%20STFC/18572.aspx
10 Figure 1 does not include the upgraded Tianhe-1A system which has recently claimed the top spot in the Top500 list.
11 ‘Advance. Benchmarking Industrial Use of HPC for Innovation’, U.S. Council on Competitiveness, 2008, www.compete.org
12 pcbunn.cithep.caltech.edu/presentations/giod_status_sep97/tsld013.htm
13 www.dcc.ac.uk
14 www.epsrc.ac.uk/SiteCollectionDocuments/other/2006HECStrategicFramework.pdf

Staff profile: System Administrator
Dr Stuart Rankin

Senior System Administrator, University of Cambridge
www.hpc.cam.ac.uk

Stuart has worked for the High Performance Computing Service (HPCS) in Cambridge since 2007. The main HPCS system is currently a 3500-core Dell/Intel cluster with SDR and QDR Infiniband and Lustre storage. For the previous ten years he looked after various large SGI Origin and Altix NUMA systems for Professor Stephen Hawking’s COSMOS Consortium, also based at Cambridge.

Stuart is currently Senior System Administrator with responsibility for the design, implementation, operational support and ongoing evolution of the software platform that underpins the Cambridge HPC Service. The role also has lead responsibility for user support and overall service delivery, and is pivotal in the management of external engineers and in the delivery of consultancy services which are important for industrial funding to the service.

Day-to-day system administration involves monitoring system operation, detecting, diagnosing and resolving system problems, as well as responding to requests for help from users and the creation and management of user accounts. Weekly tasks include managing tape backups, processing statistics and performing software and hardware maintenance as required. Stuart is the point of contact for all internal and external customers, and ensures that all users see a coherent and documented computational environment. He also tests new software and equipment, writes reports and white papers, and gives technical presentations both internally and externally.

Figure 2: Top500 projected performance


Researchers at UK universities can therefore be expected to play a leading role internationally in emerging cross-disciplinary Grand Challenges, provided that the UK has an appropriate sustainable HPC ecosystem.

To forge collaborations with national and international institutions on future Grand Challenges, computational projects will require campus HPC facilities whose performance is either comparable with or within an order of magnitude of that of the prospective partner institutions.

To take just one example, the Hartree Centre model states that around five or six research projects per year will be chosen to be run on the Hartree petascale facility.

However, the software for these projects would be expected to be developed on local facilities which, in order to prove scalability, will have to be of reasonable size. This also holds true for HPC-based research collaborations with other similar large-scale projects such as the European Framework16 and PRACE17 initiatives.

5 COLLABORATIONS AND INTERNATIONALISATION


‘Computational Science, the scientific investigation of physical processes through modelling and simulation on computers, has become generally accepted as the third pillar of science, complementing and extending theory and experimentation.’ 15

Several areas of science that have traditionally been laboratory or experimentally based are now moving to an HPC base. If we couple this movement with the exploration of Grand Challenges in research, such as those outlined by the Hartree Centre business case – the virtual physiome, large-scale simulations of biological structure, investigating new materials, climate change and CO2 sequestration – it becomes clear that these Grand Challenge research fields will require access to very large-scale computational and data storage resources.

Staff profile: Developer
Dr Michael Bane

Senior Research Applications and Collaboration Officer, University of Manchester
www.rcs.manchester.ac.uk/home

Michael has been working in the Research Applications and Collaborations Team (the RAC Team) within IT Services for Research (formerly RCS) since December 2008. At Manchester, high-end computing support is provided by IT Services for Research within the Directorate of IT Services, with academic governance via Manchester Informatics (Mi).

Michael’s role is to help researchers at the University with their high-end computing requirements. Typically, this involves an initial meeting with a PI and postdoc (sometimes a postgrad) to get an overview of the research. This is followed by a few days’ intense examination of the code or system in question, after which a brief Options Report is produced laying out the expert view of the available options and the relative efforts (and outcomes) of various approaches. Generally, IT Services for Research can do a brief amount of work as part of the core service. For more in-depth work, the team encourages and supports the PI to apply for a grant to cover the costs of having a high-end computing expert assigned to the project, perhaps embedded within the School for three to six months.

Alongside the project work, Michael gets regular help-desk enquiries and provides support, including facilitating access to, and applications support on, Grids (national and international), national HPC and local clusters, including the Manchester Computational Shared Facility (currently consisting of a 2000-core cluster). Michael co-ordinates IT Services for Research’s research computing training, working side by side with the leader of the team, Robin Pinning, which involves overseeing the running of all courses. Michael also writes and delivers a number of courses. In November 2010, Michael organised the inaugural University GPU Club, with 100 researchers attending. In any quieter moments, Michael expands his skills by dabbling in hybrid/GPU programming, evaluating performance evaluation tools and learning asynchronous PGAS languages such as CHAPEL.


Darwin, University of Cambridge


Staffing

Whilst there seems currently to be a barely sufficient pool of expertise in HPC cluster design, build and management within UK universities, there is a noticeable increase in commercial and industrial requirements for staff with this HPC experience. As salaries in the commercial sector are generally higher than in the HE sector, this may lead in future to difficulties in retaining expert staff.

Also, while the UK HE sector has a worldwide reputation for ground-breaking and innovative research, for a variety of reasons, such as there being no specific funding stream to support the development aspect of R&D, it is not seen as being able to build on those research successes. To counter this, there is a very high probability that in future UK universities will need access to a pool of professional HPC scientific/technical programmers, not only to develop research results into professional marketable products, but also to help academics gain the most out of new petascale technologies. These programmers could, for example, provide expertise in developing algorithms that can scale to hundreds of thousands of cores, fault-tolerant computing, data curation and management, and performance analysis for very large-scale parallel codes. However, recruitment of such personnel will be difficult, as such specialists are in short supply in the UK and there is no clear career path to encourage more intake.

HPC Funding and Cost Recovery

Total annual capital expenditure on HPC as at 2010 by the 15 institutions who could supply figures is approximately £7.5M, which equates to approximately 0.1% of the total combined annual income and an investment of roughly £1700 per active user. Imperial College and UCL are the strongest investors in campus HPC facilities, with an annual capital expenditure of £1.5M each.
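The aggregate figures above can be cross-checked against each other. The sketch below recomputes the implied totals from the quoted per-user and percentage figures; it is an illustrative consistency check, not additional survey data.

```python
# Illustrative consistency check of the aggregate funding figures quoted above.
# All inputs are taken from the report text; derived values are approximate.

total_capex = 7.5e6            # ~£7.5M total annual capital expenditure (15 HEIs)
capex_per_active_user = 1700   # ~£1700 invested per active user
capex_share_of_income = 0.001  # ~0.1% of total combined annual income

implied_users = total_capex / capex_per_active_user
implied_income = total_capex / capex_share_of_income

print(f"Implied active users: ~{implied_users:,.0f}")                    # ~4,400
print(f"Implied combined income: ~£{implied_income / 1e9:.1f} billion")  # ~£7.5bn

# Both values are consistent with the Survey Analysis section, which reports
# over 4400 users and a combined annual income of over £7.5 billion
# (although those figures cover 17 institutions rather than 15).
```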

All survey respondents operate under Full Economic Costing (fEC), and the three models of cost recovery used are direct charging of users as per a Major Research Facility (MRF), charging under Indirect Rate, and a hybrid model (HMRF) which aims to use elements of both MRF and Indirect.

The advantage of the MRF is that there is clear identification of income recovery and it is easy to ensure income can match expenditure. However, there are a number of major issues with MRF models. There are issues with providing access for pump-priming or ‘blue-sky’ activities. There is significant risk to the sustainability of the facility if utilisation is not as forecast. Also, the administrative burdens on the centre are much larger. In addition, there is variation between funding sources as to how HPC is treated, and users find the additional work at the grant proposal stage unhelpful. In general, the benefits of MRF are outweighed by the disadvantages.

The Indirect Rate model, on the other hand, has the advantages that recovery of HPC costs is spread across the institution and no invoicing of grant holders is required, so there is considerably less administrative overhead. The disadvantages are that there is no direct link between costs and income recovery, and that calculation of the indirect rate is based on prior-year costs.

UCL have around 200 registered projects, with most projects equating to one user. They use a hybrid funding model – Indirect Rate on all FTEs plus Direct Charging to enable large resource users to buy a ‘guaranteed’ amount of resource. UCL currently operate a hierarchical fairshare resource allocation policy for all projects that are not paying for guaranteed resource. UCL’s motivations for choosing this funding model are:

• to enable support of new communities who lack current funding,
• to support skills development in established HPC user communities,
• ease of administration,
• the view that, as a world-class research university, researchers should reasonably expect HPC services to be freely available (i.e. institutional profile and attracting research talent),
• to more easily promote cross/inter-disciplinary research.

6 SUSTAINABILITY


15 ‘International Review of Research Using HPC in the UK’, EPSRC December 2005

16 cordis.europa.eu/fp7/home_en.html
17 www.prace-project.eu

Profile of a training programme

The University of Bristol began offering half-day workshops in HPC-related topics to staff and students in 2007; 950 have so far attended. The workshops are free and cover a range of topics including how to use BlueCrystal (Bristol’s HPC), Linux, Matlab and programming languages including C, C++, Perl, Python and Fortran.

The tutors are all university staff or postgraduates, drawn from a range of academic departments, who see the value of the workshops in enhancing the knowledge of BlueCrystal users and thus improving the efficient running of the machine. Feedback from the workshops is very positive, with over 90% of attendees finding the workshops very useful and over 95% finding the tutors very knowledgeable. The feedback forms also provide suggestions for future topics of interest, ensuring that the programme remains relevant.


Contributing Institutions

The seventeen HEIs listed in Table 2 (Appendix A) responded to the survey request. The total annual income for these institutions amounts to over £7.5 billion, and they represent 50% of the HEIs in the HPC-SIG.

The HPC facilities at these institutions have computational assets of approximately 40,000 cores and serve over 4400 users, with end-user and system support being provided by a total of 61 staff. The large majority (14) of facilities are managed by central IT services on behalf of the research community.

HPC Assets

There are some very large systems installed in UK universities. In fact, six universities have facilities that can offer in excess of 2000 cores, and the two largest facilities, at Imperial College and the University of Southampton, provide 7000 and 8000 cores respectively. The Southampton system has the highest reported double-precision performance of UK university systems, with 72 TFlop/s peak and 66 TFlop/s sustained on the Linpack Top500 benchmark.
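A few illustrative ratios can be derived from the survey figures quoted above and earlier in this section; the sketch below simply recomputes them and is not itself part of the survey.

```python
# Illustrative ratios derived from the survey figures quoted in this section.
# Inputs come from the report text; the derived values are approximate.

total_cores = 40_000    # aggregate computational assets across respondents
total_users = 4_400     # registered users across respondents
support_staff = 61      # end-user and system support staff

print(f"Cores per registered user: ~{total_cores / total_users:.0f}")    # ~9
print(f"Users per support staff:   ~{total_users / support_staff:.0f}")  # ~72

# Linpack efficiency of the largest reported system (Southampton):
peak_tflops = 72.0       # double-precision peak
sustained_tflops = 66.0  # sustained on the Linpack Top500 benchmark
print(f"Linpack efficiency: ~{100 * sustained_tflops / peak_tflops:.0f}%")  # ~92%
```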

Most of the systems in the UK are x86-based, run mainly Linux of some flavour and use a variety of hardware solutions from multiple vendors. Vendor representation is fairly evenly spread among the major suppliers (Bull, Dell, HP, IBM, SGI and Sun/Oracle). We note that the university HPC tier is a major factor in maintaining the diversity in suppliers to the UK HPC market.

Campus HPC facilities are in general keen adopters of novel or disruptive technologies, and several sites have embraced accelerator technologies such as Clearspeed and, more recently, GPGPU systems. The University of Cambridge has a 32-node NVIDIA GPGPU cluster that has a peak single-precision performance of 120 TFlop/s. Many institutions are following suit and installing smaller GPGPU clusters, and there is a burgeoning interest amongst many sections of the academic community in both developing new codes for and porting existing codes to these architectures.

Experienced HPC users should also be regarded as an asset, both to their home institution and to the wider UK HPC effort. For this reason many centres invest a great deal of effort in teaching and training (see the ‘Teaching/training’ section below). Fourteen HEIs are currently engaged with EPSRC to investigate the possibility of an enhanced national training programme to further improve the skill set of UK users.

Impact

One can employ several metrics to measure the success or otherwise of local HPC facilities. We shall use the following:

Income directly associated with projects employing HPC facilities. Ten institutions could supply figures for research grant income associated with projects using their HPC facilities. For 2009, the total annual grant income was an impressive £160M.

Maintaining an individual university’s position as a leading campus HPC institution. This can be measured against peers through the independent Top500 list. While such a comparison can be informative, care must be taken not to be influenced by the ‘photo opportunity’ nature of this list. As such, the HPC-SIG is considering the development of an independent set of KPIs to reflect the efficiency, utility and productivity of local HPC facilities.

Number and range of registered users of HPC clusters and storage facilities. There are at least 4400 registered users on the 17 respondents’ systems. Most were internal users, with only 5% of registered users being classed as external collaborators. The average number of active users per institution is 145. Unsurprisingly, the vast majority of users are drawn from the scientific and engineering communities, though several universities are reporting an increase in interest from researchers in the social sciences and arts. Rather surprisingly, given the interest in using advanced computation in areas such as health care, there seems to be little activity to date from the medical sciences.

Utilisation metrics for HPC clusters and storage facilities. Most respondents report that over 90% of the workload on their machines was in the main of a parallel nature (including ensemble workloads). Long-term percentage utilisation of the HPC assets varied depending on the age of the assets, with a median of 75% and with some sites reporting maximum utilisation over 90%.

Usage of national and international HPC facilities. This can give a measure of ‘scale-up’ rates from local facilities to production work on larger systems. As would be expected, several sites make use of the HECToR national system. Interestingly, some also make use of European systems under the DEISA [19] project, and two institutions also make use of the Jaguar system at Oak Ridge National Laboratory.

Number and quality of papers published by campus HPC facility users. Seven institutions could provide figures for publications during the last year. The minimum number of published journal papers for an institution was 10 and the maximum 125 (University of Bristol). The total number across the seven institutions was 391 journal papers plus 66 papers in conference proceedings. Further papers are also in the course of publication. The journal papers included four published in Nature and two published in Science. A range of representative publications is given in Table 1 below.

7 SURVEY ANALYSIS


‘I believe that modeling, simulation, and large scale analysis with HPC is vital to maintaining an edge in American innovation. High performance computing is simply transforming business processes worldwide.’ 18

18 Richard H. Herman, Chancellor of the University of Illinois at Urbana-Champaign. www.compete.org/news/entry/525/council-on-competitiveness-idc-release-study-on-hpc-and-innovation

19 www.deisa.eu


Table 1: Representative publications

Bristol – The TASC Consortium (Evans, D.M., Data Analysis Group and Manuscript Preparation Group), Genome-wide association study of ankylosing spondylitis identifies multiple non-MHC susceptibility loci, Nature Genetics 42(2), pp123-127, 2010

Cambridge – Kermode, J.R., Albaret, T., Sherman, D., Bernstein, N., Gumbsch, P., Payne, M.C., Csanyi, G., De Vita, A., Low-speed fracture instabilities in a brittle crystal, Nature, Volume 455, Issue 7217, pp1224-1227, 2009

Cambridge – Sebastian, S.E., Harrison, N., Palm, E., Murphy, T.P., Mielke, C.H., Liang, R.X., Bonn, D.A., Hardy, W.N., Lonzarich, G.G., A multi-component Fermi surface in the vortex state of an underdoped high-Tc superconductor, Nature, Volume 454, Issue 7201, pp200-203, 2008

East Anglia – Watson, A.J., U. Schuster, D.C.E. Bakker, N.R. Bates, A. Corbiere, M. Gonzalez-Davila, T. Friedrich, J. Hauck, C. Heinze, T. Johannessen, A. Kortzinger, N. Metzl, J. Olafsson, A. Olsen, A. Oschlies, X.A. Padin, B. Pfeil, J.M. Santana-Casiano, T. Steinhoff, M. Telszewski, A.F. Rios, D.W.R. Wallace and R. Wanninkhof, Tracking the Variable North Atlantic Sink for Atmospheric CO2, Science 326(5958), pp1391-1393, 2009

East Anglia – C. Goldblatt, A.J. Matthews, M.W. Claire, T.M. Lenton, A.J. Watson and K.J. Zahnle, Nitrogen-enhanced greenhouse warming on early Earth, Nature Geoscience 2(12), pp891-896, 2009

Exeter – Dobbs, C.L., Theis, C., Pringle, J.E. and Bate, M.R., Simulations of the grand design galaxy M51: a case study for analysing tidally induced spiral structures, Monthly Notices of the Royal Astronomical Society, Volume 403, Issue 2, pp625-645, 2010

Liverpool – Krishnan R. Harikumar et al., Cooperative Molecular Dynamics in Surface-Reaction, Nature Chemistry 1, pp716-721, 2009

Liverpool – J. Rabone, Y.-F. Yue, S.Y. Chong, K.C. Stylianou, J. Bacsa, D. Bradshaw, G.R. Darling, N.G. Berry, Y.Z. Khimyak, A.Y. Ganin, P. Wiper, J.B. Claridge, M.J. Rosseinsky, An Adaptable Peptide-Based Porous Material, Science 329, pp1053, 2010

Oxford – Kiminori Maeda, Kevin B. Henbest, Filippo Cintolesi, Ilya Kuprov, Christopher T. Rodgers, Paul A. Liddell, Devens Gust, Christiane R. Timmel & P.J. Hore, Chemical compass model of avian magnetoreception, Nature, Volume 453, pp387-391, 2008

Teaching/training

All the responding institutions offered training courses of some form, with 11 institutions running in-house developed courses. The average number of course attendees during 2009 was 219, the highest 1200 and the lowest 30. Subjects covered include parallel programming techniques and methodologies, application packages, programming languages and software engineering methods. Most in-house training is offered free of charge to the end-user.

In addition, a number of institutions made use of the excellent range of courses provided to UK academics and users of HECToR by the Numerical Algorithms Group (NAG)20. For undergraduates and taught postgraduates, five institutions either have or are preparing HPC taught courses or modules. At two institutions, HPC staff also co-supervise PhD students.

Green IT

While many HPC facilities rely on standard data centre air-cooling, several have invested significant capital in more energy-efficient computer cooling solutions. Imperial College use TROX® CO2 rear-door heat exchangers, the universities of Cardiff and Bristol use APC InfraStruXure™ in-row chiller solutions, and the University of Southampton uses an IBM iDataPlex™ rear-door heat exchanger solution.

The University of Sussex has recently relocated its HPC into a new energy-efficient data centre, with a design PUE [21] (Power Usage Effectiveness) of 1.23. Cooling is provided by water-cooled heat exchangers (USystems ColdLogic™) located directly on the rear of each cabinet.

Cardiff has a project which is monitoring energy usage by their HPC facility and has reported an initial PUE in the region of 1.3. The median PUE of all facilities is 1.66. Seven respondents are involved in specific carbon footprint reduction schemes. Oxford Supercomputing Centre has also invested significant effort in energy reduction, both by improvement in the efficiency of their data centre and in active power management of their systems. The latter project is funded by JISC and will be released as an open source software package in the near future.
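For reference, PUE (Power Usage Effectiveness) is the ratio of total facility power to the power delivered to the IT equipment itself, so lower is better and 1.0 is the theoretical ideal. The sketch below uses invented facility figures purely to illustrate how values such as the 1.23 and 1.66 quoted above arise; it does not use survey data.

```python
# Illustrative PUE (Power Usage Effectiveness) calculation.
# PUE = total facility power / IT equipment power; 1.0 is the ideal.
# The power figures below are invented for illustration only.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the Power Usage Effectiveness of a data centre."""
    return total_facility_kw / it_equipment_kw

it_load_kw = 500.0   # power drawn by the computers themselves
overhead_kw = 115.0  # cooling, power distribution and other facility overhead

print(f"PUE = {pue(it_load_kw + overhead_kw, it_load_kw):.2f}")  # 1.23

# A PUE of 1.66 (the survey median) means roughly 0.66 W of cooling and
# distribution overhead for every 1 W delivered to the computing equipment.
```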



20 www.nag.co.uk
21 www.42u.com/measurement/pue-dcie.htm


SWOT Analysis

Strengths

• Campus HPC is close to the end-user, so research staff can build a long-term working relationship with the HPC service provider. Indeed, many academics are on the management committees governing their local HPC facility.
• Campus HPC provides a local pool of HPC expertise within a university, which helps enable a quick turnaround of technical issues or problems.
• Campus HPC provides the agility to respond to new research ideas, teaching drivers and the adoption of new technologies.
• Data locality. Campus HPC minimises possible issues with network bandwidth limitations or bottlenecks in transferring large datasets to and from remote HPC resources.
• Most campus HPC systems have proven to be very productive and cost-effective in comparison with some non-campus facilities.
• It is easy to ensure that the activities of campus HPC facilities align with institutional research and teaching strategy.
• Collaboration – the SRIF3 shared procurement process ably demonstrated the negotiating and purchasing power of universities collaborating with each other.
• The UK campus HPC service providers have an acknowledged breadth and depth of expertise in designing, building and running advanced computing and storage facilities. This strength and maturity of service translates into the strong and stable infrastructure supporting our HPC-based researchers.

Weaknesses

• A major weakness is the funding model for campus HPC facilities. There appears to be a lack of a level playing field compared to some nationally funded facilities.
• Similarly, the effects of the variation of Full Economic Costing models as applied to HPC facilities can affect the efficacy of campus HPC provision.
• There is some difficulty in measuring the success, in terms of research outputs and/or value added, to enable campus HPC impact analysis, due to the lack of standardised methods.
• There are some difficulties in buying capacity outside of the institution (e.g. from an HPC cloud) as this usually requires capital expenditure to be converted into operating expenditure.
• It is proving difficult to retain the expert HPC support staff required to efficiently run HPC facilities, due to the lack of an appropriate career development path and better salaries offered outside of academia.
• Most of the current endeavour in HPC focuses mainly on research. There are limited funding opportunities for the development of HPC technology, software and applications which could provide income streams.
• Variable quality of underpinning estate infrastructure can also affect the efficacy of campus HPC provision.

Opportunities

• Ensuring the continuation of a strong HPC ecosystem will continue to bring in world-class researchers to the UK and ensure their retention.
• The HPC community have long embraced shared services – in fact, members of the HPC-SIG have collaborated on shared service proposals in the past and are continuing to explore areas of common advantage and opportunity. This may lead, in the fullness of time, to regional HPC consortia with members drawn from both academia and local industry. The HPC Wales initiative is a prime example.
• To substantiate the previous point, there is growing interest from the commercial, financial and industrial sectors in exploiting HPC. Regional HPC consortia could aid local SMEs to embrace HPC at little cost and risk to themselves.
• Campus HPC facilities can react quickly to changes in technology and in many cases lead the uptake and exploitation of new and novel technologies.
• Many campus HPC facilities are closely involved with the training and teaching of university staff and students – educating the next generation of computational scientists on campus with the latest technology and software assets.
• There are obvious opportunities for reuse and repurposing of data through data locality.
• Campus HPC facilities could also offer cloud-based HPC and storage services, either by themselves or by forming consortia with similarly minded institutions.

8 CONCLUSIONS



Threats

• Staffing is a major issue – lack of sustained investment may hamper the retention of both research and support staff. Lack of established career development opportunities for support staff is also a major concern. There is evidence that support staff employment opportunities are increasing outside of academia.
• Fragmented and short-term funding streams are an obvious threat to sustainability.
• Embracing cloud-based HPC could lead to a variety of issues, such as finding funding for ‘blue sky’ or speculative research ideas and for student teaching in HPC technologies. It may also lead to an erosion of the skills base of local HPC support staff.
• The current funding pressure is increasing the danger of increased fragmentation of the HPC ecosystem. A lack of connectivity between national and local facilities will prove detrimental to the UK economy.
• The lack of a vibrant UK HPC ecosystem will lead to the UK falling behind international competitors and will inevitably lead to a loss of reputation and scientific standing.
• Furthermore, the UK would become a secondary market for vendors, losing its substantial negotiating power in large IT procurements.

Final Remarks

In these difficult times, it is clear that world-class research and innovation will be an important factor in maintaining the UK’s competitiveness in the world economy. Computational science is now accepted as the third pillar of science alongside theory and experimentation, and as such having a well-founded and well-balanced HPC ecosystem will be crucial in helping preserve the UK’s stature as a major centre of research excellence and innovation. Indeed, the use of HPC and computational science in new emerging areas in the digital economy such as health, bio-informatics, advanced manufacturing and energy highlights its increasing importance to 21st century society.

This report has highlighted some of the activities at campus HPC facilities. The breadth and depth of some of the research highlights ably demonstrate the need for such local facilities. Campus HPC facilities have many advantages, such as:

• allowing HEIs to attract and retain the best academic staff, and allowing them to develop leading-edge research,
• service maturity of university HPC – many facilities are embedded within wider service-oriented organisations such as local IT service departments,
• enabling exploration of unfunded or new ‘blue-sky’ ideas and the associated development and testing of new software,
• enabling teaching of HPC methods and technologies to both undergraduate and postgraduate students,
• enabling researchers to quickly explore and adopt new or novel technologies.

In order to maintain the position of our world-class facilities relative to global competitors, and to ensure our competitiveness in attracting external funding, particularly for large projects such as EU Framework projects, the UK needs to ensure sustainable funding for HPC at both campus and national levels in a co-ordinated fashion. This is vital for long-term planning and to provide a stable, state-of-the-art campus and national HPC base to nurture and expand applied computational research within the UK for the foreseeable future. Indeed, it is clear from the HPC-SIG survey that there is very strong support amongst the community of campus HPC service providers for a UK leadership-class system.

The survey also suggests that, to ensure maximum exploitation of capital assets, there is a requirement for the development of a professional scientific/technical programmer career structure. If the investment in capital assets is not matched by a similar investment in appropriate training programmes and the associated development of an appropriate career pathway, there is a danger that highly talented staff will be recruited away by our global competitors, who can and do offer these.

Several of the issues highlighted herein, such as sustainability, the long-term curation of vast amounts of data, and developing and maintaining the skills base, have also been raised in a recent report, ‘Delivering the UK’s e-Infrastructure for research and innovation’22. We broadly agree with the recommendations and conclusions of the RCUK report, and we hope that this report can similarly inform decision makers about the UK campus HPC ecosystem.

Finally, there is a real spirit of collaboration within the HPC-SIG. In the past, several members submitted a proposal for a joint HPC venture to the HEFCE shared services initiative. This collaborative interest is spurring discussions between members on reducing expenditure by, for example, sharing data centres, collaborating on disaster recovery and business continuity plans, and developing regional HE HPC clouds. The community is watching the current HPC Wales collaboration with some expectation. If successful, HPC Wales could provide an exemplary model for building sustainable, collaborative, shared regional HPC services which collaborate with and offer services to local academic and industrial stakeholders.


22 www.rcuk.ac.uk/documents/research/esci/e-Infrastructurereviewreport.pdf


9 CASE STUDIES


University of Bristol
Professor Nello Cristianini and Dr Marco Turchi
patterns.enm.bris.ac.uk/smart-dissemination-workshop

SMART is a European project that studied how to connect methods of modern Statistical Learning with Statistical Machine Translation and Cross-Language Information Retrieval.

Statistical methods are promising, in that they achieve performance equivalent or superior to that of rule-based systems, at a fraction of the development effort. There are, however, some identified shortcomings in these methods, preventing their broad diffusion. As an example, even though lexical choice is usually more accurate with Statistical Machine Translation (SMT) systems than with their rule-based counterparts, the text they produce tends to be less grammatical. As a second example, SMT systems are trained in batch mode and do not adapt by taking user feedback into account. Finally, in Cross-Language Information Retrieval tasks, query words are most often translated independently of one another, thus giving up possibly relevant contextual clues.

SMART addressed these and other shortcomings using the methods of modern Statistical Learning. The scientific focus was on developing new and more effective statistical approaches while ensuring that existing know-how was duly taken into account.

Within the SMART project, we present an extensive experimental study of a Statistical Machine Translation system, Moses, from the point of view of its learning capabilities, and we discuss learning-theoretic aspects of these systems, including model selection, representation error, estimation error and hypothesis space. Learning curves are obtained using High Performance Computing, and extrapolations of the projected performance of the system under different conditions are provided. More than 1,000 experiments were run using different training set sizes (from 12k to 22,000k training points), data domains (legal and news) and language pairs (French, Chinese, Spanish to English). As far as we know, we obtained the most accurate learning curves in Statistical Machine Translation.
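The learning-curve methodology lends itself to a compact illustration. The sketch below is not the SMART project's code: it fits an inverse power-law learning curve to hypothetical BLEU scores at increasing training-set sizes and extrapolates to a larger corpus. The scores, the functional form and the initial parameter values are illustrative assumptions only.

```python
# Minimal sketch: fit an inverse power-law learning curve to translation-quality
# scores and extrapolate. All numbers below are illustrative, not SMART results.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical training-set sizes (sentence pairs) and BLEU scores.
sizes = np.array([12e3, 50e3, 200e3, 1e6, 5e6, 22e6])
bleu = np.array([18.2, 22.5, 26.1, 29.0, 31.2, 32.8])  # made-up values

def power_law(n, a, b, c):
    """Learning curve of the form a - b * n**(-c), a common empirical choice."""
    return a - b * n ** (-c)

params, _ = curve_fit(power_law, sizes, bleu, p0=[35.0, 300.0, 0.3], maxfev=10000)

# Extrapolate to a training set four times larger than the biggest one measured.
projected = power_law(4 * sizes[-1], *params)
print(f"fitted parameters: {params}")
print(f"projected BLEU at {4 * sizes[-1]:.0f} sentence pairs: {projected:.1f}")
```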

University of Oxford
www.earth.ox.ac.uk/research/groups/geodynamics/home

A case study about the support provided by the Oxford Supercomputing Centre (OSC) for a final year undergraduate project.

Sam Weatherley, supervised by Dr Richard Katz in the Department of Earth Sciences, undertook his fourth-year project in 2008-2009. Sam used OSC hardware to run three-dimensional numerical models of the creeping flow of the Earth's mantle. His objective was to investigate the dynamics of a mid-ocean ridge. Mid-ocean ridges are plate-tectonic boundaries beneath the ocean, where sea-floor plates diverge; new crust is created in the gap between them.

The project was motivated by observations of the global mid-ocean ridge system by a colleague in the US: there is a consistent asymmetry between opposite sides of the mid-ocean ridge near offsets of the ridge (these offsets are known as transform faults). This asymmetry is observed to be correlated with the direction of migration of the ridge over the mantle.

Sam used OSC hardware to compute the temperature and creeping flow of mantle rock beneath the ridge. In his models, mantle flow is driven by plate divergence and ridge migration. He used the calculated patterns of mantle temperature and flow to predict mantle melting, melt transport, and the thickness of the sea-floor crust. His predictions produced an asymmetry that is consistent with the observations, and hence showed that the dynamics of the mantle are expressed in terms of a subtle but observable feature of the sea-floor.

[Figure: three-dimensional model output, panels (a) and (b); horizontal scale 100-300 km; colour scale in kg/m²/yr. Image courtesy of Sam Weatherley.]


University of Oxford
Dr Philip W. Fowler and Professor Mark S.P. Sansom
sbcb.bioch.ox.ac.uk

A case study about using supercomputing to probe the mechanical properties of proteins for a project undertaken by the Structural Bioinformatics and Computational Biochemistry Unit, Department of Biochemistry.

Potassium ion channels are proteins that sit in cell membranes and conduct potassium when the voltage across the membrane is greater than a certain threshold. They therefore behave like biological transistors and are important in the generation of action potentials in, for example, the brain, along nerves and in the heart. When the threshold voltage is reached, part of the protein moves down in the membrane and pushes on four alpha-helices, which swivel shut, much like the iris diaphragm of a camera. The aim of the research is to calculate a map of how the free energy varies when the helices undergo this iris-like motion. We used NAMD2.7, a parallel classical molecular dynamics code, to run 60-90 different simulations, each of which had the ends of the helices constrained in a different position. All this data is then combined to yield a single map showing where the free energy minima are and any kinetic barriers. Our preliminary result is shown in the accompanying figure and, as we can see, there is a single free energy minimum, which tells us that when the channel is closed it is frustrated, which, amongst other things, explains the different opening and closing kinetics measured by experimentalists.
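As an aside on how sampled configurations become a free-energy map, the sketch below applies naive Boltzmann inversion, F = -kT ln P, to a two-dimensional histogram of randomly generated placeholder coordinates. The analysis of restrained simulations in studies like this one would normally use a reweighting scheme such as WHAM, which the text does not detail, so this is only a schematic of the final inversion step, with all inputs assumed.

```python
# Illustrative only: build a 2D free-energy surface from sampled coordinates by
# Boltzmann inversion. This is NOT the restraint-aware analysis used in the study.
import numpy as np

kT = 2.577  # kJ/mol at roughly 310 K

# Placeholder samples standing in for helix-end (x, y) positions from many runs.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)
y = rng.normal(0.0, 1.5, 100_000)

# Histogram the samples to estimate the probability density P(x, y).
counts, xedges, yedges = np.histogram2d(x, y, bins=60, density=True)

# F(x, y) = -kT ln P(x, y); empty bins are left undefined (NaN).
with np.errstate(divide="ignore"):
    free_energy = -kT * np.log(counts)
free_energy[np.isinf(free_energy)] = np.nan

# Shift so the global minimum sits at zero, as is conventional for such maps.
free_energy -= np.nanmin(free_energy)
print("free-energy range (kJ/mol):", np.nanmax(free_energy))
```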

We ran the NAMD2.7 simulations on SAL, the OSC's Intel Xeon Nehalem cluster. Each simulation was only 10 ns long, but running on 8 cores it still took nearly two weeks to complete, generating over 2 GB of data in the process. Seven years ago, a single 10 ns simulation of a protein was considered cutting-edge; now it is possible to run 60 of them to probe the mechanical properties of proteins such as potassium ion channels. It is essential that we begin to do more quantitative studies of this type to allow much tighter integration with experiment, and so that we can try to provide parameters as inputs to higher-level models.

NAMD2.7 is a second-generation classical molecular dynamics code specifically designed for simulating the dynamics of proteins using the CHARMM forcefield. Unlike the first-generation codes, it was designed to be parallelised from the start and has demonstrated good scaling up to 8192 cores on an IBM BlueGene/P system. It is freely available for academic use and has extensive documentation and tutorials as well as a popular mailing list.

University of Liverpool
Professor Alan Nahum
ctuprod.liv.ac.uk/CRUKCentre/nahum.html

A case study on using NW-GRID for a project at the Clatterbridge Centre for Oncology.

We are now extremely close to being able to take a radiotherapy treatment plan (i.e. the characteristics of each radiation beam and the CT representation of patient anatomy) and simulate it on the highly sophisticated Vancouver 'BEAMnrc' MC system (specifically adapted to advanced radiotherapy delivery techniques, known generally as intensity modulation) via the NW-Grid. This will provide highly accurate dose distributions for selected treatments carried out at our radiotherapy centre (one of the largest and best equipped in Europe), which will then be compared in detail with the doses obtained from analytical methods; we term this whole process 'Monte-Carlo based Quality Assurance of Advanced Radiotherapy Techniques' (MCQAART). As far as we are aware, we will be the first radiotherapy centre in the UK to have implemented MCQAART, and this will have been made possible by the use of the NW-Grid.

[Figure: preliminary free-energy map in the plane perpendicular to the pore axis; only the S6 helices are shown. Image courtesy of Philip Fowler and Mark Sansom.]


APPENDIX A: SURVEY OF HPC-SIG

The survey was carried out between March and August 2010. The 17 respondents given in Table 2 answered a total of 44 questions about their HPC facilities and how they are managed and funded.

Institution
Cardiff University
Imperial College London
Loughborough University
Queen's University Belfast
Queen Mary, University of London (QMUL)
University College London (UCL)
University of Birmingham
University of Bristol
University of Cambridge
University of East Anglia
University of Exeter
University of Liverpool
University of Manchester
University of Oxford
University of Sheffield
University of Southampton
University of Sussex

Table 2: Survey Respondents

A.1 Management of HPC facilities

The respondents were asked to describe the management of their HPC facility and whether it is managed as a central IT department, an academic facility or another model. Most HPC facilities are currently managed as a central IT department.

Figure 3: Management of HPC facilities (Central IT facility: 14 sites; Academic facility: 2 sites; Departments run own facilities: 1 site)

An example of each model is:
• Loughborough University – managed as a central IT department.
• University of Exeter – managed as an academic facility.
• Queen Mary, University of London – run by an academic department.

A.2 Income and Expenditure

A.2.1 Funding and cost recovery models

Respondents were asked to describe their funding model as a Major Research Facility (MRF) funded via grant income, an indirect model where funding is provided centrally by the institution through top slicing of income, or a hybrid model which combines elements of both of the above. The responses show a mixed pattern, with an indication that hybrid models are becoming more popular.

Figure 4: Funding models (Direct: 2 sites; Indirect: 7 sites; Mixed: 8 sites)

An example of each model is:
• University of Oxford – directly funded as an MRF, underwritten by the university with a view to becoming self-sustaining.
• University of Southampton – indirectly funded through top slicing, with a percentage of research income allocated to HPC as academic infrastructure.
• University of Sheffield – a hybrid model where the facility is supported by an annual capital investment of £75K, which funds 59% of the facility; 32% is funded by investment from individual research groups and 9% by groups purchasing resource for dedicated usage.

A.2.2 fEC rates per cpu hour

Respondents were asked to indicate their current level of Full Economic Costing (fEC) rate per cpu hour where appropriate. 11 of the 17 respondents provided a rate.

Figure 5: fEC rates per cpu hour (0-5p: 5 sites; 6-10p: 4 sites; 11-15p: 2 sites; unknown/not used: 6 sites)
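As a rough illustration of how rates of a few pence per core hour can arise, the sketch below divides hypothetical annual facility costs by the core hours actually delivered. Every input (cost breakdown, core count, utilisation) is an assumption for illustration only, not a figure reported by any respondent.

```python
# Illustrative cost-recovery calculation; all inputs are hypothetical.
annual_costs_gbp = 400_000 + 140_000 + 3 * 45_000  # capital + power + staff (assumed)
cores = 2_000                                       # assumed cluster size
utilisation = 0.75                                  # assumed fraction of core hours used
hours_per_year = 24 * 365

delivered_core_hours = cores * hours_per_year * utilisation
rate_pence_per_core_hour = 100 * annual_costs_gbp / delivered_core_hours
print(f"delivered core hours per year: {delivered_core_hours:,.0f}")
print(f"indicative rate: {rate_pence_per_core_hour:.1f}p per core hour")
```

With these assumed inputs the rate comes out at roughly 5p per core hour, which sits within the range shown in Figure 5; real fEC rates depend heavily on what each institution includes in the calculation.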

A.2.3 Income

Respondents were asked to indicate their institution's total grant income where there is an HPC element, plus details of any commercial or rental income (as at 1 January 2010).

Ten sites provided grant figures, with a maximum income of £59m and a minimum income of £200k.

Three institutions derive commercial income from their HPC facilities; details are confidential.


Two institutions rent out machine room space, of which one does so internally for disaster recovery purposes.

A.2.4 Capital expenditure per annum

Respondents were asked for their capital expenditure as a representative annual spend, based on an average three-year timescale.

All but two respondents provided a figure, although concern was expressed by several institutions about future levels of investment. The figures show substantial variation: the median capital expenditure was £400k as a representative annual spend over three years, the minimum £50k and the maximum £1.5m.

A.2.5 Non-salary operating costs per annum

When asked about non-salary operating costs, a range of responses was given; most institutions provided total non-salary operating costs, but no breakdown of depreciation, electricity, software licences, staff training and other costs.

Electricity costs were difficult for some institutions to obtain, with some electricity usage paid centrally. Four individual responses were given, showing an average cost of £140k.

Some software licence costs are built into procurement costs or are funded from grants. Three individual responses were given, showing an average spend of £31k.

Some training is also built into procurement costs or is funded centrally.

A.2.6 Depreciation

Responses indicate that computer equipment is depreciated over an average term of 3.6 years and machine rooms over an average term of 10 years.
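As a simple worked example of what such a depreciation term implies, the sketch below applies straight-line depreciation over 3.6 years to a hypothetical £400k purchase; the purchase price matches the median annual capital spend reported in A.2.4 but is used here purely for illustration.

```python
# Straight-line depreciation example; the purchase price is illustrative.
purchase_price_gbp = 400_000
depreciation_term_years = 3.6

annual_charge = purchase_price_gbp / depreciation_term_years
print(f"annual depreciation charge: £{annual_charge:,.0f}")  # roughly £111,000 per year
```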

A.3 Users

A.3.1 Total, internal, external and active users

Respondents were asked to give a total number of users, and then a breakdown between internal and external users, plus an assessment of the number of users who can be classed as active, i.e. the Principal Investigator (PI) of a project and users within a project where jobs have been submitted to the HPC facility within the last 12 months.

Figure 6: Total number of users (1-100 users: 4 sites; 101-200: 1 site; 201-300: 4 sites; 301-400: 3 sites; 401-500: 1 site; 500+: 4 sites)

Internal users were 95% of total users and external users (who are collaborating with a researcher at the institution) only 5%.

The average number of active users was 145; 3 institutions were unable to provide a breakdown.

A.3.2 Use of regional, national and international resources

Respondents were asked for details of users or projects using national or international resources.

Seven institutions gave responses – at least one user or project is using each of the facilities listed in Table 3.

A.4 Staffing

A.4.1 HPC total staff numbers

Respondents were asked how many HPC staff they have and to list the roles and percentage Full Time Equivalent (FTE).

Figure 7: Breakdown of HPC staff roles and FTEs (Management 14.3 FTE; User/application/developer support 13.1 FTE; System administration 23.3 FTE; Research facilitation 1.7 FTE)

The average number of staff was 3.63 and the median 3. The highest number of staff was 8.1 and the lowest 1.

There is therefore a substantial variation between institutions, but overall the figures indicate that HPC staff numbers are low in relation to the number of users and the size of the machines.


Facility – Institutions with users of the facility
Daresbury, Blue Gene and others – Liverpool
DEISA (including Jülich and Max Planck) – Manchester, Bristol
HECToR – Bristol, Cardiff, Liverpool, Manchester, Oxford, Sheffield, UCL
Jaguar (Oak Ridge) – Oxford, UCL
National Grid Service – Liverpool, Manchester, UCL
NW-Grid – Liverpool, Manchester
PRACE – Liverpool
Shaheen (Saudi Arabia) – Oxford
TeraGrid – UCL

Table 3: Use of regional, national and international resources


The breakdown of roles indicates that the majority of HPC staff are undertaking management and system administration of the facilities, with a much smaller number specifically undertaking application and developer support, although in some cases the System Administrators also undertake an element of application and development support.

The importance of staff to the success of HPC is highlighted in A.9: when asked what would make their HPC facility more effective, ten out of seventeen respondents (59%) said more staff.

A.4.2 Academic staff

Respondents were asked to provide details of any academic HPC staff.

The University of Bristol has an HPC lecturer, based in the Department of Computer Science, and the University of Oxford is hiring a lecturer in HPC and Visualisation for the new academic year.

With some institutions already providing credit bearing courses and others planning to do so, it is likely that there will be an increase in the number of academic HPC staff in the next few years.

A.5 Teaching and Training

A.5.1 Training courses

Respondents were asked to detail what training is offered, e.g. informal half-day workshops, use of external training materials, commercial courses, or courses run by NAG.

All institutions provide some training, with 11 institutions running in-house courses. The average number of attendees in 2009 was 219, the highest 1200 and the lowest 30.

A number of institutions run courses given by the Numerical Algorithms Group (NAG), www.nag.co.uk. These training courses are provided free of charge to HECToR users and UK academics whose work is covered by the remit of one of the participating research councils (EPSRC, NERC and BBSRC). Most HPC training is provided free of charge to users.

Figure 8: Training courses provided (Informal short courses: 11 institutions; Longer courses: 3 institutions; NAG courses: 6 institutions)

A.5.2 Topics taught

Respondents were asked to list the topics taught as part of their training courses. Table 4 lists the results.


Profile of a project to reduce carbon footprint – Sheffield

This is a summary of the project, Optimising Energy Use In the Hounsfield Road Computer Centre, undertaken by S Richardson, an MSc student, and his supervisor, Dr S Beck, in the Department of Mechanical Engineering (May 2008).

An energy audit was undertaken for the Hounsfield Road machine room; this is the location of the central HPC facility. Data was gathered and analysed from a diverse range of sources. The building was found to consume a significant amount of electricity both to run and cool the computer hardware, whilst at the same time requiring large amounts of energy to heat nearby offices.

With a view to finding ways of capturing the heat from the servers, a computational fluid dynamics package was used to simulate the flows of air and heat within the room where they are housed. This provided an overview of where heat energy from the servers is going, although a flaw in the model meant that the data could not really be used to calculate meaningful estimates of the amounts of energy available.

From the data recorded on site it was estimated that the total cooling being provided to the hardware is in the region of 92 kW. It was concluded that a heat capture scheme, possibly using a heat pump, could be implemented to provide significant savings from both a financial and an energetic viewpoint.
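To put the 92 kW figure in context, the back-of-the-envelope sketch below converts a constant 92 kW cooling load into an annual energy figure; the assumption of a year-round constant load is illustrative and not taken from the project report.

```python
# Back-of-the-envelope: annual heat energy corresponding to a constant 92 kW load.
# Assumes the load is continuous all year; purely illustrative.
cooling_load_kw = 92
hours_per_year = 24 * 365

annual_heat_kwh = cooling_load_kw * hours_per_year
print(f"heat rejected per year: {annual_heat_kwh:,} kWh (~{annual_heat_kwh/1e6:.2f} GWh)")
```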

Early in 2011 the university expects to sponsor another student to take this work forward.

Using HPC, MPI, managing files: Condor; Introduction to HPC; LaTeX; Linux; Make; MPI (both in-house and run by NAG); OpenMP; Subversion
Applications: Abaqus; Ansys; Image based modelling; Matlab; Mathworks distributed computing toolbox; Visualisation (Avizo, AVS/Express)
Programming languages: C; C++; CUDA; Fortran; Perl; Python; R

Table 4: Subjects taught as part of HPC training programmes


A.5.3 Credit bearing courses

Respondents were also asked about any credit bearing courses offered and whether there are plans to offer them in the future.

• Imperial – gives ancillary credit for the parallel programming courses towards higher degrees.
• Manchester – runs MSc modules: Grid and e-Science, Computer Animation, Visualization for HPC (all School of Computer Science) and Biomechanics (School of Life Sciences).
• Queen's, Belfast – offers an optional final year parallel processing module; a joint MSc is planned with Trinity College, Dublin.
• Sheffield – 5 credit units are offered as part of the Doctoral Development Programme.
• Bristol – a level M HPC module was offered from October 2010; an MSc is planned from 2011.
• Oxford – credit bearing courses are under discussion.

Table 5: Credit bearing courses

A.5.4 PhD supervision

At the University of Bristol, HPC staff co-supervise 2 PhD students and at the University of Manchester, HPC staff co-supervise 12-15 PhD students.

A.6 Machine statistics

A.6.1 Summary of HPC assets

Respondents were asked to provide details of their HPC assets.

Asset – Usage
• Operating system: mainly Linux
• Cores: highest 8000; lowest 64; total cores for all 17 respondents 39448
• Interconnect: mainly Infiniband
• Storage/File system: NFS, Panasas, GPFS and Lustre all used
• Peak performance: highest 72 TFlop/s; lowest 1.2 TFlop/s. 120 TFlop/s (single precision) peak on a 32-node GPU cluster; 1 TFlop/s (double precision)
• Sustained performance: median 8.3 TFlop/s; highest 66 TFlop/s; lowest 2 TFlop/s (data not supplied for all machines)
• Accelerator cards: 5 institutions
• Visualisation: 4 institutions
• Large scale storage: 2 institutions
• Windows cluster: 5 institutions

Table 6: HPC assets

A.6.2 Sustained performance

Figure 9 summarises sustained performance for the 10 sites that provided figures; it does not include the University of Cambridge GPU cluster, which achieved a sustained performance of 120 TFlop/s.

Figure 9: Sustained performance in TFlop/s (Southampton 66.68; Bristol 28; Cardiff 20; Cambridge 20; Exeter 11.2; QMUL 5.5; UEA 3.5; Sheffield 3; Liverpool 2.6; QUB 2)


A.6.3 Largest HPC facilities

The following facilities all have more than 2000 cores:

Institution – Number of cores
Southampton – 8000
Imperial – 7032
Cambridge – 4180
Bristol – 3744
University College London – 2656
Oxford – 2496
Cardiff – 2048

Table 7: Largest HPC facilities

A.6.4 Vendors

Respondents were asked to provide the vendor (and integrator where appropriate) for each of their systems.

Vendor – Total number of machines (including those with an integrator)
Bull – 3
ClusterVision – 1
Dalco – 1
Dell – 6 (4 with ClusterVision, 1 with Streamline)
HP – 2
IBM – 4 (3 with ClusterVision, 1 with OCF)
SGI – 7
Streamline – 1
Sun – 4 (1 with Esteem, 3 with Streamline)
Supermicro – 1
Viglen – 2 (1 with Streamline, 1 with NVIDIA)
Total number of systems – 32

Table 8: Vendors

A.6.5 Usage

Some respondents gave a detailed breakdown of the largest research areas by usage on a percentage basis.

Institution – Usage by research area
Sheffield – Mechanical Engineering 25%; Physics 24%; Computer Science 18%; Biological Sciences 15%; Applied Maths 5%; Electrical and Electronic Engineering 2%; Control Engineering 1%; Chemistry 1%
Liverpool – Chemistry 40%; Surface Science 40%; Engineering 6%; High Energy Physics 1%; Radiology 1%; Human Anatomy 1%; System/Test/Development 10%; Other 1%
Queen Mary, University of London (QMUL) – Astronomy 80%; Engineering 20%
Oxford – Mathematical, Physical & Life Sciences 80%; Medical Sciences 10%; Social Sciences and Humanities 10%
University College London (UCL) – Surface Science & Catalysis 27%; Nanoscience & Defects 22%; Earth Materials 20%; Molecular Quantum Dynamics & Electronic Structure 8%; High Energy Physics 6%; Astrophysics & Remote Sensing 4%; Bioinformatics & Comp. Biology 1%; Unspecified 5%
Bristol – Chemistry 40%; Geographical Sciences 18%; Physics 11%; Mathematics 10%; Computer Science 7%; Electrical & Electronic Engineering 4%; Biochemistry 3%; Engineering Maths 2%; Aerospace Engineering 1%; Others 4%

Table 9: Usage of HPC facilities by research area

A.6.6 Usage statistics

Respondents were asked about the mix of workload running on their machines, with a breakdown sought between serial and parallel jobs. It was suggested that parallel in this context can include running ensembles of serial jobs.

Most respondents report that their machines run over 90% parallel workloads (including ensemble jobs).

A wide range of maximum job times was reported, with a median of 4.5 days and an average of 9 days. The maximum length allowed was 41 days.

Long-term percentage utilisation of the HPC assets varied depending on the age of the assets, with a median of 75%. Maximum utilisation at one institution was 91%.

A.7 HPC facilities

A.7.1 Machine rooms

Respondents were asked how many machine rooms contained HPC equipment.

Responses indicated a mix of institutional shared machine rooms and dedicated HPC machine rooms. A number of sites are looking at expanding machine room resources, recognising the need to refresh machine room infrastructure. Most systems are housed in institutional machine rooms, but three sites have dedicated HPC machine rooms. Most sites have and use at least two machine rooms to provide for at least an element of business continuity.

The University of Sussex has recently relocated its HPC into a new energy-efficient data centre, with a design PUE of 1.23. Cooling is provided by water-cooled heat exchangers (USystems ColdLogic™) located directly on the rear of each cabinet (to cool the equipment within each cabinet rather than the entire room). The thermally isolated modular room is designed to operate between 24-27°C to maximise free cooling throughout the year (the external cooling plant includes a large dry air cooler and resilient industrial-standard chillers for the warmer days). Power efficiency (99% even under part load) is provided by resilient line-interactive UPS units (Eaton) housed in a separate plant room.

A.7.2 Cooling

Within their HPC machine rooms, 11 institutions use standard Computer Room Air Conditioning (CRAC), five use advanced water cooling (three employ water-cooled hot-aisle containment systems and two employ water-cooled rear-door heat exchangers) and one uses a mixture of CRAC and the TROX® CO2 cooling system.

A.7.3 Machine room statistics

Respondents were asked to provide statistics for their machine rooms:

• Power density per rack in kilowatts indicated a median of 12.25 kW, with a maximum of 20 kW and a minimum of 5 kW.

• All machine assets are covered by Uninterruptible Power Supply (UPS) at 6 sites; only the head/login nodes and storage are covered by UPS at 10 sites.

• Power Usage Effectiveness (PUE) is the ratio of the total power used by the data centre facility to the power delivered to the IT equipment; the difference is the overhead of cooling and other infrastructure. Ten responses were given, showing a wide range with most sites between 1.3 and 1.8 and a median of 1.66. There was one outlier of 3.07, representing an older machine room.
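For readers unfamiliar with the metric, the sketch below works through the PUE calculation for made-up meter readings; a PUE of 1.66 (the survey median) implies that roughly two-thirds as much power again is consumed by cooling and other overheads as by the IT equipment itself.

```python
# PUE = total facility power / IT equipment power. Meter readings are illustrative.
it_power_kw = 300.0        # assumed draw of compute, storage and network equipment
facility_power_kw = 498.0  # assumed total draw including cooling, UPS losses, lighting

pue = facility_power_kw / it_power_kw
overhead_fraction = pue - 1.0
print(f"PUE = {pue:.2f}  (overhead = {overhead_fraction:.0%} of IT power)")
```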

A.7.4 Carbon footprint reduction

Respondents were asked if they are involved in any projects to reduce carbon footprint.

Seven sites are involved in projects:
• University of Sheffield – automating shutdown of worker nodes with a low level of utilisation; an on-campus project reviewing recovery of waste heat from data centre cooling systems; optimisation of machine room layout to optimise air conditioning requirements.
• University of Liverpool – proposals submitted to Salix to replace old hardware with new multi-core systems (ROI over 5 years looks feasible); has just won the Green ICT award at the Green Gown Awards, which includes the Condor pool.
• University of Birmingham – actively using Adaptive Computing's MOAB Green Computing options.
• University of Oxford – JISC-funded to write software which powers down under-utilised HPC resources (a simplified sketch of this kind of policy appears after this list).
• University of East Anglia – has worked with a Sustainable ICT Service Provision (SISP) project on improving compute suite efficiency.
• University of Manchester – investigating replacing old research computing equipment with more efficient newer equipment, with investment from a 'Revolving Green Fund', half funded by the university and half from the Carbon Trust; business cases are in preparation.
• Cardiff University – measuring PUE; increasing air temperature; turning off unused nodes automatically. On the mainstream IT side, they are virtualising 90% of the main server-based infrastructure and implementing increasingly aggressive power-saving measures on all PCs, as well as implementing an asset management package to gather more data on actual PC power consumption.
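Several of these projects centre on powering down idle worker nodes. The sketch below shows the general shape of such a policy as an illustration only: the node query, the idle threshold and the power-off hook are hypothetical placeholders, not the tools used by any of the institutions above.

```python
# Hypothetical sketch of an idle-node power-down policy. The scheduler query and
# power-off hook below are placeholders, not any institution's actual tooling.
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(hours=2)   # assumed: power off after two idle hours
idle_since = {}                       # node name -> time it was first seen idle

def list_idle_nodes():
    """Placeholder: in practice this would query the batch scheduler."""
    return ["node01", "node02", "node03"]

def power_off(node):
    """Placeholder: in practice this would call the site's power-management tooling."""
    print(f"powering off {node}")

def sweep():
    now = datetime.now()
    idle = set(list_idle_nodes())
    # Nodes that picked up work again have their idle timers reset.
    for node in list(idle_since):
        if node not in idle:
            del idle_since[node]
    for node in idle:
        idle_since.setdefault(node, now)
        if now - idle_since[node] >= IDLE_THRESHOLD:
            power_off(node)
            del idle_since[node]   # avoid re-issuing power-off on the next sweep

if __name__ == "__main__":
    sweep()   # in practice this would run periodically, e.g. from cron
```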

A.8 Research outputs

A.8.1 Publications in 2009

Respondents were asked how many publications were published in 2009, or are in the course of publication, listed by type (e.g. journal article, book chapter, conference paper), where at least an element of the content reflected research undertaken on their HPC facilities.

Seven institutions provided figures. The minimum number of published journal papers was 10 and the maximum 125, with a total across the seven institutions of 391 journal papers plus 66 conference proceedings. Further papers are in the course of publication. The journal papers included four published in Nature and two published in Science.

A.8.3 Key research achievements

Respondents were asked for details of any key research achievements which help to demonstrate that the facility is good value, such as prizes won, awards given or contributions to HPC/software development.

Six institutions provided examples, including:
• University of Sheffield – modelling wave propagation in the Solar Corona, providing an insight into a mechanism for Coronal heating.
• University of Bristol – a team from the Department of Engineering Mathematics won the Best Model Prize at the 2008 and 2009 international Genetically Engineered Machines (iGEM) competition at MIT, Boston.
• University of Liverpool – a consortium of academics named APEMEN (Agent-based Predictive Environment for Modelling Expansion in the Neogene), based around the North West Grid (NW-GRID), has developed evidence-based models for research into primate and human evolution. They simulated locomotion and carried out gait analysis to understand the energy costs and locomotor capabilities of extinct and extant species, using metric evidence from fossil remains and observational field work. Circa 170,000 core hours were used, with the results only being possible through the use of the HPC facility. The research was featured on the BBC website – news.bbc.co.uk/1/hi/sci/tech/6956867.stm.

A.9 HPC wish list

All respondents were asked what would make their HPC facility more effective.

Ten would like more staff for system administration and particularly for application support and code development. The other responses reflected a desire for more equipment through a mix of more nodes, storage, visualisation and checkpoint mechanisms.


APPENDIX B: MEMBERS OF THE HPC-SIG


The following are currently full members of the HPC Special Interest Group:

• Aston University
• Cardiff University
• Cranfield University
• Durham University
• Imperial College London
• Institute of Cancer Research (ICR)
• Kings College London
• London School of Hygiene and Tropical Medicine
• Loughborough University
• Queen Mary, University of London (QMUL)
• Queen's University Belfast
• University College London (UCL)
• University of Bath
• University of Birmingham
• University of Bristol
• University of Cambridge
• University of Central Lancashire (UCLAN)
• University of East Anglia
• University of Edinburgh
• University of Exeter
• University of Glasgow
• University of Lancaster
• University of Leeds
• University of Leicester
• University of Liverpool
• University of Manchester
• University of Nottingham
• University of Oxford
• University of Reading
• University of Sheffield
• University of Southampton
• University of Strathclyde, Glasgow
• University of Sussex
• University of Warwick
• University of York

The following are affiliate members of the HPC Special Interest Group:

• Cancer Research UK
• Engineering and Physical Sciences Research Council (EPSRC)
• GCHQ
• Natural Environment Research Council (NERC) Research Centres
• Science and Technology Facilities Council (STFC) Research Centres


APPENDIX C: GLOSSARY

BBSRC
Biotechnology and Biological Sciences Research Council. www.bbsrc.ac.uk

Cluster
A compute cluster is normally a collection of commodity off-the-shelf computers grouped together to form a single compute resource. See, for example, www.beowulf.org

Core
The computational processor of a modern multi-core CPU.

CPU hour/Core hour
One hour's worth of computing on a computational processor. Most CPUs nowadays are multi-core, so the core hour is gradually replacing the CPU hour in common parlance.

EPSRC
Engineering and Physical Sciences Research Council. www.epsrc.ac.uk

fEC
Full Economic Costing. www.jcpsg.ac.uk/index.htm

FTE
Full time equivalent

MRF
Major Research Facility (see fEC)

NAG
Numerical Algorithms Group. www.nag.co.uk

NERC
Natural Environment Research Council. www.nerc.ac.uk

NW-Grid
North West Grid is a collaboration between Daresbury Laboratory and the Universities of Lancaster, Liverpool and Manchester. www.nw-grid.ac.uk

Petabyte
10^15 bytes

PFlop/s or Petaflop
10^15 Floating Point Operations per second

PUE
Power Usage Effectiveness – a metric used to measure the efficiency of a data centre, calculated by dividing the total power delivered to the facility by the total IT equipment power usage.

PRACE
Partnership for Advanced Computing in Europe. www.prace-project.eu

STFC
Science and Technology Facilities Council. www.stfc.ac.uk

Terabyte
10^12 bytes

TFlop/s or Teraflop
10^12 Floating Point Operations per second

Teragrid
TeraGrid is an open scientific discovery infrastructure combining leadership-class resources at eleven partner sites to create an integrated, persistent computational resource. www.teragrid.org

UPS
Uninterruptible Power Supply

White Rose Grid
A collaboration between the Universities of Leeds, Sheffield and York who are engaged in eScience, Grid and cloud computing. www.wrgrid.org.uk


High Performance Computing Special Interest Group

www.hpc-sig.org

