James.Casey@cern.ch
James Casey
CERN IT Department
Grid Technologies Group
FUSE Community Day, London, 2010
Using ActiveMQ at CERN for the Large Hadron Collider
Overview
• What we do at CERN
• Current ActiveMQ Usage – Monitoring a distributed infrastructure
• Lessons Learned
• Future ActiveMQ Usage – Building a generic messaging service
LHC is a very large scientific instrument…
Lake Geneva
Large Hadron Collider – 27 km circumference
The four experiments: CMS, ATLAS, LHCb, ALICE
… based on advanced technology: 27 km of superconducting magnets cooled in superfluid helium at 1.9 K
What are we looking for?
• To answer fundamental questions about the construction of the universe
– Why do we have mass? (the Higgs Boson)
– Search for a Grand Unified Theory
• Supersymmetry
– Dark Matter, Dark Energy
– Antimatter/matter asymmetry
This Requires…
1. Accelerators : powerful machines that accelerate particles to extremely high energies and then bring them into collision with other particles
2. Detectors : gigantic instruments that record the resulting particles as they “stream” out from the point of collision.
3. Computers : to collect, store, distribute and analyse the vast amount of data produced by the detectors
4. People : Only a worldwide collaboration of thousands of scientists, engineers, technicians and support staff can design, build and operate the complex “machines”
View of the ATLAS detector during construction
Length : ~ 46 m
Radius : ~ 12 m
Weight : ~ 7000 tons
~10^8 electronic channels
A collision at LHC
Bunches, each containing 100 billion protons, cross 40 million times a second in the centre of each experiment
1 billion proton-proton interactions per second in ATLAS & CMS !
Large Numbers of collisions per event
• ~1000 tracks stream into the detector every 25 ns
• A large number of channels (~100 M) → ~1 MB per 25 ns, i.e. 40 TB/s !
The Data Acquisition
Cannot possibly extract and record 40 TB/s. Essentially 2 stages of selection:
1. Dedicated custom-designed hardware processors: 40 MHz → 100 kHz
2. Each 'event' is then sent to a free core in a farm of ~30k CPU cores: 100 kHz → a few 100 Hz
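The numbers above can be sanity-checked with simple arithmetic: ~1 MB of channel data per 25 ns crossing gives the quoted 40 TB/s raw figure, which the two trigger stages then reduce to a recordable rate. A quick back-of-envelope check (decimal units, as in the slides):

```python
# Back-of-envelope check of the DAQ numbers quoted above
crossing_period_s = 25e-9      # bunch crossing every 25 ns
event_size_bytes = 1e6         # ~1 MB of channel data per crossing

raw_rate_bytes_s = event_size_bytes / crossing_period_s
assert abs(raw_rate_bytes_s - 40e12) < 1.0   # 40 TB/s, as quoted

# Two selection stages bring the rate down to something recordable:
stage1 = 40e6 / 100e3          # hardware trigger: 40 MHz -> 100 kHz (x400)
stage2 = 100e3 / 300           # CPU farm: 100 kHz -> a few 100 Hz (~x300)
assert stage1 == 400.0
```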
Tier 0 at CERN: Acquisition, First pass processing, Storage & Distribution
Ian.Bird@cern.ch
First Beam day – 10 Sep. 2008
The LHC Computing Challenge
• Experiments will produce about 15 Million Gigabytes (15 PB) of data each year (about 20 million CDs!)
• LHC data analysis requires a computing power equivalent to ~100,000 of today's fastest PC processors (140 MSI2K)
• Analysis carried out at more than 140 computing centres
• 12 large centres for primary data management: CERN (Tier-0) and eleven Tier-1s
• 38 federations of smaller Tier-2 centres
Solution: the Grid
Use the Grid to unite computing resources of particle physics institutions around the world
The World Wide Web provides seamless access to information that is stored in many millions of different geographical locations
The Grid is an infrastructure that provides seamless access to computing power and data storage capacity distributed over the globe. It makes multiple computer centres look like a single system to the end-user.
LHC Computing Grid project (WLCG)
• The grid is complex
– Highly distributed
– No central control
– Lots of software in many languages
– Complex service dependencies
• Grid middleware SLOC – 1.7M total
– C++ 850K, C 550K, SH 160K, Java 115K, Python 50K, Perl 35K
• Experiment code, e.g. ATLAS – 7M SLOC of C++
My Problem - Monitoring the operational grid infrastructure
• Tools for Operations and Monitoring
– Build and run the monitoring infrastructure for WLCG
– Operational tools for management of grid infrastructures
• Examples:
– Configuration database
– Helpdesk / ticketing
– Monitoring
– Availability reporting
• Early design decision:
– Use messaging as an integration framework
Open Source to the core
• Design and develop services for Open Science based on:
– Open source software
– Open protocols
• Funded by a series of EU projects
– EDG, EGEE, EGI.eu, EMI
• Backed by industry support
• All our code is open source and freely available
• Results published in Open Access journals
Use Case – Availability Monitoring and Reporting
• Monitoring of reliability and availability of European distributed computing infrastructure
• Data must be reliable
– Definitive source of availability and accounting reports
• Distributed operations model
– Grid implies 'cross-administrative domain'
• No root login !
– Global ticketing
– Distributed operations dashboards
Solution
• Distributed monitoring based on Nagios
– Tied together with ActiveMQ
• Network of 4 brokers in 3 countries
– Linked to ticketing and alarm systems
• Message-level signing + encryption for verification of identity
• Uses STOMP for all communication
– Code in Perl & Python
• Topics with Virtual Consumers
– All persistent messages
– Topic naming used for filtering and selection
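The actual topic-naming scheme isn't shown in the slides; a minimal sketch of how hierarchical names can drive filtering and selection, with illustrative (hypothetical) level names:

```python
def topic_for(probe: str, site: str) -> str:
    """Build a hierarchical topic name (hypothetical scheme)."""
    return f"/topic/grid.monitoring.{probe}.{site}"

def matches(topic: str, pattern: str) -> bool:
    """Match a topic against a pattern where '*' matches exactly one
    dot-separated level, mirroring ActiveMQ-style wildcard destinations."""
    t, p = topic.split("."), pattern.split(".")
    if len(t) != len(p):
        return False
    return all(pp in ("*", tt) for tt, pp in zip(t, p))

# A consumer interested only in one probe's results, at any site:
topic = topic_for("ce-availability", "CERN-PROD")
assert matches(topic, "/topic/grid.monitoring.ce-availability.*")
```

Encoding the probe and site into the destination name lets the broker do the selection, instead of every consumer filtering the full stream client-side.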
Architecture
Component drilldown
Current Status
• 16 national-level Nagios servers
– Will grow to ~40 in the next 3 months
• Clients distributed across 40 countries
• 315 sites
• 5K services
• 500,000 test results/day
• 3 consumers of the full data stream, feeding a database for analysis and post-processing
• 40 distributed alarm dashboards with filtered feeds
Lessons (1)
• Just using STOMP is sub-optimal
• Pros:
– Very simple
– Good for lightweight clients in many languages
• Cons:
– Hard to write reliable long-lived clients
• No NACK, no heartbeat
– Ambiguities in the specification
• Content-length and TextMessage
• Content-encoding
• Not really broker-independent in practice
• Interested in contributing to STOMP 1.1/2.0
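The content-length/TextMessage ambiguity deserves a concrete illustration: in STOMP 1.0 the header is optional, and ActiveMQ delivers frames carrying a content-length as JMS BytesMessages but frames without one as TextMessages, so the "same" SEND can surface differently to JMS consumers. A minimal frame-builder sketch:

```python
def stomp_send_frame(destination: str, body: bytes, text: bool = True) -> bytes:
    """Build a STOMP 1.0 SEND frame.

    Omitting content-length makes ActiveMQ deliver the payload as a JMS
    TextMessage; including it yields a BytesMessage. Without the header,
    the body must not contain a NUL byte, since NUL terminates the frame.
    """
    headers = [f"destination:{destination}"]
    if not text:
        headers.append(f"content-length:{len(body)}")
    elif b"\x00" in body:
        raise ValueError("text frames cannot contain NUL bytes")
    head = "SEND\n" + "\n".join(headers) + "\n\n"
    return head.encode("ascii") + body + b"\x00"
```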
Lessons (2)
• JMS durable consumers suck
– Fragile in a Network of Brokers
– Many problems now fixed by FUSE
• Virtual Topics solve the problem
• Pros:
– Just like a queue
• Can monitor queue length, purge
• Cons:
– Issues with selectors
– Startup race conditions (solvable via config)
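"Just like a queue" refers to ActiveMQ's default Virtual Topic convention: producers publish to a topic named VirtualTopic.<name>, while each logical consumer reads from its own queue Consumer.<id>.VirtualTopic.<name>, which can be monitored and purged like any queue. A sketch of the naming (the destination names here are illustrative):

```python
def virtual_topic(name: str) -> str:
    """Destination the producer publishes to (ActiveMQ default convention)."""
    return f"/topic/VirtualTopic.{name}"

def consumer_queue(consumer: str, name: str) -> str:
    """Per-consumer queue that receives a copy of every message on the
    virtual topic; competing consumers on this queue share the load."""
    return f"/queue/Consumer.{consumer}.VirtualTopic.{name}"

assert virtual_topic("grid.metrics") == "/topic/VirtualTopic.grid.metrics"
assert consumer_queue("dashboard", "grid.metrics") == \
    "/queue/Consumer.dashboard.VirtualTopic.grid.metrics"
```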
Lessons (3)
• Networks of brokers seem attractive
• Pros:
– It's all a cloud
– Clients connect anywhere and it "just works"
• Cons:
– It's a very complicated area of code
– Often you need to "ask the computer"
• Or a core ActiveMQ developer
• Trade-off between resilience/scaling and complexity
Lessons (4)
• Know the code
– Most of it is very simple
– Even for non-Java developers
• If you keep away from "Java-ish" stuff
– JTA, XA, Spring
– The plugin architecture is very easy to work with
• Most things can be implemented by a plugin
• E.g. monitoring, logging, restricting features, AuthN/AuthZ
• Docs currently don't explain everything
– Especially the interactions between plugins/features
Lessons (5)
• Stay in the ballpark
• If it's not in the tests:
– Think twice about using the feature in that way…
– Write a test for it !
• Examples:
– SSL and network connectors
– Networks of Brokers with odd topologies
– STOMP/OpenWire differences in feature support
Nagios for ActiveMQ
• We use Nagios to monitor:
– Brokers
– Producers/consumers
• Uses jmx4perl to reduce JVM load on the Nagios machine
– Exposes JMX information as JSON
– Simple Perl interface to write clients
• Generic Nagios checks
– Looking at how to make more available for the community
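jmx4perl's agent serves JMX attribute reads as JSON over HTTP. The checks themselves aren't shown in the slides; a minimal sketch of turning such a JSON response into a Nagios status, where the monitored attribute (store usage) and the thresholds are assumptions, not the production values:

```python
import json

# Standard Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_store_usage(response_json: str, warn: float = 70.0, crit: float = 90.0):
    """Map a jmx4perl-style JSON read of the broker's store usage
    (percent) onto a Nagios status and message. Thresholds here are
    illustrative defaults."""
    try:
        value = float(json.loads(response_json)["value"])
    except (ValueError, KeyError, TypeError):
        return UNKNOWN, "cannot parse store usage from agent response"
    if value >= crit:
        return CRITICAL, f"store usage {value:.0f}% >= {crit:.0f}%"
    if value >= warn:
        return WARNING, f"store usage {value:.0f}% >= {warn:.0f}%"
    return OK, f"store usage {value:.0f}%"
```

A real check would fetch the JSON from the agent over HTTP, print the message, and exit with the status code so Nagios can schedule and alert on it.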
Broker Monitoring
• Standard OS information
– Filesystem full, processes running, socket counts, open file counts
• JMX for broker statistics
– Store usage, JVM stats, inactive durable subs, queues with pending messages
• JMX-based scripts to manage brokers
– Remove unwanted advisories
– Purge queues with no consumers
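The purge logic amounts to selecting queues whose JMX statistics show pending messages but no attached consumers. A sketch of that selection (the stats shape here is a simplified stand-in for the broker's Queue MBean attributes, not the actual JMX layout):

```python
def purge_candidates(queues, min_pending=1):
    """Return names of queues holding messages that nobody is consuming.
    Each entry is a dict with 'name', 'queue_size' and 'consumer_count',
    a simplified stand-in for the Queue MBean's attributes."""
    return [q["name"] for q in queues
            if q["consumer_count"] == 0 and q["queue_size"] >= min_pending]

stats = [
    {"name": "Consumer.dash.VirtualTopic.metrics", "queue_size": 120, "consumer_count": 0},
    {"name": "grid.results", "queue_size": 40, "consumer_count": 3},
    {"name": "empty.queue", "queue_size": 0, "consumer_count": 0},
]
assert purge_candidates(stats) == ["Consumer.dash.VirtualTopic.metrics"]
```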
Virtual Topic monitoring
• Full testing of consumers from producers on all brokers in Network of Brokers
• Consumers instrumented to reply to test messages
– Addressed to a single client-id on a topic
– Reply sent to the topic named in Reply-To
• Nagios sends messages to all brokers for a topic
– Checks they all come back
– Useful to check that all brokers in the network are forwarding correctly
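The check reduces to a set comparison: a probe is sent via every broker, and the replies arriving on the Reply-To destination reveal which brokers forwarded it. A sketch of that bookkeeping (the message shape and broker names are hypothetical):

```python
def missing_brokers(sent_via, replies):
    """Given the brokers probes were sent through and the reply messages
    received, return the brokers whose probe never came back -- i.e. the
    ones likely not forwarding across the network."""
    seen = {r["broker"] for r in replies if "broker" in r}
    return sorted(set(sent_via) - seen)

brokers = ["broker-ch", "broker-de", "broker-fr", "broker-uk"]
replies = [{"broker": "broker-ch"}, {"broker": "broker-fr"}, {"broker": "broker-uk"}]
assert missing_brokers(brokers, replies) == ["broker-de"]
```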
Nagios broker status check
To the future – a generic messaging service
• Many concurrent applications …
– … in many languages …
– … over the WAN …
– … with little control over the users
• Not a typical messaging problem ?
• Isolate clients from messaging via the filesystem
– Particularly in the WAN
– Always assume messaging could be uncontactable
• Keeps the "core" broker network small
• And keeps complexity isolated
• Allows all clients to use the best language/protocol to talk to messaging
Design Thoughts – File-Based Queue
Design Thoughts – AMQP-style remote messaging
• Queues bound to broker nodes
• IP-like routing sends messages to destinations
• Clients connect to specific instances
– Better determinacy in the network
– Easier to manage explicit connections between brokers
Summary
• ActiveMQ is a key technology choice for operating and monitoring the WLCG grid infrastructure
• It provides a scalable and adaptable platform for building a wide range of messaging based applications
• FUSE fits our model of open source software with industrial support
Thank you for your attention
Questions?