Pacific Wave and PRP Update: Big News for Big Data
John Hess and Dr. Larry Smarr
WESTNET 2016, Fort Lewis College
June 16, 2016
CENIC is a 501(c)(3) created to serve California’s K-20 research & education institutions with cost-effective, high-bandwidth networking.
Six Charter Associates:
• California K-12 System
• California Community Colleges
• California State University System
• Stanford, Caltech, USC
• University of California
• California Public Libraries
Three networks operate simultaneously as independent layers on a single infrastructure:
• CalREN Digital California (DC): daily use for e-mail, web browsing, videoconferencing, etc.
• CalREN High Performance Research (HPR): high-performance, data-intensive efforts
• CalREN eXperimental Developmental (XD): bleeding-edge research on the network itself
CENIC: California’s Research & Education Network
• 3,800+ miles of optical fiber
• Members in all 58 counties connect via fiber-optic cable or leased circuits from telecom carriers
• Over 10,000 sites connect to CENIC
• 20,000,000 Californians use CENIC
• Governed by members at the segmental level
• Collaborates with over 500 private-sector partners
• 88 other peering partners (Google, Microsoft, Amazon, …)
• Enables worldwide collaboration
Pacific Wave and WRN
• Pacific Wave and the Western Region Network provide a 100Gbps network spanning the Western United States, serving PNWGP, CENIC, FRGP, ABQGP, and UH.
• Pacific Wave and NSF IRNC awardee PIREN (Univ. of Hawaii) work together to support AARNet links to California and Washington and the expansion of high-speed service through the Pacific Islands Region.
Pacific Wave International Exchange
A project of CENIC and PNWGP
John Hess, Network Engineer
Pacific Wave
• Began as the first geographically distributed exchange in 2004
• Pacific Wave is an open exchange supporting both commercial and R&E peers
• Currently serves 29 countries peering across the Pacific and the Western United States
• With PNWGP and TransPac, announced the first 100Gbps trans-Pacific link from Tokyo to Seattle in 2015
R&E Exchanges within R&E
• StarLight (Chicago, IL)
  – StarLight Consortium/MREN
• MANLAN (New York, NY)
  – NYSERNet
• WIX (Washington, DC)
  – University of Maryland/MAX GigaPOP
• AmLight (Miami, FL)
  – Florida International University/Florida LambdaRail
• Pacific Wave (Western US)
  – CENIC and PNWGP
National/Global Activities
• NSF supports the R&E exchange points through the competitive IRNC (International Research Network Connections) program, with funding for backbone, infrastructure, and innovation
• The Global Lambda Integrated Facility (GLIF) brings together some of the world’s premier network engineers, who are working together to develop an international infrastructure
GLIF/GOLE
Nx100G Across the Pacific
• CURRENT:
  – TransPac/Pacific Wave (Tokyo-Seattle)
  – SINGAREN/Internet2 (Singapore-Los Angeles)
  – SINET/SoftBank/Pacific Wave (Tokyo-Los Angeles)
  – AARNet/PIREN/Pacific Wave (Australia-SEA)
• FUTURE:
  – AARNet/PIREN/Pacific Wave (Australia-LA), end of June 2016
  – UH/PIREN/Pacific Wave (Guam-Hawaii-LA)
Pacific Wave and NSF/IRNC
• Pacific Wave has been partially supported through three separate five-year National Science Foundation grants supporting growth, connectivity, and innovation
• The current award promotes 100G expansion and implementation of SDX capabilities within Pacific Wave (ACI-1451050)
SDX = SDN + IXP
[Diagram: AS A, AS B, and AS C routers maintain BGP sessions across an SDN switch, which is managed by an SDX controller]
[Diagram: Pacific Wave SDX Testbed Control Plane. SDX middleware and multiple OpenFlow controllers sit above an abstraction layer (FlowSpace Firewall) that manages the OpenFlow switches; on-ramp locations (Ethernet / virtual circuits), circuit building (NSI), network testbed environments, testbed resources such as DTNs, and science group applications/uses connect through this control plane]
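The SDX concept above (SDN + IXP) lets exchange participants steer traffic on more than destination prefixes: BGP sessions still run between the AS routers, while an SDX controller compiles participant policies into match/action rules for the exchange switch. A minimal, hypothetical sketch of that compilation step (the function, port numbers, and match fields are illustrative, not Pacific Wave's actual controller code):

```python
# Hypothetical sketch of SDX policy compilation: a participant's
# application-specific policy (match fields -> egress port) becomes
# match/action flow rules for the exchange's SDN switch.
# Port numbers and field names are illustrative only.

def compile_policy(participant_port, policies):
    """Turn (match, egress-port) pairs into flow rules that also
    match on the participant's ingress port."""
    rules = []
    for match, out_port in policies:
        rule = dict(match, in_port=participant_port)
        rules.append((rule, out_port))
    return rules

# AS A (switch port 1) sends HTTPS traffic to AS B (port 2) and bulk
# GridFTP control traffic (TCP 2811) to AS C (port 3):
rules = compile_policy(1, [({"tcp_dst": 443}, 2),
                           ({"tcp_dst": 2811}, 3)])
for match, out_port in rules:
    print(match, "->", out_port)
```

In a real deployment the controller would also reconcile such rules with what each participant's BGP announcements actually permit.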
Next Step: The Pacific Research Platform Creates a Regional End-to-End Science-Driven “Big Data Freeway System”
NSF CC*DNI Grant: $5M, 10/2015-10/2020
PI: Larry Smarr, UC San Diego Calit2
Co-PIs:
• Camille Crittenden, UC Berkeley CITRIS
• Tom DeFanti, UC San Diego Calit2
• Philip Papadopoulos, UC San Diego SDSC
• Frank Wuerthwein, UC San Diego Physics and SDSC
The Pacific Research Platform (PRP)
• NSF CC-NIE and similar projects represent significant investments in campus infrastructure, including SDN and Science DMZs (~130 projects)
• But scientists are still struggling with the complexity of using the network and with interoperability between different Science DMZ implementations
• PRP focuses on enabling science communities across the Pacific region to make effective use of this high-performance infrastructure
• Kick-off in December 2014: take advantage of the regional infrastructure; perfSONAR for measurement/analysis and MaDDash for visualization
• Include DTNs: use a common software suite for data movement; reflect disk-to-disk performance on MaDDash
• Demonstrated as a proof-of-concept at the CENIC Spring meeting (March 2015)
DOE ESnet’s Science DMZ: A Scalable Network Design Model for Optimizing Science Data Transfers
A Science DMZ integrates four key concepts into a unified whole:
– A network architecture designed for high-performance applications, with the science network distinct from the general-purpose network
– The use of dedicated systems for data transfer
– Performance measurement and network testing systems that are regularly used to characterize and troubleshoot the network
– Security policies and enforcement mechanisms that are tailored for high performance science environments
http://fasterdata.es.net/science-dmz/
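The first concept, a science network distinct from the general-purpose network, can be illustrated with a toy path-selection function (a conceptual sketch only; the subnet and path names are invented and not part of the ESnet design documents):

```python
# Conceptual illustration (not an ESnet artifact): Science DMZ hosts
# sit on a path that bypasses the stateful campus firewall, with
# security enforced by tailored ACL-style policy instead.
import ipaddress

SCIENCE_DMZ = ipaddress.ip_network("192.0.2.0/24")  # example DTN subnet

def path_for(dst_ip):
    """Pick the forwarding path for a destination address."""
    if ipaddress.ip_address(dst_ip) in SCIENCE_DMZ:
        return "science-dmz"    # direct border-router path, ACL-filtered
    return "general-purpose"    # via the stateful campus firewall

print(path_for("192.0.2.10"))    # a DTN: takes the science-dmz path
print(path_for("198.51.100.5"))  # ordinary host: general-purpose path
```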
PRPv0 - An experiment including:
Caltech, CENIC/Pacific Wave, ESnet/LBNL, NASA Ames/NREN, San Diego State University, SDSC, Stanford University, University of Washington, USC, UC Berkeley, UC Davis, UC Irvine, UC Los Angeles, UC Riverside, UC San Diego, and UC Santa Cruz
PRPv0 Experiment
The PRPv0 experiment concentrated on the regional aspects of the research data movement challenge:
• High-performance interconnection among campus Science DMZs
• A mesh of perfSONAR toolkit instances
• perfSONAR MaDDash (Measurement and Debugging Dashboard)
• Flash I/O Network Appliances (FIONAs) and Data Transfer Nodes (DTNs)
• GridFTP file transfers to quantify throughput, with results reflected on MaDDash
• CalREN HPR / AS2153
• A partial mesh of bilateral BGP sessions across the Pacific Wave distributed exchange
FIONA – Flash I/O Network Appliance: Linux PCs Optimized for Big Data on DMZs
FIONAs Are Science DMZ Data Transfer Nodes (DTNs) & Optical Network Termination Devices
UCSD CC-NIE Prism Award & UCOP
Phil Papadopoulos & Tom DeFanti; Joe Keefe & John Graham
Two builds (the second is the UCOP rack-mount build):
                   Build 1                                 Build 2 (UCOP rack-mount)
Cost               $8,000                                  $20,000
CPU                Intel Xeon Haswell E5-1650 v3, 6-core   2x E5-2697 v3, 14-core
RAM                128 GB                                  256 GB
SSD                SATA 3.8 TB                             SATA 3.8 TB
Network interface  10/40GbE Mellanox                       2x 40GbE Chelsio + Mellanox
GPU                –                                       NVIDIA Tesla K80
RAID drives        0 to 112TB (add ~$100/TB)
Source: John Graham and Tom DeFanti, Calit2
PRPv0: Transfer Results from March 2015
• DTNs loaded with the Globus Connect Server suite to obtain GridFTP tools
• cron-scheduled transfers using globus-url-copy
• An ESnet-contributed script parses the GridFTP transfer log and loads results into an esmond measurement archive
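The transfer step can be sketched by constructing the globus-url-copy command a cron job might run between two DTNs (the hostnames and file path are hypothetical; `-p` sets parallel streams and `-vb` prints transfer performance, both standard globus-url-copy options):

```python
# Sketch of how a cron-driven PRPv0 mesh test might build its
# globus-url-copy invocation. Hosts and paths are illustrative;
# the command is constructed here, not executed.

def gridftp_cmd(src_host, dst_host, test_file, parallel=4):
    """Build a third-party GridFTP transfer command between two DTNs."""
    src = f"gsiftp://{src_host}/{test_file}"
    dst = f"gsiftp://{dst_host}/{test_file}"
    return ["globus-url-copy", "-p", str(parallel), "-vb", src, dst]

cmd = gridftp_cmd("dtn1.example.edu", "dtn2.example.edu", "scratch/10G.dat")
print(" ".join(cmd))
```

A cron entry would invoke a command like this periodically, and the resulting GridFTP transfer log would feed the esmond archive described above.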
FDT – developed by Caltech in collaboration with Politehnica Bucharest
PRP Point-to-Point Bandwidth Map (GridFTP File Transfers): Note the Huge Improvement in the Last Six Months
January 29, 2016: PRPv1 (L3)
June 6, 2016: PRPv1 (L3); green is disk-to-disk in excess of 5Gbps
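To put the 5Gbps disk-to-disk figure in perspective, a little arithmetic (pure unit conversion, nothing PRP-specific beyond that rate):

```python
# Time to move a dataset at a given end-to-end rate.
def transfer_hours(dataset_tb, rate_gbps):
    bits = dataset_tb * 1e12 * 8        # TB -> bits (decimal units)
    seconds = bits / (rate_gbps * 1e9)  # bits / bits-per-second
    return seconds / 3600

# 1 TB at 5 Gbps disk-to-disk: about 27 minutes.
print(round(transfer_hours(1, 5) * 60, 1), "minutes")
```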
Troubleshooting Unidirectional Performance Issues
Measuring Performance – IPv6
Measuring Performance – IPv4
PRP Timeline
• PRPv1
  – A routed Layer 3 architecture
  – Tested, measured, optimized, with multi-domain science data
  – Bring many of our science teams up
  – Each community thus will have its own certificate-based access to its specific federated data infrastructure
• PRPv2
  – Incorporating SDN/SDX, AutoGOLE / NSI
  – Advanced IPv6-only version with robust security features, e.g. Trusted Platform Module hardware and SDN/SDX software
  – Support rates up to 100Gb/s in bursts and streams
  – Develop means to operate a shared federation of caches
  – Cooperating research groups
Resources
Pacific Wave
  http://www.pacificwave.net/
  https://ps-dashboard.pacificwave.net
CENIC
  http://www.cenic.org/
  https://ps-dashboard.cenic.net
Pacific Research Platform
  http://prp.ucsd.edu/
  http://cenic.org/files/publications/PRP_Overview_%C6%92.pdf
  http://prp-maddash.calit2.optiputer.net/maddash-webui/
Calit2
  http://www.calit2.net/
CITRIS
  http://citris-uc.org/
ESnet
  http://www.es.net/
  http://fasterdata.es.net/
  http://ps-dashboard.es.net/
Vision: Creating a Pacific Research Platform
Use Optical Fiber Networks to Connect All Data Generators and Consumers,
Creating a “Big Data” Freeway System
“The Bisection Bandwidth of a Cluster Interconnect, but Deployed on a 20-Campus Scale.”
This Vision Has Been Building for 15 Years
Creating a “Big Data” Freeway on Campus: NSF-Funded Prism@UCSD and CHERuB Grants
Prism@UCSD: Phil Papadopoulos, SDSC, Calit2, PI (2013-15)
CHERuB: Mike Norman, SDSC, PI
These Are Two of Over 100 NSF Campus Cyberinfrastructure Grants Made in the Last 4 Years
How Prism@UCSD Transforms Big Data Microbiome Science: Preparing for Knight/Smarr 1 Million Core-Hour Analysis
[Diagram: Prism@UCSD (120Gbps) links the Knight Lab (12 cores/GPU, 128 GB RAM, 3.5 TB SSD, 48TB disk, 10Gbps NIC), Gordon, the Knight 1024 cluster in the SDSC co-lo, Data Oasis (7.5PB, 200GB/s, 1.3Tbps), CHERuB (100Gbps), and Emperor and other vis tools with a 64-megapixel data analysis wall; link rates range from 10Gbps and 40Gbps up to 120Gbps]
For Big Data Science, One Needs Bandwidths Orders of Magnitude Higher Than the Shared Internet Between Campuses
Bandwidth from My Office in Calit2’s Qualcomm Institute
Bandwidth On the Pacific Research Platform:
500 Times the Bandwidth of the Shared Internet!
Invitation-Only PRP Workshop Held in Calit2’s Qualcomm Institute, October 14-16, 2015
• 130 attendees from 40 organizations
  – Ten UC campuses, as well as UCOP, plus 11 additional US universities
  – Four international organizations (from Amsterdam, Canada, Korea, and Japan)
  – Five members of industry, plus NSF
PRP UC-JupyterHub Backbone (UCB and UCSD); Next Step: Deploy Across PRP
• GPU JupyterHub: 2x 14-core CPUs, 256GB RAM, 1.2TB FLASH, 3.8TB SSD, Nvidia K80 GPU, dual 40GbE NICs, and a Trusted Platform Module (40Gbps)
• GPU JupyterHub: 1x 18-core CPU, 128GB RAM, 3.8TB SSD, Nvidia K80 GPU, dual 40GbE NICs, and a Trusted Platform Module
Source: John Graham, Calit2
Cancer Genomics Hub (UCSC) is Housed in SDSC: Large Data Flows to End Users at UCSC, UCB, UCSF, …
[Chart: data flows to end users grew from 1G to 8G, reaching 15G by January 2016; 30,000 TB per year]
Data Source: David Haussler, Brad Smith, UCSC
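A useful sanity check on that figure: 30,000 TB per year corresponds to several gigabits per second of sustained transfer, consistent with the multi-gigabit flows shown (simple arithmetic, decimal units assumed):

```python
# Average sustained bandwidth implied by an annual data volume.
def sustained_gbps(tb_per_year):
    bits_per_year = tb_per_year * 1e12 * 8  # TB -> bits (decimal units)
    return bits_per_year / (365 * 86400) / 1e9

# 30,000 TB/year works out to roughly 7.6 Gbps sustained.
print(round(sustained_gbps(30000), 1), "Gbps")
```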
Two Automated Telescope Surveys Creating Huge Datasets Will Drive PRP
• 300 images per night at 100MB per raw image: 30GB per night, 120GB per night when processed
• 250 images per night at 530MB per raw image: 150 GB per night, 800GB per night when processed at NERSC (increased by 4x)
Source: Peter Nugent, Division Deputy for Scientific Engagement, LBL; Professor of Astronomy, UC Berkeley
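The first survey's nightly volumes follow directly from the image counts (a quick check of the arithmetic; the 4x processing factor is the one stated above):

```python
# Nightly data volume from image count and per-image size.
def nightly_gb(images, mb_per_image, processing_factor=1):
    return images * mb_per_image * processing_factor / 1000  # MB -> GB

raw = nightly_gb(300, 100)           # 300 images x 100 MB = 30 GB
processed = nightly_gb(300, 100, 4)  # processing increases volume ~4x
print(raw, "GB raw,", processed, "GB processed")
```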
Precursors to LSST and NCSA
PRP Allows Researchers to Bring Datasets from NERSC to Their Local Clusters for In-Depth Science Analysis
Data Flows Over HPWREN
Global Scientific Instruments Will Produce Ultralarge Datasets Continuously, Requiring Dedicated Optical Fiber and Supercomputers
Square Kilometre Array and Large Synoptic Survey Telescope
https://tnc15.terena.org/getfile/1939
www.lsst.org/sites/default/files/documents/DM%20Introduction%20-%20Kantor.pdf
Tracks ~40B Objects, Creates 10M Alerts/Night Within 1 Minute of Observing
2x40Gb/s
OSG Federates Clusters in 40 of 50 States: Creating a Scientific Compute and Storage “Cloud”
Source: Miron Livny, Frank Wuerthwein, OSG
We are Experimenting with the PRP for Large Hadron Collider Data Analysis Using The West Coast Open Science Grid on 10-100Gbps Optical Networks
• Crossed 100 Million Core-Hours/Month in Dec 2015
• Over 1 Billion Data Transfers Moved 200 Petabytes in 2015
• Supported Over 200 Million Jobs in 2015
Source: Miron Livny, Frank Wuerthwein, OSG
ATLAS
CMS
40G FIONAs
20x40G PRP-connected
PRP Links Create Distributed Virtual Reality: WAVE@UC San Diego connected over the PRP to CAVE@UC Merced
Dan Cayan, USGS Water Resources Discipline and Scripps Institution of Oceanography, UC San Diego, with much support from Mary Tyree, Mike Dettinger, Guido Franco, and other colleagues
NCAR Upgrading to 10Gbps Link Over Westnet from Wyoming and Boulder to CENIC/PRP
Sponsors: California Energy Commission; NOAA RISA program; California DWR; DOE; NSF
Planning for climate change in California: substantial shifts on top of already high climate variability
UCSD Campus Climate Researchers Need to Download Results from NCAR Remote Supercomputer Simulations
to Make Regional Climate Change Forecasts
Average summer afternoon temperature
Downscaling Supercomputer Climate Simulations to Provide High-Res Predictions for California Over the Next 50 Years
Source: Hugo Hidalgo, Tapash Das, Mike Dettinger
Approximately 50 miles (note: locations are approximate)
to CI and PEMEX
Extending PRP/CENIC Optical Backplane via High Speed Wireless Research and Education Network
Real-Time Network Cameras on Mountains for Environmental Observations
Source: Hans Werner Braun, HPWREN PI
14 May 2014: 9 Simultaneous Active Fires in San Diego County
San Diego County Red Mountain Fire Cameras
• Southeast (left): “Highway” Fire
• Southwest (center rear): “Poinsettia” Fire
• West (right): “Tomahawk” Fire
Interactive Virtual Reality of San Diego County Includes Live Feeds From 150 Met Stations
TourCAVE at Calit2’s Qualcomm Institute
HPWREN Users and Public Safety ClientsGain Redundancy and Resilience from PRP Upgrade
[Diagram: San Diego countywide sensor and camera resources (UCSD & SDSU); data & compute resources at UCSD, UCR, SDSU, and UCI; UCI & UCR data replication and PRP FIONA anchors as HPWREN expands northward]
10X Increase During Wildfires
Data From Hans-Werner Braun
• PRP CENIC 10G Link, UCSD to SDSU
  – DTN FIONA endpoints
  – Data redundancy
  – Disaster recovery
  – High availability
  – Network redundancy
NSF Has Funded Over 100 Campuses to Build Local Big Data Freeways:Imagine Linking All of Them Like the Pacific Research Platform
Red: 2012 CC-NIE Awardees
Yellow: 2013 CC-NIE Awardees
Green: 2014 CC*IIE Awardees
Blue: 2015 CC*DNI Awardees
Purple: Multiple-Time Awardees
Source: NSF
Next Step: Global Research Platform, Building on CENIC/Pacific Wave and GLIF
Current International GRP Partners