Campus Cyberinfrastructure – Network Infrastructure and Engineering (CC-NIE)
Kevin Thompson, NSF Office of Cyberinfrastructure
January 2013
(Post NSFnet) Brief History of NSF Investments in Network Infrastructure
v vBNS and High Performance Network Connections (HPNC) program – 1995-2003
Ø National backbone and connections
v International Networking (IRNC) – 1997-present
Ø Connecting US to the world
v Experimental Infrastructure Networking (EIN) – 2003
v “Academic Research Infrastructure Program – Recovery and Reinvestment” – 2009
Ø Subset: optical exchange, regional networking upgrades
v EPSCoR – Research Infrastructure Improvement (RII) – 2011
Ø Inter-campus, intra-campus connectivity
v STCI program (2011 – “100G Connectivity for Data-Intensive Computing at JHU”, Lead PI: Alex Szalay)
v CC-NIE 2012
ACCI Task Force on Campus Bridging
v Strategic Recommendation to the NSF #3: "The National Science Foundation should create a new program funding high-speed (currently 10 Gbps) connections from campuses to the nearest landing point for a national network backbone. The design of these connections must include support for dynamic network provisioning services and must be engineered to support rapid movement of large scientific data sets." – pg. 6, National Science Foundation Advisory Committee for Cyberinfrastructure Task Force on Campus Bridging, Final Report, March 2011
v www.nsf.gov/od/oci/taskforces/TaskForceReport_CampusBridging.pdf
v Also see Campus Bridging Technologies Workshop: Data and Networking Issues Workshop Report. G.T. Almes, D. Jent and C.A. Stewart, eds., 2011, http://hdl.handle.net/2022/13200
Campus Cyberinfrastructure – Network Infrastructure and Engineering (CC-NIE)
v FY13 new solicitation is out!
v NSF 13-530 – solicitation released Jan 4, 2013
v 1st area: Data Driven Networking Infrastructure for the Campus and Researcher
v 2nd area: Network Integration and Applied Innovation
v Proposals are due April 3, 2013
Summary of Changes to FY13 Solicitation
v Joint with CISE/CNS – Bryan Lyles
v Anticipated overall funding increased
v Campus CI plan now required for all proposals
v All proposals encouraged to include a quantitative part to their driving use cases
v Streamlined set of network improvement activities (area#1)
v Storage/compute resource requests explicitly disallowed
v Smaller institution support (area#1)
v Several new potential activity areas described in area#2 (e.g., cloud/compute driven, federated dynamic services, international)
v Travel support capped at $5k
v Wording strengthened across existing aspects
v Note changes to NSF merit review criteria for all NSF proposals submitted after January 14, 2013 – see Section VI.(A)!
CC-NIE
v Estimated Number of Awards: 15-30
v Anticipated Funding Amount:
Ø $15,000,000 to $18,000,000 will be available for this competition in FY 2013.
Ø Data Driven Networking Infrastructure for the Campus and Researcher awards will be supported at up to $500,000 total for up to 2 years.
Ø Network Integration and Applied Innovation awards will be supported at up to $1,000,000 total for up to 2 years.
v Proposals may only be submitted by universities and two- and four-year colleges (including community colleges) accredited in, and having a campus located in, the US, acting on behalf of their faculty members. Such organizations are also referred to as academic institutions.
CC-NIE Area#1 - Data Driven Networking Infrastructure for the Campus and Researcher
v network infrastructure improvements at the campus level
v network improvements include:
Ø network upgrades within a campus network to support a wide range of science data flows
Ø re-architecting a campus network to support large science data flows, for example by designing and building a "science DMZ" (see http://fasterdata.es.net/science-dmz/ for more information on the "science DMZ" approach)
Ø network connection upgrade for the campus connection to a regional optical exchange or point-of-presence that connects to Internet2 or National LambdaRail
Other Notes on Area#1
v Must address scientific and engineering project and application drivers
v Must present project-specific end-to-end scenarios for data movement, distributed computing, and other end-to-end services driving the networking upgrade.
v Data movement scenarios are encouraged to describe end-to-end data transfers that include access to and use of wide area dynamic circuit networking services
v Proposals must include a Campus Cyberinfrastructure plan within which the proposed network infrastructure improvements are conceived, designed, and implemented in the context of a coherent campus-wide strategy and approach to CI.
v This Campus CI plan must be included as a supplementary document and is limited to no more than 5 pages. The plan should also address campus IPv6 deployment and use of the InCommon Federation global federated system.
Other Notes on Area#1
v Must document explicit partnerships or collaborations with the campus IT/networking organization, as well as one or more domain scientists, research groups, and educators in need of the new network capabilities.
v Partnership documentation from personnel not included in the proposal as PI, Co-PI, or Senior Personnel should be in the form of a letter of commitment located in the supplementary documents section of the proposal.
v Should describe an approach to end-to-end network performance measurement based on the perfSONAR framework, with associated tool installation and use; proposals may describe an alternative approach to perfSONAR with sufficient justification (a hedged measurement sketch follows this list).
v Title should start with “CC-NIE Network Infrastructure:”
v Funding request not to exceed $500k for up to 2 years
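To make the measurement requirement concrete, here is a minimal sketch of an on-demand end-to-end throughput test using bwctl, the bandwidth test controller distributed with the perfSONAR toolkit. This is an illustration rather than anything prescribed by the solicitation: the hostname is a placeholder, and both endpoints are assumed to run bwctld.

```python
# Hedged sketch: run a 30-second throughput test with bwctl.
# The hostname below is a placeholder, not a real perfSONAR host.
import subprocess

HOST = "ps.remote-campus.example.edu"  # hypothetical perfSONAR test host

result = subprocess.run(
    ["bwctl", "-c", HOST, "-t", "30"],  # -c: receiving host, -t: seconds
    capture_output=True, text=True)
print(result.stdout)  # iperf-style throughput report
```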
CC-NIE Area#2 – Network Integration and Applied Innovation
v end-to-end network CI through integration of existing and new technologies and applied innovation
v Applying network research results, prototypes, and emerging innovations to enable (identified) research and education
v May leverage new and existing investments in network infrastructure, services, and tools by combining or extending capabilities to work as part of the CI environment used by scientific applications and users
Area#2 Examples of Relevant Activities
v Integration of networking protocols/technologies with application layer
v Transitioning successful research prototypes in SDN, and activities supported by GENI and FIA programs, to distributed scientific environments and campus infrastructure
v Innovative network solutions to problems driven by distributed computing and storage systems including cloud services.
v Federation-based security solutions for dynamic network services extending end-to-end
v See solicitation text for others
Other Notes on Area#2
v Must identify one or more supported science or engineering research projects or applications and describe how the proposed network integration activities will support those projects, particularly in the context of addressing data movement, throughput, and predictable performance end-to-end.
v Must include clear project goals and milestones.
v New – must include a Campus CI plan
v Any software development must be made available under an open source license.
v Title should start with “CC-NIE Integration:”
v Funding request not to exceed $1M total for up to 2 years
Additional Review Criteria for CC-NIE proposals
v expected impact on the deployed environment described in the proposal
v extent to which the value of the work is described in the context of a needed capability required by science and engineering, and potential impact across a broader segment of the NSF community
v A project plan that addresses in its goals and milestones the end result of a working system in the target environment.
v Where applicable, how resource access control, federated identity management, and other cybersecurity related issues and community best practices are addressed.
v Cyberinfrastructure Plan - How well does the cyberinfrastructure plan support and integrate with the institutions' science and technology plan? To what extent is the cyberinfrastructure plan likely to enhance capacity for discovery, innovation, and education in science and engineering? How well does the plan as presented position the proposing institution(s) for future cyberinfrastructure development? Are IPv6 deployment and InCommon federation addressed? Are the activities described in the proposal consistent with the institution’s CI plan?
v Also for CC-NIE Integration projects: Tangible metrics described to measure the success of the integrated systems and any associated software developed, and the steps necessary to take the systems from prototype status to production use.
CC-NIE 2012 Stats
v 89 proposals received ($52M+ requested)
v 39 awards made (34 projects total)
Ø 34 different institutions
Ø 23 states
Ø Total funding: $21.6M (includes $3M in co-funding from CISE/CNS)
• Area#1: $9.7M, 21 awards
• Area#2: $11.9M, 18 awards
Award List Area#1 (unordered): Institution | PI | Title
Colorado State U | Burns, Patrick | CC-NIE Data-Driven Network Infrastructure Upgrade for Colorado State University
U of Washington | Lazowska, Edward | CC-NIE Network Infrastructure: Enhancements to Support Data-Driven Discovery at the University of Washington
Virginia Tech | Gardner, Mark | CC-NIE Network Infrastructure: ASCED -- An Advanced Scientific Collaboration Environment and DMZ
U of Chicago | Jelinkova, Klara | CC-NIE Network Infrastructure: High Performance Research Networking (HiPerNet)
Penn State | Agarwala, Vijay | CC-NIE Network Infrastructure: Accelerating the Build-out of a Dedicated Network for Education and Research in Big Data Science and Engineering
Duke | Futhey, Tracy | CC-NIE Network Infrastructure: Using Software-Defined Networking to Facilitate Data Transfer
U of Florida | Deumens, Erik | CC-NIE Network Infrastructure: 100Gig Connection to FLR
U of Wisconsin | Maas, Bruce | CC-NIE Network Infrastructure: Advancing Network Capacity, Efficiency, and Security for Wisconsin Big Data Research Through Improvement of campus research DMZ
U of Oregon | Rejaie, Reza | CC-NIE Network Infrastructure: Bridging Open Networks for Scientific Applications and Innovation (BONSAI)
Florida International U | Ibarra, Julio | CC-NIE Network Infrastructure: FlowSurge: Supporting Science Data Flows towards discovery, innovation and education
Award List Area#1 (continued): Institution | PI | Title
UT Knoxville | Hazelwood, Victor G. | CC-NIE Network Infrastructure: Bandwidth for Leadership in Advancing Science and Technology (BLAST)
UC San Diego | Papadopoulos, Philip | CC-NIE Network Infrastructure: PRISM@UCSD: A Researcher Defined 10 and 40Gbit/s Campus Scale Data Carrier
San Diego State | Castillo, Jose | CC-NIE Network Infrastructure: Implementation of a Science DMZ at San Diego State University to Facilitate High-Performance Data Transfer for Scientific Applications
U of North Carolina | Aikat, Jay | CC-NIE Network Infrastructure: Enabling data-driven research
Florida State U | Barret, Michael | CC-NIE Network Infrastructure: NoleNet Express Lane -- a private network path for research data transmission at Florida State University and beyond
U of Michigan | Noble, Brian | CC-NIE Network Infrastructure: Expanding Connectivity to Campus-Wide Resources for Computational Discovery
Wayne State | Cinabro, David | CC-NIE Network Infrastructure: Wayne State University
Yale | Sherman, Andrew | CC-NIE Network Infrastructure: The Future of Research & Collaboration: The Dedicated Science Network
Louisiana State U | Tohline, Joel | CC-NIE Network Infrastructure: CADIS -- Cyberinfrastructure Advancing Data-Interactive Sciences
U of Colorado | Hauser, Thomas | CC-NIE Network Infrastructure: Improving an existing science DMZ
Texas A&M | Cantrell, Pierce | CC-NIE Network Infrastructure: Advanced Connectivity for Texas A&M University
Award List Area#2: Institution | PI | Title
Indiana U | Swany, Douglas | Collaborative Research: CC-NIE Integration: An Open Cloud Infrastructure for Scalable Data Intensive Collaboration
UT Knoxville | Beck, Micah | Collaborative Research: CC-NIE Integration: An Open Cloud Infrastructure for Scalable Data Intensive Collaboration
Vanderbilt | Sheldon, Paul | Collaborative Research: CC-NIE Integration: An Open Cloud Infrastructure for Scalable Data Intensive Collaboration
U of Chicago | Tuecke, Steven | Collaborative Research: CC-NIE Integration: A Data Movement Solution for Next-Generation Campus Cyberinfrastructure
Indiana U | Swany, Douglas | Collaborative Research: CC-NIE Integration: A Data Movement Solution for Next-Generation Campus Cyberinfrastructure
U of Maryland | Voss, Brian | CC-NIE Integration: SDNX - Enabling End-to-End Dynamic Science DMZ
Ohio State | Whitacre, Caroline | CC-NIE Integration: Innovations to Transition a Campus Core Cyberinfrastructure to Serve Diverse and Emerging Researcher Needs
UMass Amherst | Dubach, John | CC-NIE Integration: Multi-Wave - a Dedicated Data Transport Ring to Support 21st Century Computational Research
Clemson | Wang, Kuang-Ching | CC-NIE Integration: Clemson-NextNet
U of Kentucky | Kellen, Vincent | CC-NIE Integration: Advancing Science through Next Generation SDN Networks
Award List Area#2 (continued): Institution | PI | Title
Stanford | McKeown, Nick | CC-NIE Integration: Bringing SDN based Private Cloud to University Research
U of North Carolina | Baldine, Ilia | Collaborative Research: CC-NIE Integration: Transforming Computational Science with ADAMANT (Adaptive Data-Aware Multi-domain Application Network Topologies)
U of Southern California | Deelman, Ewa | Collaborative Research: CC-NIE Integration: Transforming Computational Science with ADAMANT (Adaptive Data-Aware Multi-domain Application Network Topologies)
Duke | Chase, Jeffrey | Collaborative Research: CC-NIE Integration: Transforming Computational Science with ADAMANT (Adaptive Data-Aware Multi-domain Application Network Topologies)
U of Nebraska | Bockelman, Brian | CC-NIE Integration: Bringing Distributed High Throughput Computing to the Network with Lark
Caltech | Newman, Harvey | CC-NIE Integration: ANSE (Advanced Network Services for Experiments)
Missouri U - Columbia | Springer, Gordon | CC-NIE Integration: Creation of an Institutional Cyberinfrastructure to Enable Researcher-Oriented, Federated Environment for Large, Collaborative Science Projects
UC Davis | Bishop, Matt | CC-NIE Integration: Improved Infrastructure for Data Movement and Monitoring
CC-NIE Award Activities
v CC-NIE Integration: Innovations to Transition a Campus Core Cyberinfrastructure to Serve Diverse and Emerging Researcher Needs
Ø Paul Schopis, CTO, OARnet
Ø PI: Caroline Whitacre, Ohio State University
v Collaborative Research: CC-NIE Integration: Transforming Computational Science with ADAMANT (Adaptive Data-Aware Multi-domain Application Network Topologies)
Ø Paul Ruth, RENCI
Ø PIs: Ilia Baldine (UNC), Jeff Chase (Duke), Ewa Deelman (USC/ISI)
Wrap up
v Award abstracts available on fastlane.nsf.gov (try searching on the term “CC-NIE”)
v Jan 7/8 GENI/CC-NIE 2-day workshop at NSF
Ø Almost all awards were represented and gave talks
v Any comments/questions on CC-NIE:
Ø [email protected]
Ø [email protected]
Innovations to Transition a Campus Core Cyberinfrastructure to Serve Diverse and Emerging Researcher Needs
• Science DMZ construction with advanced technologies
• 100 Gbps connectivity to OARnet, perfSONAR, OpenFlow, RoCE/iWARP, Bro
• Define and establish the role of a “Performance Engineer on Campus”
• App development to help operations, policy development, funding model
• Wide-area experimentation case studies with Co-PIs and SPs
• OSU–MU experiments: brain imaging, soybean translational genomics
• Cloud/OSC experiments: adoption of cloud-based technologies for big-data import, storage, and collaboration, as well as related analytics
• Multi-physics experiments: foster multi-physics research collaboration and high-resolution simulation steering; graduate capstone project for validation
• Others… geography, high-energy physics, agriculture, materials science
OSU Science DMZ (Logical Diagram)
Equipment Purchase and Locations
• 100 Gbps border router – connect to OARnet-Internet2 peered network
– Location at OSU border: ?; Vendor: current plan is to use a Juniper MX for Year 1; looking at Cisco, Arista, and Brocade offerings for Year 2 and beyond
• OpenFlow switches – configure VLANs to remote sites (e.g., MU, GENI)
– Locations: 1 at Science DMZ border and 3 at inner-campus (kc, tc, se) aggregation points that reach all researchers; Vendors: NEC, Dell, Brocade
• perfSONAR measurement points – collect end-to-end performance metrics
– Locations: 1 at Science DMZ border and at 3 or 4 inner-campus locations to reach research labs in primary use cases (e.g., Physics, Med Center, CS Dept.)
• Data transfer nodes – wide-area RDMA-based, GridFTP technologies (a transfer sketch follows this list)
– Locations: same plan as perfSONAR measurement points
• Policy-directory server – enforces researchers’ project-specific policies
– Location: coupled with OpenFlow controller and located at Science DMZ border
• Bro cluster – investigate the tradeoffs to be balanced between researcher flow performance and campus security practices
– Location: coupled with all equipment at Science DMZ border; deploy in Year 2
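As an illustration of what exercising a data transfer node might look like, here is a hedged sketch of a parallel-stream GridFTP transfer driven from a script. The hostnames and paths are placeholders, and a valid GSI proxy certificate is assumed to be in place on the client.

```python
# Hedged sketch: DTN-to-DTN GridFTP transfer via globus-url-copy.
# Hostnames and paths are placeholders, not OSU's actual DTNs.
import subprocess

SRC = "gsiftp://dtn1.example.edu/data/genomics/run42.tar"
DST = "gsiftp://dtn2.example.edu/staging/run42.tar"

# -p 8: eight parallel TCP streams; -fast: reuse data channels.
subprocess.check_call(["globus-url-copy", "-p", "8", "-fast", SRC, DST])
```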
OSU-MU GENI Experiments
• Common testbed setup tasks
– Federation of Ohio State U and U of Missouri-Columbia Science DMZs
• User accounts/roles; single sign-on; authorization policies
– End-to-end (programmable) perfSONAR instrumentation & measurement
– Establishment of VLAN extensions and GENI experimentation before Internet2 production deployment
– Experiments with optimized large data transfers with RoCE and iWARP
• Research use case: soybean translational genomics and breeding
– MU “Soybean KB” (http://soykb.org) database and “Brain Explorer” experiments with OSU for setup of GENI slices to dynamically change user load patterns from remote campuses
– Service response time analysis of distributed databases, scalability of web services for remote user access, imaging computation speed/accuracy
• Researchers:
– D. K. Panda (OSU), Prasad Calyam (MU), Ye Duan (MU), Dong Xu (MU), Umit Catalyurek (OSU), Gordon Springer (MU), Paul Schopis (OARnet/OSU)
} ExoGENI/ORCA
◦ IaaS networked clouds
◦ RENCI (Ilia Baldine) and Duke (Jeff Chase)
◦ NSF GENI, SDCI
} Pegasus
◦ Workflow management system
◦ ISI/USC (Ewa Deelman)
} Target: scientific workflows
◦ Integrating workflows (Pegasus) with dynamic resource provisioning (ORCA)
} Complementary NSF projects at Duke (Jeff Chase)
◦ On-Ramps (EAGER)
◦ Expressways (CC-NIE)
[Diagram: IaaS networked clouds (the “Breakable Experimental Network”) – virtual infrastructure spanning cloud providers, bandwidth-provisioned networks, and network transit providers]
} 14 GPO-funded racks
◦ Partnership between RENCI, Duke, and IBM
◦ IBM x3650 M4 servers (X-series 2U)
- 1 x 146 GB 10K SAS hard drive + 1 x 500 GB secondary drive
- 48 GB RAM at 1333 MHz
- Dual-socket 8-core CPU
- Dual 1 Gbps adapter (management network)
- 10G dual-port Chelsio adapter (dataplane)
◦ BNT 8264 10G/40G OpenFlow switch
◦ DS3512 6 TB sliverable storage
- iSCSI interface for head node image storage as well as experimenter slivering
} Each rack is a small networked cloud (see the sketch below)
◦ OpenStack-based with NEuca extensions
◦ EC2 node sizes (m1.small, m1.large, etc.)
◦ xCAT for bare-metal node provisioning
} http://wiki.exogeni.net
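Because each rack fronts OpenStack with EC2 node sizes, standard EC2 tooling can drive it. A minimal sketch using boto against a rack's EC2-compatible endpoint follows; the endpoint, port, path, credentials, and image ID are all illustrative assumptions, not ExoGENI's actual access method.

```python
# Hedged sketch: launch an m1.small instance against an OpenStack
# EC2-compatible endpoint using boto. All identifiers are placeholders.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="exogeni-rack", endpoint="rack-head.example.net")
conn = boto.connect_ec2(
    aws_access_key_id="ACCESS_KEY",        # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
    is_secure=False, region=region,
    port=8773, path="/services/Cloud")     # nova's EC2 API defaults

reservation = conn.run_instances("ami-00000001", instance_type="m1.small")
print(reservation.instances[0].id)
```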
} Workflow management systems
◦ Pegasus, custom scripts, etc.
} Lack of tools to integrate with dynamic infrastructures
◦ Orchestrate the infrastructure in response to the application
◦ Integrate data movement with workflows for optimized performance
◦ Manage the application in response to the infrastructure
} Scenarios
◦ Computational with varying demands
◦ Data-driven with large static data set(s)
◦ Data-driven with large amounts of input/output data
} Autonomic control of infrastructure by the application (a hedged workflow sketch follows)
◦ Control interface
◦ Performance measurements
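To ground the workflow side, here is a minimal sketch using the Pegasus DAX3 Python API; the executable and file names are hypothetical and assume matching catalog entries. What ADAMANT adds on top of this is provisioning the underlying ORCA slice while such a workflow runs.

```python
# Hedged sketch: describe a tiny workflow with the Pegasus DAX3 Python API.
# The executable and file names are placeholders, not ADAMANT's actual jobs.
from Pegasus.DAX3 import ADAG, File, Job, Link

dax = ADAG("data-intensive-demo")

raw = File("sample.raw")   # large input, staged over a provisioned circuit
out = File("sample.out")

analyze = Job(name="analyze")          # assumed to exist in the catalogs
analyze.addArguments("-i", raw, "-o", out)
analyze.uses(raw, link=Link.INPUT)
analyze.uses(out, link=Link.OUTPUT)
dax.addJob(analyze)

with open("demo.dax", "w") as f:
    dax.writeXML(f)                    # hand the DAX to pegasus-plan
```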
[Diagram: workflow on a dynamic slice over time – start workflow with a few compute nodes for beginning steps; dynamically create/add compute nodes for the parallel compute-intensive step; dynamically provision network between cloud sites; free/destroy unneeded compute nodes after the compute step; end workflow]
} On-Ramps (EAGER)
◦ New “on-ramp” services for advanced cloud networking
} Expressways (CC-NIE)
◦ “Expressway” buildout for big-data science
Both projects complement ADAMANT.
[Diagrams: Duke campus network, before and after – MCNC (commodity + I-2/NLR) at the campus ingress/egress, a Cisco IP core (MPLS-VPN ring) behind a “safe and slow” interchange layer, with the Duke Shared Cluster Resource (DSCR), Physics Department, Institute for Genome Sciences & Policy (IGSP), and Duke CS ExoGENI research on the campus “backbone”; SDN-mediated “expressway” links enable Layer 2 transport and ExoGENI resource access toward RENCI’s Breakable Experimental Network (BEN), I-2/ION, and future external data flows]
1. “Expressway” or “HOV lanes” for big-data transfers
◦ Intra-campus traffic engineering through dedicated science links
◦ Edge-to-edge
◦ Offload IP core, bypass security services at interchange layer
2. Bridge campus to national circuit fabrics
◦ Plug-and-play “science DMZ” for external dynamic circuits
◦ Direct-connect to selected campus resources
◦ Tenant can customize network above L2
3. Elastic departments: virtual networks for cloudbursting
◦ Expand edge networks onto virtual resources on IaaS clouds
◦ Either on-campus clouds, or cross-campus via circuits
◦ Example: ExoGENI. Connect slices into departments.
} More about ExoGENI:
◦ http://www.exogeni.net (including the wiki for experimenters and operators)
} More about ORCA:
◦ http://geni-orca.renci.org
} More about Pegasus:
◦ http://pegasus.isi.edu
} Deploy SDN switches in access nets.
} Network function unchanged in general.
} Program switches to install/remove ramps on-the-fly (a hedged controller sketch follows this list)
◦ Connect “outside” node sets to/through IP ring
◦ With prepositioned MPLS-VRFs for safe routing
◦ Divert specific flows
◦ Link virtual networks
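As a concrete illustration of installing such a ramp, here is a minimal sketch assuming a Ryu controller and OpenFlow 1.3 switches; this is an assumed toolchain rather than Duke's stated one, and the DTN address and switch port are placeholders. Removing the ramp would be a symmetric OFPFlowMod with command=OFPFC_DELETE.

```python
# Hedged sketch: a Ryu controller app that installs a "ramp" diverting
# traffic for a science DTN onto a dedicated path. The address and port
# number below are illustrative placeholders.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

DTN_IP = "10.0.0.10"    # hypothetical data transfer node
RAMP_PORT = 2           # hypothetical switch port toward the science path

class RampInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_ramp(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        # Match IPv4 traffic destined for the DTN and divert it onto the
        # ramp port, bypassing the interchange layer's middleboxes.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=DTN_IP)
        actions = [parser.OFPActionOutput(RAMP_PORT)]
        inst = [parser.OFPInstructionActions(
            dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(
            datapath=dp, priority=100, match=match, instructions=inst))
```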
[Diagram: OpenFlow-enabled network resource access that is manageable, programmatic, and safe – OpenFlow switches install ramps that divert traffic flows between department access networks (L2 VLANs with science resources, storage, etc.), clusters (e.g., DSCR), the core ring, local security domains, gateways, and the world outside]
Note: Your mileage may vary.
[Diagram: IaaS – clouds and network virtualization. Workflows from departments, observatories, and wind tunnels run over virtual compute and storage infrastructure from cloud providers (via cloud APIs such as Amazon EC2) and virtual network infrastructure from network transit providers (via network provisioning APIs such as NLR Sherpa, DOE OSCARS, Internet2 DRAGON, OGF NSI)]
} Grayson – compiles the workflow DAX from a high-level description
} Pegasus – manages workflow execution
} ORCA – orchestrates the substrate
} Substrate – transfers data and performs computations
[Diagram: Grayson atop Pegasus atop ORCA, orchestrating a substrate of OpenStack, OpenFlow, network providers, and grid sites]
} Every Infrastructure as a Service, All Connected.
◦ Substrate may be volunteered or rented.
◦ E.g., public or private clouds and transport providers
} ExoGENI Principles:
◦ Open substrate
◦ Off-the-shelf back-ends
◦ Provider autonomy
◦ Federated coordination
◦ Dynamic contracts
◦ Resource visibility
} http://www.exogeni.net – Breakable Experimental Network
[Diagram: workflow on a static slice over time – (1) start workflow; (2) allocate compute nodes from the cloud for the compute-intensive workflow step; (3) end workflow]
[Diagram: workflow on an elastic slice over time – compute VMs with a network connection run a compute-intensive step; when the workflow enters a data-intensive sync step that moves a large amount of data between nodes, a high-bandwidth network is dynamically created, then dynamically destroyed once the workflow leaves that stage; end workflow]
[Diagram: workflow on an elastic slice over time – when a data-intensive workflow enters a stage of high demand for a large static data set residing on a remote resource, high-bandwidth connections between the compute resources and the data set are dynamically created; the connections are dynamically destroyed once the workflow leaves that stage; end workflow]
} SDN is the new Next Big Thing.
} Loosely: a more dynamic, programmable network
} OpenFlow is an emerging open standard for SDN.
◦ Outcome of NSF-funded research
} Duke/OIT has two NSF-funded SDN pilots:
◦ New “on-ramp” services for advanced cloud networking (EAGER)
◦ “Expressway” buildout for big-data science (CC-NIE)
◦ Incremental adoption; leverage our Cisco IP network.
} 7 racks deployed
◦ RENCI, GPO and NICTA, Duke, UNC, FIU, and UH
[Map: ExoGENI sites – RENCI (Chapel Hill, NC), Duke University (Durham, NC), UNC Chapel Hill (Chapel Hill, NC), BBN (Boston, MA), Florida International University (Miami, FL), University of Houston (Houston, TX), University of Alaska Fairbanks (Fairbanks and Barrow, AK); ExoGENI partner site: NICTA (Eveleigh, NSW, Australia)]
} Connected via BEN (http://ben.renci.org)
} LEARN
} NLR FrameNet
} Partner racks
◦ U of Alaska Fairbanks