NOW Finale
NOW Project Timeline
[Timeline figure, 1/94–6/98: Start of Funding; Case for NOW; ASPLOS Workshops I and II; NOW 0, NOW I, NOW II; 1st PhD, 2nd PhD, many PhDs; courses CS 252, CS 258, CS 267; interconnects Myrinet, ATM, FDDI, SCI, G-Ether, VIA; NOW Sort; Inktomi; NPACI; NOW Finale]
Metrics of Success
• Project goals?
• Papers published?
• Technology transfer?
• Adoption of approach in the real world?
• Students produced?
• Marriages?
• Research results?
• Unexpected research results?
• All of the above?
Project Goals
• Fundamental change in how we design large-scale computing systems
  – snap together commodity components
  – self-managing, self-tuning, highly available
• Make the “killer network” real
  – realize the potential of emerging hardware technology
  – and push its effect through the rest of the system
• Integrated system on a building-wide scale
  – pool of resources (proc, disk, mem)
  – remote processor and memory closer than local disk
  – federation of systems with local and global roles
• The right way to build internet services
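The claim that remote memory can be closer than local disk follows from simple arithmetic; a sketch with illustrative mid-1990s ballpark numbers (assumptions, not NOW measurements):

```python
# Illustrative parameters (assumptions, not NOW's measured values)
PAGE_BYTES = 4096        # one VM page
NET_MBPS = 155           # ATM-class link rate
RTT_US = 20              # two ~10 µs one-way message times
DISK_US = 10_000         # one disk seek + rotation, ~10 ms

# Fetching a page from a remote node's DRAM: round trip plus transfer time
remote_page_us = RTT_US + PAGE_BYTES * 8 / (NET_MBPS * 1e6) * 1e6

print(round(remote_page_us))   # ~231 µs: dozens of times faster than disk
assert remote_page_us < DISK_US
```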
NOW Software Components

[Figure: four Unix (Solaris) workstations, each with an AM L.C.P. and VN segment driver, connected by a Myrinet scalable interconnect; layered above: Active Messages, Global Layer Unix with Name Server and Scheduler, then Sockets, Split-C, MPI, HPF, vSM, supporting large sequential and parallel apps]
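The Active Messages layer in this stack pairs each message with the handler that runs on arrival; a toy sketch of the idea (hand-rolled names, not the actual GAM API):

```python
def am_send(queue, handler, *args):
    """Enqueue a message that names its own handler, so the receiver
    needs no dispatch table of its own (the core Active Messages idea)."""
    queue.append((handler, args))

def am_poll(queue):
    """Drain pending messages, running each handler inline at the receiver."""
    while queue:
        handler, args = queue.pop(0)
        handler(*args)

# Toy use: the 'network' is just a shared list
net = []
received = []
am_send(net, received.append, "hello")
am_poll(net)
print(received)   # ['hello']
```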
NOW publications
• Over 40 papers and counting
• Wide range of important venues
  – IEEE Micro, ACM TOCS, ISCA, ASPLOS, SOSP, SIGMETRICS, OSDI, SIGMOD, SPAA, SC, IPPS/SPDP, JSPP, USENIX, Hot Interconnects, SW Prac. and Exp., SPDT, HPCA, …
• Countless presentations
NOW Students
• Moved on
  – Mike Dahlin (UT), Steve Rodriguez (NetApp), Steve Luna (HP), Lok Tin Liu (Intel), Cedric Krumbein (Microsoft)
• Moving on
  – Doug Ghormley (Sandia), Randy Wang (Princeton), Amin Vahdat (Duke), Andrea Arpaci-Dusseau (Stanford), Steve Lumetta (UIUC), Rich Martin (Rutgers)
• Finishing
  – Remzi Arpaci-Dusseau, Satoshi Asami, Alan Mainwaring, Jeanna Neefe Mathews, Drew Roselli, Nisha Talagala
• On to other projects in CS
  – Brent Chun, Kim Keeton, Chad Yoshikawa, Fred Wong
• And several undergrads
  – Josh Coates, Alec Woo, Eric Schein, ...
Comm. Performance => Evaluation

Occam's Razor: 10 µs User to User

[Figure: latency budget over the path from the user comm layer (with kernel support) through processor, cache, memory, NI, bus, and link to the network fabric (switch): 2 µs + 0.5 µs + 2 µs + 0.5 µs + 5 µs]
From “NOW Communication Architecture”, Jan 1994 Retreat
• Demonstrated on LogP micro-benchmarks with GAM
• Rich Martin (9:25) Sensitivity to Network Characteristics
[Chart: measured LogP parameters g, L, Or, and Os, in µs (0–16 scale)]
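The charted quantities come from the LogP model (L = latency, o = send/receive overhead, g = gap between message injections); a minimal cost sketch with hypothetical parameter values, not the measured ones:

```python
def logp_time(k, L, o_send, o_recv, g):
    """Completion time for k pipelined small messages under LogP:
    the sender can inject one message every max(g, o_send) µs; the
    last message then needs L in flight plus o_recv at the receiver."""
    return o_send + (k - 1) * max(g, o_send) + L + o_recv

# Hypothetical values in µs (illustrative only)
one = logp_time(1, L=5.0, o_send=2.0, o_recv=2.0, g=5.8)
print(one)   # 9.0 for a single message
```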
Novel System Design Techniques
• Andrea Arpaci-Dusseau (9:50)
Implicit Coscheduling: From Simulation To Implementation And Back Again
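Implicit coscheduling rests on conditional two-phase waiting: spin for roughly a round-trip in case the communication partner is coscheduled, otherwise block and release the CPU. A toy sketch of that policy (function names and callbacks are illustrative):

```python
import time

def two_phase_wait(reply_arrived, spin_limit_s, block):
    """Spin-then-block: staying on the CPU is only worthwhile if the
    communication partner is likely scheduled right now; a fast reply
    is implicit evidence that it is."""
    deadline = time.monotonic() + spin_limit_s
    while time.monotonic() < deadline:
        if reply_arrived():
            return "spun"      # partner answered quickly: keep running
    block()                    # partner likely descheduled: yield the CPU
    return "blocked"

# Toy use: a reply that is already there vs. one that never comes
assert two_phase_wait(lambda: True, 0.001, lambda: None) == "spun"
assert two_phase_wait(lambda: False, 0.001, lambda: None) == "blocked"
```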
NOW is “federalism”
• Large, collective pool of resources
  – Not just networked services
• Building block is complete computer
• Authority, control, responsibility divided between local operating system and global operating system
• How is the ensemble organized?
• Who does it?
• Based on what?

From “On Self-organizing Systems”, June 1995 Retreat
Understanding Parallel Application Performance
• Frederick Wong (10:25) Understanding Application Scaling: NAS Parallel Benchmarks on the NOW and SGI Origin 2000
2003 Computer Food Chain

[Figure: predicted 2003 computer food chain – networks of desktop computers and portable computers displacing the mainframe, vector supercomputer, mini-supercomputer, and mini-computer]

From “Case for NOW”, Jan 1994 Retreat
Minute Sort
[Chart: gigabytes sorted in one minute (0–9) vs. processors (0–100), comparing NOW with the SGI Power Challenge and SGI Origin]
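The sorts behind these numbers are one-pass, key-range-partitioned parallel sorts; a toy single-process sketch of the partitioning idea (not NOW Sort's actual disk-to-disk implementation):

```python
def partitioned_sort(records, n_nodes, key=lambda r: r):
    """Route each record to the 'node' owning its key range, sort each
    bucket locally, then concatenate: the result is globally sorted
    after only one all-to-all communication step."""
    lo, hi = min(map(key, records)), max(map(key, records))
    span = (hi - lo) / n_nodes or 1          # avoid /0 when all keys equal
    buckets = [[] for _ in range(n_nodes)]
    for r in records:
        i = min(int((key(r) - lo) / span), n_nodes - 1)
        buckets[i].append(r)                 # the all-to-all 'send'
    out = []
    for b in buckets:                        # each node sorts its own bucket
        out.extend(sorted(b, key=key))
    return out

data = [57, 3, 88, 14, 92, 41, 6]
print(partitioned_sort(data, n_nodes=4))
```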
Fast Parallel I/O
• Remzi Arpaci-Dusseau & Eric Anderson (10:50) Robust I/O Performance in River
Scalable Services
• Wingman/NOW transcoding proxy demo
[Figure: scalable servers serving stationary desktops and information appliances]
Virtual Networks
• Alan Mainwaring (1:00) Communication Retrospectives
Implications (system)

• Independent scheduling
  – provide concept of network process
  – NI stamps NPID in message and checks against current process
  – Vector inactive messages to kernel, package messages for current NPID conveniently
    » avoid interrupt if attentive, multiple messages per int, . . .
  – Context switch support (???)
• Shared Network
  – destination should always be able to accept packets
  – Reality check: 10 ms page fault => 200 KB at 155 Mb/s, 750 KB at 622 Mb/s
  – => End-to-end flow control needed to ensure that resources are available at the destination within the net process
• Virtual Memory
  – address translation on dest (miss rate?)
From Jan 1994 Retreat
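The "reality check" arithmetic above can be verified directly, and the NPID check is a one-line predicate; a sketch (function names are illustrative, not the NI firmware interface):

```python
def stall_buffer_kb(link_mbps, stall_ms):
    """KB that keep arriving while the receiver is stalled (e.g. by a
    page fault) – the amount end-to-end flow control must cover."""
    return link_mbps * 1e6 / 8 * (stall_ms / 1e3) / 1e3

def route(packet_npid, current_npid):
    """NI-side check: deliver to user space only if the message's network
    process is the one currently scheduled; otherwise vector to the kernel."""
    return "user" if packet_npid == current_npid else "kernel"

print(round(stall_buffer_kb(155, 10)))   # ~194 KB; the slide rounds to 200
print(round(stall_buffer_kb(622, 10)))   # ~778 KB; the slide rounds to 750
```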
New look at File Systems
• Drew Roselli (1:25) Huge File Traces
• Mike Dahlin (1:50) xFS and Beyond
• Randy Wang (2:45) Intelligent Disks
Example: Traditional File System

[Figure: clients, each with a local private file cache ($), connected by a fast channel (HPPI) to a single server holding a global shared file cache ($$$) and RAID disk storage – the server is the bottleneck]

• Expensive
• Complex
• Non-scalable
• Single point of failure
• Server resources at a premium
• Client resources poorly utilized
Cluster Design
• Steve Lumetta (3:10) Trends in Cluster Architectures
Q2: What is the Hardware Organization?

• Wide scope for innovation

[Figure: node organizations of nCUBE (MEM, M/C, P, 28 DMA channels), CM-5, Splat, HP/Medusa, Meiko, and Paragon – each combines processor (P), cache ($), and memory (M) with the network interface attached at a different point (memory bus, I/O bus, graphics port)]

• Networks are all over the map as well!
From Jan 1994 Retreat
Vast, Cheap Storage
• Nisha Talagala and Satoshi Asami (3:35) Large-scale Storage Devices
New Scale and New Technology

• Matt Welsh, Millennium
• Philip Buonodonna, VIA
• Eric Brewer, The Pro-active Infrastructure
Millennium Computational Community

[Figure: campus-wide Gigabit Ethernet linking departmental clusters – SIMS, C.S., E.E., M.E., BMRC, N.E., IEOR, C.E., MSME, NERSC, Transport, Business, Chemistry, Astro, Physics, Biology, Economy, Math]
Many Thanks
• To all of you visitors for coming
  – and for guiding us through many retreats
  – and for tremendous support
• To the CS division
  – an environment that made it possible
• To an incredible group of students who made NOW a successful project
  – by any metric
• I think you will enjoy these final presentations