Sun Fire X4500 and NetBackup 6.5 Testing
Ryan Arneson
ISV Engineer – Storage Systems Product Group
Agenda
• What was tested?
  > Overview
• Configuration
  > Solaris
  > Zpool & ZFS
  > NBU
• Performance
• Recommendations
What was tested
• Hardware
  > Sun Fire X4500 (x 2)
  > 48 500GB drives (46 configured into zpool)
  > 10GbE Neterion Xframe II
  > 4 onboard 1GbE ports
  > X4200 & X4100 clients to generate load (Solaris)
• OS
  > Solaris 10 8/07 (Update 4)
  > No patching other than bundled
• NetBackup 6.5
  > First version to support x64 as a Master/Media Server
Configuration
Solaris
• Solaris 10 8/07 (Update 4)
  > ZFS improvements
  > Minimal tuning
  > 10GbE tuning in /etc/system (2 values)
  > ndd tuning for tcp_xmit_hiwat & tcp_recv_hiwat
  > Jumbo frames throughout
  > Most NBU-recommended /etc/system tweaks are obsolete with S10; however, a few resource-control params were bumped up:
    – project.max-msg-ids
    – project.max-sem-ids
    – project.max-shm-ids
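The ndd and resource-control tuning above can be sketched as follows. The buffer sizes, limits, and projmod target are illustrative assumptions, not the values used in this test:

```shell
# Raise TCP send/receive high-water marks for the 10GbE link
# (1MB shown here is an assumed value; tune for your workload)
ndd -set /dev/tcp tcp_xmit_hiwat 1048576
ndd -set /dev/tcp tcp_recv_hiwat 1048576

# Bump the System V IPC resource controls NBU relies on
# (project name and limits are illustrative)
projmod -sK "project.max-msg-ids=(priv,512,deny)" user.root
projmod -sK "project.max-sem-ids=(priv,512,deny)" user.root
projmod -sK "project.max-shm-ids=(priv,512,deny)" user.root
```

Note that ndd settings do not persist across reboots, so they normally go in an init script.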
Zpools and ZFS
• Zpools & ZFS
  > Tested with the default RAIDZ layout from the factory and with mirroring
  > Checksums on/off to measure gains
    > Do you really want to turn them off, though?
  > 1 ZFS filesystem = 1 basic storage unit
    > During testing, 4 ZFS filesystems were grouped into a Storage Group
  > No spares were used
    > Recommended for production, though
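Adding the spares recommended above is a one-liner. The device names below are hypothetical — the disks must not already belong to the pool, so a production layout would hold a couple of the 46 drives back from the data vdevs:

```shell
# Hypothetical: attach two hot spares to the existing pool
zpool add nbupool spare c0t7d0 c4t7d0
```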
Zpool Config #1 – Default RAIDZ

zpool create -f nbupool \
  raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
  raidz c0t1d0 c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
  raidz c0t2d0 c1t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
  raidz c0t3d0 c1t3d0 c4t3d0 c5t3d0 c6t3d0 c7t3d0 \
  raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
  raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c6t5d0 c7t5d0 \
  raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 c7t6d0 \
  raidz c0t7d0 c1t7d0 c4t7d0 c5t7d0 c6t7d0 c7t7d0
Zpool Config #2 – Mirror (23 pairs)

zpool create -f nbupool \
  mirror c0t0d0 c1t0d0 mirror c0t1d0 c1t1d0 mirror c0t2d0 c1t2d0 \
  mirror c0t3d0 c1t3d0 mirror c0t5d0 c1t5d0 mirror c0t6d0 c1t6d0 mirror c0t7d0 c1t7d0 \
  mirror c4t0d0 c7t0d0 mirror c4t1d0 c7t1d0 mirror c4t2d0 c7t2d0 mirror c4t3d0 c7t3d0 \
  mirror c4t4d0 c7t4d0 mirror c4t5d0 c7t5d0 mirror c4t6d0 c7t6d0 mirror c4t7d0 c7t7d0 \
  mirror c6t1d0 c5t1d0 mirror c6t2d0 c5t2d0 mirror c6t3d0 c5t3d0 mirror c6t4d0 c1t4d0 \
  mirror c6t5d0 c5t5d0 mirror c6t6d0 c5t6d0 mirror c6t7d0 c5t7d0 mirror c6t0d0 c0t4d0
ZFS Filesystems

for fs in 1 2 3 4 5 6 7 8
do
  zfs create -o mountpoint=/backup$fs nbupool/backup$fs
done
root@tm45h # zfs list
NAME             USED   AVAIL  REFER  MOUNTPOINT
nbupool          779G   16.1T  40.1K  /nbupool
nbupool/backup1  257G   16.1T  257G   /backup1
nbupool/backup2  207G   16.1T  207G   /backup2
nbupool/backup3  207G   16.1T  207G   /backup3
...and so on
NetBackup
• NetBackup 6.5 (release from Sun soon)
  > X4500 configured as Media Server
  > No special tuning – left at defaults
    > SIZE_DATA_BUFFERS_DISK
    > NUMBER_DATA_BUFFERS_DISK
  > Tuning the buffers only caused an increase in disk queues and wait time
    > Need to monitor client and server backups and watch for buffer_full and buffer_empty messages to see if tuning is needed
  > Configure multiple Storage Units to gain performance and flexibility with staging
    > Used 4 Storage Units configured in 1 Storage Group in testing
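If monitoring does show buffer_full/buffer_empty pressure, the disk buffer settings are plain touch files under the NetBackup config directory on the media server. The values below are illustrative assumptions, not settings validated in this test:

```shell
# NBU reads these touch files at job start:
# buffer size in bytes, buffer count as a plain integer
echo 1048576 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK
echo 64 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_DISK
```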
Test – Backup to one Media Server
Test – Initial RAIDZ – 350MB/sec?
YES!

Test – Initial RAIDZ – 70% System?
Yes! @ 400MB/sec

Test – Initial Mirror – 350MB/sec?
YES! (divide raw by 2 for mirror)

Test – Initial Mirror – 70% System?
Yes! (jump due to adding more clients)
What is the Max?
• 500-550MB/sec achieved with mirroring and checksums off
• 430-470MB/sec with RAIDZ and checksums off
• CPU close to max in both cases
Max Mirror
4 GigE Onboard Ports
• Used dladm to aggregate all 4 ports
• 350-370MB/sec achieved
  > System % in the mid-80s to low 90s
  > Interrupts causing higher system load
• Drawbacks found with staging
  > Many -> One or One -> Many aggregates nicely
  > One -> One only uses 1 port (100MB/sec)
• Could configure each interface separately
  > Management headache
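The 4-port aggregate can be built with a single dladm command. The e1000g interface names and the aggregation key are assumptions for illustration; substitute the actual onboard device names:

```shell
# Solaris 10 link aggregation across the four onboard ports (key = 1)
dladm create-aggr -d e1000g0 -d e1000g1 -d e1000g2 -d e1000g3 1

# The resulting aggr1 then plumbs like any other interface
ifconfig aggr1 plumb 192.168.1.10 netmask 255.255.255.0 up
```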
Test – Stage to 2nd Media Server
Stage Results
• 450-500MB/sec measured
  > Used basic disk staging policy
  > bpbackup -dssu <storageunit> for manual test
  > Further testing with Advanced Disk and Storage Lifecycle Policies could be done
• By design, NBU only stages one image per Storage Unit
• Use more Storage Units to run parallel stages
  > 4 worked well in testing
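Parallel stages can be kicked off with one bpbackup invocation per Storage Unit, as above. The DSSU names below are hypothetical; a sketch of the four-way parallel test might look like:

```shell
# Start a manual relocation for each disk staging storage unit in parallel
for su in dssu1 dssu2 dssu3 dssu4
do
  bpbackup -dssu $su &
done
wait
```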
Test – Backup to two Media Servers
Two Media Server Backup
• How to scale?
  > Create a Storage Group with Storage Units from multiple X4500s
  > Spread the streams out
  > NBU does the load balancing
Recommendations
• Use 10GbE
  > Less CPU overhead; room to grow past 350MB/sec
  > No issues with staging
  > Less administrator overhead
• RAIDZ gives more usable space, but mirroring gives a bit more performance
  > May be more noticeable on restores; further testing needed
  > Use spares to increase reliability
• Use a single zpool, but multiple ZFS filesystems and Storage Units for performance gains and flexibility
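The space side of the RAIDZ-vs-mirror trade-off can be checked with quick arithmetic over the two pool layouts shown earlier (500GB drives; counting only parity and mirror overhead, ignoring ZFS metadata):

```shell
# RAIDZ pool: 6 vdevs of 6 disks (5 data each) + 2 vdevs of 5 disks (4 data each)
raidz_data=$(( 6 * 5 + 2 * 4 ))     # 38 data disks
# Mirror pool: 23 two-way pairs, one data disk's worth of space each
mirror_data=23

echo "RAIDZ usable:  $(( raidz_data * 500 )) GB"    # 19000 GB
echo "Mirror usable: $(( mirror_data * 500 )) GB"   # 11500 GB
```

So the default RAIDZ layout yields roughly 65% more usable space than the 23-pair mirror, which is the capacity cost behind the mirror's throughput edge.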