VDI Storage Optimization: Software vs. Hardware Debate
An association between:
• Infralys
• Coretek
• Infrageeks
Vincent Branger vs. Erik Ableson
Software vs. Hardware
Both have remotes - fighting over slides
Vincent
• Cloud & Virtualization senior consultant
• Has worked with the Citrix portfolio since 1999 (Prologue…)
• Co-founder of Infralys/Ilki with Gaël Corlay
Things we agree on
Whisky is good
Virtualization is good too
Erik
• IT generalist for 25+ years, focused on new technologies
• Cloud & Virtualization senior consultant
• Pays particular attention to storage
Varied background: started in Canada, spent a few years in the US, then moved on to France. Always trying to keep up with and integrate new ways of doing things - an early virtualization adopter who brings consulting teams up to speed. Lots of boutique implementations rather than big-scale deployments.
Today’s subject
VDI challenge: legacy storage is not up to the task
“We can do VDI now. Virtual graphics cards. VDI-specific storage solutions.”
– Brian, 2013
Two approaches
• Software optimised
• Hardware accelerated
I like…
…software because:
• It’s obviously less expensive than hardware
• It’s an investment that lasts beyond this year’s hardware
• It has predictable performance compared to legacy storage
Vincent
Some solutions in this space
• Atlantis ILIO VDI
• Liquidware Labs FlexIO
• Nexenta
I like…
…hardware because:
• It keeps getting faster and cheaper
• Until legacy storage is replaced, I like my VDI stuff separate
• Software has to run on something…
Some solutions in this space
• Legacy storage optimization
• PernixData FVP
• OCZ VXL
• Next generation storage
• VSAN, Nutanix, Nimble Storage, Pure Storage, Coho Data, …
Tidelands Bank Cashes in on Citrix XenDesktop Performance with PernixData FVP™
Moving to Flash Storage
Tidelands Bank is a local bank focused on serving the coastal communities of South Carolina. Their first branch opened in 2003 and they now have seven locations throughout the state.
Mitch Lane (anonymized for security), the IT Director at Tidelands, had a problem with aging desktops. Many of them were four to five years old and running Windows XP. They needed to be replaced, and Lane wanted to use $300 thin clients instead of $600 PCs. Lane’s plan was to deploy Citrix XenDesktop to all employees, with VMware vSphere on the backend.
When the bank first tried their Virtual Desktop Infrastructure (VDI) in a small proof-of-concept (about 20 users), everything performed great. However, as they grew their VDI user base, application latency became unacceptable, resulting in numerous user complaints. The servers had just been upgraded, so the problem was likely elsewhere.
Lane did some root cause analysis and it became clear that the Storage Area Network (SAN) was the culprit. “When I looked in vCenter, the server CPUs and RAM were both under 50% throughout the day, but utilization in our iSCSI SAN was over 80%,” said Lane. “I clearly did not have enough IOPS to support my virtual desktop requirements.”
At first, Lane looked at adding spinning disks to his SAN. He priced out a tray of iSCSI 15K disks, which would have given 1,200 more IOPS per shelf for approximately $25,000. But Lane wondered, “Even if I spend the money on more disks, would the problem really be solved?” He feared that he would have to keep adding more shelves as his VDI deployment grew, bringing the total SAN upgrade costs to around $100,000.
Instead, Lane turned his attention to flash. “I know flash storage provides orders of magnitude more IOPS than spinning disks,” Lane stated, “but my SAN didn’t support flash storage.” Since Lane wasn’t interested in ripping out his current SAN just to add flash, Lane looked at server-side flash. “By locating flash in the server, the extra performance is right where I need it,” said Lane. “Now I needed a software solution that would make the flash usable across all of my hosts.” Given his heterogeneous environment, this software also needed to work across a variety of server-side flash hardware solutions. Since VDI workloads tend to be write intensive, it was extremely important that the software could accelerate writes as well as reads.
The first two vendors Lane evaluated didn’t offer write acceleration. According to Lane, “These were old technologies and could only boost read IOPS. They did not support the write acceleration necessary for successful desktop virtualization. I needed something that could handle the needs of both a virtual server and a write-hungry virtual desktop user.”
INDUSTRY: Banking
“On one hand, I would have had to spend about $100k on my SAN and cross my fingers that it would be enough to solve our latency issues. On the other hand, I could spend about $17k on FVP and get more than enough IOPS to be sure the latency problem was solved. It was a pretty easy discussion with senior management.”
RESULTS:
• Citrix XenDesktop latency under 0.4 ms
• 1,000 IOPS per desktop
• More than 100K IOPS per host
• SQL VM with 360 MB/s of throughput
• Saved over $83,000 compared to upgrading SAN
CUSTOMER PROFILE
OCZ white paper: “Empowering Enterprise Applications with Optimized Flash Hardware and Software - The Combination of Optimal Flash Caching with Accelerated I/O Access Delivers a Leading-Edge Flash Implementation,” by Allon Cohen, PhD and Scott Harlin, OCZ Technology Group
VMware white paper: “Stop the Finger-Pointing: Managing Tier 1 Applications with VMware vCenter™ Operations Management Suite™,” by David Davis, VMware vExpert™
Tintri data sheet (www.tintri.com)
Tintri VMstore™ smart storage is designed to address the needs of virtualization and cloud environments. Traditional storage is a mismatch for the specialized demands of virtualization, requiring complex configuration, significant over-provisioning and ongoing optimization and management. VMstore addresses the challenges traditional storage platforms pose when virtualizing critical server workloads such as Microsoft® Exchange®, Microsoft® SQL Server®, Microsoft® SharePoint®, Oracle® and SAP® databases as well as end-user desktops.
Built using the industry’s first and leading application-aware storage architecture, the fourth-generation Tintri VMstore T600 series operates at the VM and vDisk level—seeing and adapting to rapidly changing workloads, eliminating mundane storage management tasks and delivering substantial improvements in performance and density over legacy storage. The Tintri VMstore T600 series is ideal for midsize and large enterprise virtual environments with a variety of workloads such as VDI deployments with mixed end-users, business-critical applications, and development and test environments.
Whether you are an IT architect, administrator or manager, Tintri VMstore can help you:
Realize the full potential of virtualization with intelligent storage.
• Set up in minutes with support for multiple VMware vCenter servers. Only deal with auto-aligned VMs and vDisks, not LUNs and volumes—eliminating any complex configuration or ongoing tuning.
• Get the performance of flash with the economics of HDD with Tintri Flash First Design, delivering 99 percent of IO from flash.
• Serve hundreds of different types of VM workloads from a single VMstore with vDisk-level QoS and performance allocation—eradicating the impact of noisy neighbors on other virtual workloads.
Eliminate bottlenecks and troubleshooting overhead with infrastructure insight.
• Get a single view of all stored VMs and identify performance and capacity trends without dealing with underlying storage.
• Instantly identify performance hot spots at the hypervisor, network and storage levels with comprehensive performance visualization.
• Leverage Tintri Global Center to monitor and administer multiple VMstore systems and resident VMs from a single control pane.
Stay in control of your virtualization environment while VMstore eliminates mundane storage management tasks.
• Protect individual VMs with customizable policies for VM-level instant space-efficient snapshots—eliminating the complexity of LUN and volume mapping.
• Deploy affordable WAN-efficient replication at the VM level, using as much as 95 percent less bandwidth with block-level global deduplication and compression over the wire.
• Create hundreds of high-performance zero-space VM clones locally or remotely. Ideal for speeding up VDI deployments and for development/test workloads.
Highlights
• Storage that Sees: designed specifically for virtualized applications, VMstore automatically configures itself based on your environment and provides a complete end-to-end view of all virtual workloads.
• Storage that Learns: VMstore maintains constant communication with your entire virtualized environment. Actively changing VMs are tracked and highlighted so you have status on a moment-by-moment basis.
• Storage that Adapts: because of unique per-VM data management and operations, VMstore can make adjustments, including QoS and auto-alignment, to maintain the best service for all virtualized applications.
“Compared to our previous storage, Tintri VMstore can run ten times the VMs in less than a tenth of the data center footprint, and reduce latency by 98 percent at the same time. They helped us realize a fundamental goal of virtualization: consolidating workloads and increasing resource utilization, both on hosts and on storage.”
—Mike Torgersen, vice president of IT at ParAccel
Tintri VMstore™ T600 Series
Engineered for Efficiency
Demands for better storage performance, scalability, data protection, and simplicity continue to grow in today’s datacenter. The rapid adoption of virtualization and server consolidation has further compounded the need for network storage that can keep up with these demands. Nimble Storage makes it possible for IT to tackle them all head on.
Nimble Storage designed its Cache Accelerated Sequential Layout (CASL™) architecture to help large and small IT organizations address their storage challenges. As the industry’s first flash-optimized storage architecture designed from the ground up, CASL effectively combines the performance of flash for reads with a unique data layout optimized for writes. The result is high-performance, efficient storage. CASL also includes the integrated data protection and management functionality required by today’s demanding applications, eliminating the need for separate backup storage solutions and tools. These characteristics make the Nimble Storage CS-Series the ideal storage platform for mainstream IT applications in a variety of environments, ranging from midsize deployments with hundreds of users to large enterprises with thousands of employees.
Nimble Storage CS200 and CS400 Series
Choosing the right Nimble Storage array is simple. The CS200 Series is a good fit for midsize businesses or distributed sites of larger organizations, supporting workloads such as Microsoft applications, VDI, or virtual server consolidation. For IO-intensive workloads, such as transaction processing supported by Oracle or large-scale VDI deployments, the CS400 Series delivers higher performance. Nimble Storage arrays come standard with full software functionality, so there are no hidden costs.
Scale to Fit with Scale-Out Architecture
CASL’s scale-to-fit capabilities make it easy to non-disruptively scale the CS-Series to meet both the growing capacity and performance needs of today’s datacenter.
Storage can be scaled to hundreds of terabytes by adding disk shelves. Performance can be enhanced through the addition of higher-capacity SSDs able to support larger amounts of active data. For additional throughput, a CS200 system can be upgraded to a CS400 non-disruptively.
Nimble Storage also enables scaling of performance and capacity beyond the physical limitations of a single array, to a storage cluster comprised of any combination of Nimble arrays. This seamless scaling of performance and capacity can help eliminate performance hotspots and storage silos, enabling substantial management efficiency and extending the overall storage investment.
Nimble Storage hybrid arrays deliver the right mix of high performance and efficient capacity for mainstream workloads in IT organizations of all sizes.
Flash-Optimized Hybrid Storage Arrays
“With Nimble we have reduced power consumption, cooling needs and rack usage, eliminated traditional backup and associated backup windows, shortened our recovery point objective, improved server performance, and improved perceived user experience.”
—Lucas Clara, Director of Information Technology, Foster Pepper PLLC
Measured across the entire Nimble Storage installed base (March 2012 to March 2013):
• Write latency: 0.67 ms (all Nimble Storage customers) vs. 1-4 ms (tiered systems with flash)
• Read latency: 0.5 ms (Nimble customers on VMware) vs. 5-10 ms (disk-based systems)
• Workloads replicated for disaster recovery: 51% (actual results for all Nimble Storage customers) vs. 10% industry average (source: IDC)
Our customers protect 5x more apps.
Our customers access data 10x faster.
Our customers enjoy virtually zero downtime.
CS-Series data sheet
Kaminario white paper: K2 SPEAR
Proactive Services with Active Watch
World-class customer service is a top priority for X-IO. X-IO supports its customers using best-in-class tools combined with the native (and no-cost) phone-home support called Active Watch. Active Watch regularly reports complete operating telemetry on each ISE and reports failure-predictive conditions and events. Active Watch is tied into the X-IO customer database to ensure support cases are generated automatically when it is time for an ISE to send an alert. Many of the cases handled today by X-IO’s technical support staff are automatically generated by Active Watch, leading to an exceptional customer experience and a faster time to resolution.
ISE Performance Adapter for Windows Performance Monitor
When performance is a priority, this software adapter gets information from the CorteX web service and enables the powerful Windows Performance Monitor to capture and collect performance and capacity information for many ISE storage systems. These statistics can be collected at very granular intervals (seconds) for troubleshooting or at coarse intervals (hours) for trending. These captures can be viewed in Performance Monitor or can be imported into spreadsheets or even databases.
Using Microsoft Excel to display ISE performance statistics captured by Windows Performance Monitor
CorteX
CorteX is a RESTful web-based management service that is built into all ISE storage systems, as well as being integrated with X-IO’s storage resource management software: Orchestrator, ISE Manager, Virtual View, Mirror Manager, etc. CorteX enables simple commands and interfacing for the discovery, monitoring, and configuration of all ISE storage resources. CorteX is designed so the ISE systems can be easily controlled via custom scripting or the storage management functionality located inside operating systems, hypervisors, databases, and other applications.
ISE Mirroring
Customers can take advantage of volume portability and maximum availability for their data when combined with the industry-leading resiliency and redundancy of the ISE and ISE Mirroring. ISE Mirroring is easy to use and is a simple way to build in-house, highly resilient IT environments, in a consolidated datacenter footprint, for a fraction of the cost of traditional replication solutions. As part of the ISE Mirroring framework, Active-Active Mirroring goes beyond replication technology. It provides the storage industry’s only fully active-active, native array replication, requiring no additional server-based software to implement. Imagine a replication solution that allows servers to have read and write access to both mirror copies simultaneously; maintain and, at times, improve performance; and create a continuously available storage solution. Active-Active Mirroring provides all of these benefits and requires no additional software on the host or cluster.
Pernix FVP
• Hypervisor SSD caching layer
• Integrated into the ESXi storage stack (VIB install)
• Read caching with optional read/write
• Clustered for data reliability
• Generic solution to accelerate any kind of shared back-end storage (NFS, iSCSI, FC)
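Since FVP installs as a VIB inside the ESXi storage stack, host deployment is roughly the following sketch; the bundle path and file name here are hypothetical, so check PernixData’s install guide for the real package:
# esxcli software vib install -d /vmfs/volumes/datastore1/PernixData-FVP-host-extension.zip
# esxcli software vib list | grep -i pernix
The second command simply verifies that the extension is registered on the host.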
Pernix FVP
Frequently linked to Fusion-io
OCZ VXL
• A scale-out cache/storage hybrid
• Optimized for the OCZ Z-Drive PCIe SSDs
• Republishes shared block storage over iSCSI
• Or can be used as primary storage
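Because VXL republishes the flash over iSCSI, hosts consume it like any other iSCSI target; a sketch of the ESXi side, where the adapter name and target address are assumptions for illustration:
# esxcli iscsi software set --enabled=true
# esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.50:3260
# esxcli storage core adapter rescan --adapter=vmhba33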
OCZ VXL Architecture
(Architecture diagram: iSCSI, vSwitch)
Next generation
New hardware
• Next-generation flash-optimized storage
• All-flash
• Hybrid
• Too many to list all of them
• XtremIO, Coho Data, X-IO, Tegile, Invicta, Pure Storage, Nimble Storage, Kaminario, RamSan, Violin Memory, SolidFire, Tintri, Infinio, Skyera, …
• Not to mention all of the traditional players…
Next Generation Storage
• All-flash arrays: Pure Storage
• Hybrid arrays: Nimble Storage, Tintri
• Scale-out clusters: SolidFire
• Unified namespace: Coho Data
• “Hyperconvergence”: VSAN, Nutanix, SimpliVity, ScaleIO
Focusing on VDI
• GreenBytes vIO
• Nexenta
Explaining the software
Atlantis ILIO VDI
• Storage optimizations through software
• In-memory storage
• First product was for VDI only
• Then XenApp
• Then for all workloads (USX)
Architecture
• Diskless VDI
• Persistent VDI: disk-backed
• Persistent VDI: in-memory
Deduplication, IO coalescing…
My experiences
• Several projects from 100 to 2,000 users
• XenDesktop/XenServer & View/vSphere
• Amazing UX, especially with diskless
Liquidware FlexIO
• RAM cache and compression mechanisms
• Read and write caching
• For non-persistent desktops only
• Easy to implement
These approaches are the closest to the hypervisor layer. Nexenta offers a slightly different approach.
NexentaConnect
• Virtual appliance based on ZFS
• Publishes storage over NFS and iSCSI
• De facto gets all the ZFS optimizations, as sketched below:
• Write and read caching
• I/O coalescing
• Fast cloning
• Inline deduplication…
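Those optimizations are ordinary ZFS properties under the hood; a minimal sketch of what the appliance effectively enables, with pool, dataset, and device names all hypothetical:
# zpool create tank mirror c1t0d0 c1t1d0
# zfs create tank/vdi
# zfs set compression=lz4 tank/vdi
# zfs set dedup=on tank/vdi
# zfs set sharenfs=on tank/vdi
# zfs snapshot tank/vdi@gold
# zfs clone tank/vdi@gold tank/desktop01
The clone is the fast-cloning feature: it is instant and consumes no space until the desktop diverges from the gold image. Dedup, on the other hand, is RAM-hungry, which is exactly why these appliances are sized memory-first.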
Vincent: OK - this seems to be closer to the hardware than I thought.
Erik: Yup - the ZFS architecture is pretty focused on getting the most out of hybrid hardware. In fact, GreenBytes also uses a ZFS core…
GreenBytes
• Architecture
• Core is ZFS, similar to Nexenta
• Deployment model is a VM on top of an SSD-backed datastore
• Features
• VDI-optimized dedup algorithm - much faster
And it’s also available in a pure hardware form!
NexentaStor
• L2ARC - expand the memory cache with SSD
• Cheaper than RAM!
• Excellent performance
Budget by requirement: PCIe vs. SAS SSD vs. SATA SSD. Economics are pushing towards SSD. Vincent - review the stack.
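Adding that SSD cache layer to an existing pool is a one-liner; a sketch with hypothetical pool and device names:
# zpool add tank cache c2t0d0
# zpool status tank
A separate log (SLOG) device can likewise be added with “zpool add tank log <device>” to absorb the synchronous writes that NFS-attached hypervisors generate.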
Desktop down
• Desktop Image
• NFS Share
• ARC (RAM Cache) / L2ARC (SSD Cache)
• zpool
• block devices (disk, SSD)
Create a ramdisk, format a pool, present NFS. Not marketed or supported AFAIK as a solution, but the commands are certainly available - see the “Cheating…” slide below.
Hmmmm
Non persistent
• Desktop Image
• NFS Share: /exports/ILIO_VirtualDesktops
• RAM Disk: /dev/ram0 (zram = compressed)
• filesystem = “dedup”
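On stock Linux, the compressed RAM-disk layer of that stack can be approximated with the zram module; a sketch, where the 100G size and the mount point are assumptions rather than Atlantis’s actual implementation (algorithm availability depends on the kernel):
# modprobe zram num_devices=1
# echo lz4 > /sys/block/zram0/comp_algorithm
# echo 100G > /sys/block/zram0/disksize
# mkfs.ext4 /dev/zram0
# mount /dev/zram0 /exports/ILIO_VirtualDesktops
The disksize value is the logical size; physical RAM is only consumed by the compressed pages actually written.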
Persistent
• Desktop Image
• NFS Share
• RAID 1 Mirror: RAM Disk + File
• NFS Share
• Shared storage
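One way to build that mirror layer on commodity Linux is mdadm across a RAM disk and a file-backed loop device on shared storage; every name and size below is an assumption for illustration:
# modprobe brd rd_size=$((100*1024*1024))
# truncate -s 100G /mnt/nfs/persistent.img
# losetup /dev/loop0 /mnt/nfs/persistent.img
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/ram0 --write-mostly /dev/loop0
# mkfs.ext4 /dev/md0
Reads are served from the RAM half of the mirror (--write-mostly keeps them off the slow NFS-backed file), while every write also lands on shared storage - which is what makes the desktops persistent.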
Cheating…
• It’s not as VDI optimized, but:
# ramdiskadm -a vdi_ramdisk 100g
# mkfile 100g /mnt/nfs/remotedisk
# zpool create vdipool mirror /dev/ramdisk/vdi_ramdisk /mnt/nfs/remotedisk
# zfs set dedup=on vdipool
# zfs set primarycache=none vdipool
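To finish the picture from the “Desktop down” stack, the pool still has to be presented to the hypervisors; one more property does it, and the hosts mount it as a datastore (the appliance IP and datastore name below are hypothetical):
# zfs set sharenfs=on vdipool
# esxcli storage nfs add --host=<appliance-ip> --share=/vdipool --volume-name=vdi_ram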
Comparison
                  Atlantis ILIO  Atlantis ILIO  Liquidware   GreenBytes        NexentaConnect
                  Diskless       Persistent     FlexIO       vIO
RAM Caching       Yes            Yes            Yes          Yes               Yes
Deduplication     Yes            Yes            No           Yes               Yes
Compression       Yes            Yes            Yes          Yes               Yes
Write Coalescing  Yes            Yes            No           Yes               Yes
Publish iSCSI     Yes            Yes            No           Yes               Yes
Publish NFS       Yes            Yes            Yes          Yes               Yes
Storage           RAM Disk       Shared disk    Local disk   SSD               Local/Shared Disk
Licensing         $/Named User   $/Named User   $/Host       $/TB SSD Storage  $/Host
Key takeaway - all of them work, and work well, but licensing is completely different for each solution.
Projects
• 200 users
• 100 persistent
• 100 non-persistent
• Image: 50 GB
• 3 servers
Atlantis ILIO: 22 k€
GreenBytes vIO: 18 k€
Tradeoffs
                 Cost        Example          100 GB cache layer  Latency       Bandwidth
16 GB RDIMMs     ~$12/GB     Kingston RAM     US$1,200            nanoseconds   ~20 GB/s
PCIe Flash       ~$3-8/GB    OCZ/Fusion-io    US$500              microseconds  ~15 GB/s
SATA SSD         ~$0.76/GB   Samsung EVO Pro  US$76               <millisecond  ~1.2 GB/s*
* Read warranties carefully before buying - Samsung EVO vs. Samsung EVO Pro (see the spec sheets below).
Technical Specifications - Samsung SSD 840 PRO Series
Usage Application(s): Client PCs, Enterprise Computing†
Capacity: 128GB, 256GB, 512GB
Dimensions (L x W x H): 100 x 69.85 x 6.8 (mm)
Interface: SATA 6Gb/s (compatible with SATA 3Gb/s and SATA 1.5Gb/s)
Form Factor: 2.5-inch
NAND Flash Memory: Samsung Toggle DDR 2.0 NAND Flash Memory (400Mbps, 2xnm/1xnm)
DRAM Cache Memory: 256MB (128GB model) or 512MB (256GB & 512GB models) LPDDR2
Performance*:
• 4KB Random Read (QD32): Max. 100,000 IOPS (256GB/512GB), Max. 97,000 IOPS (128GB)
• 4KB Random Write (QD32): Max. 90,000 IOPS (128GB/256GB/512GB)
• 4KB Random Read (QD1): Max. 9,900 IOPS (256GB/512GB), Max. 9,800 IOPS (128GB)
• 4KB Random Write (QD1): Max. 31,000 IOPS (128GB/256GB/512GB)
• Sequential Read: Max. 540 MB/s (256GB/512GB), Max. 530 MB/s (128GB)
• Sequential Write: Max. 520 MB/s (256GB/512GB), Max. 390 MB/s (128GB)
TRIM Support: Yes (requires OS support)
Garbage Collection: Yes
S.M.A.R.T: Yes
Encryption: AES 256-bit Full Disk Encryption (FDE)
Weight: Max. 54g (128GB/256GB/512GB)
Reliability: MTBF 1.5 million hours
Power Consumption: Average 0.069W** (typical); Idle 0.054W (typical, DIPM ON), 0.349W (typical, DIPM OFF)
Temperature: Operating 0°C to 70°C; Non-Operating -55°C to 95°C
Humidity: 5% to 95%, non-condensing
Vibration: Non-Operating 20 ~ 2000Hz, 20G
Shock: 1500G & 0.5ms (half sine)
Warranty: 5 years limited (client PC use only)***
System configuration: Intel Core i7-3770 @ 3.4GHz, 4GB DDR3 SDRAM (2GBx2) 1333Mbps; Asus motherboard with Intel 7 Series Z77 Chipset; Windows 7 Ultimate x64 SP1; IRST 11.2, MS performance guide pre-condition.
† For enterprise usage (e.g. servers), a minimum of 6.7% over-provisioning (OP) is recommended.
* Sequential performance measurements based on CrystalDiskMark v.3.0.1. Random performance measurements based on Iometer 2010. Performance may vary based on SSD firmware version, system hardware & configuration.
** Power consumption measured with MobileMark 2007 in Windows 7. Values calculated using a laptop PC and represent system-level power consumption.
*** For enterprise applications, the 5-year limited warranty assumes a maximum average workload of 40GB/day (calculated based on host writes and on the industry standard of 3-month data retention). Workloads in excess of 40GB/day are not covered under warranty.
Technical Specifications - Samsung SSD 840 EVO (data sheet Rev. 1.1, August 2013)
Usage Application: Client PCs*
Capacity: 120GB, 250GB, 500GB, 750GB, 1TB
Dimensions (L x W x H): 100 x 69.85 x 6.8 (mm)
Interface: SATA 6Gb/s (compatible with SATA 3Gb/s and SATA 1.5Gb/s)
Form Factor: 2.5-inch
Controller: Samsung 3-core MEX Controller
NAND Flash Memory: 1x nm Samsung Toggle DDR 2.0 NAND Flash Memory (400Mbps)
DRAM Cache Memory: 256MB (120GB) or 512MB (250GB & 500GB) or 1GB (750GB & 1TB) LPDDR2
Performance**:
• Sequential Read: Max. 540 MB/s
• Sequential Write***: Max. 520 MB/s (250GB/500GB/750GB/1TB), Max. 410 MB/s (120GB)
• 4KB Random Read (QD1): Max. 10,000 IOPS
• 4KB Random Write (QD1): Max. 33,000 IOPS
• 4KB Random Read (QD32): Max. 98,000 IOPS (500GB/750GB/1TB), Max. 97,000 IOPS (250GB), Max. 94,000 IOPS (120GB)
• 4KB Random Write (QD32): Max. 90,000 IOPS (500GB/750GB/1TB), Max. 66,000 IOPS (250GB), Max. 35,000 IOPS (120GB)
TRIM Support: Yes (requires OS support)
Garbage Collection: Yes
S.M.A.R.T: Yes
Security: AES 256-bit Full Disk Encryption (FDE)
Weight: Max. 53g (1TB)
Reliability: MTBF 1.5 million hours
Power Consumption: Average 0.1W**** (typical); Idle 0.045W (typical, DIPM ON)
Temperature: Operating 0°C to 70°C; Non-Operating -55°C to 95°C
Humidity: 5% to 95%, non-condensing
Vibration: Non-Operating 20~2000Hz, 20G
Shock: Non-Operating 1500G, duration 0.5 ms, 3 axis
Etc.: Worldwide Name (WWN), LED indicator support
Warranty: 3 years limited
System configuration: Intel Core i7-3770 @ 3.4GHz, 4GB DDR3 SDRAM (2GBx2) 1333Mbps; Asus motherboard with Intel 7 Series Z77 Chipset; Windows 7 Ultimate x64 SP1; IRST 11.2, MS performance guide pre-condition.
* 840 EVO is not validated for data center usage.
** Sequential performance measurements based on CrystalDiskMark v.3.0.1. Random performance measurements based on Iometer 2010. Performance may vary based on SSD firmware version, system hardware & configuration.
*** Sequential Write performance measurements reflect TurboWrite operation.
**** Power consumption measured with MobileMark 2007 in Windows 7. Values calculated using a laptop computer and represent system-level power consumption.
Check the over-provisioning values (6.7% - can be bumped up). Garbage-collection hiccups are noticeable on consumer-grade SSDs.
Users                   200
Avg IOPS                20
Write %                 70%
Avg block size          4 KB
Work day                8 hours (28,800 seconds)
IOPS/day/user           576,000
Write IOPS/day/user     403,200
KB/user/day             1,612,800 KB
GB/user/day             1.54 GB
GB/day for 200 users    307.62 GB
# of SSDs required      7.69 to stay in warranty
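The arithmetic behind that table is easy to reproduce and to re-run under different assumptions; a shell sketch, using the 40 GB/day figure from the EVO warranty note above:
awk 'BEGIN {
  users = 200; iops = 20; write_pct = 0.70; block_kb = 4; day_s = 8 * 3600
  kb_user_day = iops * day_s * write_pct * block_kb    # 403,200 write IOs -> 1,612,800 KB
  gb_total = kb_user_day / 1024 / 1024 * users         # ~307.6 GB/day for 200 users
  printf "GB/day: %.2f, SSDs to stay in the 40 GB/day warranty: %.2f\n", gb_total, gb_total / 40
}'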
IO Savings from dedup & compression ???
It’s all the same
Two approaches
• Not software vs. hardware
• Actually in-memory vs. persistent
• All driven by a software storage stack on commodity hardware
• Stop calling everything “software defined”
• OK - it’s got an API… Finally.