8/3/2019 Red Hat Storage Software Appliance-3.2-3.2 Release Notes-En-US
Red Hat Storage Software Appliance 3.2
Release Notes
Release Notes for Red Hat Storage Software Appliance
Red Hat Engineering Content Services
Preface
  1. License
  2. Getting Help and Giving Feedback
    2.1. Do You Need Help?
    2.2. We Need Feedback!
1. Introducing Red Hat Storage Software Appliance
2. Key Features
3. System Requirements
4. Downloading and Installing Red Hat Storage Software Appliance
  4.1. Downloading Red Hat Storage Software Appliance
  4.2. Installing Red Hat Storage Software Appliance
5. Known Issues
6. Product Support
7. Product Documentation
A. Revision History
Preface
These Release Notes include the following sections for the 3.2 release of the Red Hat Storage Software Appliance:
Chapter 1, Introducing Red Hat Storage Software Appliance
Chapter 2, Key Features
Chapter 3, System Requirements
Chapter 4, Downloading and Installing Red Hat Storage Software Appliance
Chapter 5, Known Issues
Chapter 6, Product Support
Chapter 7, Product Documentation
Note
It is recommended that you thoroughly review these Release Notes before installing or migrating to Red Hat Storage Software Appliance.
Important
Existing Gluster Storage Software Appliance users can migrate to Red Hat Storage Software
Appliance. For step-by-step instructions on migrating to Red Hat Storage Software Appliance,
see http://download.gluster.com/pub/gluster/RHSSA/3.2/Documentation/UG/html/chap-User_Guide-gssa_migrate.html.
1. License
The license information is available at www.redhat.com/licenses/rhel_rha_eula.html.
2. Getting Help and Giving Feedback
2.1. Do You Need Help?
If you experience difficulty with a procedure described in this documentation, visit the Red Hat
Customer Portal at http://access.redhat.com. Through the customer portal, you can:
search or browse through a knowledgebase of technical support articles about Red Hat products.
submit a support case to Red Hat Global Support Services (GSS).
access other product documentation.
Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and
technology. You can find a list of publicly available mailing lists at https://www.redhat.com/mailman/listinfo. Click on the name of any mailing list to subscribe to that list or to access the list archives.
2.2. We Need Feedback!
If you find a typographical error in this manual, or if you have thought of a way to make this manual
better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/
against the product Documentation.
When submitting a bug report, be sure to mention the manual's identifier: Release_Notes
If you have a suggestion for improving the documentation, try to be as specific as possible when
describing it. If you have found an error, please include the section number and some of the
surrounding text so we can find it easily.
Chapter 1.
Introducing Red Hat Storage Software Appliance
Red Hat Storage Software Appliance 3.2 enables enterprises to treat physical storage as a virtualized, scalable, standardized, scale-on-demand, and centrally managed pool of storage. Enterprises now have the capability to leverage storage resources the same way they leverage computing resources, radically improving storage economics through the use of commodity storage hardware. The appliance's global namespace capability aggregates disk and memory resources into a unified storage volume that is abstracted from the physical hardware. It supports multi-tenancy by partitioning users or groups into logical volumes on shared storage, and scales to petabytes of storage capacity.
Red Hat Storage Software Appliance is POSIX-compliant, so its interface abstracts away vendor-specific APIs and applications need not be modified.
Because performance and capacity scale linearly, you can add capacity in a matter of minutes across a wide variety of workloads without affecting performance. Storage can also be centrally managed across a wide variety of workloads, enabling operations teams to manage storage used for a variety of purposes more efficiently.
The storage software appliance frees users from their dependence on monolithic storage arrays that are high cost, difficult to deploy, and hard to manage. With a storage software appliance, enterprises can deploy commodity storage hardware and realize superior economics.
Figure 1.1. Red Hat Storage Software Appliance Architecture
The heart of Red Hat Storage Software Appliance is GlusterFS, an open source distributed filesystem
distinguished by multiple architectural differences, including a modular, stackable design and a unique,
no-metadata server architecture. Eliminating the metadata server provides better performance,
improved linear scalability, and increased reliability.
Chapter 2.
Key Features
This section describes the key features available in Red Hat Storage Software Appliance. The following are the feature highlights of this release:
Elastically scale storage with no downtime or application interruption
High Availability support via N-way replication
Scale availability, performance and capacity linearly and independently
No-metadata server eliminates performance bottleneck and ensures linear scalability
Utilization and performance monitoring, measuring and reporting
No changes to applications or management tools
Aggregate CPU, memory, network & disk resources
Scale-out capacity and performance as needed
For deployments that value scale-out architectures and speed
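As an illustration of the N-way replication highlighted above, a two-way replicated volume can be created and started with the gluster CLI. This is a sketch only; the volume, host, and brick names are hypothetical and must match your own deployment:

```shell
# Create a 2-way replicated volume across two nodes (hypothetical names),
# then start it. Each brick holds a full copy of the data.
gluster volume create repvol replica 2 \
    server1:/exports/brick1 server2:/exports/brick1
gluster volume start repvol
```

Higher replica counts trade capacity for availability; each additional replica is another full copy of the data.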
Chapter 3.
System Requirements
Before you install Red Hat Storage Software Appliance, verify that your environment meets the minimum requirements described in this section.
General Configuration Considerations
The system must meet the following general requirements:
Centralized time servers are available (required in clustered environments)
For example, ntpd - Network Time Protocol (NTP) Daemon
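As a quick check that a node is actually synchronizing to the centralized time servers, the ntpd peer list can be inspected: a peer line flagged with a leading `*` is the currently selected sync source. A minimal sketch (assumes the standard `ntpq` tool from the ntp package is installed):

```shell
# Reads `ntpq -pn`-style peer output on stdin; a leading '*' marks the
# peer that ntpd has selected for synchronization.
check_ntp_sync() {
  if grep -q '^\*'; then
    echo "synchronized"
  else
    echo "not synchronized"
  fi
}

# Typical use on a storage node:
#   ntpq -pn | check_ntp_sync
```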
File System Requirements
Red Hat recommends XFS when formatting the disk sub-system. XFS supports metadata journaling,
which facilitates quicker crash recovery. The XFS file system can also be defragmented and enlarged
while mounted and active.
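An illustrative formatting-and-mount transcript for a data disk follows. The device and mount point names are hypothetical, and the larger inode size shown is an assumption (it is a common choice when extended attributes are used heavily, as GlusterFS does); run as root on a dedicated data disk only:

```
# mkfs.xfs -i size=512 /dev/sdb1
# mkdir -p /export/brick1
# mount -t xfs /dev/sdb1 /export/brick1
```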
For existing Gluster Storage Software Appliance customers who are upgrading to Red Hat Storage Software Appliance, the Ext3 and Ext4 file systems are supported.
Cluster Requirements
Minimum of four SSA nodes and maximum of 64 SSA nodes.
Larger configurations are supportable, but require an exception.
Initial cluster deployment can be of heterogeneous nodes.
Configuration upgrades can support a mixture of node sizes.
For example, adding a node with 4 TB drives to a cluster with 2 TB drives. However, in a replicated configuration, nodes must be added in pairs so that a node and its replica are the same size.
Depending on whether the cluster is used for High Performance Computing (HPC), General Purpose, or Archival workloads, the table below lists the supported configurations.
Component: Chassis (only applicable with SuperMicro)
  HPC: 2U 24x 2.5" hotswap with redundant power
  General Purpose: 2U 12x 3.5" hotswap with redundant power
  Archival: 4U 36x 3.5" hotswap with redundant power
Component: Processor
  HPC: Dual-socket hexa-core Xeon
  General Purpose: Dual-socket hexa-core Xeon
  Archival: Dual-socket hexa-core Xeon
Component: Disk
  HPC: 24x 2.5" 15K RPM SAS
  General Purpose: 12x 3.5" or 24x 2.5" SFF 6 Gb/s SAS
  Archival: 36x 3.5" 3 Gb/s SATA II
Component: Minimum RAM
  HPC: 48 GB
  General Purpose: 32 GB
  Archival: 16 GB
Component: Networking
  HPC: 2x 10 GigE
  General Purpose: 2x 10 GigE (preferred) or 2x 1 GigE
  Archival: 2x 10 GigE (preferred) or 2x 1 GigE
Component: Maximum number of JBOD attachments
  HPC: 0
  General Purpose: 2
  Archival: 4
Component: Supported Dell model
  HPC: R510
  General Purpose: R510
  Archival: R510
Component: Supported HP model
  HPC: DL-180, DL-370, DL-380
  General Purpose: DL-180, DL-370, DL-380
  Archival: DL-180
Component: JBOD support
  HPC: NA
  General Purpose: Dell MD-1200, HP D-2600, HP D-2700
  Archival: Dell MD-1200, HP D-2600, HP D-2700
Note
The boot device must be 1.4 GB or larger. All data disks are configured in groups of 12 drives in a RAID 6 configuration. InfiniBand support is available on an exception basis only.
Networking Requirements
Verify either of the following:
Gigabit Ethernet
10 Gigabit Ethernet
Compatible Hardware
For successful installation of Red Hat Storage Software Appliance 3.2, you must select your hardware
from the Supported Dell, Supported HP, or Supported SuperMicro list of models.
Dell Supported Configurations
Table 3.1. Dell Supported Configurations

Component: Chassis
  Recommended: Redundant power configuration
  Supported: R510, R710 (Intel 5520 chipset)
  Unsupported: All other Dell models (supported by exception only)
Component: Processor
  Recommended: Dual six-core processors
  Supported: Intel Xeon X5690 (3.46 GHz), X5680 (3.33 GHz), X5675 (3.06 GHz), X5660 (2.80 GHz), X5650 (2.66 GHz), E5649 (2.53 GHz), E5645 (2.40 GHz), L5640 (2.26 GHz), and any faster versions of six-core processors
  Unsupported: Quad-core processors, single-socket configurations, AMD-based servers
Component: Memory
  Recommended: 32 GB
  Supported: 24 GB minimum, 64 GB maximum
Component: RAID
  Supported: PERC 6/E SAS 1 GB/512, PERC H800 1 GB/512
  Unsupported: Dell single-channel Ultra SCSI
Component: System Disk
  Supported: 2x 200 GB minimum (mirrored), 7.2K or 10/15K RPM
Component: Data Disk
  Unsupported: SSD, SFF drives
HP Supported Configurations
Table 3.2. HP Supported Configurations

Component: Chassis
  Recommended: Either supported model, with redundant power configuration
  Supported: DL-180 G6, DL-370 G7, DL-380 G7 (Intel 5520 chipset)
  Unsupported: All other HP models (supported by exception only)
Component: Processor
  Recommended: Dual six-core processors
  Supported: Intel Xeon X5690 (3.46 GHz), X5680 (3.33 GHz), X5675 (3.06 GHz), X5660 (2.80 GHz), X5650 (2.66 GHz), E5649 (2.53 GHz), E5645 (2.40 GHz), L5640 (2.26 GHz), and any faster versions of six-core processors
  Unsupported: Quad-core processors, single-socket configurations, AMD-based servers
Component: Memory
  Recommended: 32 GB
  Supported: 16 GB minimum, 128 GB maximum
Component: RAID
  Recommended: HP Smart Array P410/512 with FBWC; Smart Array Advanced Pack (SAAP)
  Supported: HP Smart Array P410/256 with FBWC or HP Smart Array P410/512 with FBWC; Smart Array Advanced Pack (SAAP)
  Unsupported: HP Smart Array B110i, HP Smart Array P212, HP Smart Array P410 with BBWC
Component: System Disk
  Supported: 2x 200 GB minimum (mirrored), 7.2K or 10/15K RPM
Component: Data Disk
  Unsupported: SSD, SFF drives
Chapter 4.
Downloading and Installing Red Hat Storage Software Appliance
This chapter provides information on downloading and installing the Red Hat Storage Software Appliance.
4.1. Downloading Red Hat Storage Software Appliance
You can download the latest Red Hat Storage Software Appliance from https://access.redhat.com.
4.2. Installing Red Hat Storage Software Appliance
You can install Red Hat Storage Software Appliance using a USB stick, an ISO image, or a PXE boot.
The step-by-step installation process is available at http://download.gluster.com/pub/gluster/RHSSA/3.2/Documentation/UG/html/chap-User_Guide-gssa_install.html.
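For the USB option, a raw image copy is the usual approach. An illustrative transcript follows; the ISO file name and target device are hypothetical, and you should double-check the device with a tool such as fdisk -l before writing, because dd overwrites it entirely:

```
# dd if=RHSSA-3.2.iso of=/dev/sdX bs=4M
# sync
```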
Chapter 5.
Known Issues
The following are the known issues:
Issues related to Distributed Replicated Volumes:
When a process has changed directory (cd) into a directory, a stat of a deleted file recreates it (directory self-heal is not triggered).
In a GlusterFS replicated setup, suppose you are inside a directory (for example, a Test directory) of a replicated volume, and a file inside that directory is deleted from another node. If you then perform a stat operation on the deleted file's name, the file is automatically recreated; that is, proper directory self-heal is not triggered when a process has changed into that path.
Open fd self-heal blocks I/O on the fd.
While self-heal runs on open file descriptors in a replicated volume, I/O operations on those file descriptors may be blocked.
Issues related to Distributed Volumes:
Rebalance does not happen if bricks are down.
Currently, before running a rebalance, make sure all bricks are in an operating or connected state.
Rebalance can migrate data to an already filled subvolume.
The current rebalance algorithm does not consider the free space on the target brick before migrating data. This enhancement is under development and will be available shortly.
There may be minor I/O glitches while a rebalance operation is in progress. The live rebalance feature will be available in upcoming 3.3.x releases. It is recommended that you perform rebalance operations when no critical I/O operations are happening.
glusterfsd - The error return code is not correct after daemonizing the process.
Because of this, scripts that mount glusterfs or start a glusterfs process must not depend on its return value.
glusterd - Parallel rebalance
With the current rebalance mechanism, the machine issuing the rebalance becomes a bottleneck, as all data migrations happen through that machine.
Parallel operations (add brick, remove brick, and so on) issued from the CLI on different nodes can crash glusterd.
After the # gluster volume replace-brick VOLNAME BRICK NEW-BRICK commit command is issued, in-transit file system operations on that particular volume will fail.
The # gluster volume replace-brick ... command will fail in an RDMA setup.
If files and directories have different GFIDs on different backends, GlusterFS client may hang or
display errors.
Workaround: The workaround for this issue is explained at http://gluster.org/pipermail/gluster-users/2011-July/008215.html.
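One of the issues above notes that glusterfsd's exit status cannot be trusted after it daemonizes, so mount scripts should verify the mount independently. A minimal sketch (hypothetical mount point) that checks the kernel mount table instead of the return code:

```shell
# Returns 0 if the given mount point appears as a fuse.glusterfs entry
# in /proc/mounts-style input read from stdin.
is_glusterfs_mounted() {
  awk -v mp="$1" '$2 == mp && $3 == "fuse.glusterfs" { found = 1 } END { exit !found }'
}

# Typical use after `mount -t glusterfs server:/VOLNAME /mnt/gluster`:
#   is_glusterfs_mounted /mnt/gluster < /proc/mounts || echo "mount failed"
```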
Downgrading from 3.2.x to 3.1.x
If you are using 3.2.x, new features (that is, new translators) are enabled in the default volume files. After a downgrade, older versions fail to understand the new options and translators, and fail to start.
Workaround: Before starting the downgrade procedure, run the following commands:
# gluster volume reset VOLNAME force
# gluster volume geo-replication MASTER SLAVE stop
Now you can downgrade to 3.1.x.
After downgrading, run any parameter-changing operation on the volume so that the volume files are regenerated. For example, run # gluster volume set VOLNAME read-ahead off and then # gluster volume set VOLNAME read-ahead on.
Issues related to Directory Quota:
Some writes can appear to succeed even though the quota limit is exceeded (the write returns success), because the writes may be cached in write-behind. However, disk usage will not exceed the quota limit, because quota disallows the writes when they reach the backend. Applications are therefore advised to check the return value of the close call.
If a user has changed directory (cd) into a directory on which the administrator is setting a limit, the command succeeds, but the new limit value applies to all users except those who are inside that particular directory. The old limit value applies to such users until they change out of that directory.
A rename operation (that is, removing oldpath and creating newpath) requires additional disk space equal to the file size. This is because, during a rename, quota subtracts the size of oldpath only after the rename is performed, but checks whether the limit is exceeded on the parents of newpath before the rename.
The Quota feature is not available with striped volumes.
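Because quota violations can surface only when cached writes are flushed, as described above, scripts should judge success by the final exit status of the whole copy, not merely that it started. A minimal sketch, relying on cp(1) reporting write/close failures in its exit status:

```shell
# Copies $1 to $2 and reports whether the data was fully committed;
# a quota rejection during the flush shows up as a non-zero cp status.
safe_copy() {
  if cp "$1" "$2"; then
    echo "copy committed"
  else
    echo "copy failed (possibly quota exceeded)" >&2
    return 1
  fi
}
```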
Issues related to POSIX ACLs:
Even though POSIX ACLs are set on a file or directory, the + (plus) sign is not displayed in the file permissions. This is a performance optimization and will be fixed in a future release.
When glusterfs is mounted with the -o acl option, directory read performance can be poor. Commands like recursive directory listing can be slower than normal.
When POSIX ACLs are set and multiple NFS clients are used, there could be inconsistency in the
way ACLs are applied due to attribute caching in NFS. For a consistent view of POSIX ACLs in
a multiple client setup, use -o noac option on NFS mount to switch off attribute caching. This
could have a performance impact on operations involving attributes.
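A sketch of the NFS mount described above (the server and volume names are hypothetical): vers=3 matches the NFSv3 server built into GlusterFS of this era, and noac disables attribute caching so all clients see ACL changes consistently.

```
# mount -t nfs -o vers=3,noac server1:/VOLNAME /mnt/nfsvol
```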
The following are a few known missing (minor) features:
Locks: mandatory locking is not supported.
NLM (Network Lock Manager) is not supported.
Chapter 6.
Product Support
You can reach support at http://www.redhat.com/support.
Chapter 7.
Product Documentation
Product documentation for Red Hat Storage Software Appliance is available at http://www.gluster.com/community/documentation/index.php/Main_Page.
Appendix A. Revision History
Revision 1-9 Mon Dec 20 2011 Divya Muntimadugu [email protected]
Updated Release Notes for File System updates.
Revision 1-1 Fri Nov 18 2011 Daniel Macpherson [email protected]
Transfer of book to Red Hat Documentation site