
Dell EMC Storage with Verint Nextiva Configuration Guide

Surveillance

June 2018

H14898.3

Abstract

This guide provides configuration instructions for installing the Verint Nextiva video management software using Dell EMC storage platforms.

Dell EMC Solutions

Copyright © 2016-2018 Dell Inc. or its subsidiaries. All rights reserved.

Published June 2018

Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS-IS." DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.

Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA.

Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.DellEMC.com


CONTENTS

Chapter 1: Introduction
  Purpose
  Scope
  Assumptions

Chapter 2: Configuring the solution
  Design concepts
  EMC VNX
    Disk drives
    Storage pool configuration (recommended)
    LUN configuration
    Fibre Channel configuration
    VNXe RAID configuration
    iSCSI initiators
    Recommended cache configuration
  Isilon (NAS)
    OneFS 8.1 job workers (required)
    Large file system, small view (SmartQuotas)
    Configuring SmartQuotas (recommended)
    Unique share naming
    Link aggregation
    SMB 3.0 MultiChannel
    Configuring SmartConnect (optional)
    I/O optimization configuration
    Configuring authentication and access control
  Client connections and Load Balancing
    Manually re-balancing recorders across nodes
  DNS Cache
  Increase the Memory-MaxFrameBuffer on the Verint Recorder
  Microsoft Multipath I/O
  Releases tested
  VMware ESXi requirements and recommendations

Chapter 3: Conclusion
  Summary

CHAPTER 1

Introduction

This chapter presents the following topics:

- Purpose

- Scope

- Assumptions


Purpose

This configuration guide aims to help Dell EMC field personnel understand how to configure Dell EMC storage system offerings to simplify the implementation of Verint Nextiva. This document is not a replacement for the Verint implementation guide, nor is it a replacement for the Dell EMC Storage with Verint Nextiva: Sizing Guide.

Scope

This guide is intended for internal Dell EMC personnel and qualified Dell EMC and Verint partners. It provides configuration instructions for installing the Verint Nextiva video management software using Dell EMC storage platforms.

The following Dell EMC storage systems have been tested:

- Dell EMC Isilon™

- EMC VNX™

This guide supplements the standard EMC VNX Storage Best Practices with Video Management Systems: Configuration Guide and Dell EMC Isilon Storage Best Practices with Video Management Systems: Configuration Guide and provides configuration information specific to Verint Nextiva.

Note

All performance data in this guide was obtained in a rigorously controlled environment. Performance varies depending on the specific hardware and software used.

Assumptions

This solution assumes that internal Dell EMC personnel and qualified Dell EMC partners are using this guide with an established architecture.

This guide assumes that the Dell EMC partners who intend to deploy this solution are:

- Associated with product implementation

- Verint-certified to install Verint Nextiva services

- Proficient in installing and configuring Unity storage solutions

- Proficient in installing and configuring VNX storage solutions

- Proficient in installing and configuring Isilon storage solutions

- Familiar with installing and configuring VMware hypervisors and the appropriate operating system, such as Microsoft Windows or a Linux distribution

- Able to access the Dell EMC Block Storage with Video Management Systems Best Practices: Configuration Guide and Dell EMC Isilon Storage with Video Management Systems Best Practices: Configuration Guide

The configurations that are documented in this guide are based on tests that we conducted in the Dell EMC Surveillance Lab using worst-case scenarios to establish a performance baseline. Lab results might differ from individual production implementations.


CHAPTER 2

Configuring the solution

This chapter presents the following topics:

- Design concepts

- EMC VNX

- Isilon (NAS)

- Client connections and Load Balancing

- DNS Cache

- Increase the Memory-MaxFrameBuffer on the Verint Recorder

- Microsoft Multipath I/O

- Releases tested

- VMware ESXi requirements and recommendations


Design concepts

There are many design options for a Verint Nextiva implementation. Verint offers many training courses related to design and implementation. These design details are beyond the scope of this paper.

The Nextiva VMS System Planning Guide provides the information that you need to plan a Nextiva VMS system and complements the Nextiva VMS Customer-Furnished Equipment Guide and the Nextiva VMS Verint-Supplied Equipment Guide.

These guides are intended for systems integrators and architects, network IT planners, and system administrators. These guides assume that readers know what Nextiva Video Management Software (VMS) does and how it works, and know how to deploy and configure Windows IP networks. These documents are available from a Verint partner or through the Verint Partner network.

In the Nextiva VMS 6.3 SP2 System Planning Guide, Verint recommends a segregated implementation. A common segregated implementation example could consist of a user network, a camera network, and a storage network. Other considerations covered in the planning guide include multicast, third-party software, ports used by Nextiva, and other important information. This white paper is not intended to replace or supersede any Verint document.

The following figure represents the basic configuration that was tested in our lab.

Figure 1 Verint Nextiva architecture

EMC VNX

VNX storage is ideal for recording and managing terabytes of video from distributed locations. This section describes best practices for configuring a VNX storage system for this solution.

The VNX family includes the VNX and VNX-VSS series arrays. The VNX series is designed for midtier to enterprise storage environments, is ideal for distributed environments, and can scale to handle large petabyte (PB) environments with block-only requirements at central locations.

Disk drives

Although any supported drive will work, video surveillance systems typically rely on the density of the array. Dell EMC recommends NL-SAS drives of the highest available density in this solution. In general, we used one-terabyte (TB) or multi-TB NL-SAS drives when performing our tests.

Note

Because of the high percentage of sequential, large block writes, Dell EMC does not recommend using flash drives for video storage within a surveillance application.

Storage pool configuration (recommended)

The tests we conducted show that storage pools defined with the maximum allowable number of disks per pool perform as well as, or better than, traditional RAID groups. Therefore, Dell EMC recommends that you use storage pools rather than RAID groups. Storage pools also reduce the required array management tasks.

The VNX family array architecture is optimized for storage pools. A storage pool is a construct that is built over one, or more commonly multiple, RAID groups. LUNs are built on top of the storage pool. The read/write activity is a random distribution across all disks defined to the storage pool. This distribution results in increased and balanced per-disk utilization and improved performance when compared to traditional RAID implementations.

The RAID groups underlying storage pools can be either RAID 5 or RAID 6. The default and recommended RAID configuration for a VNXe or VSS1600 array using NL-SAS drives is RAID 6. Either RAID 5 or RAID 6 can be used with VNX arrays. RAID 5 is used for optimizing the array to achieve the maximum amount of storage, and RAID 6 is used for enhancing data protection. Our tests using an isolated surveillance infrastructure did not reveal any notable performance variances when using RAID 5 as compared to RAID 6.

Building a storage pool is a straightforward process. You can configure either RAID 5 or RAID 6 pools depending on the VNX storage system restrictions and the level of risk that the customer is willing to accept. When configuring storage pools, use large storage pools with large logical unit number (LUN) sizes, and configure the LUNs as thick. Do not use thin LUN provisioning.

Dell EMC recommends the following RAID configurations for VNX arrays:

- RAID 5 or RAID 10 with SAS drives

- RAID 6 with NL-SAS drives

Procedure

1. In Unisphere, select Storage > Storage Pools for block.

2. Click Create under Pools in the Pools section.

3. Set the following options for the storage pool:

- Storage pool name

- RAID type

- Number of SAS drives

- Number of NL-SAS drives

4. Choose a method for selecting disks to include in the storage pool:

- Automatic: Provides a list of available disks.

- Manual: Enables you to select specific disks to include in the storage pool from a list of available disks. Be sure to clear the automatic disk recommendation list before you select new disks from the list.


5. Select Perform a Background verify on the new storage and set the priority to medium.

6. Click Apply, and then click YES to create the storage pool.

LUN configuration

A VNX pool LUN is similar to a classic LUN. Pool LUNs comprise a collection of slices. A slice is a unit of capacity that is allocated from the private RAID groups to the pool LUN when it needs additional storage. Pool LUNs can be thin or thick.

Thin LUNs typically have lower performance than thick LUNs because of the indirect addressing. The mapping overhead for a thick LUN is less than for a thin LUN.

Thick LUNs have more predictable performance than thin LUNs because they assign slice allocation at creation. Because thick LUNs do not provide the flexibility of oversubscribing like a thin LUN, use thick LUNs for applications where performance is more important than saving space.

Thick and thin LUNs can share the same pool, enabling them to have the same ease of use and benefits of pool-based provisioning.

Procedure

1. In Unisphere, right-click a storage pool and then click Create LUN.

2. Type the user capacity for the LUN.

3. Type the starting LUN ID, and then select the number of LUNs to create.

For example, if the selected LUN ID is 50, and the selected number of LUNs to create is 3, the names for the LUNs are 50, 51, and 52.

4. Select Automatically assign LUN IDs as LUN names.

5. Click Apply.

Fibre Channel configuration

To transfer traffic from the host servers to shared storage, the storage area network (SAN) uses the Fibre Channel (FC) protocol, which packages SCSI commands into FC frames.

Note

iSCSI is prevalent for video security implementations because it often provides a lower-cost option when compared to FC.

To restrict server access to storage arrays that are not allocated to the server, the SAN uses zoning. Typically, zones are created for each group of servers that access a shared group of storage devices and LUNs. A zone defines which HBAs can connect to specific storage processors (SPs). Devices outside a zone are not visible to the devices inside the zone.

Zoning is similar to LUN masking, which is commonly used for permission management. LUN masking is a process that makes a LUN available to some hosts and unavailable to other hosts.

Zoning provides access control in the SAN topology. Zoning defines which HBAs can connect to specific targets. When you use zoning to configure a SAN, the devices outside a zone are not visible to the devices inside the zone.

Zoning has the following effects:


- Reduces the number of targets and LUNs presented to a host

- Controls and isolates paths in a fabric

- Prevents non-ESXi systems from accessing a particular storage system and from possible virtual machine file system (VMFS) data loss

- Optionally, separates different environments, such as test and production environments

With VMware ESXi hosts, use single-initiator zoning or single-initiator-single-target zoning. The latter is the preferred zoning practice because it is more restrictive and prevents problems and misconfigurations that can occur on the SAN.

VNXe RAID configuration

VNXe offers RAID 5, RAID 6, and RAID 10 configurations. Different configurations offer different types of protection against disk failures.

Dell EMC recommends the following RAID configurations:

- RAID 5 or RAID 10 with SAS drives

- RAID 6 with NL-SAS drives

iSCSI initiators

Software or hardware initiators may be used with a VMware ESXi server or a non-virtualized server.

Microsoft Internet SCSI (iSCSI) initiators

For both physical servers and VMware ESXi servers, the Dell EMC Surveillance Lab uses Microsoft iSCSI initiators with excellent results.

Hardware iSCSI initiators

Hardware iSCSI initiators can be used. There are many iSCSI initiators available on the market, and results might vary.

Recommended cache configuration

EMC VNX generation 2 systems, such as the VNX5200 or VNX5400, manage the cache. If the array is shared with other applications, you can use a lower write cache value, but avoid excessive forced flushes.

Dell EMC recommends that you configure the cache as 90 percent write and 10 percent read if the storage array does not automatically adapt to the write characteristics of video surveillance (for example, EMC VNX5500 or EMC VNX-VSS100).

Isilon (NAS)

The Isilon scale-out network-attached storage (NAS) platform combines modular hardware with unified software to harness unstructured data. Powered by the distributed Isilon OneFS™ operating system, an Isilon cluster delivers a scalable pool of storage with a global namespace.

The platform's unified software provides centralized web-based and command-line administration to manage the following features:

- A symmetrical cluster that runs a distributed file system


- Scale-out nodes that add capacity and performance

- Storage options that manage files and tiering

- Flexible data protection and high availability

- Software modules that control costs and optimize resources

To maximize caching performance for surveillance workloads, the Dell EMC Surveillance Lab recommends using two SSD system drives per node in clusters where it is supported, such as the NL-series.

OneFS 8.1 job workers (required)

OneFS can be tuned to provide optimal bandwidth, performance, or operating characteristics. Starting with OneFS 8.1, the Dell EMC Surveillance Lab achieved optimum resilience when the number of job workers increases slowly per job phase.

From the CLI, to modify the job workers to 0 per core:

isi_gconfig -t job-config impact.profiles.medium.workers_per_core=0

Large file system, small view (SmartQuotas)

Although it is possible to assign the full Isilon cluster file system to a single Verint Recorder, the Dell EMC best practice is to use SmartQuotas™ to segment the single Isilon file system so that each Recorder has a logical subset view of storage.

There are three directory-level quota systems:

Advisory limit

Lets you define a usage limit and configure notifications without subjecting users to strict enforcement.

Soft limit

Lets you define a usage limit, configure notifications, and specify a grace period before subjecting users to strict enforcement.

Hard limit (recommended)

Lets you define a usage limit for strict enforcement and configure notifications. For directory quotas, you can configure storage users' view of space availability as reported through the operating system.

Use the Hard limit quota system to set the video storage as a defined value.

If necessary, both Isilon and the Verint Recorder can add or subtract storage, even if a hard quota is set.

Configuring SmartQuotas (recommended)

The SmartQuotas feature enables you to limit the storage that is used for each Verint Recorder. It presents a view of available storage that is based on the assigned quota to the Recorder. SmartQuotas enables each Recorder to calculate its available disk space and react appropriately.

Without SmartQuotas, the Nextiva administrator must anticipate the total write rate to the cluster and adjust the Min Free Space on each Recorder accordingly. A miscalculation can result in lost video. SmartQuotas resolves the issues that can be caused by manual calculations.


Configure SmartQuotas when more than one Recorder is writing to the Isilon cluster, or when other users share the cluster. Enable SmartQuotas and define a quota for each share or directory.

Configure the SmartQuotas setup with the following settings:

- Configure a hard share limit threshold for the Recorder video files.

- Define OneFS to show and report the available space as the size of the hard threshold.

- Set the usage calculation method to show the user data only.

Procedure

1. From the OneFS GUI, select File System Management > SmartQuotas.

2. For each listed share, select View details.

3. Under Usage Limits, select Edit usage limits.

4. Define the SmartQuotas limit and set the threshold:

a. Select Specify Usage Limits.

b. Select Set a hard limit.

c. Type the hard limit value.

d. Select the size qualifier, typically TB.

e. Select the size of the hard threshold.

5. Click Save.

6. Repeat the process for the remaining shares.
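The GUI steps above can also be performed from the OneFS command line. The following is a hedged sketch assuming OneFS 8.x `isi quota` syntax; the directory path and 10 TB threshold are placeholder values, so verify the flags against your OneFS release before use:

```shell
# Sketch only: create a hard directory quota for one Recorder share.
# /ifs/surveillance/recorder01 and 10T are hypothetical values.
isi quota quotas create /ifs/surveillance/recorder01 directory \
    --hard-threshold=10T \
    --container=true \
    --enforced=true

# Confirm the quota was created.
isi quota quotas list
```

With `--container=true`, OneFS reports the quota size, rather than the full cluster capacity, as the available space on the share, which matches the hard-threshold reporting setting described above.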

Unique share naming

When working with a single file system, each Recorder uses the time and date as part of its directory and file-naming conventions.

To avoid corruption caused by overwriting or grooming (deleting) files prematurely, create a unique share for each Recorder.
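As an illustrative sketch, per-Recorder directories and SMB shares might be created from the OneFS CLI as follows; the share names and paths are hypothetical, and the `isi smb shares create` flags should be checked against your OneFS version:

```shell
# Sketch only: one dedicated directory and SMB share per Recorder.
mkdir -p /ifs/surveillance/recorder01
mkdir -p /ifs/surveillance/recorder02

isi smb shares create Recorder01 --path=/ifs/surveillance/recorder01
isi smb shares create Recorder02 --path=/ifs/surveillance/recorder02
```

Each Recorder is then pointed at its own UNC path (for example, \\cluster\Recorder01), so no two Recorders groom or overwrite files in the same directory.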

Link aggregation

The active/passive configuration involves aggregating the NIC ports on the Isilon nodes for high availability. If one of the ports on the node or switch port fails, the Nextiva Recorder can continue writing to the Isilon share using the other port connection without affecting the recording. The SMB share continues to be accessible to the server using the passive connection port.

NIC aggregation can be used to reduce the possibility of video loss from a cable pull, NIC failure, or switch port issue. Dell EMC recommends NIC aggregation, also known as link aggregation, in an active/passive failover configuration. This method transmits all data through the master port, which is the first port in the aggregated link. If the master port is unavailable, the next active port in an aggregated link takes over.


Figure 2 Isilon Active/Passive and Active/Active configuration

SMB 3.0 MultiChannel

Support for the Multichannel feature of SMB 3.0, which establishes a single SMB session over multiple network connections, was introduced in OneFS 7.1.1. SMB Multichannel enables increased throughput, connection failure tolerance, and automatic discovery.

To take advantage of this new feature, client computers must be configured with Microsoft Windows 8 or later, or Microsoft Windows Server 2012 or later, with supported network interface cards (NICs) and SMB3 enabled.

SMB Multichannel allows file servers to use multiple network connections simultaneously and provides the following capabilities:

Increased throughput

OneFS can transmit more data to a client through multiple connections over a high-speed network adapter or over multiple network adapters.

Connection failure tolerance

When using an SMB Multichannel session over multiple network connections, clients can continue to work uninterrupted despite the loss of a network connection.

Automatic discovery

SMB Multichannel automatically discovers supported hardware configurations on the client that have multiple available network paths, and then negotiates and establishes a session over multiple network connections. You are not required to install components, roles, role services, or features.

Configuring SmartConnect (optional)

SmartConnect™ uses the existing Domain Name Service (DNS) server and provides a layer of intelligence within the OneFS software application.

The resident DNS server forwards the lookup request for the delegated zone to the delegated zone's server of authority, which is the SmartConnect Service IP (SIP) address on the cluster. If the node providing the SmartConnect service becomes unavailable, the SIP address automatically moves to a different node in the pool.


Connections are balanced across the cluster, which ensures optimal resource utilization and performance. If a node goes down, SmartConnect automatically removes the node's IP address from the available list of nodes, ensuring that a connection is not tried with the unavailable node. When the node returns to service, its IP address is added to the list of available nodes.

The delegated server authority is always the node with the lowest ID, unless it has surrendered its authority status, either voluntarily or involuntarily. This node should always be available, but if the status of the node changes and becomes unavailable, it voluntarily surrenders its role as server of authority.

You must add a delegation Name Server (NS) entry to the resident DNS server for the SmartConnect name, which points to the SIP address as the Name Server. In your DNS Manager, create a New Delegation using your SmartConnect zone name. In the Microsoft DNS wizard, a New Delegation record is added in the forward lookup zone for the parent domain.

SmartConnect balances connection loads to the Isilon cluster and handles connection failover. With SmartConnect, all Verint Recorders use a single fully qualified domain name (FQDN) or universal naming convention (UNC) path for video storage access. Using this network name provides load balancing when the connection to the cluster is made and simplifies installations.

SmartConnect Basic can use a round-robin-type connection allocation, which is based on DNS load balancing.

SmartConnect Advanced can include multiple pools for each subnet, dynamic IP addresses for NFS, and the following load-balancing options (Connection policy and Rebalance policy):

Round-robin (recommended)

Sequentially directs a connection to the next Isilon IP address in the cycle. Based on field reports, this option works well with 20 servers or more.

Connection count

Provides uniform distribution of the Verint Recorder servers to specified nodes in the Isilon cluster. Use a unique IP address pool for video recording and Recorder read/write access.

Network throughput

Based on NIC utilization. Use of throughput requires that each Recorder is activated, configured, and recording video after it connects to Isilon.

CPU usage

Uses the node CPU utilization to determine which Isilon IP address to assign to the next connection request.

Ensure that no other service uses the Recorder IP address pool. Define additional pools for management (such as Isilon InsightIQ™ or administrative access), evidence repository, post process, or other use.

Procedure

1. Select Networking Configuration.

2. Under Subnet > Settings, define the SmartConnect service IP (SSIP) address. The SSIP address is the IP address that the DNS uses for the Isilon Authoritative name service.

3. Under Pool settings:

a. Define the SmartConnect zone name, which is the name to which clients connect.


b. Define the SmartConnect service subnet (the subnet that has the SSIP configured on the DNS server).

c. Set the connection balancing policy to Round Robin.

d. Set the IP allocation strategy to Static.

4. Verify this configuration on the SmartConnect dashboard.
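The same pool settings can be expressed from the OneFS CLI. This is a sketch under assumed OneFS 8.x `isi network` syntax; the SSIP, IP range, zone name, and groupnet/subnet/pool identifiers are all placeholders, so confirm the exact flags on your cluster before running anything:

```shell
# Sketch only: define the SmartConnect service IP on the subnet.
isi network subnets modify groupnet0.subnet0 \
    --sc-service-addr=192.168.50.5

# Sketch only: create a Recorder pool with round-robin balancing
# and static IP allocation, as recommended above.
isi network pools create groupnet0.subnet0.recorders \
    --ranges=192.168.50.10-192.168.50.50 \
    --sc-dns-zone=rec.example.com \
    --sc-connect-policy=round_robin \
    --alloc-method=static
```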

I/O optimization configuration

As of OneFS 7.0.x, no changes are necessary to the I/O profiles for the directories that are used for Verint.

Note

This setting does not require a SmartPool license.

Configuring authentication and access control

We conducted authentication and access control tests to determine the best method for shared access.

The following three tests were conducted:

Full Active Directory (recommended)

Where the Nextiva server and the Isilon cluster are part of the same Windows domain.

Partial Active Directory

Where the Nextiva servers are part of the Windows domain, but the Isilon cluster is administered locally.

Fully locally administered control

Where the Nextiva servers and the Isilon cluster are administered locally.

Alternatives to the previous methods might exist, but the Dell EMC Surveillance Lab team does not plan to derive or support other methods.

Procedure

1. Select Cluster Management > Access Management.

2. Select Access zone and ensure that the System access zone has the provider status Active Directory, Local, and File marked with a green dot.

3. Under Active Directory, select Join a domain and add the Windows domain and appropriate users using one of the following options:

- When the Isilon cluster and Verint are not part of the same domain, set the shares to Run as Root. This setting is not ideal from a security perspective.

- When the Isilon cluster and Nextiva server are part of the same domain, configure the DVM Camera service to use the Domain account with read/write permissions to the Isilon cluster share. During the initial installation of the camera server, use the Nextiva administrator account specification wizard to configure the camera service. Specify the recording location for the camera server using the full UNC path of the Isilon share.

Client connections and Load Balancing

During a node or NIC failure, it is possible that all of the recorders on the failed node might reconnect to a single available node. This connection issue is specific to CA-enabled (continuously available) shares. If this connection issue occurs, you can manually re-balance the cluster.

Manually re-balancing recorders across nodes

After any activity that causes recorders to move between Isilon nodes, the recorder-to-node ratio can become unbalanced.

A recorder can be moved manually from the existing node to another node in the cluster to re-balance the node. You might have to perform the following procedure multiple times to get the recorder to the desired node.

When manually rebalancing a cluster, you can change the connection load balancing algorithm to Connection Count. When using Connection Count, use the isi smb sessions command to force the systems that you want to move. It is important to use a time interval greater than 1 minute between each forced server reconnection. Once the cluster is back in balance, change the connection load balancing algorithm back to Round Robin. Round Robin is the best algorithm for unattended reconnect scenarios.

Procedure

1. Delete the SMB sessions to allow the recorders to reconnect to other nodes.

Enter the following commands:

isi smb sessions list
isi smb sessions delete -f <computer name>
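Putting the procedure together, a rebalance pass might look like the following sketch; RECORDER01 and RECORDER02 are hypothetical client names, and the 90-second pause implements the greater-than-one-minute interval recommended above:

```shell
# Sketch only: force selected recorders to reconnect, one at a time.
isi smb sessions list

isi smb sessions delete -f RECORDER01
sleep 90    # wait more than 1 minute before the next forced reconnection

isi smb sessions delete -f RECORDER02

# When the cluster is balanced, set the connection policy back to Round Robin.
```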

DNS Cache

To successfully distribute IP addresses, the OneFS SmartConnect DNS delegation server answers DNS queries with a time-to-live (TTL) of 0 so that the answer is not cached. Certain DNS servers fix the TTL value to 1 second, in particular Windows Server 2003, 2008, and 2012. If many clients request an address within the same second, all of them receive the same address. If you encounter this problem, you may need to use a different DNS server, such as BIND.

Client system administrators should turn off client DNS caching, where possible. To handle client requests correctly, SmartConnect requires that clients use the latest DNS entries. If clients cache SmartConnect DNS information, they might connect to incorrect SmartConnect zone names. If this occurs, it might appear that SmartConnect is not functioning correctly. To reduce the TTL on the client side, apply the following registry key on the client end.

Registry key:

HKLM\System\CurrentControlSet\Services\Dnscache\Parameters
DWORD: MaxCacheTtl / Value: 1
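One way to apply this key, sketched here using the standard reg.exe tool from an elevated command prompt; verify against your organization's policy before changing client registries.

```shell
# Set the DNS client cache TTL to 1 second (the value is in seconds).
# Run from an elevated Windows command prompt; restarting the DNS Client
# service (or rebooting) may be required for the change to take effect.
reg add "HKLM\System\CurrentControlSet\Services\Dnscache\Parameters" /v MaxCacheTtl /t REG_DWORD /d 1 /f
```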


Increase the Memory-MaxFrameBuffer on the Verint Recorder

To prevent video loss while handling video buffers during the failure test scenarios, the Memory MaxFrameBuffer was increased from 256 MB to 1024 MB on the Verint recorder.

Procedure

1. Edit the configuration settings file mseries.ini in the D:\Verint\Recorder directory.

The mseries.ini file path may differ depending on the drive chosen for installation of the Recorder application.

2. Change the [MEMORY] MaxStreamBuffersKbytes value from 262144 to 1048576.

[MEMORY]
MaxStreamBuffersKbytes=1048576

3. Save the configuration file and restart the Verint Recorder service for the setting to take effect.
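For illustration, the edit can also be scripted. This is a minimal sketch that assumes mseries.ini uses plain key=value INI syntax, demonstrated here on a local sample copy; on the Recorder host, edit the real file under the install directory and restart the Verint Recorder service.

```shell
# Demonstration on a sample copy of the file; on the Recorder host, edit the
# actual mseries.ini and restart the Verint Recorder service afterward.
printf '[MEMORY]\nMaxStreamBuffersKbytes=262144\n' > mseries.ini
# Raise the buffer from 262144 KB (256 MB) to 1048576 KB (1024 MB).
sed -i 's/^\(MaxStreamBuffersKbytes=\).*/\11048576/' mseries.ini
grep MaxStreamBuffersKbytes mseries.ini   # -> MaxStreamBuffersKbytes=1048576
```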

Microsoft Multipath I/O

Microsoft Multipath I/O (MPIO) is a framework that allows administrators to configure load balancing and failover processes for Fibre Channel, iSCSI, and SAS-connected storage devices. Dell EMC SC Series arrays provide redundancy and failover with multiple controllers and RAID modes.

However, servers still need a way to spread the I/O load and handle internal failover from one path to the next, which is where MPIO plays an important role. Without MPIO, servers see multiple instances of the same disk device in Windows disk management.

The MPIO framework uses Device Specific Modules (DSM) to allow path configuration. Microsoft provides a built-in generic Microsoft DSM (MSDSM) for Windows Server 2008 R2 and above. This MSDSM provides the MPIO functionality for Dell EMC storage customers.

For configuration details, refer to the white paper Dell EMC SC Series Storage: Microsoft Multipath I/O Best Practices.
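As a starting point only (the white paper above remains the authoritative reference), a hypothetical sketch of enabling the built-in MPIO feature and claiming devices on Windows Server from an elevated prompt:

```shell
# Enable the Windows MPIO feature, then let the Microsoft DSM claim
# available multipath devices. mpclaim -r requests a reboot after
# claiming all (-a "") devices; verify the paths afterward.
dism /online /enable-feature:MultipathIo
mpclaim -r -i -a ""
mpclaim -s -d   # list MPIO disks and their load-balance policy
```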

Releases tested

The following tables list the firmware builds and software releases used for our tests.

Table 1 Firmware builds

Model        Firmware
VNXe1600     VNXe OE 3.1.3.5754151
VNXe3200     VNXe OE 3.1.3.5754151
VNXe3300     VNXe OE 2.1.0.14097
VNX-VSS100   VNX OE 5.32.000.5.215
VNX5200      VNX OE 5.33.008.5.119
VNX5400      VNX OE 5.33.000.5.015
VNX5600      VNX OE 5.33.000.5.052

Table 2 OneFS releases

Model   OneFS version
A2000   8.1.0.2
HD400   8.0.0.2, 7.2.1
NL410   7.2.1
NL400   7.0.x

Table 3 Verint Nextiva releases

Release          Subrelease
Verint Nextiva   7.5.1.7623
                 6.4.2811
                 6.4.1591
                 6.3.2292

VMware ESXi requirements and recommendations

During all the tests, we assumed that the vCPU, memory, and network were configured correctly according to Verint's best practices to operate within Nextiva parameters.

The following virtual machine configuration was used for all tests:

l vCPUs (virtual CPUs): 12

l vMemory (virtual memory): 12 GB

l Network Driver:

n Vmxnet3 with 10 GbE

n Vmxnet2 with 1 GbE

l Disk Driver: SCSI

This document assumes that the person who performs the configuration is familiar with these basic configuration activities.


CHAPTER 3

Conclusion

This chapter presents the following topics:

l Summary


Summary

Dell EMC performed comprehensive testing with Verint Nextiva against many EMC VNX and VNXe arrays and Dell EMC Isilon clusters.

EMC VNX

Compared to traditional block-level storage, the use of storage pools to create LUNs within the VNX arrays greatly simplifies the configuration and increases the performance. Either iSCSI or FC can be implemented. FC performs better than iSCSI.

Dell EMC Unity arrays

The use of storage pools to create LUNs within the Dell EMC Unity arrays greatly simplifies the configuration and increases the performance when compared to traditional block-level storage. Either iSCSI or FC can be implemented. FC performs better than iSCSI.

EMC VSS

The VNX Video Surveillance Storage (VSS) is a storage solution that is purpose-built to meet the unique demands of the video surveillance environment. We found that this high-availability, low-cost array performs comparably to other arrays in the VNX family.

Dell EMC Isilon scale-out storage

Isilon scale-out storage is ideal for midtier and enterprise customers. An Isilon cluster is based on independent nodes working seamlessly together to present a single file system to all users.

Licensed SmartQuotas options can be configured so that each Recorder's view of the storage is based on the assigned quota and not the entire file system. Dell EMC recommends using SmartQuotas with Verint Nextiva as a best practice.
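For illustration, a hypothetical OneFS CLI sketch of creating such a directory quota. The path and size are examples, not values from this guide; the container option is what makes clients see the quota, rather than the whole cluster, as the share's capacity.

```shell
# Example only: a 20 TB hard directory quota on a hypothetical recorder path.
# With the container option enabled, clients see the quota as the share size.
isi quota quotas create /ifs/surveillance/recorder01 directory \
    --hard-threshold 20T --container true
# Verify the quota was created:
isi quota quotas list
```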

