Using Splunk Enterprise with VxRail Appliances and Isilon for Analysis of Machine Data

March 2017

Abstract

This solution guide describes a Dell EMC hyper-converged infrastructure VxRail Appliance solution that highlights flexible scaling options and tight integration with Splunk Enterprise for analyzing large quantities of machine data.

H15699

Copyright

The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA March 2017 Solution Guide H15699.

Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change without notice.

Contents

Chapter 1 Executive Summary
  Business case
  Solution overview
  Key results
  Audience
  We value your feedback!

Chapter 2 Solution Architecture
  Overview
  VxRail Appliance architecture
  Isilon
  VMware vSphere
  Splunk Enterprise

Chapter 3 Splunk Enterprise Deployment Design and Configuration
  Overview
  Compute design
  Network design
  Storage design
  Virtualization design
  Splunk Enterprise design

Chapter 4 Splunk Single Instance 50GB/day with 90-day Retention
  Overview
  Implementation
  Use case summary

Chapter 5 Splunk Multi-instance 500GB/day with 90-day Retention
  Overview
  Implementation
  Use case summary

Chapter 6 Splunk Multi-instance 1000GB/day with 90-day Retention
  Overview
  Implementation
  Use case summary

Chapter 7 Splunk Multi-instance 1000GB/day with > 90-day Retention
  Overview
  Implementation
  Use case summary

Chapter 8 Splunk Multi-instance 1000GB/day with > 90-day Retention and Indexer High Availability
  Overview
  Implementation
  Use case summary

Chapter 9 Validated Configurations for Splunk Enterprise
  Splunk-validated sizing configurations
  Scenario 1: One VxRail node for up to 50 GB/day with 90-day retention
  Scenario 2: Four VxRail nodes for up to 500 GB/day (distributed) or up to 250 GB/day (clustered) with 90-day retention
  Scenario 3: Seven VxRail nodes for up to 1 TB/day (distributed) with 90-day retention
  Scenario 4: Seven VxRail nodes with Isilon for up to 1 TB/day (clustered) with 7-day retention for hot/warm buckets and configurable retention for cold buckets
  Summary

Chapter 10 Conclusion
  Summary
  Findings
  Conclusion

Chapter 11 References
  Dell EMC documentation
  VMware documentation
  Splunk Enterprise documentation

Appendix A VxRail Appliance Scalability
  Overview
  Test scenario
  Test methodology
  Test results
  Summary

Chapter 1 Executive Summary

This chapter presents the following topics:

Business case
Solution overview
Key results
Audience
We value your feedback!


Business case

Machine data is one of the fastest growing and most complex areas of Big Data. It is also one of the most valuable, containing a definitive record of events that can reveal information about user transactions, customer behavior, machine behavior, security threats, fraudulent activity, and more. Making use of this data, however, presents real challenges. Traditional data analysis, management, and monitoring solutions are not engineered to handle such high-volume, high-velocity, and highly diverse data.

Splunk Enterprise is the industry-leading platform for machine data. It gives you real-time visibility, insight, and understanding across your IT infrastructure and the applications and services that run on top of it. The Splunk platform also:

• Seamlessly blends metrics and events from both structured and unstructured data sources
• Collects and correlates multiple data sources to rapidly pinpoint service degradations and reduce mean-time-to-resolution (MTTR)
• Monitors end-to-end infrastructure to detect anomalies and prevent problems in real time
• Delivers powerful visualizations to understand relationships, track trends, and accelerate investigations

Dell EMC™ and Splunk have partnered to provide a menu of standardized reference architectures for non-disruptive scalability and performance to aid an organization's digital transformation. Paired together, Dell EMC and Splunk combine the analytics provided by the Splunk ecosystem with the cost-effective, scalable, and flexible infrastructure of Dell EMC to deliver Operational Intelligence.

Solution overview

This solution demonstrates how Splunk Enterprise combined with the Dell EMC VxRail™ Appliance, Isilon™, and VMware virtualization software can easily, efficiently, and cost-effectively scale to support enterprise-level machine data analytics and real-time operational intelligence. VxRail is the only fully integrated, preconfigured, and pre-tested VMware hyper-converged infrastructure appliance family on the market. Based on VMware vSphere plus VMware vSAN, and EMC software, VxRail delivers an all-in-one IT infrastructure transformation by leveraging a known and proven building block for the Software Defined Data Center (SDDC).

The solution describes the design, deployment, and configuration of Splunk Enterprise on VxRail for five representative use cases covering a range of customer needs.

Table 1. Solution Use Cases

Daily ingest (GB/day) | Retention (days) | Equipment              | Splunk deployment
50                    | 90               | 1-node VxRail          | Single/combined instance
500                   | 90               | 4-node VxRail          | Distributed
1000                  | 90               | 7-node VxRail          | Distributed
1000                  | Greater than 90  | 7-node VxRail + Isilon | Distributed
1000                  | Greater than 90  | 7-node VxRail + Isilon | Clustered
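The pattern in the table, where node counts grow with daily ingest and retention, follows from back-of-the-envelope storage math. The sketch below applies Splunk's commonly cited rule of thumb that indexed data occupies roughly half the raw ingest volume (about 15 percent compressed raw data plus about 35 percent index files); the 0.5 ratio and the function itself are illustrative assumptions, not figures from this solution guide.

```python
def splunk_storage_estimate_gb(daily_ingest_gb, retention_days,
                               compression_ratio=0.5, replication_factor=1):
    """Rough total indexed-storage need for a Splunk deployment.

    compression_ratio is the indexed size as a fraction of raw ingest;
    ~0.5 is a common rule of thumb, but validate it against your own
    data, since it varies widely by source type.
    """
    return daily_ingest_gb * compression_ratio * retention_days * replication_factor

# The 50 GB/day, 90-day single-instance use case:
print(splunk_storage_estimate_gb(50, 90))    # prints 2250.0 (GB)
# The 1000 GB/day, 90-day distributed use case:
print(splunk_storage_estimate_gb(1000, 90))  # prints 45000.0 (GB)
```

Multiplying by a replication factor greater than 1 covers the clustered use cases, where each indexed bucket is stored on more than one indexer.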

In addition to covering these use cases, the solution includes background material describing all of the technologies that make the solution compelling, including VxRail Appliance architecture, Isilon, VMware vSphere, and Splunk Enterprise.

Key results

This solution provides detailed information for evaluating the applicability of VxRail offerings for a Splunk implementation. Splunk has validated multiple use case configurations for VxRail that meet or exceed the performance of Splunk's documented reference hardware. Potential customers should be able to match almost any current needs with an approved configuration. Customers can also be confident that the VxRail product line, together with the flexibility of Splunk Enterprise configuration options, can be scaled out to handle future needs without extensive upgrades or expensive re-platforming.

Audience

This guide is intended for IT administrators, storage administrators, virtualization administrators, system administrators, IT managers, and those who evaluate, acquire, manage, maintain, or operate Splunk Enterprise environments.

We value your feedback!

Dell EMC and the authors of this document welcome your feedback on the solution and the solution documentation. Contact [email protected] with your comments.

Authors:
Dell EMC: Eric Wang, James Shen, Tao Guo, Phil Hummel, Reed Tucker
Splunk: Jenny Hollfelder


Chapter 2 Solution Architecture

This chapter presents the following topics:

Overview
VxRail Appliance architecture
Isilon
VMware vSphere
Splunk Enterprise


Overview

Reference architecture

The following reference architecture describes a Dell EMC hyper-converged infrastructure VxRail Appliance with Isilon for a virtualized Splunk Enterprise environment. Dell EMC and Splunk jointly tested and validated this reference architecture to meet or exceed the performance of Splunk Enterprise running on Splunk's reference hardware.

The VxRail Appliance is a fully integrated, preconfigured, and pre-tested hyper-converged infrastructure appliance. Powered by industry-leading vSAN and vSphere software, the VxRail Appliance is the easiest and fastest way to streamline and extend a VMware environment while dramatically simplifying IT operations.

Figure 1 and Figure 2 show how we deployed two reference architectures representing Splunk instances as virtual machines on a VMware vSphere 6.0 cluster, following Splunk's documented virtualization best practices. In the storage layer, VxRail leverages VMware vSAN technology to build a vSAN datastore on groups of locally attached disks. This configuration provides rapid read and write disk I/O and low latency through the use of an all-flash array.

Figure 1 shows the four layers (application layer, virtualization layer, infrastructure layer, and virtual SAN layer) in this solution. In the application layer, there are four Splunk components: forwarder, indexer, search head, and master node. The VxRail cluster forms a Virtual SAN as storage, which is used to hold all virtual machines and the Splunk hot/warm and cold buckets on an all-flash array.


Figure 1. Splunk Enterprise on VxRail Appliance reference architecture

Figure 2 shows a reference architecture similar to Figure 1, with differences in the number of VxRail nodes and the location of Splunk buckets. vSAN is used to store all virtual machines and the Splunk hot/warm buckets, while Isilon storage is used to store the Splunk cold buckets for long-term data retention.

Note: For an explanation of the hot/warm and cold bucket concept, refer to Splunk core architecture.

Figure 2. Splunk Enterprise on VxRail Appliance with Isilon reference architecture


Hardware components

Table 2 lists the hardware components in this solution.

Table 2. Hardware configuration

Component    | Hardware
VxRail E460F | 2 x Intel® Xeon® E5-2698 v4 processors @ 2.20 GHz per node
             | 384 GB (24 x 16 GB) or 512 GB (16 x 32 GB) RAM per node
             | 800 GB cache per disk group (1 or 2 disk groups)
             | 5.235 TB (3 x 1.92 TB SSD) or 20.94 TB (6 x 3.84 TB SSD) capacity per node**
             | 2 x 10 GbE SFP+ per node
Switch       | Fabric interconnect
Isilon X410  | 2 x Intel® Xeon® processors @ 2.0 GHz per node
             | 128 GB RAM per node
             | 3.2 TB SSD storage
             | 64 TB HDD storage
             | 2 x 10 GbE SFP+ per node
             | 2 x 1 GbE per node

**Note: The net effective usable capacity of the VxRail cluster is half the raw capacity. This is due to the vSAN FTT=1 policy setting applied to each VM.
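The halving described in the note can be sketched as a small calculation. With a mirroring storage policy, vSAN keeps FTT + 1 copies of every object, so usable capacity is raw capacity divided by FTT + 1. The function below is an illustrative sketch, not a Dell EMC sizing tool, and assumes a simple RAID-1 (mirroring) policy with no slack or overhead reserved.

```python
def vsan_usable_capacity(raw_capacity_per_node_tb, node_count, ftt=1):
    """Usable vSAN capacity under a mirroring (RAID-1) storage policy.

    With Failures To Tolerate (FTT) = n and mirroring, vSAN stores
    n + 1 copies of each object, so usable space is raw / (n + 1).
    Illustrative only; real sizing should also reserve slack space.
    """
    raw = raw_capacity_per_node_tb * node_count
    return raw / (ftt + 1)

# Seven nodes with the larger 20.94 TB per-node capacity option:
print(vsan_usable_capacity(20.94, 7))  # roughly 73.3 TB usable of ~146.6 TB raw
```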

Software components

Table 3 lists the versions of software used in this solution.

Table 3. Software configuration

Software                      | Version
Splunk Enterprise             | 6.5.0
Splunk Universal Forwarder    | 6.5.0
Red Hat Linux 64-bit          | 6.7
VMware vSphere Enterprise     | 6.0 U2
VMware vCenter Server         | 6.0 U2
VMware Virtual SAN Enterprise | 6.2
VMware vRealize Log Insight   | 3.3.1
VxRail Manager                | 4.0
OneFS                         | 8.0.0.3

VxRail Appliance architecture

The VxRail Appliance offers the performance, capacity, and graphics capability needed to meet the infrastructure requirements of a small or medium-sized enterprise. The VxRail Appliance provides a simple, cost-effective, hyper-converged solution that solves your virtualization infrastructure challenges and supports a wide range of applications and workloads.

Storage components

VxRail Appliances use VMware's vSAN software, which is fully integrated with vSphere and provides full-featured and cost-effective software-defined storage. vSAN implements a notably efficient architecture, built directly into the hypervisor. This architecture distinguishes vSAN from solutions that typically install a virtual storage appliance (VSA) that runs as a guest VM on each host. Embedding vSAN into the ESXi kernel layer has obvious advantages in performance and memory requirements. It has very little impact on CPU utilization (less than 10 percent) and self-balances based on workload and resource availability. It presents storage as a familiar datastore construct and works seamlessly with other vSphere features such as VMware vSphere vMotion.

vSAN aggregates the locally attached disks of hosts in a vSphere cluster to create a pool of distributed shared storage. Capacity is easily scaled up by adding disks to the cluster and scaled out by adding ESXi hosts. This distributed shared storage provides the flexibility to start with a very small environment and scale it over time. Storage characteristics are configured using Storage Policy Based Management (SPBM), which allows object-level policies to be set and modified on the fly to control storage provisioning and day-to-day management of storage service-level agreements (SLAs).

vSAN is preconfigured when the VxRail system is first initialized and is managed through vCenter. The VxRail Appliance initialization process discovers the locally attached storage disks from each ESXi node in the cluster to create a distributed, shared-storage datastore. The amount of storage in the vSAN datastore is an aggregate of all of the capacity drives in the cluster.

VxRail provides an all-flash SSD configuration. The all-flash configuration uses flash SSDs for both the caching tier and the capacity tier.

Compute components

The VxRail Appliance uses a modular, distributed system architecture based on a 1U appliance with one node that scales linearly. In addition, different options are available for compute, memory, and storage configurations to match any use case. Choose from a range of next-generation processors and variable RAM, storage, and cache capacity for flexible CPU-to-RAM-to-storage ratios.

The VxRail Appliance is assembled with proven server-node hardware that has been integrated, tested, and validated as a complete solution by Dell EMC. The current generation of VxRail Appliance nodes uses Intel Xeon E5 processors. The Intel Xeon E5 processor is a multi-threaded, multi-core CPU designed to handle diverse workloads for cloud services, high-performance computing, and networking. The number of cores and the memory capacity differ for each VxRail Appliance model.

Networking components

VxRail is a self-contained infrastructure, but it is not a stand-alone environment. It is intended to connect to and integrate with the customer's existing data center network. The distributed cluster architecture allows independent nodes to work together as a single system. The close coupling between nodes is accomplished through IP networking connectivity. Our implementation in this solution uses two customer-provided 10 GbE top-of-rack (TOR) switches to connect each node in the VxRail cluster.


In VxRail, network traffic is segregated using switch-based virtual LAN (VLAN) technology and vSphere Network I/O Control. A VxRail cluster supports four types of network traffic:

• Management – Management traffic connects the VMware vCenter Web Client, VxRail Manager, and other management interfaces. It provides communications between the management components and the ESXi nodes in the cluster. Either the default VLAN or a specific management VLAN can be used for management traffic.

• Virtual SAN – Data access for read and write activity, as well as for optimization and data rebuilds, is performed over the vSAN network. Low network latency is critical for this traffic, and a specific VLAN is required to isolate it.

• vMotion – vSphere vMotion is a vSphere feature that allows virtual machine mobility between nodes. A separate VLAN is used to isolate this traffic.

• Virtual Machine – Users access virtual machines and the services they provide over the VM networks. At least one VM VLAN is configured when the system is initially configured, and others may be defined as required.

Note: For detailed VxRail network configuration, refer to the Dell EMC VxRail Network Guide (https://www.emc.com/collateral/guide/h15300-vxrail-network-guide.pdf).

Isilon

The Isilon X-Series is a flexible and comprehensive storage product that provides large capacity and high performance. The VxRail Appliance supports Isilon storage.

Isilon storage uses intelligent software to scale data across a large number of commodity hardware units, enabling explosive growth in performance and capacity. The product's revolutionary storage architecture, the OneFS™ operating system (OS), offers a single clustered file system.

OneFS provides value by incorporating parallelism at a deep level of the OS. The system is virtually distributed across multiple hardware units, and this parallelism allows OneFS to scale in every dimension as the infrastructure is expanded. By providing multiple redundancy levels, the system has no single point of failure. As a result, OneFS can grow to a multi-petabyte scale while providing greater reliability than traditional systems.

OneFS runs on Isilon scale-out network-attached storage (NAS) hardware, ensuring that Isilon benefits from the ever-improving cost and efficiency curves of commodity hardware. OneFS allows you to add hardware to or remove hardware from the cluster at any time, and the data is protected through hardware changes. This capability alleviates the cost and burden of data migrations and hardware refreshes.

VMware vSphere

VMware vSphere is a widely adopted virtualization platform. The technology increases server utilization so that a firm can consolidate its servers and spend less on hardware, administration, energy, and floor space. The vSphere platform enables its installations to respond to user requests reliably while giving administrators the tools to respond to their changing needs.


The components of particular importance in this solution are vSphere ESXi and vCenter.

VMware vSphere ESXi

VMware vSphere ESXi is a bare-metal hypervisor. It installs directly on a physical server and partitions that server into multiple virtual machines. An ESXi host refers to the physical server.

vSphere ESXi hosts and their resources are pooled together into clusters that contain the CPU, memory, network, and storage resources available for allocation to virtual machines. Clusters scale up to a maximum of 64 hosts and can support thousands of virtual machines.

VMware vSphere vCenter

vCenter Server is management software that runs on a virtual or physical server to oversee multiple ESXi hypervisors as a single cluster. An administrator can interact directly with vCenter Server or use the vSphere Client to manage virtual machines from a browser window anywhere in the world. For example, the administrator can capture the detailed blueprint of a known, validated configuration, including networking, storage, and security settings, and then deploy that blueprint to multiple ESXi hosts.

Splunk Enterprise

Splunk Enterprise is a software platform that enables you to collect, index, and visualize machine-generated data gathered from different sources in your IT infrastructure. These sources include applications, networking devices, host and server logs, mobile devices, and more.

Splunk turns silos of data into operational insights and provides end-to-end visibility across your IT infrastructure to enable faster problem solving and informed, data-driven decisions.

Splunk core architecture

Figure 3 provides a graphic overview of the Splunk system architecture. A Splunk Enterprise instance can perform the role of a search head, an indexer, or both in the case of small deployments. When the daily ingest rate or search load exceeds the sizing recommendations for a combined-instance environment, Splunk Enterprise scales horizontally by adding indexers and search heads. For more information, refer to the Splunk Capacity Planning Manual (http://docs.splunk.com/Documentation/Splunk/6.5.0/Capacity/IntroductiontocapacityplanningforSplunkEnterprise).


Figure 3. Splunk architecture overview

When a Splunk Enterprise indexer receives data, the indexer parses the raw data into distinct events based on the timestamp of each event and writes them to the appropriate index. Splunk implements a form of storage tiering involving hot/warm and cold buckets of data to optimize performance for newly indexed data and to provide an option to keep older data for longer periods on higher-capacity storage.

Newly indexed data lands in a hot bucket, where it is actively read and written by Splunk. When the maximum number of hot buckets is reached, or when the size of the data in a hot bucket exceeds the specified threshold, the hot bucket is rolled to a warm bucket. Warm buckets reside on the same tier of storage as hot buckets; the only difference is that warm buckets are read-only. It is important that the storage identified for hot/warm data is your fastest storage tier, because it has the biggest impact on the performance of your Splunk Enterprise deployment.

When the number of warm buckets or the volume size is exceeded, data is rolled into a cold bucket, which can optionally reside on another tier of storage. Cold data may reside on an NFS mount if the latency is less than 5 ms (ideally) and not more than 200 ms. NAS technologies offer an acceptable blend of performance and lower cost per TB, making them a good choice for longer-term retention of cold data.

Data can also be archived, or frozen, but such data is no longer searchable by Splunk search heads. Manual user action is required to bring the data back into Splunk Enterprise buckets to be searchable. While you might choose to use frozen buckets to meet compliance retention requirements, this paper shows how Isilon's massive scalability and competitive cost of ownership can empower you to retain more data in the cold bucket, where it remains searchable. Figure 4 provides more details about Splunk bucket concepts.
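The bucket lifecycle described above is driven by per-index path and size settings in Splunk's indexes.conf. The fragment below is a sketch of how an index might map hot/warm buckets onto vSAN-backed local storage and cold buckets onto an Isilon NFS mount; the index name, paths, and limits are illustrative assumptions, not values taken from this solution guide.

```
# indexes.conf -- illustrative sketch, not from this guide
[machine_data]                        # hypothetical index name
homePath   = /opt/splunk/var/lib/splunk/machine_data/db        # hot/warm buckets on vSAN-backed disk
coldPath   = /mnt/isilon/splunk/machine_data/colddb            # cold buckets on an Isilon NFS export
thawedPath = /opt/splunk/var/lib/splunk/machine_data/thaweddb  # restored (thawed) frozen data
maxDataSize = auto_high_volume        # roughly 10 GB hot buckets on 64-bit systems
maxWarmDBCount = 300                  # warm buckets kept before rolling the oldest to cold
frozenTimePeriodInSecs = 7776000      # 90 days; older buckets are frozen (deleted unless archived)
```

Raising frozenTimePeriodInSecs, or simply letting coldPath grow on Isilon, is what makes the "greater than 90-day retention" use cases in this guide practical without consuming vSAN capacity.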


    Figure 4. Splunk index buckets
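The bucket lifecycle described above is governed per index by settings in indexes.conf. The following fragment is an illustrative sketch only: the paths match Splunk's defaults, but the thresholds and archive path are example values, not this solution's configuration.

```ini
[main]
homePath   = $SPLUNK_DB/defaultdb/db        # hot/warm buckets (fastest storage tier)
coldPath   = $SPLUNK_DB/defaultdb/colddb    # cold buckets (may reside on NFS)
thawedPath = $SPLUNK_DB/defaultdb/thaweddb  # restored (thawed) frozen data

# Roll hot -> warm beyond this bucket count (example value)
maxHotBuckets = 3
# Roll warm -> cold beyond this bucket count (example value)
maxWarmDBCount = 300
# Freeze buckets older than 90 days (90 x 86,400 = 7,776,000 seconds)
frozenTimePeriodInSecs = 7776000
# Optionally archive frozen buckets instead of deleting them:
# coldToFrozenDir = /archive/main
```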


    Chapter 3 Splunk Enterprise Deployment Design and Configuration

    This chapter presents the following topics:

Overview

Compute design

Network design

Storage design

Virtualization design

Splunk Enterprise design


    Overview

    This chapter provides details about the deployment, design, and configuration of Splunk

    Enterprise on the Dell EMC hyper-converged infrastructure VxRail Appliance, from a

    single instance starter kit to a scalable, distributed, cluster environment. This solution

    covers five types of deployment for the different user scenarios:

    Single Instance 50GB/day with 90-day Retention – A single instance that combines

    indexing and search management functions

    Multi-instance 500GB/day with 90-day Retention – One search head with two

    indexers

    Multi-instance 1000GB/day with 90-day Retention – One search head with five

    indexers

    Multi-instance 1000GB/day with > 90-day Retention – One search head with five

    indexers, using Isilon to provide configurable retention for Splunk cold buckets

    Multi-instance 1000GB/day with > 90-day Retention and Indexer High Availability –

    One search head with five indexers, including a replication factor of 2 and a search

    factor of 2, using Isilon to provide configurable retention for Splunk cold buckets

    Compute design

Table 4, Table 5, Table 6, and Table 7 show the compute design details for the five types of Splunk Enterprise deployments on the VxRail Appliance.

    Note: Splunk multi-instance 1000 GB/day with 90-day retention deployment and Splunk multi-

    instance 1000 GB/day with > 90-day retention deployment use the same compute design listed in

    Table 6.

    Table 4. Single instance 50 GB/day with 90-day retention deployment on one VxRail node

    Instance role Quantity Physical cores/vCPUs Memory

    Single Instance combined search head and indexer

    1 32/64 256 GB

    Table 5. Multi-instance 500 GB/day with 90-day retention deployment on four VxRail nodes

    Instance role Quantity Physical cores/vCPUs Memory

    Search Head 1 32/64 256 GB

    Indexer 2 32/64 256 GB

    Admin Server 1 20/40 256 GB


    Table 6. Multi-instance 1000 GB/day with 90-day retention deployment and multi-instance 1000 GB/day with > 90-day retention with Isilon on seven VxRail nodes

    Instance role Quantity Physical cores/vCPUs Memory

    Search Head 1 32/64 256 GB

    Indexer 5 32/64 256 GB

    Admin Server 1 20/40 256 GB

    Table 7. Multi-instance 1000 GB/day with > 90-day retention and indexer high availability deployment with Isilon on seven VxRail nodes

    Instance role Quantity Physical cores/vCPUs Memory

    Search Head 1 32/64 256 GB

    Indexer 5 32/64 256 GB

    Admin Server 1 20/40 256 GB

    Network design

    The VxRail Appliance is delivered ready to deploy and attach to any 10 GbE network

infrastructure using IPv4 and IPv6. As a best practice, Dell EMC recommends using dual top-of-rack (TOR) switches to eliminate the switch as a single point of failure. In this solution, we

    designed the VxRail cluster’s network as follows:

    Configure the two top-of-rack switches to provide 10 Gb Ethernet connectivity to the VxRail Appliance.

    Use VLANs to logically group devices on different network segments or sub networks.

    Use separate vSphere virtual distributed port groups to isolate the network communication for each network:

    vSphere management network

    vSphere vMotion network

    vCenter network

    vSAN management network

    Splunk Enterprise network

    Figure 5 shows the VxRail Appliance network design of this solution.


    Figure 5. VxRail Appliance network design

    Storage design

    This section describes the vSAN storage design for five types of Splunk Enterprise

    deployment on the VxRail cluster.

    Table 8 shows the Virtual SAN storage policy that is defined for the virtual machines. In

    this solution, we used two vSAN storage policies:

    The default policy of vSAN, which is used for the home files and the OS Virtual

    Machine Disk (VMDK) files of all virtual machines.

    A newly created policy for the disk drive that keeps Splunk data on the Splunk

    indexer virtual machines. The number of disk stripes per object is set to 10 to

    distribute the data evenly across all VxRail nodes to improve the vSAN read/write

    performance for large disk drives.

    A vSAN storage policy with failures to tolerate (FTT) of 1 and failure tolerance

    method of RAID-1 (mirroring) creates a full copy of the data. Because of this, twice

    the capacity of the workload is required.
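The capacity effect of the mirroring policy can be illustrated with a quick calculation. This is a sketch only; the 13.9 TB figure is the per-indexer data disk used later in the 500 GB/day design (Table 9).

```shell
# FTT=1 with RAID-1 mirroring keeps one full extra copy of every object,
# so raw vSAN consumption is twice the provisioned VMDK size.
vmdk_tb=13.9
raw_tb=$(awk -v v="$vmdk_tb" 'BEGIN { printf "%.1f", v * 2 }')
echo "A ${vmdk_tb} TB indexer disk consumes ${raw_tb} TB of raw vSAN capacity"
```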



    Table 8. Virtual SAN storage policy configuration

    Policy name Rule sets Comments

    Virtual SAN Default Storage Policy

    Number of failures to tolerate = 1

    Failure tolerance method = RAID-1

    Disable object checksum = No

    Force provisioning = No

    IOPS limit for object = 0

    Number of disk stripes per object = 1

    Object space reservation (%) = 0

This is the default Virtual SAN storage policy. It is used for the VM home files and OS VMDK files of the virtual machines that are created in this solution.

    Splunk-Data-Policy Number of failures to tolerate = 1

    Failure tolerance method = RAID-1

    Disable object checksum = No

    Force provisioning = No

    IOPS limit for object = 0

    Number of disk stripes per object = 10

    Object space reservation (%) = 0

This policy is used for Splunk indexer data storage.

    Table 9 shows the vSAN storage design for Splunk in this solution.

    Table 9. Virtual SAN storage design for Splunk

    Deployment type

    Instance role Quantity OS storage Indexer storage

    Single Instance 50 GB/day with 90-day Retention

    Single Instance combined search head and indexer

    1 300 GB 3 TB

    Multi-instance 500 GB/day with 90-day Retention

    Search Head 1 300 GB 0

    Indexer 2 300 GB 13.9 TB

    Admin Server 1 150 GB 0

    Multi-instance 1000 GB/day with 90-day Retention

    Search Head 1 300 GB 0

    Indexer 5 300 GB 10.8 TB

    Admin Server 1 150 GB 0

    Multi-instance 1000 GB/day with > 90-day Retention

    Search Head 1 300 GB 0

    Indexer 5 300 GB 2.1 TB

    Admin Server 1 150 GB 0

    Multi-instance 1000 GB/day with > 90-day Retention and Indexer High Availability

    Search Head 1 300 GB 0

    Indexer 5 300 GB 2.1 TB

    Admin Server 1 150 GB 0
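As a rough cross-check of these figures, per-indexer storage can be estimated from daily ingest, retention, and an assumed on-disk ratio. The ~50% ratio below (compressed raw data plus index files) is an illustrative assumption, not a value from this guide; actual ratios vary by data type, and the provisioned figures in Table 9 include additional headroom.

```shell
# Hypothetical sizing sketch for the 1000 GB/day, 90-day, five-indexer case.
DAILY_GB=1000
RETENTION_DAYS=90
DISK_RATIO=0.5     # assumed on-disk fraction of raw ingest (illustrative)
INDEXERS=5

total_gb=$(awk -v d="$DAILY_GB" -v r="$RETENTION_DAYS" -v c="$DISK_RATIO" \
  'BEGIN { printf "%.0f", d * r * c }')
per_indexer_tb=$(awk -v t="$total_gb" -v n="$INDEXERS" \
  'BEGIN { printf "%.1f", t / n / 1024 }')
echo "Estimated total: ${total_gb} GB; per indexer: ${per_indexer_tb} TB"
```

Validate the assumed ratio against your own data before sizing production storage.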



    In this solution, a four-node Isilon X410 cluster is used for the Splunk deployment to

    provide configurable retention for cold buckets. The detailed configuration of Isilon nodes

    and Isilon storage design for Splunk are shown in Table 10 and Table 11.

    Table 10. Isilon node configuration

    CPU CPU cores RAM SSD capacity HDD capacity Network

    Two Intel Xeon Processors 2.0 GHz

    8 cores 128 GB 3.2 TB 64 TB 2 x 10 GbE

    2 x 1 GbE
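As a quick sketch, the aggregate raw capacity of the four-node cluster follows directly from Table 10; usable capacity after OneFS protection overhead is lower.

```shell
# Raw capacity tally from the per-node figures in Table 10.
NODES=4
HDD_TB_PER_NODE=64
SSD_TB_PER_NODE=3.2

hdd_total=$((NODES * HDD_TB_PER_NODE))
ssd_total=$(awk -v n="$NODES" -v s="$SSD_TB_PER_NODE" 'BEGIN { printf "%.1f", n * s }')
echo "Raw cluster capacity: ${hdd_total} TB HDD, ${ssd_total} TB SSD"
```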

    Table 11. Isilon storage design for Splunk

    Deployment Type Instance Role Quantity Indexer Cold Bucket Storage

    Multi-instance 1000 GB/day with > 90-day Retention

    Indexer 5 10.8 TB

    Multi-instance 1000 GB/day with > 90-day Retention and Indexer High Availability

    Indexer 5 10.8 TB

    For the overall Isilon configuration, we followed these best practices:

Enabled SmartPools across all four Isilon nodes and used SSDs as an L3 cache for metadata read acceleration

    Enabled SmartConnect to provide automatic client connection load balancing and

    failover capabilities

    Enabled SmartCache for write performance

Set the data access pattern to optimize for concurrent access

    Used 10 Gb/s external network for data connection

    Increased network MTU to 9000 (Jumbo Frames)

Splunk and Dell EMC recommend that NFS storage, including Isilon, be used only for cold and frozen data, never for hot/warm buckets. For details about system requirements, see the Splunk Enterprise Installation Manual.

    http://docs.splunk.com/Documentation/Splunk/6.5.0/Installation/Systemrequirements
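To illustrate how a cold tier on Isilon might be wired up, the NFS export is mounted on each indexer and referenced by the index's coldPath. The SmartConnect zone name, export path, and mount point below are hypothetical, not this solution's actual values.

```ini
# /etc/fstab on each indexer -- hypothetical Isilon SmartConnect zone name
isilon-sc.example.local:/ifs/splunk/cold  /mnt/isilon/cold  nfs  rw,hard,intr  0 0

# indexes.conf -- point the cold tier at the NFS mount
[main]
coldPath = /mnt/isilon/cold/main/colddb
```

Mounting through the SmartConnect zone name (rather than a node IP) lets Isilon balance indexer connections across all nodes and fail them over transparently.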


    Virtualization design

VxRail delivers virtualization, compute, and storage in a scalable, easy-to-manage, hyper-converged infrastructure appliance. It deeply integrates VMware vSphere, an industry-leading virtualization platform, to provide a highly available, resilient, and efficient on-demand infrastructure for virtualized applications.

    For details about the configuration of the virtual machines that are used in this solution,

    refer to Table 4, Table 5, Table 6, and Table 7 in the Compute design section.

    This solution implements the following Dell EMC and VMware best practices to provide

    optimal performance for all Splunk Enterprise virtual machines running on the VxRail

    Appliance:

    Create a vSphere HA cluster to provide a virtualized, high-availability Splunk Enterprise environment that is easy to use and cost-effective.

With a virtual Non-Uniform Memory Access (NUMA) topology, it is recommended that each virtual socket have fewer virtual CPU cores than the physical cores per socket of the ESXi host.

    Use a VMware Paravirtual SCSI controller to increase throughput with significant CPU utilization reduction in the SAN environment.

    Use a VMware VMXNET3 network adapter to optimize network performance.

    Use Thick Provision Eager Zeroed disk provisioning to optimize virtual disk performance.

Install VMware Tools in the guest OS to improve VM performance.

Set the VM advanced parameter numa.vcpu.preferHT to "true" to enable hyper-threading with NUMA in ESXi.

    For more information, refer to Performance Best Practices for VMware vSphere 6.0.
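Several of these recommendations correspond to entries in the VM's .vmx configuration file. The fragment below is a hedged sketch: the parameter names are standard vSphere options, and the memory value assumes the 256 GB indexer VM from the compute design.

```ini
# Enable hyper-threading awareness with NUMA
numa.vcpu.preferHT = "TRUE"
# VMware Paravirtual SCSI controller and VMXNET3 network adapter
scsi0.virtualDev = "pvscsi"
ethernet0.virtualDev = "vmxnet3"
# Reserve all guest memory (value in MB; 262144 MB = 256 GB)
sched.mem.min = "262144"
```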

    Splunk Enterprise design

    Figure 6 shows the Splunk single instance 50 GB/day with 90-day retention deployment

    design with a single Splunk Enterprise instance and a combined indexer and search head.

    Note: In this solution, we use one forwarder to demonstrate the deployment process.


    http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-perfbest-practices-vsphere6-0-white-paper.pdf


    Figure 6. Splunk single instance 50 GB/day with 90-day retention deployment

    Figure 7 shows the Splunk multi-instance 500 GB/day with 90-day retention deployment

    with one search head and two indexers.

    Note: In this solution, we use one forwarder to demonstrate the deployment process.


    Figure 7. Splunk multi-instance 500 GB/day with 90-day retention deployment

    Figure 8 shows the Splunk multi-instance 1000 GB/day with 90-day retention deployment

    and Splunk multi-instance 1000 GB/day with > 90-day retention deployment with one

    search head and five indexers. Using the Isilon, the VxRail cluster can provide

    configurable retention for Splunk cold buckets.

    Note: In this solution, we use one forwarder to demonstrate the deployment process.


    Figure 8. Splunk multi-instance 1000 GB/day with 90-day retention deployment and Splunk multi-instance 1000 GB/day with > 90-day retention deployment

    Figure 9 shows the Splunk multi-instance 1000 GB/day with > 90-day retention and

    indexer high availability deployment design with one search head and five indexers. Using

    Isilon, the VxRail Appliances can provide configurable retention for Splunk cold buckets.

    Note: In this solution, we use one forwarder to demonstrate the deployment process.


    Figure 9. Splunk multi-instance 1000 GB/day with > 90-day retention and indexer high availability deployment

    In this solution, we implement the following Linux configuration parameter settings to

    provide optimal Splunk Enterprise performance:

Change the tuned profile to virtual-host in RHEL 6.X. This profile decreases the

    swappiness of virtual memory and enables more aggressive writeback of dirty

    pages. It tunes the system settings for high throughput and low latency.

    Disable Transparent Huge Pages (THP) to avoid the degradation of Splunk

    Enterprise performance on RHEL 6.X. For more information, refer to Transparent

    huge memory pages and Splunk performance.

    Disable SELinux, so that enhanced system security does not add overhead to the

    performance.

    Increase the maximum number of open file descriptors and processes by

    configuring ulimit to avoid the “Too Many Open Files” exception. Table 12 shows

    the recommended values.

    Table 12. Recommended ulimit values

    System-wide resources Ulimit invocation Recommended minimum value

    Open files ulimit -n 8,192

    User processes ulimit -u 1,024


http://docs.splunk.com/Documentation/Splunk/6.5.0/ReleaseNotes/SplunkandTHP


Data segment size ulimit -d 1,073,741,824

Tune the kernel to optimize the network for high throughput over 10 Gb Ethernet by adding the following settings to /etc/sysctl.conf:

    net.ipv4.tcp_timestamps=0

    net.ipv4.tcp_sack=1

    net.core.netdev_max_backlog=250000

    net.core.rmem_max=4194304

    net.core.wmem_max=4194304

    net.core.rmem_default=4194304

    net.core.wmem_default=4194304

    net.core.optmem_max=4194304

    net.ipv4.tcp_rmem=4096 87380 4194304

    net.ipv4.tcp_wmem=4096 65536 4194304

    net.ipv4.tcp_low_latency=1


    Chapter 4 Splunk Single Instance 50 GB/day with 90-day Retention

    This chapter presents the following topics:

Overview

Implementation

Use case summary


    Overview

In this chapter, we show the Splunk single instance 50 GB/day with 90-day retention implementation on one VxRail node added to an existing VxRail cluster. A single Splunk Enterprise instance serves as both indexer and search head. We optimize the design for both high performance and data retention capability, using VxRail for all Splunk index buckets (hot/warm/cold).

    Implementation

Table 13 lists the process flow for the Splunk single instance 50 GB/day with 90-day retention implementation on one VxRail node.

Table 13. Process flow for Splunk single instance 50 GB/day with 90-day retention implementation

    Step Action Description

    1 Expanding VxRail cluster Add one VxRail node into the existing VxRail cluster

    2 Setting up vSAN policy Prepare the vSAN policy that is used for Splunk disks, including hot/warm and cold buckets

    3 Creating Splunk VM template

    Prepare the VM template that is used for indexer/search head and forwarder. Tune it according to Splunk’s recommendation

    4 Deploying Splunk indexer/search head

    Deploy indexer/search head instance that is based on the Splunk VM template

    5 Deploying forwarder Deploy forwarder instance that is based on the Splunk VM template for validating implementation

    6 Validating implementation Validate the implementation of Splunk

    To begin the implementation, expand the existing VxRail cluster by adding one VxRail

    node to provide the dedicated resource for Splunk Enterprise single instance deployment.

    This is a Dell EMC internal process. Contact your Dell EMC or partner representative

    when planning to expand your VxRail cluster.

    Note: The VCSA root password must be the same as the password of

    [email protected]. If the password was changed, change it back before adding the new

    node.

Follow these steps to set up the vSAN policy for Splunk hot/warm and cold buckets.

    1. Log in to the vCenter vSphere Web Client using the administrator account.

    2. Navigate to Home > VM Storage Policies.



    Figure 10. VM storage policies

    3. Create New VM Storage Policy for Splunk hot/warm and cold buckets using

    these settings:

    Name: Splunk-Data-Policy

    Description: Used for Splunk hot/warm bucket and cold buckets

    Number of failures to tolerate: 1

    Number of disk stripes per object: 10

    Object space reservation (%): 0

    Failure tolerance method: RAID-1

    Figure 11. VM storage policy settings

Follow these steps to create a VM template and tune it according to Splunk's recommendations. We will use the template to deploy a Splunk indexer/search head and a Splunk forwarder.



    1. Log in to the vCenter client and deploy one VM with RHEL 6.7 OS.

    2. Log in to the Linux VM deployed in step 1 using the root account.

    3. Disable the firewall to allow Splunk instances on different hosts to communicate

    with each other properly:

    service iptables stop

    chkconfig iptables off

    4. Disable SELinux, so that enhanced system security does not add overhead to

    Splunk’s performance:

    vi /etc/selinux/config

    SELINUX=disabled

5. Disable Transparent Huge Pages (THP) to avoid degraded Splunk Enterprise performance on RHEL 6.X by appending the following to the kernel boot line:

vi /etc/grub.conf

transparent_hugepage=never

    6. Change the tuned profile to virtual-host in RHEL 6.X for high throughput and low

    latency storage access:

    yum install -y tuned

    chkconfig tuned on

    tuned-adm profile virtual-host

    7. Tune the kernel to optimize the network for high throughput over a 10 Gb Ethernet

    by adding the following command string to /etc/sysctl.conf:

    vi /etc/sysctl.conf

    net.ipv4.tcp_timestamps=0

    net.ipv4.tcp_sack=1

    net.core.netdev_max_backlog=250000

    net.core.rmem_max=4194304

    net.core.wmem_max=4194304

    net.core.rmem_default=4194304

    net.core.wmem_default=4194304

    net.core.optmem_max=4194304

    net.ipv4.tcp_rmem=4096 87380 4194304

    net.ipv4.tcp_wmem=4096 65536 4194304

    net.ipv4.tcp_low_latency=1

    8. Increase the maximum number of open file descriptors and processes by

    configuring ulimit to avoid the “Too Many Open Files” exception:

    vi /etc/security/limits.conf

    root - nofile 65536

    root - nproc 65536

    vi /etc/security/limits.d/90-nproc.conf

    root - nofile 65536


    root - nproc 65536

    9. Remove the NIC's MAC address runtime mapping file:

    rm -f /etc/udev/rules.d/70-persistent-net.rules

    10. Shut down the server:

    shutdown -P now

    11. Export the Open Virtualization Format (OVF) template for the Splunk VM

    template.

    Follow these steps to deploy one Splunk indexer/search head instance:

    1. Log in to the vCenter vSphere Client and deploy one VM for indexer/search head

    using the Splunk VM template.

    2. Edit the virtual machine settings as follows:

    Memory: 256 GB

    CPUs: 64

    Hard disk: 300 GB (OS Storage)

    3. Reserve all guest memory (all locked)

    4. Power on the VM and configure the IP and hostname.

    5. Download and install Splunk Enterprise 6.5.0 on the VM to serve as the combined

    indexer/search head by following these steps:

    a. Change permissions on the installation package:

    chmod 744 splunk-6.5.0-xxx-linux-2.6-x86_64.rpm

    b. Run the following command to install the Splunk Enterprise RPM in the

    default directory /opt/splunk:

    rpm -i splunk-6.5.0-xxx-linux-2.6-x86_64.rpm

    Note: To install Splunk in a different directory, use the --prefix flag.

    rpm -i --prefix=/opt/new_directory splunk-6.5.0-xxx-linux-2.6-x86_64.rpm

    6. Start Splunk Enterprise with --accept-license for the first time:

    /opt/splunk/bin/splunk start --accept-license

7. Configure the Splunk Enterprise license through the web interface at https://&lt;indexer address&gt;:8000

    a. Log in with the default credential: admin/changeme

    b. Navigate to Settings > Licensing.

    c. Click Add license.


    https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=6.5.0&product=splunk&filename=splunk-6.5.0-59c8927def0f-linux-2.6-x86_64.rpm&wget=true


    Figure 12. Adding a license

8. Set up the receiving port 9997:

/opt/splunk/bin/splunk enable listen 9997 -auth admin:changeme

    9. Remove NIC's MAC address runtime mapping file:

    rm -f /etc/udev/rules.d/70-persistent-net.rules

    10. Change allowRemoteLogin to “always” in the server.conf file:

    vi /opt/splunk/etc/system/local/server.conf

    [general]

    allowRemoteLogin=always

    11. Remove the file instance.cfg:

    rm -f /opt/splunk/etc/instance.cfg

    12. Export OVF template for indexer/search head VM template.

    13. Mount a 3 TB disk for Indexer Storage by following these steps:

    a. Edit the virtual machine settings as follows:

    Hard disk: 3 TB (Indexer Storage)

    b. Stop Splunk Enterprise:

    /opt/splunk/bin/splunk stop

    c. Make partitions:

    fdisk /dev/sdb

    d. Make file systems:

    mkfs.ext4 /dev/sdb1

    e. Mount to Splunk default database:

    mount /dev/sdb1 /opt/splunk/var/lib/splunk/defaultdb

    vi /etc/fstab

/dev/sdb1 /opt/splunk/var/lib/splunk/defaultdb ext4 defaults 1 1


    Note: To use the custom paths of hot/warm bucket and cold bucket, please refer to Use multiple

    partitions for index data in the Splunk online document “Managing Indexers and Clusters of

    Indexers.”

    f. Start Splunk Enterprise:

    /opt/splunk/bin/splunk start

    14. Log in to the vCenter vSphere Web Client.

    15. Navigate to Home > Hosts and Clusters > the indexer/search head VM >

    Manage > Policies > Edit VM Storage Policies.

    16. Configure the storage policy to Splunk-Data-Policy for Indexer Storage disks.

    Follow these steps to deploy one Splunk universal forwarder:

    1. Log in to the vCenter vSphere Client and deploy one VM for Forwarder using the

    Splunk VM template.

    2. Edit the virtual machine settings as follows:

    Memory: 4 GB

    CPUs: 4

    Hard disk: 300 GB (OS Storage)

    3. Reserve all guest memory (all locked).

    4. Power on the VM and configure the IP and hostname.

    5. Download and install Universal Forwarder 6.5.0 on the VM:

    rpm -i splunkforwarder-6.5.0-xxx-linux-2.6-x86_64.rpm

    6. Start the universal forwarder:

    /opt/splunkforwarder/bin/splunk start --accept-license

    7. Configure the data input on the forwarder:

    /opt/splunkforwarder/bin/splunk add monitor /data

    Note: The forwarder asks you to authenticate and begins monitoring the specified directory

    immediately after you log in.

    8. Restart the universal forwarder:

    /opt/splunkforwarder/bin/splunk restart

    9. Remove NIC's MAC address runtime mapping file:

    rm -f /etc/udev/rules.d/70-persistent-net.rules

    10. Remove file instance.cfg:

    rm -f /opt/splunk/etc/instance.cfg


http://docs.splunk.com/Documentation/Splunk/6.5.0/Indexer/Usemultiplepartitionsforindexdata

https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=6.5.0&product=universalforwarder&filename=splunkforwarder-6.5.0-59c8927def0f-linux-2.6-x86_64.rpm&wget=true


    11. Export OVF template for Forwarder VM template.

    12. Configure the universal forwarder to connect to the receiving indexer:

/opt/splunkforwarder/bin/splunk add forward-server deploy-indexer01.bigdata.emc.local:9997
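The add forward-server command persists its target in the forwarder's outputs.conf; the result is roughly equivalent to the following fragment. The group name shown is Splunk's default, included here as an illustration rather than taken from this solution's files.

```ini
# /opt/splunkforwarder/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = deploy-indexer01.bigdata.emc.local:9997
```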

    Follow these steps to validate the implementation of Splunk:

    1. Verify the forward-server as shown in Figure 13:

    /opt/splunkforwarder/bin/splunk list forward-server

    Figure 13. Verification of forward server

    2. Verify the indexer by following these steps:

    a. Upload data to the forwarder as shown in Figure 14.

    Figure 14. Uploading data to the forwarder

    Note: Download tutorialdata.zip from Splunk Tutorial.

    b. Search on the indexer as shown in Figure 15.

    Figure 15. Searching on the indexer

    3. Stop the forwarder and clean the index:

/opt/splunkforwarder/bin/splunk remove forward-server deploy-indexer01.bigdata.emc.local:9997

/opt/splunkforwarder/bin/splunk list forward-server

/opt/splunkforwarder/bin/splunk stop

/opt/splunkforwarder/bin/splunk clean eventdata -index main

    4. Delete the forwarder VM.


    http://docs.splunk.com/images/Tutorial/tutorialdata.zip


    Use case summary

In this use case, we added one VxRail node to an existing VxRail cluster to provide the compute and storage capacity required to deploy a Splunk single instance 50 GB/day with 90-day retention. The implementation shows that the VxRail light starter kit makes Splunk deployment easy.


    Chapter 5 Splunk Multi-instance 500 GB/day with 90-day Retention

    This chapter presents the following topics:

Overview

Implementation

Use case summary


    Overview

In this chapter, we show the Splunk multi-instance 500 GB/day with 90-day retention implementation on a four-node VxRail cluster, which supports an increased data volume and a larger number of concurrent users.

    Implementation

    Table 14 lists the process flow for the Splunk multi-instance 500 GB/day with 90-day

    retention implementation on a four-node VxRail cluster.

    Table 14. Process flow for Splunk multi-instance 500 GB/day with 90-day retention implementation

    Step Action Description

    1 Implementing VxRail cluster

    Implement a four-node VxRail cluster

    2 Setting up vSAN policy Prepare the vSAN policy that is used for Splunk disks, including hot/warm and cold buckets

    3 Deploying Splunk indexer Deploy two indexers that are based on Splunk indexer/search head template

    4 Deploying Splunk search head

    Deploy a search head that is based on Splunk indexer/search head template and configure with two Indexers

    5 Deploying Splunk admin server

    Deploy an admin server and configure the indexers and the search head into the cluster

    6 Validating implementation Validate the implementation of Splunk

    To begin the implementation, implement a four-node VxRail cluster. This is a Dell EMC

    internal process. Contact your Dell EMC or partner representative when planning to

    implement your VxRail cluster.

For the procedure, refer to the vSAN policy setup steps in Chapter 4.

    Follow these steps to deploy two indexers.

    1. Log in to the vCenter vSphere client.

    2. Use the indexer/search head VM template to deploy one indexer VM.

    3. Configure the IP and hostname.

    4. Mount a 13.9 TB disk for Indexer Storage.

    5. Log in to the vCenter vSphere Web Client.

    6. Navigate to Home > Hosts and Clusters > the indexer/search head VM >

    Manage > Policies > Edit VM Storage Policies.

    7. Configure the storage policy to Splunk-Data-Policy for Indexer Storage disks.



    8. Start Splunk Enterprise:

    /opt/splunk/bin/splunk start

    9. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername deploy-indexer0[1-2].bigdata.emc.local
/opt/splunk/bin/splunk set default-hostname deploy-indexer0[1-2].bigdata.emc.local

    10. Restart Splunk Enterprise:

    /opt/splunk/bin/splunk restart
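Steps 9 and 10 are repeated on each indexer VM with the matching ordinal in the name. As a convenience, this dry-run sketch (the `gen_indexer_names` helper is hypothetical, not part of the guide) only prints the exact naming-command pair for each indexer, so the right commands can be pasted on the right VM:

```shell
#!/usr/bin/env bash
# Illustrative dry run: print the per-indexer naming commands from step 9.
# Nothing is executed; run the printed pair on the matching indexer VM.
gen_indexer_names() {
  local n=$1                                 # number of indexers
  for i in $(seq -f "%02g" 1 "$n"); do
    local fqdn="deploy-indexer${i}.bigdata.emc.local"
    echo "/opt/splunk/bin/splunk set servername ${fqdn}"
    echo "/opt/splunk/bin/splunk set default-hostname ${fqdn}"
  done
}

gen_indexer_names 2   # two indexers in this use case
```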

    Follow these steps to deploy a Splunk search head:

    1. Log in to the vCenter vSphere client.

    2. Use the indexer/search head VM template to deploy one search head VM.

    3. Configure the IP and hostname.

    4. Start Splunk Enterprise:

    /opt/splunk/bin/splunk start

    5. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername deploy-searchhead01.bigdata.emc.local
/opt/splunk/bin/splunk set default-hostname deploy-searchhead01.bigdata.emc.local

    6. Restart Splunk Enterprise:

    /opt/splunk/bin/splunk restart

    7. Configure the indexer instances as search peers:

/opt/splunk/bin/splunk add search-server https://deploy-indexer01.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme
/opt/splunk/bin/splunk add search-server https://deploy-indexer02.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

Follow these steps to deploy one admin server. An admin server is recommended for a distributed Splunk environment. The procedure for deploying an admin server is the same as for deploying an indexer cluster master, but the cluster will not perform any index replication.

    1. Use the Splunk VM template to deploy one VM for the cluster master.

    2. Configure the IP and hostname of the VM.

    3. Edit the virtual machine settings as follows:

    Memory: 256 GB




    CPUs: 40

    4. Start Splunk Enterprise:

    /opt/splunk/bin/splunk start

    5. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername deploy-adminserver.bigdata.emc.local
/opt/splunk/bin/splunk set default-hostname deploy-adminserver.bigdata.emc.local

    6. Restart Splunk Enterprise:

    /opt/splunk/bin/splunk restart

    7. Log in to the Splunk web server using the default credential admin/changeme.

    8. Navigate to Settings > Indexer clustering.

    9. Click Enable indexer clustering, as shown in Figure 16.

    Figure 16. Enabling indexer clustering

    10. Choose Master node, as shown in Figure 17.

    Figure 17. Choose Master node

    11. Configure the Replication Factor and Search Factor:


    Replication Factor: 1

    Search Factor: 1

    Note: This causes the cluster to function purely as a coordinated set of Splunk Enterprise

    instances, without data replication. The cluster will not make any duplicate copies of the data, so

    you can keep storage size and processing overhead to a minimum.

    12. Click Enable Master Node.

Figure 18 shows the message that is displayed.

    Figure 18. Restarting Splunk after enabling the master node

    13. Click Go to Server Controls and go to the Settings page from which you can

    initiate the restart.

    Note: Do not restart the master while it is waiting for the peers to join the cluster. Otherwise, you

    must restart the peers a second time.

    14. Log in to the Splunk web server of indexers using the default credential

    admin/changeme.

    15. Navigate to Settings > Indexer clustering.

    16. Click Enable indexer clustering.

    17. Choose Peer node, as shown in Figure 19.


    Figure 19. Choosing peer node

18. Configure the Master URI and Peer replication port, as shown in Figure 20:

    Master URI: https://<admin server>:8089

    Peer replication port: 8080

    Figure 20. Configuring Master URI and peer replication port

19. Click Enable peer node.

    Figure 21 shows the message that is displayed.


    Figure 21. Restarting Splunk

    20. Click Go to Server Controls and restart the server.

    21. Repeat step 14 to step 20 on all indexer VMs.

    22. Log in to the Splunk web server of search head using the default credential

    admin/changeme.

    23. Navigate to Settings > Indexer clustering.

    24. Click Enable indexer clustering.

    25. Choose Search head node, as shown in Figure 22.


    Figure 22. Choosing search head node

26. Configure the Master URI: https://<admin server>:8089, as shown in Figure 23.

    Figure 23. Configuring the Master URI

    27. Click Enable search head node.

    Figure 24 shows the message that is displayed.


    Figure 24. Restarting Splunk from Server Controls

    28. Click Go to Server Controls and restart the server.

    29. Navigate to Settings > Indexer clustering, as shown in Figure 25.

    Figure 25. Completing the process.

    Follow these steps to validate the implementation of Splunk:

    1. Verify search peers on the search head:

    a. Log in to the web server of the search head with default credentials.

    b. Navigate to Settings > Distributed search.

    c. Click Search peers and check the two indexers as shown in Figure 26.

    Figure 26. Check the two indexers

    2. Verify that five VMs are balanced among the four ESXi servers.
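A quick network check complements the UI validation: each search peer's management port (8089) should be reachable from the search head. This bash sketch uses /dev/tcp and the hostnames from this guide's environment; `peer_up` is a hypothetical helper, and a closed port or unresolvable name is reported as UNREACHABLE:

```shell
#!/usr/bin/env bash
# Reachability probe for the indexers' Splunk management port (8089).
# Succeeds only if a TCP connection can be opened within 2 seconds.
peer_up() {
  local host=$1 port=${2:-8089}
  timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

for peer in deploy-indexer01.bigdata.emc.local \
            deploy-indexer02.bigdata.emc.local; do
  if peer_up "$peer"; then
    echo "${peer}: reachable"
  else
    echo "${peer}: UNREACHABLE"
  fi
done
```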

    Use case summary

In this use case, we implemented a four-node VxRail cluster to deploy the Splunk multi-instance 500 GB/day with 90-day retention configuration with one search head and two indexers. The implementation shows VxRail's flexibility and demonstrates how easily a Splunk deployment scales by distributing Splunk Enterprise instances across multiple virtual machines.


  • Chapter 6: Splunk Multi-instance 1000 GB/day with 90-day Retention

    48 Using Splunk Enterprise with VxRail Appliances and Isilon for Analysis of Machine Data

    Chapter 6 Splunk Multi-instance 1000 GB/day with 90-day Retention

    This chapter presents the following topics:

    Overview .............................................................................................................. 49

    Implementation ................................................................................................... 49

    Use case summary ............................................................................................. 51


    Overview

    In this chapter, we will show the Splunk multi-instance 1000 GB/day with 90-day retention

    implementation on a seven-node VxRail cluster.

    Implementation

    Table 15 lists the process flow for the Splunk multi-instance 1000 GB/day with 90-day

    retention implementation on a seven-node VxRail cluster.

Table 15. Process flow for Splunk multi-instance 1000 GB/day with 90-day retention implementation

Step | Action | Description
1 | Implementing VxRail cluster | Implement a seven-node VxRail cluster
2 | Setting up vSAN policy | Prepare the vSAN policy that is used for Splunk disks, including hot/warm and cold buckets
3 | Deploying Splunk indexers | Deploy five indexer instances based on the indexer/search head VM template
4 | Deploying Splunk search head | Deploy a search head instance based on the indexer/search head VM template
5 | Deploying Splunk admin server | Deploy an admin server and configure the indexers and the search head into the cluster
6 | Validating implementation | Validate the implementation of Splunk

To begin, implement a seven-node VxRail cluster. VxRail implementation is a Dell EMC internal process; contact your Dell EMC or partner representative when planning your VxRail cluster.

For details on setting up the vSAN policy, refer to Setting up vSAN policy in Chapter 4.

    Follow these steps to deploy five indexers.

    1. Log in to the vCenter vSphere client.

    2. Use the indexer/search head VM template to deploy one indexer VM.

    3. Configure the IP and host name.

    4. Mount a 10.8 TB disk for Indexer Storage.

    5. Log in to the vCenter vSphere Web Client.

    6. Navigate to Home > Hosts and Clusters > the indexer/search head VM >

    Manage > Policies > Edit VM Storage Policies.

    7. Configure the storage policy to Splunk-Data-Policy for Indexer Storage disks.

    8. Start Splunk Enterprise:



    /opt/splunk/bin/splunk start

    9. Configure Splunk instance name:

/opt/splunk/bin/splunk set servername deploy-indexer0[1-5].bigdata.emc.local
/opt/splunk/bin/splunk set default-hostname deploy-indexer0[1-5].bigdata.emc.local

    10. Restart Splunk Enterprise:

    /opt/splunk/bin/splunk restart

    Follow these steps to deploy a Splunk search head:

    1. Log in to the vCenter vSphere client.

    2. Use the indexer/search head VM template to deploy one search head VM.

    3. Configure the IP and hostname.

    4. Start Splunk Enterprise:

    /opt/splunk/bin/splunk start

    5. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername deploy-searchhead01.bigdata.emc.local
/opt/splunk/bin/splunk set default-hostname deploy-searchhead01.bigdata.emc.local

    6. Restart Splunk Enterprise:

    /opt/splunk/bin/splunk restart

    7. Configure the indexer instances as search peers:

/opt/splunk/bin/splunk add search-server https://deploy-indexer01.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme
/opt/splunk/bin/splunk add search-server https://deploy-indexer02.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme
/opt/splunk/bin/splunk add search-server https://deploy-indexer03.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme
/opt/splunk/bin/splunk add search-server https://deploy-indexer04.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme
/opt/splunk/bin/splunk add search-server https://deploy-indexer05.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme
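The five `add search-server` commands above differ only in the indexer ordinal, so they can be generated rather than typed. This dry-run sketch (`gen_add_peers` is a hypothetical helper; the default admin/changeme credentials match this guide) only prints the commands; review the output and run it on the search head:

```shell
#!/usr/bin/env bash
# Illustrative dry run: print one `add search-server` command per indexer.
gen_add_peers() {
  local n=$1                                 # number of indexer peers
  for i in $(seq -f "%02g" 1 "$n"); do
    echo "/opt/splunk/bin/splunk add search-server" \
         "https://deploy-indexer${i}.bigdata.emc.local:8089" \
         "-auth admin:changeme -remoteUsername admin -remotePassword changeme"
  done
}

gen_add_peers 5   # five indexers in this use case
```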




For details on deploying the Splunk admin server, refer to Deploying Splunk admin server in Chapter 5.

    Follow these steps to validate the implementation of Splunk:

    1. Verify search peers on the search head:

    a. Log in to the web server of the search head with default credentials.

    b. Navigate to Settings > Distributed search.

    c. Click Search peers and check the five indexers.

    2. Verify that VMs are balanced among the seven ESXi servers.

    Use case summary

In this use case, we implemented a seven-node VxRail cluster to deploy the Splunk multi-instance 1000 GB/day with 90-day retention configuration with one search head and five indexers. The implementation shows that the VxRail Appliance can easily scale out for business growth. For details about VxRail Appliance scalability, refer to Appendix A: VxRail Appliance Scalability.


  • Chapter 7: Splunk Multi-instance 1000 GB/day with > 90-day Retention

    52 Using Splunk Enterprise with VxRail Appliances and Isilon for Analysis of Machine Data

    Chapter 7 Splunk Multi-instance 1000 GB/day with > 90-day Retention

    This chapter presents the following topics:

    Overview .............................................................................................................. 53

    Implementation ................................................................................................... 53

    Use case summary ............................................................................................. 61


    Overview

    In this chapter, we will show the Splunk multi-instance 1000 GB/day with > 90-day

    retention implementation on a seven-node VxRail cluster with Isilon. This procedure

    creates a distributed Splunk Enterprise environment featuring both high performance and

    large capacity data retention capability, using VxRail for hot/warm buckets and Isilon for

    cold buckets.

    Implementation

    Table 16 lists the process flow for the Splunk multi-instance 1000 GB/day with > 90-day

    retention implementation on a seven-node VxRail cluster with Isilon.

Table 16. Process flow for Splunk multi-instance 1000 GB/day with > 90-day retention implementation

Step | Action | Description
1 | Implementing VxRail cluster | Implement a seven-node VxRail cluster
2 | Setting up vSAN policy | Prepare the vSAN policy that is used for Splunk disks, including hot/warm and cold buckets
3 | Implementing Isilon | Prepare Isilon for use with the VxRail cluster
4 | Configuring Isilon | Configure Isilon NFS and add Isilon storage to VxRail
5 | Deploying Splunk indexers | Deploy five indexer instances based on the indexer/search head VM template
6 | Adding Isilon storage | Add disks from the SplunkCold data store to each indexer VM for the Splunk cold bucket
7 | Deploying search head | Deploy a search head instance based on the search head VM template
8 | Deploying Splunk admin server | Deploy an admin server and configure the indexers and the search head into the cluster
9 | Deploying forwarder | Deploy a forwarder instance based on the forwarder VM template for validating the implementation
10 | Validating implementation | Validate the implementation of Splunk

To begin, implement a seven-node VxRail cluster. VxRail implementation is a Dell EMC internal process; contact your Dell EMC or partner representative when planning your VxRail cluster.

For details on setting up the vSAN policy, refer to Setting up vSAN policy in Chapter 4.

    Implementing an Isilon storage array is a Dell EMC internal process. Contact your Dell

    EMC representative when planning to set up your Isilon storage.



    Follow these steps to configure Isilon NFS for VxRail cluster:

    1. Log in to the Isilon OneFS web service using the root account.

2. Navigate to Cluster Management > Network Configuration.

3. Click More > Add Subnet of groupnet0 to create a subnet, as shown in Figure 27.

    Figure 27. Creating a subnet

4. Navigate to Access > Access Zones.

    5. Click Create an access zone to create an access zone for Splunk, as shown in

    Figure 28.



    Figure 28. Creating an access zone

6. Navigate to Cluster Management > Network Configuration.

7. Click More > Add Pool of subnet-10g to create an IP address pool, as shown in Figure 29.


Figure 29. Creating an IP address pool

    8. Navigate to Protocols > UNIX Sharing (NFS) > NFS Exports.

    9. Click Create Export to create an NFS export for Splunk, as shown in Figure 30.

    Description: NFS Share for Splunk

    Root Clients: IP addresses of all the ESXi servers in VxRail

    Directory Paths: /ifs/data/splunk


    Figure 30. Creating an NFS export

    After completing the Isilon configuration, run the following procedure on each ESXi server

    to add Isilon NFS storage to VxRail.

    1. Log in to the vCenter client using the administrator account.

    2. Navigate to Home > Inventory > Hosts and Clusters > ESXi server >

    Configuration > Storage > Datastores.

    3. Click Add Storage to add Isilon NFS storage as a data store, as shown in Figure

    31:

Storage Type: Network File System

Server: <Isilon NFS server address>

Folder: /ifs/data/splunk

Data store Name: SplunkCold

    Figure 31. Adding Isilon NFS storage as a data store
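The same NFS mount can also be scripted with esxcli instead of the vCenter wizard. This dry-run sketch only prints the command for each host; the NFS server name and the ESXi host list are placeholders you would replace with your Isilon SmartConnect address and the seven hosts in the cluster:

```shell
#!/usr/bin/env bash
# Dry run: print the esxcli command that mounts the Isilon export on each host.
NFS_SERVER="isilon-nfs"          # placeholder for the Isilon SmartConnect name/IP
EXPORT_PATH="/ifs/data/splunk"   # NFS export created above
DATASTORE="SplunkCold"           # data store name used in this guide

for esx in esxi01 esxi02 esxi03; do   # placeholder host list
  echo "[${esx}] esxcli storage nfs add --host=${NFS_SERVER}" \
       "--share=${EXPORT_PATH} --volume-name=${DATASTORE}"
done
```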


    Follow these steps to deploy five indexers.

    1. Log in to the vCenter vSphere client.

    2. Use the indexer/search head VM template to deploy one indexer VM.

    3. Configure the IP and hostname.

    4. Mount a 2.1 TB disk for Indexer Storage.

    5. Log in to the vCenter vSphere Web Client.

    6. Navigate to Home > Hosts and Clusters > the indexer/search head VM >

    Manage > Policies > Edit VM Storage Policies.

    7. Configure the storage policy to Splunk-Data-Policy for Indexer Storage disks.

    8. Start Splunk Enterprise:

    /opt/splunk/bin/splunk start

    9. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername deploy-indexer0[1-5].bigdata.emc.local
/opt/splunk/bin/splunk set default-hostname deploy-indexer0[1-5].bigdata.emc.local

    10. Restart Splunk Enterprise:

    /opt/splunk/bin/splunk restart

Follow these steps to add disks from the SplunkCold data store to each indexer VM:

    1. Log in to the vCenter client using the administrator account.

    2. Click Indexer VM and Edit virtual machine settings.

    3. Click Add Hardware to run the wizard:

    Device Type: Hard Disk

    Disk: Create a new virtual disk

    Capacity/Disk Size: 10.8 TB

    Location/Specify a data store or data store cluster: SplunkCold

    Follow these steps to prepare Splunk cold buckets using Isilon disks on VMs.

    1. Log in to the indexer using SSH.

    2. Make a partition on the newly provisioned Isilon virtual disk:

    fdisk /dev/sdc

    3. Make a file system on the partition:

    mkfs.ext4 /dev/sdc1

    4. Mount the Isilon virtual disk to a separate mount point.

    mkdir -p /data/isilon



    mount /dev/sdc1 /data/isilon

    vi /etc/fstab

    /dev/sdc1 /data/isilon ext4 defaults 1 1
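Steps 2 through 4 can be collected into one non-interactive script; parted stands in for the interactive fdisk session. As a safety measure this sketch defaults to a dry run that only prints the commands; setting DRY_RUN=0 on an indexer VM would execute them and repartition /dev/sdc (destructive):

```shell
#!/usr/bin/env bash
# Dry-run wrapper: prefix commands with "+" instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "+ $*"; else "$@"; fi; }

run parted -s /dev/sdc mklabel gpt mkpart primary ext4 0% 100%   # partition
run mkfs.ext4 /dev/sdc1                                          # file system
run mkdir -p /data/isilon                                        # mount point
run mount /dev/sdc1 /data/isilon                                 # mount
run sh -c "echo '/dev/sdc1 /data/isilon ext4 defaults 1 1' >> /etc/fstab"
```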

5. Create storage volume definitions for hot/warm and cold data to maximize storage utilization of your hot/warm tier before rolling to cold. Modify indexes.conf and set maxVolumeDataSizeMB to 80 percent of the total volume size. This reserves 20 percent of free space to ensure optimal file system performance, and data rolls from hot/warm to cold when maxVolumeDataSizeMB is reached.
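The 80 percent rule translates into simple arithmetic. This illustrative helper (`max_volume_mb` is hypothetical, not part of the guide) takes a volume size in GB and returns the corresponding maxVolumeDataSizeMB value:

```shell
#!/usr/bin/env bash
# maxVolumeDataSizeMB = 80% of the volume's total size, converted GB -> MB.
max_volume_mb() {
  local size_gb=$1
  echo $(( size_gb * 1024 * 80 / 100 ))
}

max_volume_mb 2048   # value to set for a 2 TB hot/warm volume
```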

    vi /opt/splunk/etc/system/local/indexes.conf

    #######################################################

    # Volume for hot/warm buckets

    #######################################################

    [volume:primary]

    path = /opt/splunk/var/lib/splunk

    maxVolumeDataSizeMB = 1800000

    #######################################################

    # Volume for cold buckets

    #############

