© 2015 CENTRIFY CORPORATION. ALL RIGHTS RESERVED. PAGE 1

Centrify Identity and Access Management for Hortonworks Integration Guide

Abstract

Centrify Server Suite is an enterprise-class solution that secures Hortonworks Data Platform by leveraging an organization’s existing Active Directory infrastructure to deliver identity, access control, privilege management and user-level auditing.


CENTRIFY INTEGRATION GUIDE IDENTITY AND ACCESS MANAGEMENT FOR HORTONWORKS


Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, email addresses, logos, people, places and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Centrify Corporation.

Centrify may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Centrify, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

© 2015 Centrify Corporation. All rights reserved.

Centrify, DirectControl and DirectAudit are registered trademarks and Centrify Suite, DirectAuthorize, DirectSecure and DirectManage are trademarks of Centrify Corporation in the United States and/or other countries. Microsoft, Active Directory, Windows, Windows NT, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

The names of actual companies and products mentioned herein may be the trademarks of their respective owners.


Contents

Overview
Planning for Active Directory Integration
Cluster Creation Pre-Requisites
Preparing Active Directory
Setup Centrify Zones and setup Roles for Linux login
Setup Hortonworks Cluster with Centrify
  Setup the Virtual Machines
  Install Centrify on each node in the cluster
  Install Hortonworks on each node in the cluster
  Enable Security
Verify Proper Operation
  Verify Active Directory managed Service Accounts
  Finishing the Security Wizard and Testing Services
  Setting Long Term Account Maintenance
  Zone enable Hadoop Accounts
  Validating Your Cluster’s Security
Conclusion
How to Contact Centrify


Overview

Centrify Server Suite is an enterprise-class solution that secures even the most complex Hadoop environments by leveraging an organization’s existing Active Directory infrastructure to deliver access control, privilege management and user-level auditing.

Centrify Server Suite secures the industry's broadest range of mission-critical servers from identity-related insider risks and outsider attacks, making security and regulatory compliance repeatable and sustainable. The solution leverages existing Active Directory infrastructure to centrally manage authentication, access controls, privileged identities, policy enforcement and compliance for on-premises and cloud resources.

Centrify Server Suite provides Identity, Access and Privilege Management for the Hortonworks Data Platform:

- Simplifying AD integration for Hortonworks to run in secure mode

- Automating service account credential management

- Simplifying access with AD-based user single sign-on authentication

- Ensuring regulatory compliance with least privilege and auditing

- Developer SDKs for secure client application access to Hadoop

NOTE: This document provides configuration guidance for managing multiple Hortonworks clusters within an Active Directory environment. The key to supporting multiple clusters in Active Directory is the addition of a cluster prefix to the associated Hortonworks Kerberos principals, or Active Directory account names. Without the cluster prefix, the Kerberos principals for the accounts in each cluster would have the same User Principal Name (UPN), and these account names (UPNs) must be unique within the Active Directory domain.

Planning for Active Directory Integration

Hadoop’s security implementation uses Kerberos, which is built into Active Directory. As a result, all principals are user principals, and there will be an Active Directory account for each service that requires a keytab. From an implementation perspective, a 2-node cluster with 6 unique distributed services will require 12 Active Directory accounts, each of which will require a unique Kerberos keytab file.

Centrify provides a centralized access control and privilege management solution built on top of Active Directory that simply requires the Centrify agent software to be installed on every node within the cluster, while administration is performed through Microsoft Management Consoles on an administrator’s Windows computer.


Cluster Creation Pre-Requisites

There are several common requirements: you must have an Active Directory environment running, a Windows workstation joined to the domain where you can run the administrative consoles, and several Linux systems on which to install Hortonworks.

Centrify software

If you don’t already have access to Centrify software, you should request a free trial of Centrify Server Suite from http://www.centrify.com/lp/server-suite-free-trial/; just specify Hadoop in the Comments field.

After you register for a free trial and set up your Centrify account at https://www.centrify.com/account/register.asp, you can find the Centrify documentation online at http://community.centrify.com/t5/custom/page/page-id/Centrify-Documentation.

Naming convention

You should outline a naming convention for all Hadoop components that will reside in AD. Ideally the names will identify the cluster, but keep in mind that the Active Directory sAMAccountName has a maximum length of 20 characters and must be unique across the Active Directory environment.
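A quick way to catch names that would violate the 20-character sAMAccountName limit before any accounts are created is a small shell check. This is a sketch; the prefix and service names below are illustrative, not a required list:

```shell
#!/bin/sh
# Check proposed AD account names against the 20-character
# sAMAccountName limit. Prints OK or TOO LONG for each name.
check_sam_name() {
    name="$1"
    if [ "${#name}" -le 20 ]; then
        echo "OK: $name"
    else
        echo "TOO LONG: $name (${#name} chars)"
    fi
}

# Example: a cluster prefix combined with per-service account names.
PREFIX="hwc9-"
for svc in ambari-qa hdfs yarn mapred zookeeper nagios; do
    check_sam_name "${PREFIX}${svc}"
done
```

Running this for each candidate prefix makes it easy to see how many characters remain for the service portion of each name.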

- You will need an Active Directory OU for managing all your Hadoop clusters, such as OU=Hadoop. You may have to ask your Active Directory team to create this OU for you; your Active Directory Domain Admin will need to delegate administrative rights over this OU to your technical lead or Hadoop admin, who should be given full control of it.

- Each cluster should have its own OU in order to independently manage its nodes and service accounts. The OU name should reflect the name of the cluster, e.g. HWC9. This is usually created within an OU that was created by the AD staff and delegated to you, so that you can create an OU for each Hortonworks cluster and manage the accounts and policies yourself.

- Centrify uses Zones as a logical container for storing the Linux access and privilege permissions for the selected Active Directory users who you authorize to access your Hortonworks cluster. You will set up a unique Zone for each Hortonworks cluster you deploy in order to ensure separation of duties and enable delegated administration. The Centrify Zone containing the Linux identity, access and privilege information is stored within the OU that was created for you in the steps above. Use the same name for the child zone and the cluster prefix, e.g. HWC9.

Servers and Hortonworks software

Additionally, you will need the following:

- At least 2 Linux systems that are compatible with Hortonworks to use for the Hadoop nodes. Ideally the Ganglia and Nagios monitoring services are set up.

- Access to Hortonworks Data Platform software.


- Preferably, the organization runs its own Hadoop repository (this speeds up setup).

Preparing Active Directory

Create the Active Directory OUs (an Organizational Unit is just a container for AD objects). For this task you may need your Active Directory administrator to perform the first step and grant you delegated permission to manage the top-level OU:

- Create the Hadoop OU; e.g. OU=Hadoop,DC=Company,DC=com

- Then for each cluster create another OU under OU=Hadoop; e.g. OU=HWC9,OU=Hadoop,DC=Company,DC=com

- Next, to make it easier to manage the nodes in the cluster separately from the service accounts, you may also want to create a pair of child OUs, OU=Nodes and OU=Users


Setup Centrify Zones and setup Roles for Linux login

Start with the Centrify Server Suite Quick Start Guide to install the Management Consoles and to set up your Centrify Zone with the appropriate Roles that grant AD users login rights to the Linux systems you will join to Active Directory in the next step.

- Run the appropriate setup program from the Management ISO for Windows 32-bit or 64-bit on a Windows administrator’s workstation.

  The setup program simply copies the necessary files to the local Windows computer, so no special permissions are required to run it other than permission to install files on the local computer. Follow the prompts displayed to select the type of suite to install and which components to install.

- Open Access Manager to start the Setup Wizard and create the containers for Licenses and Zones. You can accept the default locations or create a Centrify organizational unit for the containers.

- In Access Manager, create a new zone with the default options. For example, create a new zone named Hadoop.

- In Access Manager, add Active Directory users to the new zone. These are the users you will grant permission to log in to the Hadoop cluster.
  - Select the new Hadoop zone.
  - Right-click, then select Add User to search for and select existing Active Directory users.
  - Select Define user UNIX profile and deselect assign roles.
  - Accept the defaults for all fields.

- Create a child zone.
  - Select the Hadoop zone.
  - Right-click, then select Create Child Zone.
  - Type a name for the zone, for example HWC9, and an optional description, then click Next and Finish to create the new child zone.


- Assign a role to the users you added to the Hadoop zone.

  User profiles are inherited by child zones, so the users you added to Hadoop automatically have a profile in HWC9. To log in to a machine, a user requires both a profile and a role assignment. DirectManage provides a default UNIX Login role that you can assign to enable users to log in.
  - Expand Child Zones, HWC9, and Authorization.
  - Select Role Assignments, right-click, then click Assign Role.
  - Select the UNIX Login role from the results and click OK.
  - Click Add AD Account, then search for one of the Active Directory users you added to the Hadoop zone. Select this user and click OK.

Setup Hortonworks Cluster with Centrify

Setup the Virtual Machines

- Provision 2 new CentOS 6.x virtual machines:
  - c9n1.centrifyimage.vms (192.168.1.46), 2 processors, 8 GB RAM, 1 HD (40 GB)
  - c9n2.centrifyimage.vms (192.168.1.47), 2 processors, 8 GB RAM, 1 HD (40 GB)

- Create the corresponding DNS A records in the appropriate DNS zone; in this case we are using the centrifyimage.vms DNS zone. Make sure to set up the proper reverse DNS entries as well.

- On each Hadoop node:
  - Perform a yum update
  - Disable and stop the iptables service (chkconfig iptables off && service iptables stop)
  - Enable the ntpd service (chkconfig ntpd on)


  - Disable SELinux (edit /etc/selinux/config)
  - Set the directive to "enabled=0" in /etc/yum/pluginconf.d/refresh-packagekit.conf
  - Create the /etc/security/keytabs directory (mkdir -p /etc/security/keytabs)

- On c9n1, or whichever is your first node:
  - Run the ssh-keygen command and copy the contents of id_rsa.pub to /root/.ssh/authorized_keys.
  - Attempt an ssh connection as root to the second node, c9n2.centrifyimage.vms
  - Copy the /root/.ssh/authorized_keys file to c9n2:/root/.ssh
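The per-node preparation steps above can be collected into one script. This is a sketch for CentOS 6 (the chkconfig/service forms used above); review it before running, since it disables the firewall and SELinux, and the function is deliberately not invoked automatically:

```shell
#!/bin/sh
# Per-node preparation for a Hadoop node, as described above.
# Run as root on each node. CentOS 6 style service management.

KEYTAB_DIR=/etc/security/keytabs

prep_node() {
    yum -y update                                      # bring the node up to date
    chkconfig iptables off && service iptables stop    # disable the firewall
    chkconfig ntpd on && service ntpd start            # clock sync matters for Kerberos
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    sed -i 's/^enabled=.*/enabled=0/' /etc/yum/pluginconf.d/refresh-packagekit.conf
    mkdir -p "$KEYTAB_DIR"                             # keytabs will be placed here later
}

# prep_node   # uncomment to run on the node
```

A reboot after disabling SELinux ensures the setting is fully in effect before the Hadoop installation begins.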

Install Centrify on each node in the cluster

Install the Centrify Agent and join the nodes to Active Directory.

- After downloading the Centrify agents disk image, copy the appropriate tgz file from the ISO to the nodes, unpack the file, and run install.sh.

- install.sh will ask several questions if you run it interactively, which is suggested this first time; however, the installation can be automated with a custom config file for silent installation. Just install the Standard Edition of Centrify Suite and do not join Active Directory; we will do that after making a few changes to the configuration files.

- Edit the /etc/centrifydc/centrifydc.conf file, uncomment the adclient.krb5.service.principals line, and remove the http principal.

  Note: this step is required or the cluster will not start. Centrify should not create the servicePrincipalName for the http service, since Hortonworks will need to do this later.

- Join your zone (adjoin -z zone -c "container" -V -u user domainname):

  adjoin -z HWC9 -c ou=hwc9,ou=Hadoop,dc=company,dc=com -V -u <your AD loginname> company.com

- Optional: Install the Centrify Audit agent and enable auditing (rpm -Uvh centrifyda-<version>)

- The computer should join AD, and then you will need to reboot. At this point, you should be able to log in with the AD userid and password of the user you granted login rights to previously.
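The install-and-join sequence above can be sketched as a per-node script. The zone and container come from the guide's examples; the tarball name and AD user are placeholders, and the actual commands are left commented so the script can be reviewed first:

```shell
#!/bin/sh
# Sketch of the per-node Centrify agent install and AD join described
# above. ZONE/CONTAINER follow the guide; ADUSER and the tarball name
# are placeholders.

ZONE=HWC9
CONTAINER="ou=hwc9,ou=Hadoop,dc=company,dc=com"
ADUSER=youradminuser
DOMAIN=company.com

join_cmd() {
    # Build the adjoin command used after the agent is installed and
    # centrifydc.conf has been edited (http principal removed).
    echo "adjoin -z $ZONE -c \"$CONTAINER\" -V -u $ADUSER $DOMAIN"
}

# tar xzf centrify-suite-<version>.tgz && ./install.sh   # interactive install
# ...edit /etc/centrifydc/centrifydc.conf as described above...
# eval "$(join_cmd)"                                     # then reboot and test an AD login
```

Printing the command with join_cmd before running it makes it easy to verify the zone, container and credentials for each node.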

Install Hortonworks on each node in the cluster

Hortonworks will be installed on the first node in the cluster, in this case that is c9n1.

! On c9n1, login as root.


- Add the Hortonworks repo:

  wget http://public-repo-1.hortonworks.com/ambari/centos5/1.x/GA/ambari.repo

  then copy the ambari.repo file to /etc/yum.repos.d

  Note: using the centos5 repo is OK; the CentOS 6 repo seems to be down at the time of this writing.

- Install the epel repository (yum install epel-release)

- Confirm the repos (yum repolist)

- Install the Ambari server (yum install ambari-server)

  The server install will prompt you for dependencies and to accept the Oracle JDK EULA.

- Run the ambari-server setup program and accept all the defaults.

- Start the Ambari server (ambari-server start)

- On the Welcome page, name your cluster (e.g. HWC9)

- On the Select Stack page, select HDP 2.1

- On the Install Options page > Target Hosts, enter the FQDNs of the Hadoop servers, and in the host registration section, paste the contents of c9n1:/root/.ssh/id_rsa


- On the Confirm Hosts page, the installation of the Ambari agents will start.

- On the Choose Services page, uncheck every service but the following. This limits the cluster so it does not consume all of the resources of your machine. (This is especially helpful if you are running on VMs on a laptop.)
  - HDFS
  - Nagios
  - Ganglia


  - Zookeeper
  - MapReduce2 / YARN

- On the Assign Masters page, as well as the Assign Slaves and Clients page, accept the defaults.

- On the Customize Services page, set up a password and email for the Nagios component.

- Also on the Customize Services page, select Misc to add a cluster prefix ("hwc9-", to match the name of your cluster entered earlier) to all users and groups.

  Note: This step allows for multiple clusters within Active Directory and must be done before the Hadoop software is deployed.

- Select Accept for the changes to the various services.


- Press Next, and Next again on the Review page. This takes you to the Install, Start and Test progress page.

- On the Summary page, press Complete. At this point you will be taken to the Ambari Dashboard. The startup of some of the services may have timed out, so you may have to stop all services and then restart them all.


Enable Security

The next step is to configure the cluster to operate in secure mode, leveraging the Kerberos environment enabled by the Centrify agent on each of the nodes.

- In Ambari, go to Admin > Security and click Enable Security

- On the Get Started page, press Next

- In Configure Services > General, specify the realm name, i.e. the Active Directory domain name; realms must be all uppercase (CENTRIFYIMAGE.VMS).

  Note: make sure to use the cluster prefix "hwc9-" on the user principal names for both hdfs and ambari-qa


- On the Create Principals and Keytabs page, click the Download CSV button and export it to Excel.

  Review the results and you’ll see that there are reusable principals like ambari-qa and hdfs as well as host-specific principals like HTTP. You will return to the wizard once the keytabs are generated.

Service Account Creation in Active Directory

Centrify Server Suite 2015 will provide tools that automate the creation and distribution of these service accounts. If you are using Centrify Server Suite 2014.1 or earlier, use the following instructions.

- Open an SSH session to both servers, either as an AD user who can elevate to root or as root.

- On both servers, set the proper ownership and permissions on the /etc/security/keytabs folder:

  chown root:hwc9-hadoop /etc/security/keytabs
  chmod 750 /etc/security/keytabs


- On c9n1, use Centrify’s adkeytab command to create the Kerberos keytabs and service (headless) accounts for ambari-qa, hdfs, and hbase:

  adkeytab --new --upn hwc9-ambari-qa@CENTRIFYIMAGE.VMS --keytab /etc/security/keytabs/smokeuser.headless.keytab -c ou=Users,ou=hwc9,ou=unix --ignore -V hwc9-ambari-qa

  adkeytab --new --upn hwc9-hdfs@CENTRIFYIMAGE.VMS --keytab /etc/security/keytabs/hdfs.headless.keytab -c ou=Users,ou=hwc9,ou=unix --ignore -V hwc9-hdfs

- Copy the headless keytabs for ambari-qa, hdfs, and hbase to c9n2:/etc/security/keytabs via scp

- On nodes c9n1 and c9n2, use adkeytab to create the keytabs for the node-specific principals:

  adkeytab --new -P HTTP/c9n1.centrifyimage.vms@CENTRIFYIMAGE.VMS -U HTTP/c9n1.centrifyimage.vms@CENTRIFYIMAGE.VMS --keytab /etc/security/keytabs/spnego.service.keytab -c ou=Users,ou=hwc9,ou=unix --ignore -V c9n1-HTTP

  adkeytab --new -P nn/c9n1.centrifyimage.vms@CENTRIFYIMAGE.VMS -U nn/c9n1.centrifyimage.vms@CENTRIFYIMAGE.VMS --keytab /etc/security/keytabs/nn.service.keytab -c ou=Users,ou=hwc9,ou=unix --ignore -V c9n1-nn

  adkeytab --new -P HTTP/c9n2.centrifyimage.vms@CENTRIFYIMAGE.VMS -U HTTP/c9n2.centrifyimage.vms@CENTRIFYIMAGE.VMS --keytab /etc/security/keytabs/spnego.service.keytab -c ou=Users,ou=hwc9,ou=unix --ignore -V c9n2-HTTP

  adkeytab --new -P nn/c9n2.centrifyimage.vms@CENTRIFYIMAGE.VMS -U nn/c9n2.centrifyimage.vms@CENTRIFYIMAGE.VMS --keytab /etc/security/keytabs/nn.service.keytab -c ou=Users,ou=hwc9,ou=unix --ignore -V c9n2-nn

- Set the proper ownership and permissions for the keytab files on both hosts with the following script:

  cd /etc/security/keytabs
  chown hwc9-hdfs:hwc9-hadoop dn.service.keytab
  chown hwc9-falcon:hwc9-hadoop falcon.service.keytab
  chown hwc9-hbase:hwc9-hadoop hbase.*
  chown hwc9-hdfs:hwc9-hadoop hdfs.headless.keytab
  chown hwc9-hive:hwc9-hadoop hive.service.keytab
  chown hwc9-mapred:hwc9-hadoop jhs.service.keytab
  chown hwc9-nagios:hwc9-hadoop nagios.service.keytab
  chown hwc9-yarn:hwc9-hadoop nm.service.keytab
  chown hwc9-hdfs:hwc9-hadoop nn.service.keytab
  chown hwc9-oozie:hwc9-hadoop oozie.service.keytab
  chown hwc9-yarn:hwc9-hadoop rm.service.keytab
  chown hwc9-ambari-qa:hwc9-hadoop smokeuser.headless.keytab
  chown root:hwc9-hadoop spnego.service.keytab
  chown hwc9-storm:hwc9-hadoop storm.service.keytab
  chown hwc9-zookeeper:hwc9-hadoop zk.service.keytab
  chmod 400 *
  chmod 440 *headless*
  chmod 440 spnego*

- On each individual host, create the host-specific principals; e.g. for the zookeeper principal:

  - Create a Kerberos ticket for the AD user with privilege to create the keytabs:

    kinit
    adkeytab --new -P zookeeper/c9n1.centrifyimage.vms@CENTRIFYIMAGE.VMS --keytab /etc/security/keytabs/zk.service.keytab -c ou=Users,ou=hwc9,ou=unix --ignore -V c9n1-zookeeper

  Running the above command produces the following output:

  ADKeyTab version: CentrifyDC 5.1.3-469
  Options
  -------
  use machine ccache: no
  domain: centrifyimage.vms
  server: null
  gc: null
  user: null
  container: ou=Users,ou=hwc9,ou=unix


  account: c9n1-zookeeper
  trust: no
  des: no
  Attempting bind to centrifyimage.vms site:Demo-Site server:dc.centrifyimage.vms: ccache:FILE:/tmp/krb5cc_1627391058
  Bind successful to server dc.centrifyimage.vms
  Attempting bind to GC domain:centrifyimage.vms site:Demo-Site gcserver:dc.centrifyimage.vms ccache:FILE:/tmp/krb5cc_1627391058
  Bound to GC server:dc.centrifyimage.vms domain:CENTRIFYIMAGE.VMS
  Searching for AD Object: filter = (samAccountName=c9n1-zookeeper), root = DC=centrifyimage,DC=vms
  Searching for AD Object: filter = (samAccountName=c9n1-zookeeper$), root = DC=centrifyimage,DC=vms
  AD Object not found.
  Building Container DN from OU=USERS,OU=HWC9,OU=UNIX
  Account 'CN=c9n1-zookeeper,OU=USERS,OU=HWC9,OU=UNIX,DC=centrifyimage,DC=vms' does not exist
  Search for account in GC: filter = (samAccountName=c9n1-zookeeper), root = DC=CENTRIFYIMAGE,DC=VMS
  SAM name 'c9n1-zookeeper' not found in GC
  Problem to create account; try again with no password required
  Searching for AD Object: filter = (samAccountName=c9n1-zookeeper), root = DC=centrifyimage,DC=vms
  AD Object found: CN=c9n1-zookeeper,OU=Users,OU=HWC9,OU=Unix,DC=centrifyimage,DC=vms
  Key Version = 1
  Adding managed account keys to configuration file: c9n1-zookeeper
  Changing account 'c9n1-zookeeper' password with user '[email protected]' credentials.
  Searching for AD Object: filter = (samAccountName=c9n1-zookeeper), root = DC=centrifyimage,DC=vms
  AD Object found: CN=c9n1-zookeeper,OU=Users,OU=HWC9,OU=Unix,DC=centrifyimage,DC=vms
  Key Version = 2
  Success: New Account: c9n1-zookeeper

- Repeat for all principals that correspond to each host.

Note: Centrify Server Suite 2015 will provide tools that automate the creation and distribution of these service accounts.
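Until then, the repetitive per-host adkeytab commands can be generated from the CSV you downloaded from the Ambari wizard. This is a hedged sketch for the host-specific principals only: the column layout (host, principal, keytab path) is an assumption about your export, so adjust the field numbers before use.

```shell
#!/bin/sh
# Sketch: turn a principals CSV (assumed columns: host,principal,keytab)
# into adkeytab commands for host-specific principals. The account name
# is derived as <shorthostname>-<service>, matching the guide's examples.

gen_adkeytab_cmds() {
    # $1 = CSV file; prints one adkeytab command per line
    awk -F, 'NR > 1 {
        host = $1; princ = $2; keytab = $3
        split(princ, p, "/")                          # service/host@REALM -> service
        acct = substr(host, 1, index(host, ".") - 1) "-" p[1]
        printf "adkeytab --new -P %s -U %s --keytab %s -c ou=Users,ou=hwc9,ou=unix --ignore -V %s\n",
               princ, princ, keytab, acct
    }' "$1"
}

# Usage: gen_adkeytab_cmds principals.csv   # review output before piping to sh
```

Printing the commands first, instead of executing them directly, lets you confirm the derived account names fit the 20-character sAMAccountName limit discussed earlier.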

Verify Proper Operation

Verify Active Directory managed Service Accounts

In ADUC, browse to the Hadoop/HWC9 OU; you should see your service accounts in AD.

On each host, you should see the keytabs with the appropriate permissions:
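A quick way to confirm this from the shell is to list each expected keytab with klist. The file list below mirrors the ownership script earlier in this guide; trim it to the services your cluster actually runs:

```shell
#!/bin/sh
# Check that the expected keytabs exist and are readable by Kerberos
# tooling. File names mirror the ownership script above.

KEYTABS="hdfs.headless.keytab smokeuser.headless.keytab \
nn.service.keytab spnego.service.keytab zk.service.keytab"

check_keytabs() {
    dir="$1"
    for kt in $KEYTABS; do
        if [ -f "$dir/$kt" ]; then
            klist -kt "$dir/$kt"       # show principals and key version numbers
        else
            echo "MISSING: $dir/$kt"
        fi
    done
}

# check_keytabs /etc/security/keytabs
```

The kvno shown by klist -kt should match the Key Version reported by adkeytab when the account was created.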


Now you’re ready to return to the Ambari Security Wizard.

Finishing the Security Wizard and Testing Services

In the Create Principals and Keytabs page, click Apply.

At this point, Ambari will reconfigure all the services to use Kerberos for authentication.

Once complete, press “Done” and you’ll be returned to the Ambari Dashboard.

Note: Depending on how your cluster performs, you may see a “Failed” message on the page. Don’t worry; this may just mean that you have to start some services manually. For example, in my environment, I had to start the NameNode and Nagios services manually.


Setting Long Term Account Maintenance

Centrify DirectControl automatically maintains the keytab entries that are part of the machine account when adclient changes the machine password every 28 days (the default). Other keytabs, such as those created for Hadoop, are NOT automatically refreshed. A script can issue adkeytab -C to update the keytab for a specified account: the user sets a new password in Active Directory, DirectControl updates the account, and a new key version number (kvno) is obtained.

The upshot of the above is that the accounts (Hadoop principals) should have their passwords set to never expire, and accounts not used for management should be locked.
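Where a periodic refresh is wanted anyway, the adkeytab -C approach mentioned above can be sketched as a small script. The account/keytab pairs are examples, and the --keytab flag used with -C is an assumption; verify the exact flags against the adkeytab man page for your installed version before scheduling this:

```shell
#!/bin/sh
# Sketch: print adkeytab -C refresh commands for Hadoop service
# accounts, as described above. Review the output, then run it as a
# user with rights over the accounts (e.g. from cron).

refresh_cmd() {
    # $1 = AD account name, $2 = keytab path
    echo "adkeytab -C --keytab $2 $1"
}

# Example account/keytab pairs:
while read -r acct keytab; do
    refresh_cmd "$acct" "$keytab"
done <<'EOF'
hwc9-hdfs /etc/security/keytabs/hdfs.headless.keytab
hwc9-ambari-qa /etc/security/keytabs/smokeuser.headless.keytab
EOF
```

Refreshing headless keytabs this way must be coordinated across nodes, since each node holds its own copy of the headless keytab files.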


Zone-Enabling Hadoop Accounts

The Ambari installer automatically creates a number of local accounts on each node with the cluster prefix "hwc9-". In addition, the RPM installer creates further accounts without the cluster prefix (see below).

postgres:x:26:26:PostgreSQL Server:/var/lib/pgsql:/bin/bash
hwc9-ambari-qa:x:1001:501::/home/hwc9-ambari-qa:/bin/bash
hwc9-nagios:x:1002:502::/home/hwc9-nagios:/bin/bash
hwc9-yarn:x:1003:501::/home/hwc9-yarn:/bin/bash
hwc9-nobody:x:1004:501::/home/hwc9-nobody:/bin/bash
hwc9-hdfs:x:1005:501::/home/hwc9-hdfs:/bin/bash
hwc9-mapred:x:1006:501::/home/hwc9-mapred:/bin/bash
hwc9-zookeeper:x:1007:501::/home/hwc9-zookeeper:/bin/bash
hwc9-tez:x:1008:501::/home/hwc9-tez:/bin/bash
rrdcached:x:496:493:rrdcached:/var/rrdtool/rrdcached:/sbin/nologin
zookeeper:x:495:492:ZooKeeper:/var/run/zookeeper:/bin/bash
hdfs:x:494:491:Hadoop HDFS:/var/lib/hadoop-hdfs:/bin/bash

After zone-enabling all of the accounts above that carry the cluster prefix, the local accounts can be removed from all nodes in the cluster.
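Before deleting anything, it can help to enumerate the prefixed local accounts on a node. This is a minimal sketch, assuming the "hwc9-" prefix used in this example; the PASSWD_FILE override is only there so the scan can be pointed at a copy of /etc/passwd.

```shell
# List local accounts whose name starts with the cluster prefix, printing
# "name uid" for each; these are the candidates for removal once the
# matching Active Directory accounts are zone-enabled.
PREFIX="hwc9-"
PASSWD_FILE="${PASSWD_FILE:-/etc/passwd}"

awk -F: -v p="$PREFIX" 'index($1, p) == 1 { print $1, $3 }' "$PASSWD_FILE"
```

On the node shown above, this would print hwc9-ambari-qa 1001 through hwc9-tez 1008; each local entry could then be removed with userdel once its AD counterpart resolves.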


In the example above, the cluster-specific accounts hwc9-nagios, hwc9-yarn, and so on are linked to normal AD accounts (nagios, yarn), except for the headless accounts. The headless accounts are created during keytab creation with a specific UPN and are cluster-wide. However, the headless accounts must still be zone-enabled.
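One way to confirm that zone enablement took effect on a node is to check that each account now resolves through NSS. The sketch below uses the example account names from this guide; getent is standard, and Centrify's adquery user command offers an AD-specific alternative check.

```shell
# Verify that each (formerly local) account is now visible on the node,
# i.e. it resolves through the zone after the local entry is removed.
for u in hwc9-hdfs hwc9-yarn hwc9-mapred; do
    if getent passwd "$u" >/dev/null; then
        echo "$u: resolves"
    else
        echo "$u: NOT visible - check zone enablement" >&2
    fi
done
```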

Validating Your Cluster’s Security

First, you should verify that users cannot access the cluster without having logged in to Active Directory to obtain a Kerberos credential, which is now required to gain access to the cluster. In the following session, you will see that the initial Hadoop command and MapReduce job fail because the user dwirth does not have a valid Kerberos ticket.

Using username "dwirth".
CentOS release 6.5 (Final)
Kernel 2.6.32-431.29.2.el6.x86_64 on an x86_64
Last login: Fri Oct 24 14:23:33 2014 from dc.centrifyimage.vms
[dwirth@c9n2 ~]$ whoami
dwirth
[dwirth@c9n2 ~]$ id
uid=1627391058(dwirth) gid=1627391058(dwirth) groups=1627391058(dwirth),650(uni-adm)
[dwirth@c9n2 ~]$ hadoop fs -ls /user
14/10/24 14:24:59 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "c9n2.centrifyimage.vms/192.168.1.42"; destination host is: "c9n1.centrifyimage.vms":8020;
[dwirth@c9n2 ~]$ yarn jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 16 1000
Number of Maps  = 16
Samples per Map = 1000
14/10/24 14:25:17 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "c9n2.centrifyimage.vms/192.168.1.42"; destination host is: "c9n1.centrifyimage.vms":8020;


        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
        at org.apache.hadoop.ipc.Client.call(Client.java:1414)
        at org.apache.hadoop.ipc.Client.call(Client.java:1363)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
        at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:699)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1762)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
        at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:278)
        at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:145)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:677)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:640)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:724)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)


        at org.apache.hadoop.ipc.Client.call(Client.java:1381)
        ... 33 more
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
        at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:411)
        at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:550)
        at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:367)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:716)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:712)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:711)
        ... 36 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
        ... 45 more

Now that the Hortonworks cluster uses Centrify for Active Directory-based authentication, the user Diana Worth can log in with her Active Directory credentials directly at the console prompt, or use a Kerberized SSH client such as Centrify's version of PuTTY on her Windows computer to get single sign-on to the cluster. Once logged in, she holds Kerberos credentials from Active Directory and can run a Hadoop job, such as the example below that computes the value of Pi. Because the cluster is now running in secure mode, users without Kerberos credentials will not be able to successfully submit a job to the cluster.

[dwirth@c9n2 ~]$ kinit
Password for [email protected]:
[dwirth@c9n2 ~]$ hadoop fs -ls /user
Found 2 items
drwxr-xr-x   - dwirth         dwirth    0 2014-10-24 12:38 /user/dwirth
drwxrwx---   - hwc9-ambari-qa hwc9-hdfs 0 2014-10-24 12:19 /user/hwc9-ambari-qa
[dwirth@c9n2 ~]$ yarn jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 16 1000
Number of Maps  = 16
Samples per Map = 1000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6


Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Starting Job
14/10/24 14:25:48 INFO client.RMProxy: Connecting to ResourceManager at c9n2.centrifyimage.vms/192.168.1.42:8050
14/10/24 14:25:48 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 6 for dwirth on 192.168.1.41:8020
14/10/24 14:25:48 INFO security.TokenCache: Got dt for hdfs://c9n1.centrifyimage.vms:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 192.168.1.41:8020, Ident: (HDFS_DELEGATION_TOKEN token 6 for dwirth)
14/10/24 14:25:49 INFO input.FileInputFormat: Total input paths to process : 16
14/10/24 14:25:49 INFO mapreduce.JobSubmitter: number of splits:16
14/10/24 14:25:49 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1414167438309_0003
14/10/24 14:25:49 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: 192.168.1.41:8020, Ident: (HDFS_DELEGATION_TOKEN token 6 for dwirth)
14/10/24 14:25:50 INFO impl.YarnClientImpl: Submitted application application_1414167438309_0003
14/10/24 14:25:50 INFO mapreduce.Job: The url to track the job: http://c9n2.centrifyimage.vms:8088/proxy/application_1414167438309_0003/
14/10/24 14:25:50 INFO mapreduce.Job: Running job: job_1414167438309_0003
14/10/24 14:26:00 INFO mapreduce.Job: Job job_1414167438309_0003 running in uber mode : false
14/10/24 14:26:00 INFO mapreduce.Job:  map 0% reduce 0%
14/10/24 14:26:08 INFO mapreduce.Job:  map 6% reduce 0%
14/10/24 14:26:09 INFO mapreduce.Job:  map 13% reduce 0%
14/10/24 14:26:16 INFO mapreduce.Job:  map 19% reduce 0%
14/10/24 14:26:17 INFO mapreduce.Job:  map 25% reduce 0%
14/10/24 14:26:23 INFO mapreduce.Job:  map 31% reduce 0%
14/10/24 14:26:25 INFO mapreduce.Job:  map 38% reduce 0%
14/10/24 14:26:29 INFO mapreduce.Job:  map 44% reduce 0%
14/10/24 14:26:33 INFO mapreduce.Job:  map 50% reduce 0%
14/10/24 14:26:36 INFO mapreduce.Job:  map 56% reduce 0%
14/10/24 14:26:40 INFO mapreduce.Job:  map 63% reduce 0%
14/10/24 14:26:45 INFO mapreduce.Job:  map 69% reduce 0%
14/10/24 14:26:48 INFO mapreduce.Job:  map 69% reduce 23%
14/10/24 14:26:50 INFO mapreduce.Job:  map 75% reduce 23%
14/10/24 14:26:54 INFO mapreduce.Job:  map 75% reduce 25%
14/10/24 14:26:55 INFO mapreduce.Job:  map 81% reduce 25%
14/10/24 14:26:57 INFO mapreduce.Job:  map 81% reduce 27%
14/10/24 14:27:00 INFO mapreduce.Job:  map 88% reduce 27%
14/10/24 14:27:03 INFO mapreduce.Job:  map 88% reduce 29%
14/10/24 14:27:05 INFO mapreduce.Job:  map 94% reduce 29%
14/10/24 14:27:06 INFO mapreduce.Job:  map 94% reduce 31%
14/10/24 14:27:10 INFO mapreduce.Job:  map 100% reduce 31%
14/10/24 14:27:11 INFO mapreduce.Job:  map 100% reduce 100%
14/10/24 14:27:12 INFO mapreduce.Job: Job job_1414167438309_0003 completed successfully
14/10/24 14:27:13 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=358
                FILE: Number of bytes written=1735845
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=4454
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=67
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Job Counters
                Launched map tasks=16
                Launched reduce tasks=1
                Data-local map tasks=16
                Total time spent by all maps in occupied slots (ms)=83721
                Total time spent by all reduces in occupied slots (ms)=33925
                Total time spent by all map tasks (ms)=83721
                Total time spent by all reduce tasks (ms)=33925
                Total vcore-seconds taken by all map tasks=83721
                Total vcore-seconds taken by all reduce tasks=33925


                Total megabyte-seconds taken by all map tasks=85730304
                Total megabyte-seconds taken by all reduce tasks=34739200
        Map-Reduce Framework
                Map input records=16
                Map output records=32
                Map output bytes=288
                Map output materialized bytes=448
                Input split bytes=2566
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=448
                Reduce input records=32
                Reduce output records=0
                Spilled Records=64
                Shuffled Maps =16
                Failed Shuffles=0
                Merged Map outputs=16
                GC time elapsed (ms)=546
                CPU time spent (ms)=9890
                Physical memory (bytes) snapshot=9654407168
                Virtual memory (bytes) snapshot=26579210240
                Total committed heap usage (bytes)=8767668224
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=1888
        File Output Format Counters
                Bytes Written=97
Job Finished in 84.714 seconds
Estimated value of Pi is 3.14250000000000000000
[dwirth@c9n2 ~]$

As you can see, the job executed properly and produced the expected output, the estimated value of Pi, after a successful login via Active Directory.
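A small pre-flight check can make the secure-mode requirement explicit in job-submission scripts. This is a sketch rather than part of the guide's procedure: klist -s (MIT Kerberos) exits non-zero when no valid credential cache is present.

```shell
# Submit only when a Kerberos ticket is present; otherwise tell the user
# to run kinit first, as dwirth did above.
if klist -s 2>/dev/null; then
    echo "ticket present; submitting job"
    # yarn jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 16 1000
else
    echo "no Kerberos ticket; run kinit first" >&2
fi
```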

Conclusion

Centrify Server Suite, the industry's most widely deployed solution for securing identity on Linux- and Windows-based servers and applications, provides several benefits for Hadoop and Big Data environments, including:

• Simple and secure access to Hadoop environments. Centrify makes it simple to run Hadoop in secure mode by leveraging existing identity management infrastructure, Active Directory, without the hassle of introducing alternative solutions that do not scale and are not enterprise-ready. Centrify Server Suite also saves money by letting organizations leverage existing skill sets within the enterprise.

• Single sign-on for IT administrators and big data users. By extending the power of Active Directory's Kerberos and LDAP capabilities to Hadoop clusters, Centrify Server Suite lets organizations leverage existing Active Directory-based authentication for Hadoop administrators and end users. New SSO functionality in Big Data environments makes users more productive and secure by allowing them to log in as themselves rather than sharing privileged accounts.

• Secure machine-to-machine communications. Centrify Server Suite automates Hadoop service account management within Active Directory. By automating machine-to-machine


credential management, Centrify not only secures user identity but also system and service

account identity.

• Reduced identity-related risks and greater regulatory compliance. The reality is that Hadoop environments store most, if not all, of an organization's most important data. Centrify Server Suite tracks user activity back to an individual in Active Directory, thereby making data more secure. Centrify also reports on who did what across Hadoop clusters, nodes and services. And, by enforcing access controls and least-privilege security across Hadoop, Centrify delivers cost-effective compliance through combined access and activity reporting.

• Certified solution for superior compatibility and support. Centrify has worked closely with Hortonworks and has received product certification. This ensures product compatibility and technical support collaboration between customers, Hortonworks and Centrify.


How to Contact Centrify

North America (and all locations outside EMEA):
Centrify Corporation
3393 Octavius Dr, Suite 100
Santa Clara, CA 95054
United States
Sales: +1 (669) 444-5200
Online: www.centrify.com/contact

Europe, Middle East, Africa (EMEA):
Centrify EMEA
Lilly Hill House
Lilly Hill Road
Bracknell, Berkshire RG12 2SJ
United Kingdom
Sales: +44 (0) 1344 317950

