Course # 25970 Version 1.0
Teradata Warehouse Administration
Student Guide
Trademarks
AT&T and AT&T Globe are registered trademarks of AT&T Corporation. CICS, CICS/ESA, CICS/MVS, Data Base 2, DB2, IBM, MVS/ESA, MVS/XA, QMS, RACF, SQL/DS, VM/XA and VTAM are trademarks or registered trademarks of International Business Machines Corporation. DEC, VAX, MicroVax, and VMS are registered trademarks of Digital Equipment Corporation. Excelan is a trademark of Excelan, Incorporated. Hewlett-Packard is a registered trademark of the Hewlett-Packard Company. INTELLECT and KBMS are trademarks of Trinzic Corporation. Excel, MICROSOFT, MS-DOS, DOS/V, and WINDOWS are registered trademarks of Microsoft Corporation. NCR is the name and mark of NCR Corporation. Sabre is the trademark of Seagate Technology, Inc. SAS and SAS/C are registered trademarks of the SAS Institute, Inc. Sun and SunOS are trademarks of Sun Microsystems, Inc. Teradata, Ynet, and DBC/1012 are registered trademarks of NCR Corporation. UNIX is a registered trademark of The Open Group. X and X/Open are registered trademarks of X/Open Company Limited.
The materials included in this book are a licensed product of Teradata, a Division of NCR Corporation. Copyright ©2003 by Teradata, a Division of NCR Corporation, Rancho Bernardo, CA, U.S.A. All Rights Reserved.
Printed in U.S.A.
Introduction Page 0-1
Module 0
Introduction
Teradata Warehouse Administration
Page 0-2 Teradata Administration Introduction
Notes:
Introduction Page 0-3
Table of Contents
RECOMMENDED PREREQUISITE KNOWLEDGE...................................................................................................0-4
COURSE OBJECTIVES ..........................................................................................................................................................0-6
COURSE DESCRIPTION.......................................................................................................................................................0-8
NOTES ....................................................................................................................................................................................... 0-10
Page 0-4 Teradata Administration Introduction
Recommended Prerequisite Knowledge
The facing page describes the courses recommended prior to Teradata Warehouse Administration.
Introduction Page 0-5
Recommended Prerequisite Knowledge
• This course assumes that the student has training, knowledge, or equivalent experience in the following courses:
– Introduction to Teradata
– Relational Data Modeling Workshop (suggested)
– Teradata Physical Database Design
– Teradata SQL or
– Teradata SQL WBT
Page 0-6 Teradata Administration Introduction
Course Objectives
The facing page describes the objectives for this course.
Introduction Page 0-7
Course Objectives
• After completing this course, you will be able to:
– Describe the basics of mainframe and client connectivity.
– State how to define accounts and perform system accounting functions.
– Grant explicit access rights and explain the difference between implicit, explicit, and automatic access rights.
– Use views to limit row or column accessibility.
– State the purpose of access logging.
– Describe how to use selected monitoring and administration tools: DBW, Remote Console, Query Session, Recovery Manager, and Ferret.
– Identify and describe the functions of selected Teradata Manager tools: PMON, Configuration Check, System Maintenance, and Session Information.
– Identify the use of journaling, Fallback, and RAID options in data protection.
– Describe data backup and recovery, including the use of ARC and ASF2.
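The objectives above center on Teradata SQL administration statements. As a minimal sketch of the granting and view-based access-control tasks listed (all database, table, view, and user names here are invented for illustration, not from the course):

```sql
-- Illustrative only: HR_DB, Payroll, Payroll_V, and Clerk01 are hypothetical names.

-- Grant an explicit right on a table to a user:
GRANT SELECT ON HR_DB.Payroll TO Clerk01;

-- Use a view to limit column accessibility (the view omits the Salary column):
CREATE VIEW HR_DB.Payroll_V AS
  SELECT EmpNo, DeptNo FROM HR_DB.Payroll;

-- Restrict the user to the view by revoking the right on the base table:
REVOKE SELECT ON HR_DB.Payroll FROM Clerk01;
GRANT SELECT ON HR_DB.Payroll_V TO Clerk01;
```

These statement forms are covered in detail in Modules 3 and 6.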
Page 0-8 Teradata Administration Introduction
Course Description
The audience and format of the class are described on the facing page.
Introduction Page 0-9
Course Description
• Who Should Attend
– The audience for this course consists of customers and NCR associates who need to be able to administer a Teradata database system.
• Class Format
– This course consists of:
– Three days of classroom instruction
– Review exercises following each module
– Lab exercises
– A course handbook in facing-page format
Page 0-10 Teradata Administration Introduction
Notes
i
Table Of Contents
Module 1 Getting to Teradata
GRANT/REVOKE LOGON STATEMENTS .................... 4
CHANNEL ENVIRONMENT .................... 6
SENDING PARCELS TO THE TDP .................... 8
RETURNING THE ANSWER SET .................... 10
TDP MESSAGE FLOW .................... 12
TDP EXITS .................... 14
COMMUNICATING WITH THE TDP .................... 16
TDP OPERATOR COMMANDS .................... 18
SESSIONS AND SESSION POOLS .................... 20
TDP MEMORY MANAGEMENT .................... 22
LAN ENVIRONMENT .................... 24
CALL LEVEL INTERFACE (CLI) .................... 26
OPEN DATABASE CONNECTIVITY (ODBC) .................... 28
JAVA DATABASE CONNECTIVITY (JDBC) .................... 30
GATEWAY GLOBAL UTILITY COMMANDS .................... 32
CLIENT SOFTWARE .................... 34
CLIENT CONFIGURATION OVERVIEW SUMMARY .................... 36
REVIEW QUESTIONS .................... 38
REFERENCES .................... 40
Module 2 Building the Database Environment
INITIAL TERADATA DATABASE .................... 2-4
ADMINISTRATIVE USER .................... 2-6
OWNERS, PARENTS AND CHILDREN .................... 2-8
CREATING OBJECTS .................... 2-10
DELETE/DROP STATEMENTS .................... 2-12
HIERARCHIES SUMMARY .................... 2-14
REVIEW QUESTIONS .................... 2-16
REFERENCES .................... 2-18
ii
Module 3 Databases, Users and the Data Dictionary
CREATE DATABASE STATEMENT .................... 4
CREATE USER STATEMENT .................... 6
TERADATA PASSWORD ENCRYPTION .................... 8
PASSWORD SECURITY FEATURES .................... 10
PROFILES .................... 12
EXAMPLE OF SIMPLIFYING USER MANAGEMENT .................... 14
IMPLEMENTING PROFILES .................... 16
IMPACT OF PROFILES ON USERS .................... 18
CREATE PROFILE STATEMENT .................... 20
PASSWORD ATTRIBUTES (CREATE PROFILE) .................... 22
DATA DICTIONARY .................... 24
FALLBACK PROTECTED DATA DICTIONARY TABLES .................... 26
FALLBACK PROTECTED DATA DICTIONARY TABLES - CONT. .................... 28
NON-HASHED DATA DICTIONARY TABLES .................... 30
UPDATING DATA DICTIONARY TABLES .................... 32
SYSTEM VIEWS .................... 34
RESTRICTED VIEWS .................... 36
USING RESTRICTED VIEWS .................... 38
SELECTING INFORMATION ABOUT CREATED OBJECTS .................... 40
CHILDREN VIEW .................... 42
DATABASES VIEW .................... 44
USERS VIEW .................... 46
TABLES VIEW .................... 48
TERADATA ADMINISTRATOR .................... 50
DATABASES AND USERS SUMMARY .................... 52
REVIEW QUESTIONS .................... 54
LAB 1 .................... 56
REFERENCES .................... 58
Module 4 Space Allocation and Usage
PERMANENT SPACE TERMINOLOGY .................... 4-4
SPOOL AND TEMP SPACE TERMINOLOGY .................... 4-6
ASSIGNING SPACE LIMITS .................... 4-8
GIVING ONE USER TO ANOTHER .................... 4-10
RESERVING SPACE FOR SPOOL .................... 4-12
VIEWS FOR SPACE ALLOCATION REPORTING .................... 4-14
DISKSPACE VIEW .................... 4-16
TABLESIZE VIEW .................... 4-18
ALLTEMPTABLES VIEW .................... 4-20
RESETTING PEAK VALUES .................... 4-22
REVIEW QUESTIONS .................... 4-24
REFERENCES .................... 4-26
iii
Module 5 Teradata Accounting
CREATE USER STATEMENT .................... 4
SYSTEM ACCOUNTING .................... 6
SYSTEM ACCOUNTING VIEWS .................... 8
DBC.ACCOUNTINFO[X] VIEW .................... 10
DBC.AMPUSAGE VIEW .................... 12
DBC.AMPUSAGE VIEW - EXAMPLES .................... 14
ACCOUNT STRING EXPANSION .................... 16
ACCOUNT STRING EXPANSION USAGE .................... 18
USER ACCOUNTING: RESETTING THE VALUES .................... 20
TERADATA ACCOUNTING SUMMARY .................... 22
REVIEW QUESTIONS .................... 24
LAB 2 .................... 26
REFERENCES .................... 28
Module 6 Access Rights
PRIVILEGES/ACCESS RIGHTS .................... 4
ACCESS RIGHTS MECHANISMS .................... 6
WHAT ARE ROLES? .................... 8
AUTOMATIC RIGHTS GENERATED BY CREATE TABLE .................... 10
IMPLICIT, AUTOMATIC, AND EXPLICIT RIGHTS .................... 12
RIGHTS GENERATED AUTOMATICALLY .................... 14
THE GRANT STATEMENT .................... 16
GRANTING RIGHTS .................... 18
GRANT PUBLIC .................... 20
THE REVOKE STATEMENT .................... 22
TERADATA ADMINISTRATOR TOOLS - GRANT/REVOKE OPTION .................... 24
REVOKING NON-EXISTENT RIGHTS .................... 26
INHERITING ACCESS RIGHTS .................... 28
THE GIVE STATEMENT AND ACCESS RIGHTS .................... 30
REMOVING A LEVEL IN THE HIERARCHY .................... 32
A SUGGESTED ACCESS RIGHTS STRUCTURE .................... 34
ACCESS RIGHTS ISSUES (PRIOR TO ROLES) .................... 36
ADVANTAGES OF ROLES .................... 38
ACCESS RIGHTS WITHOUT ROLES .................... 40
ACCESS RIGHTS USING A ROLE .................... 42
GRANT AND REVOKE COMMANDS (ROLE FORM) .................... 44
IMPLEMENTING ROLES .................... 46
ACCESS RIGHTS VALIDATION AND ROLES .................... 48
SQL STATEMENTS TO SUPPORT ROLES .................... 50
GRANT COMMAND (SQL FORM) .................... 52
REVOKE COMMAND (SQL FORM) .................... 54
SYSTEM HIERARCHY (USED IN FOLLOWING EXAMPLES) .................... 56
EXAMPLE - USING ROLES .................... 58
EXAMPLE - USING ROLES (CONT.) .................... 60
iv
EXAMPLE - USING ROLES (CONT.) .................... 62
STEPS TO IMPLEMENTING ROLES .................... 64
ACCESS CONTROL MECHANISMS .................... 66
USING VIEWS TO LIMIT ACCESS .................... 68
USING MACROS AND STORED PROCEDURES TO CONTROL ACCESS .................... 70
ACCESS RIGHTS AND NESTED VIEWS .................... 72
SYSTEM VIEWS FOR ACCESS RIGHTS .................... 74
ALLRIGHTS AND USERRIGHTS VIEWS .................... 76
DBC.USERGRANTEDRIGHTS VIEW .................... 78
ROLEINFO[X] VIEW .................... 80
ROLEMEMBERS[X] VIEW .................... 82
ALLROLERIGHTS AND USERROLERIGHTS VIEWS .................... 84
ACCESS RIGHTS SUMMARY .................... 86
REVIEW QUESTIONS .................... 88
LAB 3 .................... 90
REFERENCES .................... 92
Module 7 Teradata Utilities
TERADATA MANAGER .................... 7-4
DATABASE WINDOW (DBW) .................... 7-6
DBW SUPERVISOR WINDOW .................... 7-8
TERADATA MANAGER REMOTE CONSOLE .................... 7-10
THE UNIX TOOL CNSTERM .................... 7-12
GENERAL GROUP PARAMETERS .................... 7-14
FILE SYSTEM GROUP PARAMETERS .................... 7-16
PERFORMANCE GROUP PARAMETERS .................... 7-18
QUERY CONFIGURATION UTILITY .................... 7-20
GET CONFIG .................... 7-22
RECONFIG UTILITY .................... 7-24
VPROC MANAGER UTILITY .................... 7-26
FERRET UTILITY .................... 7-28
FERRET => SHOWSPACE COMMAND .................... 7-30
FERRET => SHOWSPACE - SUMMARY REPORT .................... 7-32
FERRET => PACKDISK .................... 7-34
FERRET => SHOWBLOCKS .................... 7-36
FERRET => SCANDISK COMMAND .................... 7-38
CHECKTABLE UTILITY .................... 7-40
RUNNING CHECKTABLE .................... 7-42
TABLE REBUILD .................... 7-44
RECOVERY MANAGER .................... 7-46
RECOVERY MANAGER LIST STATUS COMMAND .................... 7-48
RECOVERY MANAGER LIST LOCKS COMMAND .................... 7-50
RECOVERY MANAGER PRIORITY COMMAND .................... 7-52
ABORT_ROLLBACK .................... 7-54
SHOWLOCKS UTILITY REPORT .................... 7-56
ABORT HOST UTILITY .................... 7-58
SUMMARY .................... 7-60
REVIEW QUESTIONS .................... 7-62
LAB 4 .................... 7-64
REFERENCES .................... 7-66
v
Module 8 Meta Data Services
WHAT IS META DATA? .................... 8-4
MDS FEATURES .................... 8-6
MDS V5.0 - NEW FEATURES .................... 8-8
META DATA SERVICES ARCHITECTURE .................... 8-10
MDS V5.0 APPLICATION .................... 8-12
MDS CONSULTING SERVICES .................... 8-14
MDS CONSULTING SERVICES .................... 8-16
MDS CONSULTING SERVICES .................... 8-18
DATA REPRESENTATION IN THE MDS REPOSITORY .................... 8-20
THE APPLICATION INFORMATION MODEL .................... 8-22
MORE ON THE AIM .................... 8-24
OBJECT RELATIONSHIPS .................... 8-26
ADDING A SUPER CLASS TO AN EXISTING CLASS .................... 8-28
MODIFY OBJECT DESCRIPTION .................... 8-30
META DATA IS STORED AS OBJECTS .................... 8-32
DATABASE INFORMATION MODEL .................... 8-34
REVIEW QUESTIONS .................... 8-36
Module 9 Teradata Warehouse Miner

TERADATA WAREHOUSE MINER OVERVIEW .................................................... 4
SPACE REQUIREMENTS ................................................................... 6
ERROR LOG - TWMERRORS.LOG ............................................................ 8
EVENT LOG - _TWM.LOG ................................................................. 10
CACHED XML FILES ..................................................................... 12
TERADATA WAREHOUSE MINER DATABASES ................................................... 14
REVIEW QUESTIONS ..................................................................... 16
REFERENCES ........................................................................... 18
Module 10 So You Need to do Recovery?
TERADATA DATA PROTECTION OVERVIEW .................................................... 4
OPEN TERADATA BACKUP ................................................................. 6
BAKBONE NETVAULT ..................................................................... 8
VERITAS NETBACKUP .................................................................... 10
GENERAL ARCHITECTURE—MAINFRAME ...................................................... 12
GENERAL ARCHITECTURE—UNIX NODE ...................................................... 14
COMMON ALTERNATIVE RECOVERY STRATEGIES ............................................... 16
COMMON USES OF ARCHIVED DATA ......................................................... 18
EXAMPLE TEMPLATE—DISASTER RECOVERY .................................................. 20
EXAMPLE TEMPLATE—SINGLE AMP RECOVERY ................................................ 22
RECONFIGURATION SCENARIO ............................................................. 24
MIGRATING ACROSS RELEASE LEVELS ...................................................... 26
COMMON MISTAKES ...................................................................... 28
TYPICAL TUNING AREAS—MAINFRAME ...................................................... 30
TYPICAL TUNING AREAS—UNIX NODES ..................................................... 32
SUMMARY .............................................................................. 34
Module 11 Disaster Recovery
DISASTER RECOVERY OVERVIEW ........................................................... 4
DUAL SYSTEMS ......................................................................... 6
TERADATA QUERY DIRECTOR .............................................................. 8
ARCHIVE RECOVERY UTILITY (ARC) ....................................................... 10
RESTORE OPERATIONS ................................................................... 12
DATA PROTECTION MECHANISMS ........................................................... 14
TRANSIENT JOURNAL .................................................................... 16
FALLBACK PROTECTION .................................................................. 18
DOWN AMP RECOVERY JOURNAL ............................................................ 20
DISK ARRAYS AND RAID TECHNOLOGY ...................................................... 22
HOT STANDBY NODES .................................................................... 24
HOT STANDBY NODES - EXAMPLE .......................................................... 26
LARGE CLIQUES ........................................................................ 28
PERMANENT JOURNALS—WHAT ARE THEY? ................................................... 30
PERMANENT JOURNAL SCENARIO ........................................................... 32
TABLE X .............................................................................. 34
TABLE Y .............................................................................. 36
TABLE Z .............................................................................. 38
ARCHIVE POLICY ....................................................................... 40
ARCHIVE SCENARIO ..................................................................... 42
AFTER RESTART PROCESSING COMPLETES ................................................... 44
AFTER RESTART COMPLETES .............................................................. 46
TABLE X RECOVERY ..................................................................... 48
TABLE Y RECOVERY ..................................................................... 50
TABLE Z RECOVERY ..................................................................... 52
AFTER RECOVERY ....................................................................... 54
PERMANENT JOURNAL USAGE SUMMARY ...................................................... 56
DATA PROTECTION SUMMARY .............................................................. 58
REVIEW ............................................................................... 60
REFERENCES ........................................................................... 62
Module 12 Archiving Data
ARCHIVE RECOVERY UTILITY (ARC) ....................................................... 4
ARCHIVE AND RECOVERY PHASES .......................................................... 6
ARC VERSUS FASTLOAD .................................................................. 8
SESSION CONTROL ...................................................................... 10
MULTIPLE SESSIONS .................................................................... 12
ARCHIVING STATEMENTS ................................................................. 14
ARCHIVE STATEMENT .................................................................... 16
ARCHIVE TYPES ........................................................................ 18
ARCHIVE OBJECTS ...................................................................... 20
ARCHIVE LEVELS ....................................................................... 22
ARCHIVE OPTIONS ...................................................................... 24
INDEXES OPTION ....................................................................... 26
GROUP READ LOCK OPTION ............................................................... 28
TYPES OF ARCHIVE ..................................................................... 30
DATABASE DBC ARCHIVE ................................................................. 32
DATA ARCHIVING SUMMARY ............................................................... 34
REVIEW QUESTIONS ..................................................................... 36
REFERENCES ........................................................................... 38
Module 13 Restoring Data

RESTORE-RELATED STATEMENTS ........................................................... 4
ANALYZE STATEMENT .................................................................... 6
THE RESTORE STATEMENT ................................................................ 8
RESTORING TABLES ..................................................................... 10
COPY STATEMENT ....................................................................... 12
COPYING TABLES ....................................................................... 14
BUILD STATEMENT ...................................................................... 16
REVALIDATE REFERENCES ................................................................ 18
RELEASE LOCK STATEMENT ............................................................... 20
RESTORING DATA SUMMARY ............................................................... 22
REVIEW QUESTIONS ..................................................................... 24
LAB 5 ................................................................................ 26
REFERENCES ........................................................................... 28
Module 14 Permanent Journals
PERMANENT JOURNALS—WHERE ARE THEY? .................................................. 4
BEFORE-IMAGE JOURNALS ................................................................ 6
AFTER-IMAGE JOURNALS ................................................................. 8
JOURNAL SUBTABLES .................................................................... 10
PERMANENT JOURNAL STATEMENTS ......................................................... 12
LOCATION OF CHANGE IMAGES ............................................................ 14
CREATING A PERMANENT JOURNAL ......................................................... 16
ASSIGNING A PERMANENT JOURNAL ........................................................ 18
JOURNALS[X] VIEW ..................................................................... 20
PERMANENT JOURNALS SUMMARY ........................................................... 22
REVIEW QUESTIONS ..................................................................... 24
REFERENCES ........................................................................... 26
Module 15 Data Recovery Operations

DATA RECOVERY USING ROLL OPERATIONS .................................................. 4
THE CHECKPOINT STATEMENT ............................................................. 6
CHECKPOINT WITH SAVE STATEMENT ....................................................... 8
THE ROLLBACK STATEMENT ............................................................... 10
USING THE ROLLBACK COMMAND ........................................................... 12
THE ROLLFORWARD STATEMENT ............................................................ 14
USING THE ROLLFORWARD COMMAND ........................................................ 16
ROLLFORWARD RESTRICTIONS ............................................................. 18
DELETE JOURNAL STATEMENT ............................................................. 20
RECOVERY CONTROL DATA DICTIONARY VIEWS ............................................... 22
ASSOCIATION VIEW ..................................................................... 24
EVENTS[X] VIEW ....................................................................... 26
EVENTS_CONFIGURATION[X] VIEW ......................................................... 28
EVENTS_MEDIA[X] VIEW ................................................................. 30
DATA RECOVERY OPERATIONS SUMMARY ..................................................... 32
REVIEW QUESTIONS ..................................................................... 34
LAB 6 ................................................................................ 36
REFERENCES ........................................................................... 38
Module 16 Administrative Tasks and Tools

TERADATA DATABASE SYSTEM ADMINISTRATION .............................................. 4
DICTIONARY TABLES TO MAINTAIN ........................................................ 6
DATABASE QUERY LOG – TABLES MAINTENANCE .............................................. 8
A RECOMMENDED STRUCTURE .............................................................. 10
ACCESS CONTROL MECHANISMS ............................................................ 12
PLAN AND FOLLOW-UP ................................................................... 14
Appendix A Review Questions/Solutions
Appendix B Labs
Appendix C Lab Solutions
Appendix D Session Pools, Tuning with Teradata, and Available Values for Compose Graph
Appendix E Acronyms
Getting to Teradata 1- 1
Module 1

Getting to Teradata

After completing this module, you should be able to:

• Describe how the Teradata Database is accessed through channel-attached clients.

• Discuss the function of the Teradata Director Program (TDP) and how it processes parcels.

• Describe how Teradata returns answer sets to the client system.

• Describe how the Teradata Database is accessed through network-attached clients.

• Discuss the use of session pools, including how to start and stop them.
Table of Contents
GRANT/REVOKE LOGON STATEMENTS ........................................................ 4
CHANNEL ENVIRONMENT .................................................................. 6
SENDING PARCELS TO THE TDP ........................................................... 8
RETURNING THE ANSWER SET ............................................................. 10
TDP MESSAGE FLOW ..................................................................... 12
TDP EXITS ............................................................................ 14
COMMUNICATING WITH THE TDP ........................................................... 16
TDP OPERATOR COMMANDS ................................................................ 18
SESSIONS AND SESSION POOLS ........................................................... 20
TDP MEMORY MANAGEMENT ................................................................ 22
LAN ENVIRONMENT ...................................................................... 24
CALL LEVEL INTERFACE (CLI) ........................................................... 26
OPEN DATABASE CONNECTIVITY (ODBC) .................................................... 28
JAVA DATABASE CONNECTIVITY (JDBC) .................................................... 30
GATEWAY GLOBAL UTILITY COMMANDS ...................................................... 32
CLIENT SOFTWARE ...................................................................... 34
CLIENT CONFIGURATION OVERVIEW SUMMARY ................................................ 36
REVIEW QUESTIONS ..................................................................... 38
REFERENCES ........................................................................... 40
GRANT/REVOKE LOGON Statements

Keywords
Keywords you can use with the GRANT and REVOKE LOGON commands include:
Hostid Identifies a mainframe channel connection or a local area network connection that is currently defined to the Teradata Database by the hardware configuration data. The host ID for the Teradata Database console is zero (0). For any other connector, the host ID is a value from 1 to 1023.
ALL The ALL keyword, used in place of a host ID, applies to any source through which a logon is attempted, including the Teradata Database console.
AS DEFAULT Specifies that the current default for the specified host ID(s) is to be changed as defined in this GRANT LOGON statement. A statement with AS DEFAULT has no effect on the access granted to or revoked from particular user names.
TO/FROM dbname(s) Overrides the current default for the specified username(s) on the specified host ID(s). The name DBC cannot be specified as a username in a GRANT LOGON statement. A statement that includes this name returns an error message.
WITH NULL PASSWORD The initial Teradata Database default is that all logon requests must include a password. The WITH NULL PASSWORD option, in conjunction with a TDP security exit procedure, permits a logon string that has no password to be accepted on a Teradata system.
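To make these keywords concrete, here is a brief sketch of how they combine; the host ID and user names below are hypothetical, not taken from the course labs:

```sql
-- Revoke the blanket logon default for host ID 207, then grant
-- logon rights back to two named users on that host.
REVOKE LOGON ON 207 AS DEFAULT;
GRANT LOGON ON 207 TO sysdba1, appuser1;

-- Allow one user to log on from any source without a password,
-- relying on a TDP security exit routine to validate the session.
GRANT LOGON ON ALL TO batchuser WITH NULL PASSWORD;
```

Note that each TO/FROM list may name at most 25 users, and DBC may not appear in the list.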
GRANT/REVOKE LOGON Statements
hostid Hostid from configuration data. The database console is host ID “0” (zero).
AS DEFAULT Changes the default for the specified host.
dbname You can specify up to 25 user names, but not “DBC.”
WITH NULL PASSWORD When used in conjunction with a TDP exit, overrides the system default that a password is required.
To execute a GRANT or REVOKE LOGON statement, you must hold execute privileges on the macro, DBC.LogonRule.
REPLACE MACRO DBC.LogonRule AS (;);
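For example, user DBC could grant that privilege to a security administrator (the name SecAdmin below is hypothetical):

```sql
-- Give SecAdmin the right to execute GRANT LOGON and
-- REVOKE LOGON statements.
GRANT EXECUTE ON DBC.LogonRule TO SecAdmin;
```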
Syntax (figures FF07A036 and FF07A027, shown here in text form):

REVOKE LOGON ON { ALL | hostid [, hostid ...] }
       { AS DEFAULT | { TO | FROM } dbname [, dbname ...] } ;

GRANT LOGON ON { ALL | hostid [, hostid ...] }
       { AS DEFAULT | { TO | FROM } dbname [, dbname ...] }
       [ WITH NULL PASSWORD ] ;
Channel Environment

Teradata utilities and software programs support Teradata Database access in mainframe environments. These utilities and programs run under the client's operating system and provide the functionality for a user to access the database system.
Background notes

The Teradata Channel Interface enables communication between a mainframe client and a Teradata server over a channel with either a parallel or serial I/O interface. The connection consists of two independent devices forming an even and odd address pair. The client sends Teradata Database messages to the server using the device with the even address; the server returns responses to the client using the device with the odd address.
This hardware was originally known as the Channel Interface Controller (CIC). Each CIC was associated with one Teradata Interface Processor (IFP) to provide a device pair on a parallel I/O interface channel, or, via the MicroChannel-to-Channel Adapter (MCCA) board, with an Application Processor (AP) and a Parsing Engine (PE).
The current platforms can be connected with either traditional Bus and Tag or ESCON fiber channel. The Host Channel Adapters available on NCR systems are:
• PBSA—PCI Bus ESCON Adapter (ESCON fiber)
• EBCA—EISA Bus Channel Adapter (Bus and Tag)
• PBCA—PCI Bus Channel Adapter (Bus and Tag)
Depending on your workload and required throughput, you may not need to configure one HCA per node. In smaller systems (up to 4 or 6 nodes), two or three HCAs may be sufficient. In a two-node system, however, you would configure one HCA per node for redundancy.
CP and CUA

Each pair of devices, whatever its implementation, is now referred to as a Channel Processor (CP). The even and odd address pair is also known as a channel unit address (CUA).
Channel Environment

(Slide: mainframe applications under TSO, CICS, and IMS/DC — BTEQ/ITEQ or coordinated products, CICS transactions, IMS MPP/BMP, and pre-processed batch or Teradata utilities — each call CLI, which communicates through OS390 Cross Memory Services (XMS) with the Teradata Director Program (TDP). Over Bus and Tag or ESCON, the TDP reaches the RDBMS node's Channel Driver software and its PEs (1020, 1021) as host no. 52; network clients reach the node's Gateway software and its PEs as host no. 512. A system console is attached to the node.)
Sending Parcels to the TDP

The Teradata Director Program (TDP) resides in the client mainframe and manages communication between the client application programs and the Teradata server. The functions of the TDP include the following:
• Session initiation and termination
• Logging, verification, recovery and restart notification for client applications
• Physical input to, and output from, IFPs/PEs
• Security
To access the Teradata Database from a mainframe client, the user makes a request that a Teradata utility or program processes. These requests are directed to a Teradata Director Program (TDP) that resides on the mainframe.
The application talks to Call Level Interface (CLI), which builds the request into a parcel that the TDP sends through the Channel to the PE. When a PE receives a request, the PE formulates the steps to respond to the request and establishes a session with the Teradata Database.
The PE sends the processing steps to one or more AMPs where the information is gathered. This information could be the response to a SELECT statement, or it could be a status indicating an INSERT or UPDATE statement was successful.
Sending Parcels to the TDP

(Slide: in the user region, the application's user code holds a REQUEST BUFFER and a RESPONSE BUFFER. CLI moves parcels (SQL statements) into the REQUEST BUFFER as a REQ parcel containing the SQL text plus a RESPOND parcel that indicates the size of the RESPONSE BUFFER. In the TDP region, the TDP prefixes the request parcel with a message header — function code, session number, request number, timestamp fields, and context area (HSICB) — blocks it into a channel block along with other requests, and sends it across the channel to a PE in the RDBMS.)
Returning the Answer Set

The result of a query or a status becomes the returned answer set. The PE turns the answer set into a response parcel and returns it to the client utility or program through the channel.
The facing page shows the path the response parcel takes.
You can request DBCTIME time stamps to record when:
• The TDP receives the request.
• The request was queued to the server.
• The response was received from the server.
• The response was queued to the cross memory task.
• The response was returned to the user’s input (response) buffer.
Returning the Answer Set

(Slide: the response parcel travels from a PE in the RDBMS across the channel, in a channel block with other requests, into the TDP region. The TDP uses the message header — function code, session number, request number, timestamp fields, and context area (HSICB) — to route the response, and CLI passes the parcels back to the application's RESPONSE BUFFER in the user region. HSICB time stamps are available for throughput analysis.)
TDP Message Flow

All messages that the Teradata Database sends or receives normally pass through the Teradata Director Program (TDP).
For users with channel-attached systems, you can customize the TDP to perform a user-defined exit routine. Customizing the TDP can assist you in collecting information for:
• Performance analysis
• Functional analysis
TDPUTCE Exit

An exit is a point at which a user request temporarily leaves the existing code to perform a user-specified task before continuing on with normal processing. TDP exits may be enabled and user-provided routines included that perform some function or alteration of normal processing.
The TDP User Transaction Collection Exit allows you to intercept and examine all of the requests and responses that traverse the TDP. TDPUTCE is an exit taken from the Transaction Monitor.
TDP Message Flow

[Slide: request parcels flow from the application program through the TDP's Request Manager and output routines to the Teradata Database; response parcels return through the input routines and Response Manager. The Transaction Monitor invokes TDPUTCE on both paths, allowing the user to examine requests and responses that pass through the TDP.]
TDP Exits

All messages between the mainframe host and the Teradata Database pass through the Teradata Director Program (TDP).
At three specific points, you can provide TDP exits and include user-written routines to perform some function or alteration of normal processing. You can use the exits to extend security. These exits will either all be turned on, or all turned off.
The three supplied exit points are:
TDPLGUX The User Logon Exit Interface is an exit you can use to process Logon Requests.
TDPUTCE The TDP User Transaction Collection Exit is an exit you may use to process any request or response that traverses the TDP. (This exit is called TDPTMON - the User Monitor exit - in OS1100.)
TDPUSEC The TDP User Security Interface is an exit you use to process logon request denials.
These three exits are available to all TDPs, running on MVS, VM, and OS1100 hosts.
For MVS systems only, an additional exit is provided:
TDPUAX The TDP User Address Space exit is called by the TDP when an application initiates a logon or connect request.
Logon example: logon tdpid/userid
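As an illustration, the tdpid/userid logon string above is the same form a client utility such as BTEQ passes to CLI; the names below are hypothetical:

```
.LOGON tdp0/sysdba        BTEQ logon naming the TDP and the user
Password:                 the password is prompted for separately
```

The TDPLGUX and TDPUAX exits see this request on its way through the TDP.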
TDP Exits

[Slide: logon requests from the application program pass through TDPUAX and TDPLGUX on their way to the Teradata Database; logon security violations are routed to TDPUSEC; all requests and responses pass through TDPUTCE via the Transaction Monitor, alongside the Request Manager, Session Manager, Response Manager, and input/output routines of the TDP.]

Exits that extend security functions:
• TDPUAX - Logon request processing exit (MVS only)
• TDPLGUX - Logon request processing exit
• TDPUTCE - Processes any request or response traversing the TDP
• TDPUSEC - Logon violation processing
Communicating with the TDP

The TDP accepts operator commands from the MVS console, MVS/TSO users, the VM console, VM/CMS virtual machines, and CLIv2 applications.

Commands you enter from the console are not executed until you execute the RUN command. Messages that result from executing operator commands entered from a console are returned to the console.

Entering TDP Commands on MVS

Use the MVS MODIFY command from the MVS console to issue TDP operator commands to a TDP already running in the MVS environment. The syntax for the MVS MODIFY command is:

F Tdpid, TDPcommandtext

In the syntax example above, F is the abbreviation for the MVS MODIFY command, Tdpid is the four-character identifier associated with the TDP subsystem (for example TDP0, TDP1, and so on) to which the command is directed, and TDPcommandtext is the syntax of the TDP command.
Entering TDP Commands on VM

Enter a TDP operator command from the VM console by preceding it with the Tdpid command, for example:
Tdpid TDPcommandtext
In this example, Tdpid is the four-character identifier associated with the TDP (for example TDP0, TDP1, and so on) to which the command is directed and TDPcommandtext is the syntax for the TDP command.
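Putting the two forms together, a hypothetical TDP named TDP0 could be controlled as follows (the command names come from the operator-command chart later in this module; output formats are release-dependent):

```
F TDP0,DISPLAY TDP       MVS: MODIFY routes the command to the TDP subsystem
F TDP0,DISABLE LOGONS    MVS: stop accepting new logons through this TDP
TDP0 DISPLAY TDP         VM: the Tdpid prefix routes the command
TDP0 ENABLE LOGONS       VM: resume accepting logons
```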
Communicating with the TDP

From an MVS host console:

F Tdpid, TDPcommandtext

Where:
• F is the abbreviation for MVS MODIFY.
• Tdpid is the four-character identifier for the TDP subsystem.
• TDPcommandtext is the syntax of the TDP command.

In a VM environment:

Tdpid TDPcommandtext

Where:
• Tdpid is the four-character identifier for the TDP subsystem.
• TDPcommandtext is the syntax of the TDP command.
TDP Operator Commands

You can enter TDP operator commands from the MVS or VM console both before you execute the RUN command and during normal TDP operation.
The facing page lists the operator commands available. The boxes represent objects on which you can execute commands, and the commands are listed under the boxes. The uppercase letters indicate the minimum command keyword letters you must enter to identify a command.
Note: Users may be authorized to issue certain types of TDP commands:
UserID Specifies the VM name or TSO UserID.
Display Allows the AUthorized user to Display TDP.
Any Allows the AUthorized user to issue any TDP command.
AUthoriz Allows the user to issue any TDP command, including the Authoriz command.
Resolve Indicates 2PC coordinators are authorized to perform automatic in-doubt resolution.
MaxSess Constrains the number of sessions for this TDP.
For more detail on TDP commands, refer to the Teradata TDP Reference.
TDP Operator Commands

TDP:                  Display Module, Display TDP, RUN, SHUTDOWN (CANCEL | QUIck | Orderly), SET Userid, SET Charset, SET Comchar
IFP:                  ATTach, DETach, Display, STArt, STOp
POOL:                 ENAble, DISAble, Display, LOGOFF, MODIFY, STArt, STOp
2PC:                  COMMIT, ENAble IRF, DISAble IRF, DISPLAY INDoubt, ROLLBACK
LOGONS:               DISAble, ENAble
SMF:                  ENAble, DISAble, Display
SESsion / JOB:        Display, LOGOFF, SET MAXSESS, DISAble/ENAble SESSION RESERVE, DISAble/ENAble SESSION DETAIL, DISAble/ENAble SESSION STATUS
CELLS:                Display, ADD CELLS, ADD XMSCELLS
QUEUES:               Display
EXITS:                ENAble, DISAble
TIME:                 ENAble, DISAble
TEST:                 ENAble, DISAble
AUTHORIZ:             MODIFY, DISAble, ENAble (USERID | ALL | JOB)
SAF:                  Display
CHANNEL PROCESSORS:   Display
SESSION PROCESSORS:   Display
UAX / USEC:           DISAble, ENAble
TMON:                 DISAble, ENAble
TDPSTATS:             ENAble, DISAble, Display
SERVER:               Display
Sessions and Session Pools

Sessions
A session is a logical connection between the user that communicates through an application program and the Teradata Database. A session permits a user to send one request to, and receive one response from, the Teradata server at a time. A session can have only one request outstanding at any time. A user may communicate through one or more active sessions concurrently.
A session is explicitly logged on and off. It is established when the Teradata server accepts the user name and password of the user. When a session is logged off, the system discards the user name and password and does not accept additional Teradata SQL statements from that session.
A session number and a logical client number identify each session to the MVS or VM client. A session number uniquely identifies the work stream of a session for a given TDP. A logical client number uniquely identifies each TDP within an MVS or VM client or multiple clients.
Session Pools

A session pool is a number of sessions that are logged on to the Teradata server as a unit using the same logon string. Unlike ordinary sessions, pool sessions are automatically assigned to applications that initiate a logon using the same logon string as that established for the pool. The number of sessions a user can log on is controlled.
Every session is assigned to a specific PE and stays with that PE until the pool ends. Logon times are typically 2-3 seconds faster with session pools.
Note: Refer to Appendix D for more information on session pools.
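As a sketch, the pool lifecycle maps onto three of the TDP operator commands listed earlier in this module (operands are elided here because they vary by release; see the Teradata TDP Reference for the full forms):

```
F TDP0,START POOL ...    establish a pool of sessions sharing one logon string
F TDP0,DISPLAY POOL      inspect pool usage
F TDP0,STOP POOL ...     log the pooled sessions off
```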
Sessions and Session Pools

• A session is:
  – A logical connection between the user and the database that permits a user one request and one response at a time.
  – Sessions are explicitly logged on to and off from the database and are identified by a logical client ID and a session number.
• A session pool is:
  – A number of sessions using the same logon string that are logged on to the database using a START POOL command.
  – To log off a session pool, use the LOGOFF or STOP POOL command.
  – When you run a session pool, the TDP does not notify the database when an application logs off. It marks the session "not in use" and makes it available to another application.
  – Logons are typically 2-3 seconds faster with session pools.
  – The number of sessions a user can log on is controlled.
TDP Memory Management

To provide for memory acquisition during system operation without incurring the high overhead associated with operating system memory services, the TDP acquires units of main memory, or cells, from its own more efficient memory management.
During startup, the memory manager pre-allocates a number of cells in sizes that are convenient for use by the TDP. The sizes of the cells are internal constants. The initial number of cells is an internal default.
If a TDP subtask requests a cell from the memory manager, but other TDP subtasks are using all available cells, the memory manager takes one of the following actions:
• Obtains a new cell from the operating system
or
• Places the requesting subtask into a wait for memory.
If the requester is placed into a wait, the wait ends when another TDP subtask releases a cell. The decision to obtain a new cell or wait for an existing cell is based on TDP considerations.
TDP Memory Management

The TDP typically uses a virtual region of about 4 to 5 MB.

To avoid overhead calls to the operating system, the TDP divides its work areas into cells.

[Slide: the TDP region is divided into program storage (320K), free storage (500K), and cells.]

A warning message (TDP0021) displays when 80% of the cells of a certain size are in use. Sample ADD CELLS display:

TDP0501 CELSZ AVAIL INUSE MXUSE GMAIN XMS   #WAITS
TDP0528 00064 00062 00002 00003 00000 00000 00000
TDP0528 00128 00050 00014 00017 00000 00000 00000
TDP0528 00240 00032 00032 00032 00000 00000 00000
TDP0528 00256 00320 00056 00077 00000 00200 00000
TDP0528 00352 00480 00032 00032 00000 00200 00000
TDP0528 00992 01331 00013 00013 00000 00300 00000
TDP0528 12272 00009 00001 00006 00000 00000 00000
TDP0529 OVERSIZE CELLS, INUSE : 1, MXUSE: 1, GETMAINS : 1

ADD CELLS SIZE 256 NUMBER 1

This command adds an "extent" of sixteen 256-byte cells.
LAN Environment

In a local area network (LAN) environment, each workstation on the network has a copy of Teradata software, including the utilities and programs needed to access the Teradata Database.
The three elements that comprise the Teradata Client Software in a LAN-attached environment are:
• MTDP—Micro Teradata Director Program
• CLI—Call Level Interface
• MOSI—Micro Operating System Interface
In a LAN-attached environment, MTDP and the Gateway Software in the node handle the functions that TDP performs in a channel-attached environment.
A network interface card connects workstations directly to the LAN, and an Ethernet card in the node chassis connects the node directly to the LAN.
These connections provide the workstation operating system access to the gateway software in the node.
For redundancy, there are two separate LAN connections to the network, consisting of two LAN cards and two Ethernet cables.
LAN Environment

[Slide: workstations on a TCP/IP local area network each run the application, CLI or ODBC driver, MTDP, and MOSI on top of the workstation operating system. The LAN connects to gateway software in the node, which routes requests to PEs; channel-attached hosts reach their own PEs through channel driver software. Separate host numbers (52 and 512 in the example) identify the gateway and channel connections. The client software resides on each workstation.]
Call Level Interface (CLI)

Call Level Interface Version 2 (CLIv2) is a collection of callable service routines that provide the interface between applications and the Teradata Director Program (TDP) on an IBM mainframe client.

TDP is the interface between CLIv2 and the Teradata server. CLIv2 can operate with all versions of MVS, OS/390, CICS, IMS, and VM.
CLI routines minimize overhead and allow for efficient interfaces to Teradata applications.
CLI is a set of callable service routines that provide the interface between an application program and the Teradata RDBMS (TDP or MTDP).
• Programs submit SQL requests to Teradata.
• CLI routines minimize overhead.
• An SQL request is made up of parcels. The types of parcels are: - Request Parcel - Data Parcel - Respond Parcel
• Teradata provides a Response to the SQL Request.
• The Response is made up of Parcels. The types of parcels are: - Success Parcel - Record Parcel - End Request Parcel
Call Level Interface (CLI)

[Slide: the application program sends an SQL request, made up of parcels, to the Teradata RDBMS, and Teradata returns a response, also made up of parcels. Request parcel types: Request, Data, Respond. Response parcel types: Success, Record, End Request. CLI is the set of callable service routines between the application program and Teradata (TDP or MTDP); programs submit SQL requests through it, and CLI routines minimize overhead.]
Open Database Connectivity (ODBC)

ODBC stands for Open Database Connectivity, a call-level interface. Under ODBC, drivers are used to connect applications with databases. The Teradata ODBC driver is the ODBC driver for the Teradata RDBMS. It conforms to Level 1 of the ODBC 2.5 API and also includes much of the Level 2 functionality.

ODBC drivers are available for many commercial databases. These drivers allow end users to access any database from their PC in a LAN or even an internet environment.

Many third-party end-user tools may be used to query the database in an ODBC environment. The best technique for doing this while ensuring data integrity and security is to access tables via a VIEW that uses a LOCKING FOR ACCESS clause.
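Such a view can be sketched as follows; the table and column names here are hypothetical, and the LOCKING modifier lets ad hoc ODBC queries read through the view without contending for write locks:

```sql
/* Hypothetical table and columns; FOR ACCESS permits reads that
   neither block nor are blocked by concurrent updates. */
CREATE VIEW Personnel.Employee_V AS
  LOCKING TABLE Personnel.Employee FOR ACCESS
  SELECT employee_number, last_name, dept_number
  FROM   Personnel.Employee;
```

End users are then granted SELECT on the view only, never on the base table.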
Open Database Connectivity (ODBC)

• Common database access mechanism
• Simplifies client/server computing
• A call-level interface
• Connects applications with databases
• Allows interface to the database using third-party tools

Data Sources
• Data Source Name
• Description of the associated ODBC driver

Configuring a Data Source
• Choose ODBC from the control panel
• Add an entry for each data source and its driver
• Add a data source specification
• Account String Information may be used during creation of a user to help track users
Java Database Connectivity (JDBC)

Java Database Connectivity (JDBC) is a specification for an Application Programming Interface (API). This API allows platform-independent Java applications to access database management systems using Structured Query Language (SQL).
The JDBC API provides a standard set of interfaces for:
• Opening connections to databases
• Executing SQL statements
• Processing results
The Teradata Driver for the JDBC Interface is a set of Java classes that work with the JDBC Interface, enabling you to access the Teradata database using the Java language.
JDBC enables the development of web-based Teradata end user tools that can access Teradata through a web server. JDBC will also provide support for access to other commercial databases.
Java Database Connectivity (JDBC)

Specification for an API
• Allows platform-independent Java applications access using SQL.

Provides a standard set of interfaces for:
• Opening connections to databases
• Executing SQL statements
• Processing results

JDBC Driver
• A set of Java classes that work with the JDBC interface.
• Enables access to Teradata using the Java language via a web server.

Teradata JDBC Client
• Can be used on any machine with a Java-enabled browser or Java virtual machine interpreter installed.

Teradata JDBC Gateway
• Runs under UNIX or Windows NT.
• Connects Teradata JDBC driver clients to Teradata.
• Controls and manages database access.
Gateway Global Utility Commands

Session Control
The Gateway Global utility allows you to monitor and control the sessions of Teradata Database network-attached users. For example, by starting the utility and issuing utility commands from a database console, you can monitor network sessions and traffic, disable logons, force users off the Teradata Database and diagnose gateway problems.
The gateway software runs on the system that is running the Teradata Database. Client programs that communicate through the gateway to Teradata may be resident on the NCR system, or may be installed and running on network-attached workstations.
In contrast, client programs which run on a channel-attached client access the Teradata Database through the TDP software and the channel connection. They bypass the gateway completely.
One gateway per node is supported for Teradata Database for UNIX and Windows 2000.
A gateway can support up to 1200 sessions, depending on available system resources. Gateway errors are handled in the same manner as other database errors.
Disconnect and Kill Commands

The Disconnect User/Session and Kill User/Session commands are similar in that they both disconnect sessions from the database. The Kill command will abort one session immediately or all sessions of a particular user, then log the user off. The Disconnect command simply puts the sessions in a disconnect state and does not log the user off. The database is still aware of the sessions, and if the user re-establishes the connection from their client workstation, the sessions are allowed to re-connect.
Special Diagnostic Functions

The Gateway Global utility provides functions that will perform routine diagnostics, as well as functions that will perform special diagnostics. These functions allow you to debug internal gateway errors or anomalies.
Starting Gateway Global

Access the Gateway Global utility with the following commands:

UNIX command line:                      xgtwglobal -nw
X Windows interface command line:       xgtwglobal
DBW:                                    start xgtwglobal -nw
Teradata RDBMS for Windows prompt:      gtwglobal or xgtwglobal
These commands are also available in Teradata Manager.
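An illustrative console exchange, using commands from the table on the facing page (the session number is hypothetical and prompts vary by platform):

```
gtwglobal                start the utility (Windows command prompt)
> display gtw            list all sessions connected to the gateway
> disable logons         block new network logons through the gateway
> kill session 1234      terminate one session immediately
> enable logons          restore normal logons
```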
Gateway Global Utility Commands

Network and Session Information
DISPLAY DISCONNECT   Displays sessions that have been disconnected.
DISPLAY FORCED       Displays sessions killed or aborted via the PMPC abort process.
DISPLAY GTW          Displays all sessions connected to the gateway.
DISPLAY NETWORK      Displays your network configuration.
DISPLAY SESSION      Displays information about a specific session on the gateway.
DISPLAY STATISTICS   Displays the RSS statistics for the gateway vproc and equivalent statistics for each session.
DISPLAY TIMEOUT      Displays the timeout value.
DISPLAY USER         Displays session #, PE #, user name, IP address, connection status.

Administering Users and Sessions
DISABLE LOGONS       Disables logons to the RDBMS through the gateway.
DISABLE EXLOGONS     Disables the EXLOGON option and reverses logons back to the normal path.
ENABLE LOGONS        Enables logons to the RDBMS via the gateway.
ENABLE EXLOGONS      Enables and allows the gateway to choose the fast path when logging users onto the RDBMS. Improves logon response time tremendously.
DISCONNECT USER      Disconnects all sessions owned by a user.
DISCONNECT SESSION   Disconnects a specific session. Must provide the session number in the command.
KILL USER            Terminates all sessions of a specific user.
KILL SESSION         Terminates a specific session. Must know the session number.
SET TIMEOUT          Sets a timeout value.

Special Diagnostics
ENABLE TRACE         Records internal gateway events.
DISABLE TRACE        Turns off the recording of event tracing and writing to the event log file.
FLUSH TRACE          Directs the gateway to write the contents of its internal trace buffers to the event log file.
Client Software

In a LAN environment, the Micro Teradata Director Program (MTDP) provides similar functionality to the TDP on the mainframe in that it makes request parcels out of user requests and sends them to the gateway. The gateway then sends the requests to the PE.
Mainframe TDP functions are split in two for workstations: the MTDP performs some functions and the gateway performs others.
A workstation client can utilize either CLI or ODBC drivers. When you work with ODBC drivers, the system provides you with several options from which to choose. You must choose the Teradata option.
CLI is an application development API that provides maximum control over Teradata connectivity.
Addressing is not done through a host file, but through a list of names/hosts to which you can connect.
On the mainframe side, TDP handles MTDP and MOSI functions. Additionally, mainframe TDPs are responsible for:
• Routing responses back to the originating address space
• Balancing sessions across assigned parsing engines
Client Software

[Slide: the workstation software stack — application, call-level interface or ODBC or JDBC driver, micro-TDP, and MOSI on the operating system — connects through the gateway to a PE. The mainframe software stack — application, call-level interface, and TDP on the operating system — connects through the channel driver to a PE.]
Client Configuration Overview Summary

The opposite page summarizes some important concepts in this module.
Client Configuration Overview Summary

– The Channel Interface enables communication between a mainframe client and the Teradata server.
– TDP manages communications between client applications and the Teradata server.
– In a LAN environment, Teradata software and utilities are installed on each workstation.
– The Micro Teradata Director Program parcels user requests and sends them to the gateway.
– A session is a logical connection between the user and the database.
Review Questions

Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Review Questions

Indicate whether each statement is True (T) or False (F).

1. T F  The Teradata Director Program (TDP) facilitates communication between LAN clients and the Teradata database.
2. T F  TDP commands entered from the MVS or VM console are not executed until you execute the RUN command.
3. T F  You use two physical LAN connections per node to support concurrent sessions.
4. T F  CLI is an API that provides control over Teradata connectivity.
References

For more information on the topics covered in this module:

• Teradata Client for MVS Installation Guide (B035-2415-099A)
• Teradata Client for VM Installation Guide (B035-2422-099A)
• CICS Interface to the Teradata RDBMS (B035-2448-060A)
• IMS Interface to the Teradata DBS (B035-2447-122A)
• Teradata CLI V2 for Channel-Attached Systems (B035-2417-122A)
• Teradata TDP Reference (B035-2416-122A)
• Teradata RDBMS Utilities (B035-1102-122A)
Building the Database Environment 2-1
Module 2
Building the Database Environment
After completing this module, you will be able to:
• Describe the purpose and function of an administrative user.
• Differentiate between creators, owners (parents), and children.
• Describe how to transfer ownership of databases and users.
• Describe the hierarchical nature of the creation of databases/users.
Table of Contents

INITIAL TERADATA DATABASE .................... 2-4
ADMINISTRATIVE USER .......................... 2-6
OWNERS, PARENTS AND CHILDREN ................. 2-8
CREATING OBJECTS ............................. 2-10
DELETE/DROP STATEMENTS ....................... 2-12
HIERARCHIES SUMMARY .......................... 2-14
REVIEW QUESTIONS ............................. 2-16
REFERENCES ................................... 2-18
Initial Teradata Database

The Teradata Database software includes the following users and databases.

DBC User

With the few exceptions described below, a system user named DBC owns all usable disk space. DBC's space includes the dictionary tables, views and macros discussed in the Data Dictionary module.

The usable disk space of DBC initially reflects the entire system hardware capacity, less the following users and databases.

SYSADMIN User

SYSADMIN is a system user with a MAXPERM of 500,000 bytes. You may need to modify SYSADMIN to increase the perm space (suggested 5 MB or more). SYSADMIN contains several supplied views and macros as well as a restart table for all FastLoad jobs.

SYSTEMFE User

SYSTEMFE is a system user delivered with a small amount of space for tables. It contains special macros used to generate diagnostic reports for field support personnel logged on as this user.

CRASHDUMPS Database

CRASHDUMPS is a database provided for temporary storage of PDE dumps generated by the software. The default is 1 GB. CRASHDUMPS is created within system user DBC and its space is allocated from the current PERM space for DBC. You should enlarge the CRASHDUMPS database based on the size of the configuration to accommodate at least three dumps.

Default, All and PUBLIC Databases, and TDPUSER

Default, All, and PUBLIC are "dummy" database names used by the database system software. These databases are defined with no permanent space. TDPUSER is used to support two-phase commit.

System Calendar

The underlying base table consists of one row for each day within the range of Jan 1, 1900 through Dec 31, 2100. There is only one column, a date, in each row. Each level of view built on top of the base table adds intelligence to the date.
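For instance, the calendar views can be queried like any other view; this hedged sketch assumes the standard Sys_Calendar.Calendar view and its usual column names:

```sql
/* Column names as supplied with the standard system calendar views. */
SELECT calendar_date, day_of_week, month_of_year
FROM   Sys_Calendar.Calendar
WHERE  calendar_date BETWEEN DATE '2003-01-01' AND DATE '2003-01-07';
```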
Initial Teradata Database

[Slide: the system hierarchy under DBC. A box indicates current and maximum permanent space; no box indicates no permanent space. DBC owns Default, All, PUBLIC, and TDPUSER (no permanent space) and SYSADMIN, CRASHDUMPS, SYSTEMFE, and SYS_CALENDAR (permanent space).]
Administrative User

System user DBC contains all Teradata Database software components and all system tables.
Before you define application users and databases, you should first use the CREATE USER statement to create a special administrative user to complete these tasks.
The amount of space for the administrative user is allocated from DBC’s current PERM space. DBC becomes the owner of your administrative user and of all users and databases you subsequently create.
Be sure to leave enough space in DBC to accommodate the growth of system tables and logs, and the transient journal.
(You can name the user anything you would like. We have called the user SYSDBA.)
Create the administrative user, then logon as that user to protect sensitive data in DBC. In addition, change and secure the DBC password.
To ensure perm space comes from the administrative user, logon as that user to create other users and databases.
Notes:
• All space in the Teradata Database is owned. No disk space known to the system is unassigned or not owned.
• Think of a user as a database with a password. Both may contain (or “own”) tables, views, macros and stored procedures.
• Both users and databases may hold privileges.
• Only users may logon, establish a session with the Teradata Database, and submit requests.
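A minimal sketch of these steps follows; the space figures and passwords are placeholders, not recommended values:

```sql
/* Logged on as DBC: carve the administrative user out of DBC's perm space,
   leaving room for DBC's system tables, logs, and transient journal. */
CREATE USER SYSDBA FROM DBC AS
  PERM = 100000000000,
  SPOOL = 50000000000,
  PASSWORD = temp_sysdba;

/* Then change and secure the DBC password itself. */
MODIFY USER DBC AS PASSWORD = new_dbc_password;
```

Thereafter, log on as SYSDBA (not DBC) to create application users and databases, so their perm space is subtracted from SYSDBA.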
Administrative User

[Slide: DBC owns SYSADMIN, SYSTEMFE, CRASHDUMPS, and the administrative user SYSDBA. Boxes indicate current and maximum permanent space; most of DBC's space is allocated to SYSDBA.]
Owners, Parents and Children

As you define users and databases, a hierarchical relationship among them will evolve.
When you create new objects, you subtract permanent space from the assigned limit of an existing database or user. A database or user that subtracts space from its own permanent space to create a new object becomes the immediate owner of that new object.
An “owner” or “parent” is any object above you in the hierarchy. (Note that you can use the terms owner and parent interchangeably.) A “child” is any object below you in the hierarchy. An owner or parent can have many children. A child can have many owners or parents.
The term “immediate parent” is sometimes used to describe a database or user just above you in the hierarchy.
Example

The diagram on the facing page illustrates a Teradata system hierarchy. System user DBC is the owner, or parent, of all the objects in the hierarchy. The administrative user (SYSDBA) is the owner of all objects below it, such as Human_Resources, Accounting, Personnel and Benefits. These objects are also children of DBC, since DBC owns SYSDBA.
Owners, Parents, and Children

• Parent or owner – any object above you in the hierarchy
• Immediate owner – the object immediately above you in the hierarchy
• Child – any object below you in the hierarchy

Users may own databases and databases may own users.

[Slide: hierarchy with DBC at the top and SYSDBA and Security_Admin beneath it; SYSDBA owns Human_Resources and Accounting; Human_Resources owns Personnel and Benefits; Personnel contains tables PR01, PR02, and PR03.]
Creating Objects

The "creator" of an object is the user who submitted the CREATE statement.
Every object has one and only one creator. If you are the creator of a new object, you automatically have access rights to that object and anything created in it.
Notes:
• While you may be the creator of an object, you are not necessarily the immediate owner or even an owner of the database or user that contains the object.
• You are the immediate owner of an object if the new object is directly below you in the hierarchy.
• As a creator, you can submit a CREATE statement that adds a new object somewhere else in the hierarchy, assuming you have the appropriate privileges. In this instance, the creator (you) and the immediate owner are two different users or databases.
• If authorized, you may create databases or users FROM someone else's space.
• You can transfer databases and users from one immediate owner to another.
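The facing page's example can be sketched as statements; the names come from the diagram, and the GRANT mirrors the slide's "Grant Database on Human_Resources to Security_Admin":

```sql
/* Issued by DBC (or another authorized owner). */
GRANT DATABASE ON Human_Resources TO Security_Admin;

/* Issued by Security_Admin: the creator is Security_Admin, but the
   immediate owner of Payroll is Human_Resources, whose perm space funds it. */
CREATE USER Payroll FROM Human_Resources AS
  PERM = 10000000,
  PASSWORD = payroll_temp;

/* Ownership can later be transferred elsewhere in the hierarchy. */
GIVE Payroll TO Accounting;
```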
Creating Objects

• The creator is the user who submits the CREATE statement.
• Every object has one and only one creator.
• Every object has one and only one immediate owner but may have multiple owners above it in the hierarchy.
• A user (if authorized) may create databases or other users from someone else's space.

[Slide: the hierarchy from the previous page. DBC issues "GRANT DATABASE ON Human_Resources TO Security_Admin"; Security_Admin then issues "CREATE USER Payroll FROM Human_Resources AS...", so Payroll appears under Human_Resources even though Security_Admin is the creator.]
DELETE/DROP Statements DELETE DATABASE and DELETE USER statements delete all data tables, views, and macros from a database or user. The database or user remains in the Teradata Database as a named object and retains the available space. None of that space is any longer in use. All space used by the deleted objects becomes available as spool space until it is reused as perm space.
You must have DROP DATABASE or DROP USER privileges on the referenced database or user to delete objects from them. The database or user that you are dropping cannot own other databases or users.
DELETE USER Example The diagram on the facing page illustrates a DELETE USER statement. User Personnel has three tables: PR01; PR02; and PR03. Human Resources logs on to the system and submits the DELETE USER statement on user Personnel. All tables are deleted from the user space owned by Personnel.
DELETE USER Syntax

    DELETE DATABASE name ;
    DELETE USER name ;
The DROP DATABASE or DROP USER statement drops empty databases or users only. You must delete all objects associated with the database or user before you can drop the DATABASE or USER. When you drop a database or user, its perm space is credited to the immediate owner.
Note: Join indexes must be dropped explicitly. DELETE DATABASE will fail if there are any join indexes.
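A hedged sketch of the cleanup order implied by the note above; the database and join index names are hypothetical:

```sql
-- DELETE DATABASE fails while join indexes exist, so drop them first:
DROP JOIN INDEX Sales_DB.Order_JI;   -- hypothetical join index

-- Then remove all tables, views, and macros; Sales_DB keeps its
-- perm space allocation, now entirely unused:
DELETE DATABASE Sales_DB;
```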
DROP USER Example

The diagram on the facing page illustrates the DROP USER statement. Human Resources submits the DROP USER statement on user Personnel. User Personnel is dropped from the hierarchy. The user space that belonged to user Personnel is returned to its parent, Human Resources.
DROP USER Syntax

    DROP DATABASE name ;
    DROP USER name ;
DELETE/DROP Statements
[Diagram 1: DELETE USER Personnel; removes the tables, views, and macros owned by user Personnel. Personnel remains in the hierarchy under Human Resources, which is under SYSDBA.]

[Diagram 2: DROP USER Personnel; removes the now-empty user Personnel from the hierarchy. Its perm space is returned to its parent, Human Resources, under SYSDBA.]
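The two diagrams above are typically run as a sequence, for example:

```sql
DELETE USER Personnel;   -- removes Personnel's tables, views, and macros
DROP USER Personnel;     -- Personnel must now be empty; its perm space
                         -- is credited to its immediate owner
```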
Hierarchies Summary

The facing page summarizes some important concepts in this module.
Hierarchies Summary
– Initially, system user DBC owns all space in the Teradata Database (except that owned by system users and databases SYSADMIN, SYSTEMFE, and CRASHDUMPS).
– The database administrator should create a special administrative user containing most of the available space, which will become the owner of all administrator-defined application databases and users.
– Everyone higher in the hierarchy is a parent or owner. Everyone lower in the hierarchy is a child.
– Every object has one and only one immediate owner.
– Every object has one and only one creator. The creator is the user who executes the CREATE statement.
– The GIVE statement enables you to transfer an object. The following privileges are necessary:
  • DROP DATABASE on the given object.
  • CREATE DATABASE on the receiving object.
– You cannot DROP databases or users that own objects (tables, views, macros, journals, or child databases).
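The GIVE transfer summarized above can be sketched as follows; the database names are illustrative assumptions:

```sql
-- Transfer database Payroll to a new immediate owner, Accounting.
-- Requires DROP DATABASE on Payroll and CREATE DATABASE on Accounting:
GIVE Payroll TO Accounting;
```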
Review Questions

Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Review Questions
Indicate whether each statement is True (T) or False (F).

1. You may not drop Databases or Users in the hierarchy when they have children.  T  F
2. You should use system user DBC to create application Users and Databases.  T  F
3. A child object can have only one Owner.  T  F
4. An Owner and a Parent are the same thing.  T  F
5. To remove Tables, Views and Macros from a Database, use the DROP DATABASE command.  T  F
References

For more information on the topics covered in this module:
• Teradata RDBMS Database Design - (B035-1094-122A)
• Teradata RDBMS Database Administration - (B035-1093-122A)
Databases, Users and the Data Dictionary 3- 1
Module 3
After completing this module, you should be able to:
• Create Users and Databases.
• Use Profiles when creating new users.
• List an advantage of utilizing profiles.
• Use system views to display profile information.
• Summarize information contained in the Data Dictionary tables.
• Differentiate between restricted and unrestricted views.
• Use the supplied Data Dictionary views to retrieve information about created objects.
Databases, Users and the Data Dictionary
Table of Contents
CREATE DATABASE STATEMENT ... 3-4
CREATE USER STATEMENT ... 3-6
TERADATA PASSWORD ENCRYPTION ... 3-8
PASSWORD SECURITY FEATURES ... 3-10
PROFILES ... 3-12
EXAMPLE OF SIMPLIFYING USER MANAGEMENT ... 3-14
IMPLEMENTING PROFILES ... 3-16
IMPACT OF PROFILES ON USERS ... 3-18
CREATE PROFILE STATEMENT ... 3-20
PASSWORD ATTRIBUTES (CREATE PROFILE) ... 3-22
DATA DICTIONARY ... 3-24
FALLBACK PROTECTED DATA DICTIONARY TABLES ... 3-26
FALLBACK PROTECTED DATA DICTIONARY TABLES – CONT. ... 3-28
NON-HASHED DATA DICTIONARY TABLES ... 3-30
UPDATING DATA DICTIONARY TABLES ... 3-32
SYSTEM VIEWS ... 3-34
RESTRICTED VIEWS ... 3-36
USING RESTRICTED VIEWS ... 3-38
SELECTING INFORMATION ABOUT CREATED OBJECTS ... 3-40
CHILDREN VIEW ... 3-42
DATABASES VIEW ... 3-44
USERS VIEW ... 3-46
TABLES VIEW ... 3-48
TERADATA ADMINISTRATOR ... 3-50
DATABASES AND USERS SUMMARY ... 3-52
REVIEW QUESTIONS ... 3-54
LAB ... 3-56
REFERENCES ... 3-58
CREATE DATABASE Statement

As the database administrator, you use the CREATE DATABASE statement to add new databases to the existing system. The permanent space given to a new database comes from the current permanent space owned by the immediate parent (or owner) of the new database (specified either by default or in the FROM clause). A database is a uniquely named collection of tables, views, macros, and access rights.
The spool definition is not relevant to a database itself. However, it does establish the default and maximum value for objects you create within the database hierarchy.
NOTE: Account Priority information is discussed in the Priority Scheduler Facility module in the Teradata Warehouse Management course.
CREATE DATABASE Statement
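The slide body for this page did not survive extraction; here is a hedged sketch of a typical CREATE DATABASE statement. The names and sizes are illustrative assumptions:

```sql
CREATE DATABASE Payroll FROM Human_Resources AS
    PERM = 20e6,     -- taken from Human_Resources' unused perm space
    SPOOL = 100e6,   -- default/maximum spool for objects in this database
    FALLBACK;        -- default protection for tables created here
```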
CREATE USER Statement

The CREATE USER statement enables you to add new users to the system. The permanent space given to the new user comes from the current permanent space owned by the immediate parent (or owner) of the new user (specified either by default or in the FROM clause).
Users have passwords while databases do not. User passwords allow users to log on to the Teradata Database and establish sessions.
When you create a new user, there is a feature that allows you to create a temporary password for the user. When the user logs on for the first time, he or she is prompted to change the password.
If a user forgets their password, you can assign a new temporary password. (As another option, you can set user passwords not to expire.)
NOTE: Account Priority information is discussed in the Priority Scheduler Facility module in the Teradata Warehouse Management course.
CREATE USER Statement
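The slide body for this page did not survive extraction; here is a hedged sketch of a typical CREATE USER statement. The names, sizes, and password are illustrative assumptions:

```sql
CREATE USER Emp01 FROM Human_Resources AS
    PERM = 0,                      -- a user needs no perm space to log on
    SPOOL = 50e6,
    PASSWORD = temp123,            -- temporary if password expiration is
                                   -- enabled; the user is then prompted
                                   -- to change it at first logon
    DEFAULT DATABASE = Payroll;
```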
Teradata Password Encryption

You can give access to the Teradata Database with the CREATE USER statement, which identifies a username and usually a password value.
To establish a session on the Teradata system, a user must enter a username at logon. Upon successful logon, the username is associated with a unique session number until the user logs off.
Although the username is the basis for identification to the system, it is not usually protected information. Often the username is openly displayed during interactive logon, on printer listings, and when session information is queried.
To protect system access, associate a password with the username. Teradata does not display or print passwords on listings, terminals or PC screens.
Note: Neither you nor other system users should ever write down passwords or share them among users.
Teradata stores password information in encrypted form in the DBC.DBase system table. Information stored in the table includes the date and time a user defined a password, along with the encrypted password. As the administrator, you can force passwords to expire once PasswordLastModDate plus a fixed number of days has been reached. This allows you to ensure that users change their passwords regularly.
To supervise and enforce users’ access rights to stored data, the system associates each username with a default storage area and an arrangement of access rights.
Displaying Passwords

The PasswordString column of the DBC.DBase table displays encrypted passwords. The SQL request on the facing page demonstrates how you can access an encrypted password. Notice that the password is never decrypted.

DBC.Users View

The DBC.DBase table stores the date and time a user defines a password. The DBC.Users view displays PasswordLastModDate and PasswordLastModTime. A user can modify his or her password without additional access privileges.
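The view can be queried directly; a minimal sketch:

```sql
-- When was the current user's password last changed?
SELECT UserName, PasswordLastModDate, PasswordLastModTime
FROM   DBC.Users
WHERE  UserName = USER;
```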
Teradata Password Encryption
CREATE USER sa01 AS PASSWORD = ... ;

[Diagram: the password is passed through an encryption algorithm and stored, with the date and time, in DBC.DBase. Related tables: DBC.SysSecDefaults (security options) and DBC.OldPasswords (password history).]

SELECT DataBaseName, PasswordString
FROM   DBC.DBase
WHERE  DataBaseName = USER;

DataBaseName   PasswordString
------------   --------------
sa01           @<nU#?J%4m        99/10/13  10:45:39
Password Security Features

Teradata password security features allow you to:
• Expire passwords after a specific number of days.
• Define the amount of time to elapse before a password can be reused.
• Control minimum/maximum length of password.
• Disallow digits/special characters in a password.
• Limit the number of erroneous logon attempts before the system locks a user’s access.
• Automatically unlock users after a specific period of time.
You can enable these features by updating the appropriate row in the DBC.SysSecDefaults table as shown on the facing page.
When you create a new user, you also create a temporary password for the user. When the user logs on for the first time, he or she is prompted to change the password.
If a user forgets their password, you can assign a new temporary password. (As another option, you can set user passwords not to expire.)
Password Security Features
DBC.SysSecDefaults Column Descriptions

Column             Description
ExpirePassword     Number of days to elapse before the password expires. Zero (0) indicates passwords do not expire and temporary passwords are not enabled; default is 0.
PasswordMinChar    Minimum number of characters in a valid password string; default is 1.
PasswordMaxChar    Maximum number of characters in a valid password string; default is 30.
PasswordDigits     Indicates if digits are allowed in the password (Y or N); default is Y.
PasswordSpecChar   Indicates if special characters are allowed in the password (Y or N); default is Y.
MaxLogonAttempts*  Number of erroneous logons allowed before locking the user. Zero (0) indicates that the user is never locked; default is 0.
LockedUserExpire*  Number of minutes to elapse before a locked user is unlocked. Zero (0) indicates immediate unlock; default is 0.
PasswordReuse      Number of days to elapse before a password can be reused. Zero (0) indicates immediate reuse; default is 0.

* Note: If MaxLogonAttempts is set to a value other than zero, and if the time interval for LockedUserExpire is left at zero, then the user is never locked out.
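A hedged sketch of enabling some of these options; the values are illustrative assumptions, and DBC.SysSecDefaults holds a single system-wide row:

```sql
UPDATE DBC.SysSecDefaults
SET ExpirePassword   = 90,   -- passwords expire after 90 days
    PasswordMinChar  = 6,
    MaxLogonAttempts = 3,
    LockedUserExpire = 60;   -- locked users are freed after 60 minutes
-- Note: on some releases the change takes effect at the next
-- system restart.
```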
Profiles

Profiles define system parameters. Assigning a profile to a group of users ensures that all group members operate with a common set of parameters. To manage system parameters for groups, a database administrator can:
• Create a different profile for each user group, based on system parameters that group members share. You can define values for all or a subset of the parameters in a profile. If you do not set the value of a parameter, the system uses the setting defined for the user in a CREATE USER or MODIFY USER statement.
• Assign profiles to users.
The parameter settings in a user profile override the settings for the user in a CREATE USER or MODIFY USER statement. This information can be found in the view DBC.ProfileInfo[x]. Like roles, the concept of ownership and ownership hierarchy is not applicable to profiles.
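Existing profiles can be inspected through the view mentioned above; a minimal sketch (column names per the DBC.ProfileInfo view):

```sql
SELECT ProfileName, DefaultDB, SpoolSpace, TempSpace
FROM   DBC.ProfileInfo
ORDER BY ProfileName;
```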
Profiles
What is a “profile”?

• A set of common user parameters that can be applied to a group of users.
• Profile parameters include:
  – Account id(s)
  – Default database
  – Spool space allocation
  – Temporary space allocation
  – Password attributes (expiration, etc.)

What are the advantages of using “profiles”?

• Profiles simplify user management.
  – A change of a common parameter requires an update of a profile instead of each individual user affected by the change.

How are “profiles” managed?

• New DDL commands, tables, views, command options, and access rights.
  – CREATE PROFILE, MODIFY PROFILE, DROP PROFILE, and SELECT PROFILE
  – New system table – DBC.Profiles
  – New system views – DBC.ProfileInfo[x]
Example of Simplifying User Management

The profile concept provides a solution to the following problem. A customer has a group of 10,000 users that are assigned the same amount of spool space, the same default database, and the same account ID. Changing any of these parameters for 10,000 users is a very time-consuming task for the database administrator.

The administrator's task is simplified by creating a profile that contains one or more system parameters, such as account IDs, default database, spool space, and temporary space, and assigning this profile to the group of users. This simplifies system administration because a parameter change requires updating only the profile instead of each individual user.

In summary, a set of parameters may be given values in a profile, and the profile may be assigned to a group of users so that they share the same settings. This makes changing parameters for a group of users a single step instead of a multi-step (one for each user in the group) process.
Example of Simplifying User Management
Example:
• The problem:
  – A customer has a group of 10,000 users that are assigned the same spool space, the same default database, and the same account ID.
  – Changing any of these parameters for 10,000 users can be a very time-consuming task.

• A solution using profiles:
  – Create a profile that contains these parameters and assign that profile to the users.
  – This would simplify system administration because a parameter change requires updating only the profile instead of each individual user.
Implementing Profiles

The CREATE PROFILE and DROP PROFILE access rights are system rights. These rights are not on a specific database object. Note that the PROFILE privileges can only be granted to a user and not to a role or database.

Profiles enable you to manage the following common parameters:

• Account strings, including ASE codes and Performance Groups
• Default database
• Spool space
• Temporary space
• Password attributes, including:
  – Expiration
  – Composition (length, digits, and special characters)
  – Allowable logon attempts
  – Duration of user lockout (indefinite or elapsed time)
  – Reuse of passwords

Note: In the example on the facing page, another technique of granting CREATE PROFILE and DROP PROFILE to Sysdba is to use the following SQL:

    GRANT PROFILE TO SYSDBA WITH GRANT OPTION;

The keyword PROFILE gives both the CREATE PROFILE and DROP PROFILE access rights. To remove a profile from a user, use the MODIFY USER command:

    MODIFY USER User_A AS PROFILE = NULL;
Implementing Profiles
What access rights are used to support profiles?

• CREATE PROFILE – needed to create new profiles
• DROP PROFILE – needed to modify and drop profiles

Who is allowed to create and modify profiles?

• Initially, only DBC has the CREATE PROFILE and DROP PROFILE access rights.
• As DBC, give the “profile” access rights to the database administrators (e.g., Sysdba).

    GRANT CREATE PROFILE, DROP PROFILE TO SYSDBA WITH GRANT OPTION;

How are users associated with a profile?

• The CREATE PROFILE command is used to create a profile of desired attributes.

    CREATE PROFILE Employee AS … ;

• The PROFILE option (new) is used with the CREATE USER and MODIFY USER commands to assign a user to a specific profile.

    CREATE USER Emp01 AS …, PROFILE = Employee;
    MODIFY USER Emp02 AS PROFILE = Employee;
Impact of Profiles on Users

The assignment of a profile to a group of users is a way of ensuring that all members of a group operate with a common set of parameters. Therefore, the values in a profile always take precedence over values defined for a user via the CREATE USER and MODIFY USER statements.

All members inherit changed profile parameters. The impact is immediate, in response to a SET SESSION statement, or upon next logon, depending on the parameter:

• SPOOL and TEMP space allocations are imposed immediately. This will affect the current session of any member who is logged on at the time his or her user definition is modified.
• Password attributes take effect upon next logon.
• Account IDs and a default database are considered at next logon unless the member submits a SET SESSION ACCOUNT statement, in which case the account ID must agree with the assigned profile definition.

Order of Precedence

With profiles, there are three ways of setting accounts and the default database. The order of precedence (from high to low) is as follows:

1. The DATABASE statement is used to set the current default database, or SET SESSION ACCOUNT is used to set the account ID. However, a user can only specify a valid account ID.
2. Specify them in a profile and assign the profile to a user.
3. Specify accounts or a default database for a user through the CREATE USER/MODIFY USER statements.
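For instance, changing a shared limit for every member of a profile is a single statement; the profile name and value are illustrative:

```sql
-- The new SPOOL limit is imposed immediately, even on members
-- who are currently logged on:
MODIFY PROFILE Employee AS SPOOL = 100e6;
```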
Impact of Profiles on Users
The assignment of a profile to a group of users is a way of ensuring that all members of a group operate with a common set of parameters.

Profile definitions apply to every assigned user, overriding specifications at the system or user level.

• However, any profile definition can be NULL or NONE.

All members inherit changed profile parameters. The impact on current users is as follows:

• SPOOL and TEMPORARY space allocations are imposed immediately.
• Password attributes take effect upon next logon.
• Database and Account IDs are considered at next logon unless the member submits a SET SESSION ACCOUNT statement.

Order of Precedence for parameters:

1. Specify database or account ID at the session level
2. Specified parameters in a Profile
3. CREATE USER or MODIFY USER statements
CREATE PROFILE Statement

The CREATE PROFILE statement enables you to add new profiles to the system. The CREATE PROFILE access right is required in order to execute this command. The syntax is shown on the facing page.

Profile names come from their own name space. Like roles, the concept of ownership and ownership hierarchy is not applicable to profiles. A parameter not set in a profile will have a value of NULL. Resetting a parameter to NULL causes the system to apply the user's setting instead.

In a profile, the SPOOL and TEMPORARY limits may not exceed the current space limits of the user submitting the CREATE/MODIFY PROFILE statement.

The default database specified in a profile need not refer to an existing database. This is consistent with the CREATE USER and MODIFY USER statements, where a non-existent default database may be specified. An error is returned when the user tries to create an object within the non-existent database.

It is not necessary to define all of the parameters in a profile; a subset will also do. The parameter values in a user profile take precedence over the values set for the user. For example, if a user is assigned a profile containing Default Database and Spool Space, the profile settings will override the individual settings previously made via a CREATE USER or MODIFY USER statement.

Accounts in a profile will also override, not supplement, any other accounts the user may have. The assignment of a profile to a group of users is a way of ensuring that all group members operate with a common set of parameters. If profile accounts were supplemented with user accounts, this commonality would be lost. The first account in a list is the default account. If a parameter in a profile is not set, the user's setting is applied.

Note when using the CREATE USER command: when creating a new user, if the PROFILE option specifies a profile that does not exist, you will get the following error:

    Error 5653: Profile 'profile_name' does not exist.
CREATE PROFILE Statement

    CREATE PROFILE profile_name
        [ AS
          [ ACCOUNT = { 'account_ID' | ('account_ID' [, 'account_ID' …]) | NULL } ]
          [ DEFAULT DATABASE = { database_name | NULL } ]
          [ SPOOL = { n [BYTES] | NULL } ]
          [ TEMPORARY = { n [BYTES] | NULL } ]
          [ PASSWORD [ATTRIBUTES] = { (attribute [, attribute …]) | NULL } ] ] ;
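Putting the syntax together, a hedged example; the profile name, account strings, space values, and password attribute values are illustrative assumptions:

```sql
CREATE PROFILE Employee AS
    ACCOUNT = ('$M_Acct1', '$L_Acct2'),  -- first account is the default
    DEFAULT DATABASE = Payroll,
    SPOOL = 50e6,
    TEMPORARY = 50e6,
    PASSWORD = (EXPIRE = 90,
                MINCHAR = 6,
                MAXLOGONATTEMPTS = 3,
                LOCKUSEREXPIRE = 60,
                REUSE = 180);
```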
Password Attributes (CREATE PROFILE)

The facing page describes the password attributes associated with the CREATE PROFILE command.
Password Attributes (CREATE PROFILE)

    EXPIRE = n | NULL             (# of days; 0 = doesn't expire)
    MINCHAR = n | NULL            (range is 1 - 30)
    MAXCHAR = n | NULL            (range is 1 - 30)
    DIGITS = c | NULL             (values are Y, y, N, n)
    SPECCHAR = c | NULL           (values are Y, y, N, n)
    MAXLOGONATTEMPTS = n | NULL   (# of attempts; 0 = never locked)
    LOCKUSEREXPIRE = n | NULL     (# of minutes; 0 = not locked;
                                   -1 = locked indefinitely)
    REUSE = n | NULL              (# of days; 0 = reuse immediately)
Data Dictionary

The Data Dictionary is a complete database composed of system tables, views, and macros that reside in system user DBC.
Data dictionary tables are present when you install the system.
The system references some of these tables with SQL requests, while others are used for system or data recovery only.
Data dictionary views reference data dictionary tables. Views and macros are created by running Database Initialization Program (DIP) scripts.
Data dictionary tables are used to:
• Store definitions of objects you create (e.g., databases, tables, indexes, etc.).
• Control access to data.
• Record system events (e.g., logon, console messages, etc.).
• Hold system message texts.
• Control system restarts.
• Accumulate accounting information.
The first two bulleted items are of particular interest for physical implementation.
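For example, the object definitions recorded in the dictionary can be listed through the supplied views; a minimal sketch:

```sql
-- List objects owned by the current user
-- (TableKind: T = table, V = view, M = macro):
SELECT TableName, TableKind
FROM   DBC.Tables
WHERE  DataBaseName = USER;
```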
Data Dictionary
[Diagram: system user DBC owns SysAdmin, SystemFE, Crashdumps, and SysDBA. DBC contains the Data Dictionary tables (object definitions, system event logs, system message table, journals and restart control tables, accounting information, access control tables); views of the DD tables (Administrative, Security, Supervisory, End User, Operational); and macros (add calculation sequence, generate utilization reports, reset accounting values, authorize secured functions).]
Fallback Protected Data Dictionary Tables

Table Name              Description
Accounts                Account with which a user can log on
Acctg                   Each account a user owns on each AMP
All (Dummy)             Represents all tables
CollationTbl            User-defined collation tables
ConstraintNames         Named index or ref. constraint
DatabaseSpace           Space accounting
DBase                   Database and User profiles
DBCAssociation          Ported table information
DBCInfoTbl              Software release & version
ErrorMsgs               Message codes and text
EventLog                Session logon/logoff history
Global                  Internal system flag bytes
Hosts                   To override default char. sets
HW_Event_Log            Reserved for future use
IdCol                   Parameters of every identity column
Indexes                 Defines indexes on tables
InDoubtResLog           Two-phase commit rollback
LogonRuleTbl            Host & password requirements
MDSRecovery             Databases affected by down MDS gateway
Migration               (Internal use only)
Next                    Next internal identifier DBC uses for processing
OldPasswords            Encoded password history
Owners                  Hierarchy (downward)
ParentChildCorrelation  Parent/child date correlation
Parents                 Hierarchy (upward)
Profiles                User password security/system resources a user can use
ReferencedTbls          Parent table of a ref. constraint
ReferencingTbls         Child table of a ref. constraint
RepBatchStatus          Internal table
Roles                   Assigns privileges on objects/operations to users
RoleGrants              Roles granted to users or other roles
SessionTbl              Current logon information
SW_Event_Log            Database console log
SysSecDefaults          Logon security options
TableConstraints        Table-level constraints defined in the system
TempStatistics          Statistics collected on materialized tables
TempTables              Each materialized temp table in the system
TextTbl                 (Internal use only)
Translation             National character support
TriggersTbl             Stores trigger information
TVFields                Table/View column descriptions
TVM                     Tables, Views and Macros
UDFInfo                 Function information
UnResolvedReferences    Constraints in the system
Fallback Protected DD Tables
• Most data dictionary tables are fallback protected.
• Fallback protection means that a copy of every table row is maintained on a different AMP vproc in the configuration.
• Fallback-protected tables are always fully accessible and are automatically recovered by the system.
Fallback Protected Data Dictionary Tables – cont.

The following table describes the Data Dictionary tables added to support the Database Query Log, the access rights tables, and the Archive/Recovery related tables.

Table Name          Description
DBQLExplainTbl      Contains the EXPLAIN of the query
DBQLObjTbl          Populated if object info is requested for the query
DBQLLogTbl          The main table for DBQL
DBQLRuleCountTbl    Reserved for internal use
DBQLRuleTbl         The rule table for DBQL
DBQLSQLTbl          The SQL for the query
DBQLStepTbl         Step level information
DBQLSummaryTbl      Populated if summary info is requested
AccessRights        Users' rights on objects
AccLogRuleTbl       Specifies events to be logged
AccLogTbl           Logged user-object events
RCConfiguration     Archive/Recovery configuration
RCEvent             Archive/Recovery events
RCMedia             VolSerial Archive/Recovery
Fallback Data Dictionary Tables – cont.
DBQLExplainTbl – The EXPLAIN of the query
DBQLObjTbl – Object info for the query
DBQLLogTbl – The main table for DBQL
DBQLRuleCountTbl – Reserved for internal use
DBQLRuleTbl – The rule table for DBQL
DBQLSQLTbl – The SQL for the query
DBQLStepTbl – Step level information
DBQLSummaryTbl – Populated if summary info is requested
AccessRights – User rights on objects
AccLogRuleTbl – Events to be logged
AccLogTbl – Logged user-object events
RCConfiguration – Archive/Recovery config
RCEvent – Archive/Recovery events
RCMedia – Vol/Serial Archive/Recovery
Non-Hashed Data Dictionary Tables

The data dictionary tables on the following page contain rows that are not distributed using hash maps.
Rows in these tables are stored AMP-locally. For example, the TransientJournalTable rows are stored on the same AMP as the row being modified.
Note: User-defined table rows are always hash distributed ... either with or without a fallback copy.
Non-Hashed Data Dictionary Tables
Acctg – Resource usage by Acct/User
ChangedRowJournal – Down-AMP Recovery Journal
DatabaseSpace – Dbase and table space accounting
LocalSessionStatusTable – Last request status by AMP
LocalTransactionStatusTable – Last TXN consensus status
OrdSysChngTable – Table-level recovery
RecoveryLockTable – Recovery session locks
RecoveryPJTable – Permanent Journal recovery
SavedTransactionStatusTable – AMP recovery table
SysRcvStatJournal – Recovery, reconfig, startup info
TransientJournal – Backout uncommitted txns
UtilityLockJournalTable – Host Utility Lock records

[Diagram: a virtual AMP cluster of four AMPs with their vdisks. Each AMP holds its own AMP-local rows, while hashed user rows have a primary row on one AMP and a fallback row on another AMP in the cluster.]
Updating Data Dictionary Tables

Whenever you submit a data definition or data control statement, Teradata system software automatically updates data dictionary tables.
When you use the EXPLAIN modifier to describe a DDL statement, you can view updates to the data dictionary tables.
The EXPLAIN modifier is a helpful function that allows you to understand what happens when you execute an SQL statement.
• The statement is not executed.
• The type of locking used is described.
• At least five different tables are updated when you define a new table.
Updating Data Dictionary Tables
EXPLAIN CREATE TABLE DBA01.Department
    ( Department_Number        SMALLINT,
      Department_Name          CHAR(30) NOT NULL,
      Budget_Amount            DECIMAL(10,2),
      Manager_Employee_Number  INTEGER )
UNIQUE PRIMARY INDEX (Department_Number);

Explanation
---------------------------------------------------------------------------
 1) First, we lock DBA01.Department for exclusive use.
 2) Next, we lock a distinct DBC."pseudo table" for write on a RowHash for
    deadlock prevention, we lock a distinct DBC."pseudo table" for read on
    a RowHash for deadlock prevention, we lock a distinct DBC."pseudo
    table" for write on a RowHash for deadlock prevention, and we lock a
    distinct DBC."pseudo table" for write on a RowHash for deadlock
    prevention.
 3) We lock DBC.AccessRights for write on a RowHash, we lock DBC.TVFields
    for write on a RowHash, we lock DBC.TVM for write on a RowHash, we
    lock DBC.DBase for read on a RowHash, and we lock DBC.Indexes for
    write on a RowHash.
 4) We execute the following steps in parallel.
      1) We do a single-AMP ABORT test from DBC.DBase by way of the
         unique primary index.
      2) We do a single-AMP ABORT test from DBC.TVM by way of the unique
         primary index with no residual conditions.
      3) We do an INSERT into DBC.TVFields (no lock required).
      4) We do an INSERT into DBC.TVFields (no lock required).
      5) We do an INSERT into DBC.TVFields (no lock required).
      6) We do an INSERT into DBC.TVFields (no lock required).
      7) We do an INSERT into DBC.Indexes (no lock required).
      8) We do an INSERT into DBC.TVM (no lock required).
      9) We INSERT default rights to DBC.AccessRights for
         DBA01.Department.
 5) We create the table header.
 6) Finally, we send out an END TRANSACTION step to all AMPs involved in
    processing the request.
 -> No rows are returned to the user as the result of statement 1.
3- 34 Databases, Users and the Data Dictionary
System Views Teradata supplies system views that provide access to the data dictionary tables stored in system user DBC.

As summarized on the facing page, system views:

• Clarify tables by re-titling tables and columns, reordering and formatting columns, and deriving new column data.
• Simplify operations by supplying join syntax and selecting only relevant rows and columns.
• Limit access to data by excluding certain rows and columns from selection or update.
• Reduce maintenance, because tables can be dropped and recreated, or columns added and dropped, without affecting applications or the access rights granted on the views.
Databases, Users and the Data Dictionary 3- 35
System Views
Clarify tables
• Re-title tables and/or columns.
• Reorder and format columns.
• Compute (derive) new column data.

Simplify operations
• Supply join operation syntax.
• Select and project relevant rows and columns.

Limit access to data
• Exclude certain rows and/or columns from selection.
• Limit update to selected table rows and/or columns.

Reduce maintenance
• When you add or drop columns, applications are not affected (unless a view references a dropped column).
• You can drop and recreate tables without affecting access rights granted to views.
[Diagram: applications, utilities, and coordinated products access the underlying tables through views.]
3- 36 Databases, Users and the Data Dictionary
Restricted Views There are two versions of the system views: restricted [x] and non-restricted [non-x]. The system administrator can load either or both versions.
Non-X views are named according to the contents of their underlying tables. DiskSpace, TableSize, and SessionInfo are examples of Non-X views.
X Views are the same views with an appended WHERE clause. The WHERE clause limits the information returned by a view to only those rows associated with the requesting user.
Granted Rights By default, the SELECT privilege is granted to PUBLIC User on most views in X and non-X versions. This privilege allows any user to retrieve view information via the SELECT statement. The system administrator can use GRANT or REVOKE statements to grant or revoke a privilege on any view to or from any user at any time.
Special Needs Views Some views are applicable only to users who have special needs. For example, the administrator, a security administrator, or a Teradata field engineer may need to see information that other users do not need. Access to these views is granted only to the applicable user.
Access Tests Limited views typically run three different tests before returning information from data dictionary tables to a user. Each test focuses on the user and his or her current privileges. It can take longer to receive a response when a user accesses a restricted view.
Note: One suggestion is to create a database called DBCX, move these views into it, and then GRANT SELECT on them to PUBLIC.
Databases, Users and the Data Dictionary 3- 37
Restricted Views
Views with an [X] suffix typically make the following three tests before returning information to the user:

View used with suffix [x]

DD TABLES

Where the user holds certain rights on the selected objects
Where the user owns the selected objects
Where the user is the selected object
3- 38 Databases, Users and the Data Dictionary
Using Restricted Views Views with an [X] suffix return information only about the executing user, and include information about objects owned by the user or on which he or she has privileges.
Operations that use restricted views tend to take longer to run because these views access more data dictionary tables. By contrast, operations that use unrestricted views may run faster but return more rows.
To control access to data dictionary information, the administrator can grant users permission to access only restricted views.
Databases, Users and the Data Dictionary 3- 39
Using Restricted Views
Views with an [x] suffix return information only on objects that the requesting user:

− Owns, or
− Has privileges on

The following query returns information about ALL parents and children recorded in the underlying dictionary table:

SELECT Child, Parent
FROM   DBC.Children;

The restricted [x] version of this view selects only information on objects controlled by the executing user:

SELECT Child, Parent
FROM   DBC.ChildrenX;
3- 40 Databases, Users and the Data Dictionary
Selecting Information about Created Objects The following views return information about created objects.
Note: The table "Indexes" is referenced by a view spelled "Indices."
Object Definition System Views
View Name Data Dictionary Table
Purpose
DBC.Children[x] DBC.Owners Provides information about hierarchical relationships.
DBC.Databases[x] DBC.DBase Provides information about databases, users and their immediate parents.
DBC.Users DBC.DBase Similar to Databases view but includes columns specific to users.
DBC.Tables[x] DBC.TVM Table, view, macro, join index, stored procedure information.
Databases, Users and the Data Dictionary 3- 41
Selecting Information About Created Objects
VIEW NAME DESCRIPTION
DBC.Children[x]
DBC.Databases[x]
DBC.Users
DBC.Tables[x]
Hierarchical relationship information.
Similar to Databases view, but includes columns specific to users.
Table, view, macro, join index, stored procedure information.
Database, user and immediate parent information.
3- 42 Databases, Users and the Data Dictionary
Children View The Children view lists the names of databases and users and their parents in the hierarchy.
Column Names
Child Name of a child database or user
Parent Name of a parent database or user
Example The diagram on the facing page uses an SQL statement to list the parents of the current user. The SQL keyword USER causes the parser to substitute the USER ID of the user who has logged on and submitted the statement. The results of the request show one child, TDA05, and four parents.
Databases, Users and the Data Dictionary 3- 43
Children View
Provides the names of all databases, users and their owners where the user owns or has access rights on the user or database.

DBC.Children[x]
Child    Parent

EXAMPLE: Using the unrestricted form of the view and a WHERE clause, list your parents.

SELECT *
FROM   DBC.Children
WHERE  Child = USER;

Child      Parent
---------  ---------
TDA05      TDA
TDA05      Students
TDA05      SysDBA
TDA05      DBC
3- 44 Databases, Users and the Data Dictionary
Databases View The Databases view returns information about databases and users from the DBC.DBase table.
Notes:
• Only the immediate owner is identified in this view. Use the parent column of the Children view to select all owners.
• The data dictionary records the name of the creator of a system user or database, as well as the date and time the user created the object. This information is not used by the software, but is recorded in DBC.DBase for historical purposes.
Column definitions in this view include:
Column            Definition
OwnerName         The IMMEDIATE parent (owner).
ProtectionType    Default protection type for tables created within this
                  database:
                    F = Fallback
                    N = No Fallback
JournalFlag       Two characters (before and after image) where:
                    S = Single
                    D = Dual
                    N = None
                    L = Local
                  For example:
                    SD = Single before, dual after image
                    NL = (Single) Local after image
CreatorName       Name of the user who created the object.
CreateTimeStamp   Date and time the user created the object.
Example The SQL request on the facing page uses the Databases view to find the user's creator, permanent disk space limit, and spool disk space limit.
Databases, Users and the Data Dictionary 3- 45
Databases View
Provides information about databases and users owned by the user or on which he or she has privileges.

DBC.Databases[x]
DatabaseName     CreatorName       OwnerName
AccountName      ProtectionType    JournalFlag
PermSpace        SpoolSpace        TempSpace
CommentString    CreateTimeStamp   LastAlterName
LastAlterTimeStamp                 DBKind

EXAMPLE: Find creator name, date-time stamp, perm space and spool space.

SELECT CreatorName,
       CreateTimeStamp,
       PermSpace,
       SpoolSpace
FROM   DBC.Databases
WHERE  DatabaseName = USER;

CreatorName   CreateTimeStamp       PermSpace     SpoolSpace
-----------   -------------------   -----------   ----------
TDA           1998-12-27 12:23:01   800,000,000    6,000,000
3- 46 Databases, Users and the Data Dictionary
Users View The Users view is a subset of the Databases view and:
• Limits rows returned from DBC.DBase to only USER records (e.g., where there is a password).
• Restricts rows returned to:
− The current users’ information.
− Information about owned users or databases (i.e., children).
− Information about users on which the current user has DROP USER or DROP DATABASE rights.
− Date and time a user is locked due to excessive erroneous passwords, and the number of failed attempts since the last successful one.
The view features CreatorName and CreateTimeStamp columns that display the name of the user who created an object and the date and time he or she created it. The LastAlterName and LastAlterTimeStamp columns list the name of the last user to modify an object, as well as the date and time.
Note: The users view is already a restricted view; there is no [X] version.
Column definitions in this view include:
Column Definition
PermSpace Maximum permanent space available for this user.
SpoolSpace Maximum spool space available for this user.
DefaultCollation A = ASCII
E = EBCDIC
M = Multinational
H = Host (default)
Example The SQL statement on the facing page finds the user’s default account code and name of the immediate owner.
Databases, Users and the Data Dictionary 3- 47
Users View
Provides information about the users that the requesting user owns or to which he or she has modify rights. (This is a restricted view; there is no [x] version.)

DBC.Users
UserName            CreatorName        PasswordLastModDate
PasswordLastModTime OwnerName          PermSpace
SpoolSpace          ProtectionType     JournalFlag
StartupString       DefaultAccount     DefaultDatabase
CommentString       DefaultCollation   PasswordChgDate
LockedDate          LockedTime         LockedCount
TimeZoneHour        TimeZoneMinute     DefaultDateForm
CreateTimeStamp     LastAlterName      LastAlterTimeStamp
DefaultCharType     TempSpace

EXAMPLE: Find your default account code and the name of your immediate owner.

SELECT UserName,
       CreateTimeStamp,
       DefaultAccount,
       OwnerName
FROM   DBC.Users
WHERE  UserName = USER;

UserName   DefaultAccount   OwnerName   CreateTimeStamp
--------   --------------   ---------   -------------------
TDA01      $M_P9210         TDA         1998-12-27 12:23:01
3- 48 Databases, Users and the Data Dictionary
Tables View The Tables view accesses the data dictionary table, DBC.TVM, which contains descriptions of tables, views, and macros, etc.
The view features a TableKind column that allows you to specify the kind of object to reference.
The view also features CreatorName and CreateTimeStamp columns that display the name of the user who created an object and the date and time he or she created it. The LastAlterName and LastAlterTimeStamp columns list the name of the last user to modify an object, as well as the date and time.
If a Primary Key is defined, this entry indicates whether it has been implemented as a UPI (value = 0) or a USI (value <> 0).
As the administrator, use this view to find NO FALLBACK tables (where ProtectionType = 'N').
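Such a check might be written as follows (a sketch only; the column names are those of the DBC.Tables view shown on the facing page):

```sql
-- Sketch: list data tables (TableKind 'T') that are not fallback protected.
SELECT DataBaseName,
       TableName
FROM   DBC.Tables
WHERE  ProtectionType = 'N'
AND    TableKind = 'T'
ORDER  BY 1, 2;
```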
Additional column definitions for this view include:
Column Definition
Version A number incremented each time a user alters a table.
RequestText Returns the text of the most recent DDL statement that was used to CREATE or MODIFY the table.
TableKind T = Table
M = Macro
V = View
J = Journal table
I = Join index
G = Trigger
P = Stored Procedure
Example The SQL statement on the facing page requests a list of all tables, views, etc. that contain the letters “rights” in their name. The response displays the database name, table name, and a code for the type of object.
Databases, Users and the Data Dictionary 3- 49
Tables View
Provides information about tables, views, macros, etc. owned by the current user or on which he or she has privileges.

DBC.Tables[x]
DataBaseName       TableName        Version
TableKind          ProtectionType   JournalFlag
CreatorName        RequestText      CommentString
ParentCount        ChildCount       NamedTblCheckCount
UnnamedTblCheckExist                PrimaryKeyIndexId
CreateTimeStamp    LastAlterName    LastAlterTimeStamp

EXAMPLE: List all tables, views, macros, etc. that contain the letters "rights" in their name.

SELECT TRIM(DatabaseName) || '.' || TableName (TITLE 'Qualified Name'),
       TableKind
FROM   DBC.Tables
WHERE  TableName LIKE '%rights%'
ORDER  BY 1, 2;

Qualified Name          TableKind
---------------------   ---------
DBC.AccessRights        T
DBC.AllRights           V
DBC.UserGrantedRights   V
DBC.UserRights          V
3- 50 Databases, Users and the Data Dictionary
Teradata Administrator Teradata Administrator is an application that you can use to perform database administration tasks on the Teradata Database. It is available with Teradata Manager and as a stand-alone package.
Teradata Administrator runs in a standard Windows application window and provides an easy-to-use Windows-based graphical interface to the Teradata Data Dictionary.
On Teradata Manager, Teradata Administrator is configured by default as a menu start application on the Tools pulldown menu of the Executive menu bar in the DEFAULT profile. Teradata Administrator is also configured as a separate application icon in the Teradata Manager group window so you can run it separately and independently of the Executive window. You may use it to:
• Create, modify and drop users or databases.
• Create tables (using ANSI or Teradata syntax).
• Grant or revoke access/monitor rights.
• Copy table, view or other object definitions to another database, or to another system.
• Drop or rename tables, views or other objects.
• Move space from one database to another.
• Run an SQL query.
• Display information about a Database (list of tables, views, macros, child databases, and rights, etc.)
• Display information about a table, view, or other object (columns, journals, indexes, row counts, users, and space summary, etc.).
Teradata Administrator keeps a record of all the actions you take and optionally can save this record to a file. This record contains a time stamp with the executed SQL and other information such as the statement’s success or failure.
To use the viewing functions of Teradata Administrator, you must have Select access to the DBC views of the Teradata Database. To use Copy, Drop, Create or Grant tools, you must have the corresponding privilege on the table or database that you are trying to modify or create. To use Browse or Row Count features, you must have select access to the Table or View.
The Teradata SQL Assistant is now integrated with Teradata Administrator. If Teradata Administrator detects Teradata SQL Assistant on the PC, it allows you to select it as the query interface to use in place of the Teradata Administrator Query window.
Databases, Users and the Data Dictionary 3- 51
Teradata Administrator
• Uses ODBC connection
• Multiple Teradata systems connections
• Scroll and window pane size options
• Drag and drop options
3- 52 Databases, Users and the Data Dictionary
Summary The opposite page summarizes some important concepts in this module.
Databases, Users and the Data Dictionary 3- 53
Summary
• A user’s position in the hierarchy does not affect the user’s priority.
• A profile is a set of common user parameters that can be applied to a group of users.
• The CREATE PROFILE command is used to create a profile of desired attributes.
  CREATE PROFILE profile_name AS … ;
• The PROFILE option (new) is used with CREATE USER and MODIFY USER commands to assign a user to a specific profile.
  – CREATE USER user1 AS …, PROFILE = prof_name;
  – MODIFY USER user2 AS PROFILE = prof_name;
• The data dictionary consists of tables, views and macros stored in system user DBC.
• The Teradata Database automatically updates data dictionary tables as you create or drop objects.
• You can access data dictionary tables with supplied views.
• Data dictionary tables keep track of all created objects:
  – Databases and users
  – Columns and indexes
  – Hierarchies
  – Tables, views, macros, triggers, join indexes, and stored procedures
3- 54 Databases, Users and the Data Dictionary
Review Questions Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Databases, Users and the Data Dictionary 3- 55
Review Questions
1. T  F  You can give the authority to use the CREATE DATABASE and CREATE USER statements only to system administrators.

2. T  F  All Profile designations are effective immediately.

3. T  F  System views have been created to provide data dictionary data to users of the system.

4. What is an advantage of using Profiles?

5. In which two places is password security information defined?
   A:
   B:

Match the view name with its purpose.

___ Children     A. Data about tables, views, macros.
___ Databases    B. Information about hierarchical relationships.
___ Tables       C. Information about databases, users, and immediate parents.
3- 56 Databases, Users and the Data Dictionary
Lab 1 The Lab for this Module is in Appendix B. Please follow your Instructor’s directions for completing Lab assignments.
Databases, Users and the Data Dictionary 3- 57
Lab 1
See Lab 1 in Appendix B
3- 58 Databases, Users and the Data Dictionary
References For more information on topics covered in this module:
• Teradata RDBMS Database Design - (B035-1094-122A)
• Teradata RDBMS Security Administration Guide - (B035-1100-122A)
• Teradata RDBMS SQL Reference - (B035-1101-122A)
Space Allocation and Usage 4-1
Module 4
Space Allocation and Usage
After completing this module, you will be able to:
• Define permanent space, spool space and operating system space requirements.
• Estimate system capacity.
• Use the DiskSpace, TableSize and AllTempTables views to monitor disk space utilization.
• Use the DBC.ClearPeakDisk macro to reset data dictionary tables used to collect accounting information.
4-2 Space Allocation and Usage
Notes:
Space Allocation and Usage 4-3
Table of Contents
PERMANENT SPACE TERMINOLOGY .........................................................................................................4-4
SPOOL AND TEMP SPACE TERMINOLOGY ..............................................................................................4-6
ASSIGNING SPACE LIMITS................................................................................................................................4-8
GIVING ONE USER TO ANOTHER ............................................................................................................... 4-10
RESERVING SPACE FOR SPOOL.................................................................................................................. 4-12
VIEWS FOR SPACE ALLOCATION REPORTING.................................................................................. 4-14
DISKSPACE VIEW ............................................................................................................................................... 4-16
TABLESIZE VIEW ................................................................................................................................................ 4-18
ALLTEMPTABLES VIEW ................................................................................................................................. 4-20
RESETTING PEAK VALUES ............................................................................................................................ 4-22
REVIEW QUESTIONS ......................................................................................................................................... 4-24
REFERENCES ........................................................................................................................................................ 4-26
4-4 Space Allocation and Usage
Permanent Space Terminology MaxPerm
MaxPerm is the maximum number of bytes available for table, index, and permanent journal storage in a database or user.
The number of bytes specified is divided by the number of AMPs in the system. The result is recorded on each AMP and may not be exceeded on that vproc.
Perm space limits are deducted from the limit set for the immediate parent of the object defined.
Perm space is acquired when data is added to a table. The space is released when you delete or drop objects.
CurrentPerm CurrentPerm is the total number of bytes (including table headers) in use on the database to store the tables, subtables and permanent journals contained in a User or Database. This value is maintained on each AMP.
PeakPerm PeakPerm is the largest number of bytes ever actually used to store data in a user or database. This value is maintained on each AMP.
Reset the PeakPerm value to zero by using the ClearPeakDisk Macro supplied in User DBC.
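For example, the macro can be executed directly (assuming you hold the EXECUTE privilege on it):

```sql
-- Resets the peak space values recorded in the data dictionary.
EXEC DBC.ClearPeakDisk;
```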
Note: Space limits are enforced at the database level. A database or user may own several small tables or a few large tables as long as they are within the MaxPerm limit set on each AMP.
Space Allocation and Usage 4-5
Permanent Space Terminology
MaxPerm
The maximum number of bytes available for table, index and permanent journal storage in a database or user.

CurrentPerm
The total number of bytes in use to store the tables, subtables, and permanent journals contained in the database or user.

PeakPerm
The largest number of bytes actually used to store data in this user since the value was last reset.
MaxPerm
CurrentPerm
PeakPerm
4-6 Space Allocation and Usage
Spool and Temp Space Terminology MaxSpool
MaxSpool is a value used to limit the number of bytes the system will allocate to create spool files and volatile tables for a user.
The value you specify may not exceed that of a user's immediate parent (database or user) at the time you create the user. If you do not specify a value, MaxSpool defaults to the immediate parent’s MaxSpool value.
Limit the spool space you allocate to users to reduce the impact of "runaway" transactions, such as accidental product joins.
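A spool limit can be lowered at any time with MODIFY USER. A minimal sketch (the user name is hypothetical):

```sql
-- Cap this user's spool at 25 MB to contain runaway queries.
MODIFY USER PA01 AS SPOOL = 25000000;
```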
CurrentSpool CurrentSpool is the number of bytes in use for running transactions. This value is maintained on each AMP for each user.
Note: Spool space marked (last use) is recovered by a worker task that is initiated every five minutes.
PeakSpool PeakSpool is the maximum number of bytes used since the value was last reset by the ClearPeakDisk Macro (supplied in system user DBC).
MaxTemp MaxTemp is the limit of space available to be used for Global Temporary Tables for the database/user. The value you specify may not exceed the user’s immediate parent level. If you do not specify a value, MaxTemp defaults to the immediate parent’s value.
CurrentTemp This is the amount of space currently in use by Global Temporary Tables for the database/user.
PeakTemp This is the maximum space used by Global Temporary Tables for a user/database since the value was last reset by the DBC.ClearPeakDisk macro.
Note: Temp space is released when the session terminates or earlier depending on options and actions taken by the user.
Space Allocation and Usage 4-7
Spool and Temp Space Terminology
MaxSpool
A value used to limit the number of bytes the system will consume to create spool files and volatile tables for a user.

CurrentSpool
The number of bytes currently in use for running transactions.

PeakSpool
The maximum number of bytes used by a transaction run for this user since the value was last reset with the ClearPeakDisk macro.

PeakTemp
Maximum space used by Global Temporary Tables for a user/database since last reset with the ClearPeakDisk macro.

MaxTemp
The limit of space used for Global Temporary Tables (at the system level or user level).

CurrentTemp
Amount of space currently in use by Global Temporary Tables.
4-8 Space Allocation and Usage
Assigning Space Limits You define permanent and spool space limits at the database or user level, not at the table level.
When you create databases or users, perm space limits are deducted from the available (unused) space of the immediate owner.
The spool and temporary space limits may not exceed that of the immediate owner at the time you create an object. If you do not specify a spool or temporary space limit, the new object “inherits” its limit from the immediate owner (user or database).
Example The diagram on the facing page illustrates how Teradata manages space.
A user, Payroll, has a 25 MB permanent space limit and a 50 MB spool space limit.
Payroll creates two new users, PA01 and PA02. After Payroll creates the new objects, its MaxPerm space drops to 15 MB. PA01 has 6 MB of MaxPerm and PA02 has 4 MB.
Later, Payroll drops user PA02. Payroll's MaxPerm space increases to 19 MB since it regains the permanent space that used to belong to PA02.
Payroll has a limit of 50 MB of MaxSpool. When it creates PA01, it assigns 25 MB of MaxSpool to the new user. Since there is no statement of spool space for PA02, its MaxSpool defaults to the limit of its immediate parent: 50 MB.
Since there is no statement of temporary space for either PA01 or PA02, their MaxTemp defaults to the limit of their immediate parent: 30 MB.
The immediate owner’s amount of MaxPerm decreases and increases as that owner creates and drops new users. The spool space figure remains constant even when the immediate owner adds and drops users.
Space Allocation and Usage 4-9
Assigning Space Limits
Payroll
CREATE USER PA01 AS PASSWORD = ABC,
    PERM = 6000000, SPOOL = 25000000;

CREATE USER PA02 AS PASSWORD = XYZ,
    PERM = 4000000;
PA01 PA02
Payroll
PA01
DROP USER PA02;
MaxPerm  = 15,000,000
MaxSpool = 50,000,000
MaxTemp  = 30,000,000

The MaxSpool and MaxTemp values may not exceed the value for the immediate owner at the time you create the new user.

MaxPerm  = 19,000,000
MaxSpool = 50,000,000
MaxTemp  = 30,000,000

MaxPerm  = 25,000,000
MaxSpool = 50,000,000
MaxTemp  = 30,000,000

Payroll

MaxPerm  = 6,000,000
MaxSpool = 25,000,000
MaxTemp  = 30,000,000
4-10 Space Allocation and Usage
Giving One User To Another When you give an object to another object in the hierarchy, all space allocated to that object goes with it. If you drop the object, its space is credited to its immediate owner.
When you give databases or users, all descendants of the given object remain descendants of the given object.
When you give an object to new parents, the ownership of space is transferred, however the limits remain the same.
Adjusting Perm Space Limits You can easily adjust perm space limits. Using the illustration on the facing page as an example, you could transfer 5 MB from Human_Resources to Accounting using the following technique:
1. CREATE DATABASE Temp FROM Human_Resources AS PERM = 5000000;
2. GIVE Temp TO Accounting;
3. DROP DATABASE Temp;
Notes:
• You enforce limits when you create an object.
• Objects you give may have spool limits that exceed that of their new owner.
Space Allocation and Usage 4-11
Giving One User to Another
SysDBA
Human_Resources Accounting
Personnel Benefits
GIVE Payroll TO Accounting;
Payroll
(Users) (Users)
MaxPerm  = 10
MaxSpool = 50

MaxPerm  = 10
MaxSpool = 20

MaxPerm  = 15
MaxSpool = 50
PA02PA01
4-12 Space Allocation and Usage
Reserving Space for Spool Spool space serves as temporary storage for returned rows during transactions that users submit. To ensure that space is always available, you may want to set aside about 35-40% of total available space as spool space. To do this, you can create a special database called Spool_Reserve. This database will not be used to load tables.
Decision support applications should reserve more of the total disk space as reserved spool space since their SQL statements generate larger spool files. OLTP applications can use less as reserved spool space because their statements generate smaller spool files.
The above actions guarantee that data tables will never occupy more than 60-65% of the total disk space. Since there is no data stored in Spool_Reserve, the system will use its permanent space as spool space when necessary.
File System Writes Though this approach prevents a certain amount of usable space from being used for tables and indexes, it does not guarantee that the space exists as whole cylinders for spool. It is important to know that space (Perm, Temp, and Spool) is allocated on a whole cylinder basis. This is why PercentFill and PackDisk are so important.
Space Allocation and Usage 4-13
Reserving Space for Spool
DBC
Spool_Reserve Users
View and Macro
Databases
To ensure space will always be available for spool....

CREATE DATABASE Spool_Reserve AS PERM = XXXXXXXXXX ;
(The space used by Spool_Reserve reduces the total available permanent space in the system by 35%-40%.)

Do not use the Spool_Reserve database to store tables.
Data tables will occupy up to 60-65% of the total disk space.
Database(1) (2)
Database
SysAdmin SystemFE CrashdumpsSysDBA
4-14 Space Allocation and Usage
Views for Space Allocation Reporting Use the following system views to report current space allocation:
DBC.DiskSpace[x] This view gives AMP information about disk space usage (including spool) for any database or account. It gets this information from the "All" table entry.
DBC.TableSize[x] This view gives AMP information about disk space usage (excluding spool) for any table or account.
DBC.AllTempTables[X] This view provides information about the local temporary tables materialized from the base global temporary tables.
Space Allocation and Usage 4-15
Views for Space Allocation Reporting
VIEW NAME DESCRIPTION
DBC.DiskSpace[x]
DBC.TableSize[x]
AMP information about disk space usage (including spool) for any database or account.

AMP information about disk space usage (excluding spool) for any table or account.

DBC.AllTempTables[x]    Information about local temporary tables materialized from base global temp tables.
4-16 Space Allocation and Usage
DiskSpace View The DiskSpace(x) view provides AMP information about disk space usage at the database level. This view includes spool space usage.
Example The SELECT statement on the facing page calculates the percentage of disk space used in the owner's database. The result displays a report with five rows of data. Finance has the highest percentage of utilized space at 98.46%. SystemFE has the lowest at 7.07%.
Note: In the statement, use NULLIFZERO to avoid a divide exception.
Space Allocation and Usage 4-17
DiskSpace View
AMP disk space usage at the database level, including spool space.
EXAMPLE: Calculate the percentage of total space used by each database:

SELECT   DatabaseName,
         SUM (MaxPerm),
         SUM (CurrentPerm),
         SUM (CurrentPerm) * 100 / NULLIFZERO (SUM (MaxPerm))
            (FORMAT 'ZZ9.99%', TITLE 'Percent//Used')
FROM     DBC.DiskSpace
GROUP BY 1
ORDER BY 4 DESC ;

DataBaseName    Sum(MaxPerm)      Sum(CurrentPerm)   Percent Used
------------    --------------    ----------------   ------------
Finance          1,824,999,996       1,796,817,408         98.46%
Mdata           12,000,000,006       8,877,606,400         73.98%
DBC              2,067,640,026         321,806,848         15.56%
CrashDumps         320,000,000          38,161,408         12.72%
SystemFe             1,000,002              70,656          7.07%

DBC.DiskSpace[x] columns: Vproc, DatabaseName, AccountName, MaxPerm, MaxSpool, MaxTemp, CurrentPerm, CurrentSpool, CurrentTemp, PeakPerm, PeakSpool, PeakTemp

Includes/Excludes the numbers from table ALL.
4-18 Space Allocation and Usage
TableSize View

The TableSize[x] view provides AMP information about disk space usage at a table level, optionally only for tables the current user owns or has SELECT privileges on.

Example

The SELECT statement on the facing page looks for poorly distributed tables by displaying the CurrentPerm figures for tables on all AMPs for the current user.

The result displays two tables:

• The Employee table is evenly distributed across all AMPs in the system. The CurrentPerm figure is nearly identical across all AMPs.

• The Employee_nupi_ondept table is poorly distributed. The CurrentPerm figures range from 4,096 bytes to 30,208 bytes on different AMPs.
Space Allocation and Usage 4-19
TableSize View
AMP disk space usage at table level.
DBC.TableSize[x] columns: Vproc, DatabaseName, AccountName, TableName, CurrentPerm, PeakPerm

EXAMPLE: Look for poorly distributed tables.

SELECT   Vproc,
         TableName (FORMAT 'X(20)'),
         CurrentPerm
FROM     DBC.TableSize
WHERE    DatabaseName = USER
ORDER BY TableName, Vproc ;

Vproc  TableName             CurrentPerm
-----  --------------------  -----------
    0  employee                   18,944
    1  employee                   18,944
    2  employee                   18,944
    3  employee                   19,968
    0  employee_nupi_ondept        4,096
    1  employee_nupi_ondept       30,208
    2  employee_nupi_ondept       15,360
    3  employee_nupi_ondept       12,288

Includes/Excludes the numbers from table ALL.
4-20 Space Allocation and Usage
AllTempTables View

The AllTempTables view provides information about all global temporary tables materialized in the system.
A global temporary table is created by explicitly stating the keywords GLOBAL TEMPORARY in the CREATE TABLE statement. The temporary table defined during the CREATE TABLE statement is referred to as the base temporary table.
When referenced in an SQL session, a local temporary table is materialized with the exact same definition as the base table. Once the temporary table is materialized, subsequent DML statements referring to that table are mapped to the materialized instance.
A materialized temporary table is automatically dropped at the end of a session.
Global Temporary Tables

Use global temporary tables to store temporary, intermediate results from multiple queries in working tables. To create a global temporary table, you must state the keywords GLOBAL TEMPORARY in the CREATE TABLE statement. The table defined by the CREATE TABLE statement is referred to as the base temporary table.

Note: After you create a global temporary table definition, use an INSERT statement to create a local instance of the global temporary table to use during the session.
Temporary Versus Permanent Tables

Temporary tables differ from permanent tables in the following ways:
• They are always empty at the start of a session.
• Their contents cannot be shared by other sessions.
• You can empty them at the end of each transaction.
• The system automatically drops them at the end of each session.
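The workflow above can be sketched as follows. The table and column names are illustrative, not from the course materials:

     -- Define the base global temporary table (definition only; no instance yet).
     CREATE GLOBAL TEMPORARY TABLE Sales_GT
        (Store_Id     INTEGER,
         Sales_Date   DATE,
         Total_Sales  DECIMAL(12,2))
     ON COMMIT PRESERVE ROWS ;

     -- The first reference in a session (e.g., an INSERT) materializes a
     -- local instance with the same definition as the base table.
     INSERT INTO Sales_GT
     SELECT Store_Id, Sales_Date, SUM(Amount)
     FROM   Daily_Sales
     GROUP BY 1, 2 ;

     -- Subsequent DML in the session is mapped to the materialized instance,
     -- which is automatically dropped at session end.
     SELECT * FROM Sales_GT ;

Without the ON COMMIT PRESERVE ROWS option, the default behavior (ON COMMIT DELETE ROWS) empties the materialized table at the end of each transaction.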
Space Allocation and Usage 4-21
AllTempTables View
Local temp tables materialized from global temp tables.
EXAMPLE: Show all temporary tables materialized by the login user in the system.

SELECT * FROM DBC.AllTempTablesX ;

DBC.AllTempTables[x] columns: HostNo, SessionNo, UserName, B_DatabaseName, B_TableName, E_TableId

HostNo  SessionNo  DatabaseName  TableName  TableID
------  ---------  ------------  ---------  ------------
    52      3,409  Test          GTemp1     00800A000000
4-22 Space Allocation and Usage
Resetting Peak Values

From time to time, the administrator needs to clear out the peak values accumulated in the DBC.DataBaseSpace table.

DBC.ClearPeakDisk

The Teradata software provides a macro to reset to zero the following columns of the DiskSpace information:
• PEAKPERM
• PEAKSPOOL
• PEAKTEMP
You are able to determine the maximum amount of permanent space, the maximum amount of spool space, and the maximum amount of temporary space used at any one time by the database for a specified AMP (or all AMPs if the SUM aggregate is specified) since the last time the ClearPeakDisk macro was run.
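As a sketch, the peak values might be reviewed (summed across all AMPs) before being cleared:

     SELECT   DatabaseName,
              SUM (PeakPerm),
              SUM (PeakSpool),
              SUM (PeakTemp)
     FROM     DBC.DiskSpace
     GROUP BY 1 ;

     EXEC DBC.ClearPeakDisk ;    -- start a new collection period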
Space Allocation and Usage 4-23
Resetting Peak Values
This macro may be used to zero peak values for the next data collection period.

To clear values:  EXEC DBC.ClearPeakDisk;
SHOW MACRO DBC.ClearPeakDisk ;
REPLACE MACRO ClearPeakDisk AS
   (UPDATE DatabaseSpace
    SET PeakPermSpace  = 0,
        PeakSpoolSpace = 0,
        PeakTempSpace  = 0
    ALL ;) ;
*** Update completed. 3911 rows changed.
*** Time was 4 seconds.
The ClearPeakDisk macro resets to zero the following columns of DiskSpace information: PeakPerm, PeakSpool, PeakTemp.
4-24 Space Allocation and Usage
Review Questions

Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Space Allocation and Usage 4-25
Review Questions
Indicate whether each statement is True (T) or False (F).

T  F   1. Space limits are enforced at the table level.

T  F   2. When you use the GIVE statement to transfer a database or user to a new owner, all space assigned to the transferred object remains the same.

T  F   3. You should reserve anywhere from 35-40% of total available space for spool.

T  F   4. The DiskSpace view reports database and table information.

T  F   5. The TableSize view shows the maximum table size.
4-26 Space Allocation and Usage
References

For more information on space allocation and usage, refer to:
• Teradata RDBMS Database Design - (B035-1094-122A).
Teradata Accounting 5 - 1
Module 5

Teradata Accounting

After completing this module, you should be able to:

• Use Teradata accounting features to determine resource usage by user or account.

• Explain how the database administrator uses system accounting to support administrative functions.

• Use system views to access system accounting information.
5- 2 Teradata Accounting
Notes:
Teradata Accounting 5 - 3
Table of Contents
CREATE USER STATEMENT ........................................... 4
SYSTEM ACCOUNTING ............................................... 6
SYSTEM ACCOUNTING VIEWS ......................................... 8
DBC.ACCOUNTINFO[X] VIEW ........................................ 10
DBC.AMPUSAGE VIEW .............................................. 12
DBC.AMPUSAGE VIEW — EXAMPLES ................................... 14
ACCOUNT STRING EXPANSION ....................................... 16
ACCOUNT STRING EXPANSION USAGE ................................. 18
USER ACCOUNTING: RESETTING THE VALUES .......................... 20
TERADATA ACCOUNTING SUMMARY .................................... 22
REVIEW QUESTIONS ............................................... 24
LAB ............................................................ 26
REFERENCES ..................................................... 28
5- 4 Teradata Accounting
CREATE USER Statement

The account string contains:

• Priority Group reference (relates to the Priority Scheduler Facility, etc.)

• Account String Expansion (relates to AMPUsage reporting)

• Project code or accounting information (free form, for internal charge-back or accounting purposes)

NOTE: Account priority information is discussed in the Priority Scheduler Facility module of the Teradata Warehouse Management course.
Teradata Accounting 5 - 5
CREATE USER Statement
CREATE USER …
ACCOUNT=(‘ACCTID’, ‘ACCTID’);
Examples:
‘$M&D&HP210’ or ‘$GRP1$&LP3452’
Account Strings contain:
• Priority Group reference
• Account String Expansion
• Project code or accounting information
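The partial syntax above might appear in full as the following sketch. The user name, space values, and password are illustrative; the account IDs follow the examples above:

     CREATE USER DBA01 FROM SysDBA AS
        PERM     = 100000000,
        SPOOL    = 500000000,
        PASSWORD = secret01,
        ACCOUNT  = ('$M&D&HP210', '$L&D&HP210') ;

The first account ID in the list is the default used when the user logs on without specifying an account explicitly.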
5- 6 Teradata Accounting
System Accounting

System accounting serves three important administrative functions:
• Charge-back Billing
• Capacity Planning
• Resource Control
Charge-back Billing

You may need to charge users for their use of Teradata Database resources. The accounting feature allows you to determine resource usage by any user or account.

The Teradata system tracks the CPU and I/O resources expended by a session and charges them to the account specified at logon time. The I/O resource tracks the number of AMP-to-DSU read and write operations generated by a given user or account. Charge-back billing permits equitable cost allocation of system resources across all users.
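A charge-back report along these lines might be sketched as follows; the dollar rate per CPU second is purely illustrative:

     SELECT   AccountName,
              SUM (CpuTime) AS TotalCPUSec,
              SUM (DiskIO)  AS TotalDiskIO,
              SUM (CpuTime) * 0.05 (FORMAT 'ZZZ,ZZ9.99') AS CPUCharge
     FROM     DBC.AMPUsage
     GROUP BY 1
     ORDER BY 2 DESC ;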
Capacity Planning

To plan for the resources needed to accommodate growth, you must know how the current workload is affecting the system. To assess the effect of the current workload, you can collect and analyze information about resource utilization.

Collecting and analyzing information about resource utilization is one component of this analysis; another component is the collection of historical usage statistics. The accounting feature provides information about the current workload to assist you in anticipating future needs.
Resource Control

As the administrator, you may need to control who gets specific resources. Some users may need a higher priority than others. You can set user priorities to maintain efficient operation of the Teradata Database while providing equitable service to multiple client systems and users. The Priority Scheduler Facility utility enables you to review the priority weight of current processes by performance group and allocation group. The Priority Scheduler Facility is covered in the Teradata Warehouse Management course.

System accounting can also assist you in identifying performance anomalies that impact resource availability.
Teradata Accounting 5 - 7
System Accounting
Charge-back billing ... for equitable cost allocation

Capacity planning ... to anticipate your needs

Resource control ... to identify performance anomalies
5- 8 Teradata Accounting
System Accounting Views

The Teradata Database provides two system-supplied views to support accounting functions. DBC.AccountInfo provides information about valid accounts, and DBC.AMPUsage provides information about the usage of each AMP vproc by user and account.

DBC.AccountInfo[x]

The DBC.AccountInfo[x] view provides information about valid accounts for specific users or profiles. The information provided is based on data from the DBC.Accounts table in the data dictionary. Each time a CREATE or MODIFY statement specifies an account ID, a row is either inserted or updated in the DBC.Accounts table.

DBC.AMPUsage

The DBC.AMPUsage view provides information about the usage of each AMP vproc for each user and account. It is based on information in the DBC.Acctg table in the data dictionary and supplies information about AMP CPU time consumed and the number of AMP-to-DSU read and write operations generated by a given user or account. It also tracks the activities of any console utilities. Each time a user logs on to the system, rows are inserted into the DBC.Acctg table to track how much AMP usage the session generates. Use this information to bill an account for system resource use.
Dictionary Tables Accessed

• DBC.Accounts
• DBC.Acctg
Teradata Accounting 5 - 9
System Accounting Views
VIEW NAME            DESCRIPTION
DBC.AccountInfo[x]   Returns each AccountName associated with a user or profile.
DBC.AMPUsage[x]      Provides information about I/O and AMP CPU usage by user and account.
5- 10 Teradata Accounting
DBC.AccountInfo[X] View

The AccountInfo view shown on the facing page provides information about each user and the valid accounts associated with each user. When the requesting user selects the [X] view, they can see only information about users that they own or can modify.

Example

The SQL statement on the facing page requests a list of all users with a valid RUSH priority account code. The result displays four different users:
• DBC
• SysAdmin
• SysDBA
• SystemFE
Teradata Accounting 5 - 11
DBC.AccountInfo[X] View

Provides information about users and the valid accounts associated with those users (for users the requestor owns or can modify).

DBC.AccountInfo[x] columns: UserName, AccountName, UserOrProfile

EXAMPLE: Identify all users with a valid RUSH priority code.

SELECT   AccountName,
         UserName
FROM     DBC.AccountInfo
WHERE    AccountName LIKE 'P%'
ORDER BY AccountName ;

AccountName  UserName
-----------  --------
P1230        DBC
P1230        SysAdmin
P2450        SysDBA
P3450        SystemFE
5- 12 Teradata Accounting
DBC.AMPUsage View

The DBC.AMPUsage view provides information about the usage of each AMP, for each user and account combination. AMPUsage monitors logical I/Os explicitly requested by the database software. It does not record the activity of parsing the user's query. A logical I/O is counted even if the requested segment is in cache and no physical I/O is performed.
The information in the view is:
• Account Name (includes the account code for the user; the account code contains the session priority, e.g., $M, $H)
• User Name
• CPU Time
• Disk I/O
• Vproc
• Vproc Type
Cumulative Data

The information is aggregated by user ID, account, and processor. Each row is continually updated as long as the user ID and account match; therefore, the data is cumulative. Updates to the table are made at the end of each AMP step on each processor affected by the step, and periodically during long steps. For a look at up-to-the-moment activity, Performance Monitor can be used.

The data is collected and continually added to what is already in the table until the counters are reset to zero. Many sites zero the information on a per-week, per-day, or per-shift basis so a determination can be made on what resources were used, by user ID and account, for the corresponding period.

Consuming AMP Resources

Because the data is kept on a per-AMP basis, if processing on your system was skewed, you can check the DBC.AMPUsage table to determine which user consumed all the resources on a particular AMP. This can assist you in isolating a user that may be causing performance problems.
Teradata Accounting 5 - 13
DBC.AMPUsage View
AMPUsage is an updateable view that uses the DBC.Acctg table to provide accounting information by username and account.

DBC.AMPUsage[x] columns: AccountName, UserName, CpuTime, DiskIO, Vproc, VprocType, Model
CpuTime: Total number of AMP CPU seconds used.
DiskIO: Total number of logical disk I/O operations.
AccountName: contains session priority information ($M, $H, etc.).
5- 14 Teradata Accounting
DBC.AMPUsage View — Examples

Example 1

The SQL statement on the facing page requests totals for CPU time and I/O for user DBA01. The totals are aggregates of all resources used across all AMP vprocs. The result returns three rows, one for each account ID.
Example 2

DBC.AMPUsage is an updateable view, and you can use it to update or remove rows in the DBC.Acctg table.
Note: Three rows are shown in the output because of hidden information that will be discussed in a later section.
Teradata Accounting 5 - 15
DBC.AMPUsage View—Examples
EXAMPLE 1: Show CPU time and I/O totals for a single user.
SELECT   UserName (FORMAT 'X(16)'),
         AccountName (FORMAT 'X(12)'),
         SUM (CpuTime),
         SUM (DiskIO)
FROM     DBC.AMPUsage
WHERE    UserName = 'DBA01'
GROUP BY 1, 2
ORDER BY 3 DESC ;

UserName   AccountName   SUM (CpuTime)   SUM (DiskIO)
--------   -----------   -------------   ------------
DBA01      $LP9210            6,336.76        505,636
DBA01      $MP9210            4,387.14        303,733
DBA01      $HP9210                1.28            166

EXAMPLE 2: Reset counters for ALL rows or selected rows.

UPDATE DBC.AMPUsage
SET    CpuTime = 0,
       DiskIO  = 0
ALL ;
5- 16 Teradata Accounting
Account String Expansion

Account String Expansion (ASE) lets the system administrator establish one or more expandable account identifiers when users are created or modified. When the user logs on, the account identifier must be supplied as part of the logon string. This can be done explicitly, or it may be supplied as a default by the system. If the ASE variables are included in the default account string, the user may be unaware that the additional information is being collected.

ASE enables the use of substitution variables in the account ID portion of the user's logon string. Actual values are inserted into the account string at Teradata SQL execution time. The expanded account string can be used to increase the granularity at which AMP usage measurements are taken. The ASE substitution variables are:
&D   Date (YYMMDD)
     Causes the 6-character date on which the Teradata SQL request was received to be inserted into the account string.

&T   Time (HHMMSS)
     Causes the 6-character time of day at which the Teradata SQL request was received to be inserted into the account string.

&H   Hour (HH)
     Causes the hour of the day at which the Teradata SQL request was received to be inserted into the account string.

&L   Logon timestamp (YYMMDDHHMMSS.hh)
     Causes the logon timestamp to be inserted into the account string. This value is inserted into AMPUsage at logon time and does not change unless the user logs off and then on again.

&I   Logical host ID (4 characters) / session number (9 characters) / request number (9 characters) (LLLLSSSSSSSSSRRRRRRRRR)
     Inserts the logon host ID, the current session number, and the request number into the account string.

&S   Session number (SSSSSSSSS)
     Inserts the current session number into the account string.
The ASE variables may be used in any combination and in any order, subject to the constraints on length and position. The maximum unexpanded and expanded account string cannot exceed 30 characters. If the user account has a priority associated with it ($L, $M, $H, $R, or a performance group), the priority must appear at the beginning of the account string.
Note: If either &H or &T is specified without &D, statistics collected on one day at the specified time are combined with statistics collected on other days at the same time. A combination of &H and &T is legal; however, the data is redundant.
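As an illustrative sketch (the user and account ID are hypothetical), an ASE account string and its expansion at execution time:

     MODIFY USER DBA01 AS ACCOUNT = ('$M&D&HP210') ;

     -- A request received at 10:15 on June 15, 2003 is logged to DBC.AMPUsage
     -- under the expanded account string '$M03061510P210', yielding one
     -- AMPUsage row per user, per AMP, per day and hour.

Note that the priority code ($M) still appears at the beginning of the string, and the expanded string stays within the 30-character limit.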
Teradata Accounting 5 - 17
Account String Expansion
ASE is a mechanism to provide more detailed utilization reports and user accounting data.
You can add the following substitution variables to a user’s account string. The system resolves the variables at logon or at SQL statement execution time.
&D Date (YYMMDD)
&T Time (HHMMSS)
&H Hour (HH)
&L Logon timestamp (YYMMDDHHMMSS.hh)
&I Logon hostidsession number, request number (LLLLSSSSSSSSSRRRRRRRRR)
&S Session number (SSSSSSSSS)
5- 18 Teradata Accounting
Account String Expansion Usage

Each time the system determines that a new account string is in effect, it collects a new set of statistics. The system stores the accumulated statistics for a user/account string pair as a row in DBC.AMPUsage. Each different user/account string pair results in a new set of statistics and an additional row. ASE uses the AMPUsage mechanism, but by adding in the substitution variables, the amount of information recorded can greatly increase.
The measurement rate may be specified by date (&D), time (&T), or a combination thereof. Information can be written to AMPUsage based on the time the user logged on (&L). It can be directed to generate a row for each user, each session, or for an aggregation of the user’s daily activities. At the finest granularity, ASE can generate a summary row for every SQL request.
The collection activity occurs on all AMPs involved in processing the user’s SQL request.
ASE has a negligible effect on PE performance; the cost incurred for analyzing the account string requires only a few microseconds. However, the AMP does have the burden of additional AMPUsage logging. Depending on the number of users and the ASE options chosen, the added burden may vary from slight to enough to degrade overall performance. For example, by specifying the &T variable, ASE will log a row to AMPUsage for every AMP for every request. This should not be a problem for long running DSS requests, but could be a performance issue if there are numerous small requests.
The information collected when using ASE can be very helpful in analysis, but you should take care not to create a bigger problem than you are trying to solve. For this reason, NCR recommends that the &T parameter not be used with OLTP, BulkLoad, or as a default in the account string.
If you use ASE, you must be sure to delete rows from DBC.AMPUsage. The number of rows collected can increase significantly and the rows remain in the table until they are explicitly deleted.
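A periodic cleanup of ASE-generated rows, once they have been summarized or archived, might be sketched as follows. The LIKE pattern assumes a '$M&D...'-style account string, and the date shown is illustrative:

     DELETE FROM DBC.AMPUsage
     WHERE  AccountName LIKE '$M0305%' ;   -- drop rows expanded for May 2003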
Teradata Accounting 5 - 19
Account String Expansion Usage
– Each different user/account string results in a new row being inserted in DBC.AMPUsage.
– You must determine the measurement rate you need and the users you wish to monitor.
– Collection activity occurs on all AMPs involved with the request.
– Performance impact of ASE can vary greatly depending upon granularity requested and the types of requests submitted.
– Be sure to clean out DBC.AMPUsage on a regular basis by deleting rows.
5- 20 Teradata Accounting
User Accounting: Resetting the Values

Clear Peak Values

The ClearPeakDisk macro resets the PeakPerm, PeakSpool, and PeakTemp columns of the DBC.DiskSpace view to 0.
You can determine the maximum amount of permanent space, spool space, and temporary space used at any one time by the database for a specified AMP (or all AMPs if the SUM aggregate is specified) since the last time the ClearPeakDisk macro was run.
There are two options to clear peak values:
1. ClearPeakDisk macro resets peak values of perm, spool, and temp to 0.
2. Teradata Manager sets peak perm to current perm and sets peak spool and temp to 0.
Reset AMPUsage

The AMPUsage view provides information about the usage of each AMP for each user and account.
AMPUsage monitors logical I/Os explicitly requested by the AMP database software or file system that is running in the context of an AMP worker task for the purpose of executing a step in the user query. I/Os done by UNIX for swapping are not included in AMPUsage, nor are the I/Os caused by parsing the user query.
There are two options to reset AMPUsage:
1. Use the AMPUsage view to reset CPU time and disk I/O to 0. This is viable if you are not using ASE (if only &H is used, it may still be viable).
2. Use the AMPUsage view to delete rows generated by ASE. If you are using ASE, you must delete the accumulated rows to remove them.
Teradata Accounting 5 - 21
User Accounting: Resetting the Values
• Options to clear peak values:
  – Use the DBC.ClearPeakDisk macro to reset peak perm, spool, and temp to 0.
  – Teradata Manager sets peak perm to current perm, and peak spool and peak temp to 0.

• Options to reset AMPUsage:
  – Use the AMPUsage view to reset CPU time and disk I/O to 0.
  – Use the AMPUsage view to delete rows generated by ASE.
5- 22 Teradata Accounting
Teradata Accounting Summary

The facing page summarizes some important concepts in this module.
Teradata Accounting 5 - 23
Teradata Accounting Summary
• To establish execution time priorities, use the account ID.
• A user’s position in the hierarchy does not affect their priority.
• You can define accounting mechanisms for:
  – Charge-back billing
  – System usage reporting
  – Capacity planning
  – Performance analysis

• To reset the data dictionary tables used to collect accounting information, you can use:
  – the DBC.AMPUsage view
  – the DBC.ClearPeakDisk macro
5- 24 Teradata Accounting
Review Questions

Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Teradata Accounting 5 - 25
Review Questions
1. What does AMPUsage monitor?
2. If you use Account String Expansion, from which table must you be sure to delete rows?
3. If you wanted to know the AMP CPU time and logical disk I/O for a particular user, how would you find out?
5- 26 Teradata Accounting
Lab 2

The lab for this module is in Appendix B. Please follow your instructor's directions for completing lab assignments.
Teradata Accounting 5 - 27
Lab 2
Please do Lab 2 in Appendix B
5- 28 Teradata Accounting
References

For more information on Teradata accounting, refer to:

• Teradata RDBMS Database Design (B035-1094-122A)

• Teradata RDBMS Security Administration Guide (B035-1100-122A)

• Teradata RDBMS SQL Reference (B035-1101-122A)
Access Rights 6- 1
Module 6
Access Rights

After completing this module, you should be able to:

• Describe the three types of access rights: Automatic, Explicit, and Owner/Implicit.

• Use GRANT and REVOKE statements to assign and remove access rights.

• Use the DBC.AllRights, DBC.UserRights, and DBC.UserGrantedRights views to obtain information about current user privileges.

• Identify the access rights needed to create roles.

• Number the steps of access rights validation.

• Use roles when creating new users.

• Use system views to display role information.
6- 2 Access Rights
Notes:
Access Rights 6- 3
Table of Contents
PRIVILEGES/ACCESS RIGHTS ........................................ 4
ACCESS RIGHTS MECHANISMS ........................................ 6
WHAT ARE ROLES? ................................................. 8
AUTOMATIC RIGHTS GENERATED BY CREATE TABLE ..................... 10
IMPLICIT, AUTOMATIC, AND EXPLICIT RIGHTS ....................... 12
RIGHTS GENERATED AUTOMATICALLY ................................. 14
THE GRANT STATEMENT ............................................ 16
GRANTING RIGHTS ................................................ 18
GRANT PUBLIC ................................................... 20
THE REVOKE STATEMENT ........................................... 22
TERADATA ADMINISTRATOR TOOLS - GRANT/REVOKE OPTION ............. 24
REVOKING NON-EXISTENT RIGHTS ................................... 26
INHERITING ACCESS RIGHTS ....................................... 28
THE GIVE STATEMENT AND ACCESS RIGHTS ........................... 30
REMOVING A LEVEL IN THE HIERARCHY .............................. 32
A SUGGESTED ACCESS RIGHTS STRUCTURE ............................ 34
ACCESS RIGHTS ISSUES (PRIOR TO ROLES) .......................... 36
ADVANTAGES OF ROLES ............................................ 38
ACCESS RIGHTS WITHOUT ROLES .................................... 40
ACCESS RIGHTS USING A ROLE ..................................... 42
GRANT AND REVOKE COMMANDS (ROLE FORM) .......................... 44
IMPLEMENTING ROLES ............................................. 46
ACCESS RIGHTS VALIDATION AND ROLES ............................. 48
SQL STATEMENTS TO SUPPORT ROLES ................................ 50
GRANT COMMAND (SQL FORM) ....................................... 52
REVOKE COMMAND (SQL FORM) ...................................... 54
SYSTEM HIERARCHY (USED IN FOLLOWING EXAMPLES) .................. 56
EXAMPLE - USING ROLES .......................................... 58
EXAMPLE - USING ROLES (CONT.) .................................. 60
EXAMPLE - USING ROLES (CONT.) .................................. 62
SET ROLE ALL ................................................... 64
STEPS TO IMPLEMENTING ROLES .................................... 66
ACCESS CONTROL MECHANISMS ...................................... 68
USING VIEWS TO LIMIT ACCESS .................................... 70
USING MACROS AND STORED PROCEDURES TO CONTROL ACCESS ........... 72
ACCESS RIGHTS AND NESTED VIEWS ................................. 74
SYSTEM VIEWS FOR ACCESS RIGHTS ................................. 76
ALLRIGHTS AND USERRIGHTS VIEWS ................................. 78
DBC.USERGRANTEDRIGHTS VIEW ..................................... 80
ROLEINFO[X] VIEW ............................................... 82
ROLEMEMBERS[X] VIEW ............................................ 84
ALLROLERIGHTS AND USERROLERIGHTS VIEWS ......................... 86
ACCESS RIGHTS SUMMARY .......................................... 88
REVIEW QUESTIONS ............................................... 90
LAB 3 .......................................................... 92
REFERENCES ..................................................... 94
6- 4 Access Rights
Privileges/Access Rights

Your privileges or access rights define the types of activities you can perform during a session.
The following operations require that you have specific privileges:
Privilege          Operation Type
CREATE             DDL
DROP               DDL
REFERENCES         DDL
INDEX              DDL
CREATE TRIGGER     DDL
DROP TRIGGER       DDL
SELECT             DML
UPDATE             DML
INSERT             DML
DELETE             DML
EXECUTE            DML
CHECKPOINT         DML, Archive/Recovery
DUMP               Archive/Recovery
RESTORE            Archive/Recovery
Access rights may be granted on:
• Users
• Databases
• Tables
• Views
• Macros
• Stored Procedures
• Columns of tables
• Columns of views
Notes:
• To use UPDATE or DELETE commands, you must have the SELECT right on the object if values in existing rows are referenced.
• Additional rights you need to control access to performance monitoring functions are discussed in another module.
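The note about UPDATE and DELETE can be illustrated with a short sketch. The object and user names (Pay_DB.TableA, Payroll_User) are hypothetical, borrowed from the hierarchy used later in this module:

```sql
-- Payroll_User needs both rights for the UPDATE below,
-- because the WHERE clause references values in existing rows.
GRANT SELECT, UPDATE ON Pay_DB.TableA TO Payroll_User;

-- Allowed only with SELECT as well as UPDATE on Pay_DB.TableA:
UPDATE Pay_DB.TableA
SET    budget_amount = budget_amount * 1.05
WHERE  department_number = 401;
```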
Privileges/Access Rights

A privilege (or access right) allows the user to perform a specified operation . . . on a specified Object.

Operations:
CREATE, DROP, INDEX, REFERENCES, SELECT, UPDATE, INSERT, DELETE, EXECUTE, DUMP, RESTORE, CHECKPOINT, CREATE TRIGGER, DROP TRIGGER

Objects:
TABLE, VIEW, MACRO, DATABASE, USER, STORED PROCEDURE, TRIGGER, COLUMNS OF TABLES, COLUMNS OF VIEWS
Access Rights Mechanisms

The data dictionary includes a system table called DBC.AccessRights that contains information about the access rights assigned to existing users.
There are three types of access rights or privileges:
• Automatic
• Explicit
• Ownership/Implicit
Automatic Rights
Automatic rights are privileges given to the creator of an object and, when the newly created object is a user or database, to the created object itself. When a user submits a CREATE statement, new rows are inserted in the DBC.AccessRights table. All rights are automatically removed for an object when it is dropped. Automatic rights can be removed using the REVOKE command.
Explicit Rights
Explicit rights are privileges conferred by using a GRANT statement. This statement inserts new rows into the DBC.AccessRights table. Explicit rights can be removed using the REVOKE statement. All rights are automatically removed for an object when it is dropped.
Ownership Rights
Owners (Parents) have the implicit right to grant rights on any or all of their owned objects (Children), either to themselves or to any other user or database. If an owner grants him or herself rights over any owned object, the parser will validate that GRANT statement even though the owner holds no other privileges.
Ownership rights cannot be taken away unless ownership is transferred or the object is dropped.
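The ownership behavior described above can be sketched as follows; this example reuses the Human_Resources/Personnel names from later in this module, and is only an illustration:

```sql
-- Human_Resources owns Personnel but holds no rows in DBC.AccessRights
-- for it. As owner, it may still grant itself rights on the owned object:
GRANT SELECT ON Personnel TO Human_Resources;

-- The implicit right to issue such GRANTs cannot be revoked; it only
-- disappears if ownership is transferred (GIVE) or the object is dropped.
```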
Access Rights Views
The data dictionary contains three system views that return information about access rights:
• DBC.AllRights
• DBC.UserRights
• DBC.UserGrantedRights
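A user can inspect rights through these views. A hedged sketch (column names follow the standard dictionary views; verify the exact columns on your release):

```sql
-- Rights held by the current user (one row per right):
SELECT DatabaseName, TableName, AccessRight
FROM   DBC.UserRights;

-- Rights the current user has granted to others:
SELECT Grantee, DatabaseName, TableName, AccessRight
FROM   DBC.UserGrantedRights;
```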
Access Rights Mechanisms

Implicit Right – DBC.Owners
Automatic – DBC.AccessRights (rows added by CREATE, removed by DROP)
Explicit – DBC.AccessRights (rows added by GRANT, removed by REVOKE)
What are Roles?

Teradata V2R5.0 introduced the database administration and security concept of roles.
A role can be viewed as a pseudo-user with privileges on a number of database objects. Any user granted a role could then take on the identity of the pseudo-user and access all of the objects it has rights to.
A database administrator can create different roles for different job functions and responsibilities, grant specific privileges on database objects to these roles, and then grant these roles to users.
What are Roles?
A new administration/security feature:

• Roles simplify the management of users and access rights.

What is a “role”?

• A role is simply a collection of access rights.
  – Rights are first granted to a role, and the right to use the role is then granted to users.
• A DBA can create different roles for different job functions and responsibilities.
• Roles can help reduce the number of rows in the DBC.AccessRights table.
Automatic Rights Generated by CREATE TABLE

To view the rights generated by a CREATE TABLE statement, the SQL request can be preceded by the modifier EXPLAIN. As a result, the parser prints out the AMP steps (in simple English) that the CREATE statement generates.
The facing page shows how the system adds access rights to the AccessRights table.
The following access rights are inserted; each with the grant authority:
• SELECT (R)
• INSERT (I)
• UPDATE (U)
• DELETE (D)
• DROP TABLE (DT)
• INDEX (IX)
• REFERENCES (RF)
• CREATE TRIGGER (CG)
• DROP TRIGGER (DG)
• DUMP (DP)
• RESTORE (RS)
Automatic Rights Generated by CREATE TABLE
CREATE SET TABLE tjhc20.department
    (department_number SMALLINT,
     department_name CHAR(30),
     budget_amount DECIMAL(10,2),
     manager_employee_number INTEGER)
UNIQUE PRIMARY INDEX ( department_number );

Explanation
1) First, we lock tjhc20.department for exclusive use.
2) Next, we lock a distinct DBC."pseudo table" for read on a RowHash for deadlock prevention, we lock a distinct DBC."pseudo table" for write on a RowHash for deadlock prevention, we lock a distinct DBC."pseudo table" for write on a RowHash for deadlock prevention, and we lock a distinct DBC."pseudo table" for write on a RowHash for deadlock prevention.
3) We lock DBC.AccessRights for write on a RowHash, we lock DBC.TVFields for write on a RowHash, we lock DBC.TVM for write on a RowHash, we lock DBC.DBase for read on a RowHash, and we lock DBC.Indexes for write on a RowHash.
4) We execute the following steps in parallel.
   1) We do a single-AMP ABORT test from DBC.DBase by way of the unique primary index.
   2) We do a single-AMP ABORT test from DBC.TVM by way of the unique primary index with no residual conditions.
   3) We do an INSERT into DBC.TVFields (no lock required).
   4) We do an INSERT into DBC.TVFields (no lock required).
   5) We do an INSERT into DBC.TVFields (no lock required).
   6) We do an INSERT into DBC.TVFields (no lock required).
   7) We do an INSERT into DBC.Indexes (no lock required).
   8) We do an INSERT into DBC.TVM (no lock required).
   9) We INSERT default rights* to DBC.AccessRights for tjhc20.department.
5) We create the table header.
6) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.

No rows are returned to the user as the result of statement 1.
* Note: The default rights are R, I, U, D, DT, IX, RF, CG, DG, DP, RS.
Implicit, Automatic, and Explicit Rights

Implicit rights belong to the owners of objects. Owners do not require rows in the AccessRights table to grant privileges on owned objects. Ownership rights cannot be “revoked.” An owner has the implicit right to GRANT privileges over any owned object.
When you submit a CREATE statement, the system automatically adds new rows to the AccessRights table. You can remove the rows for automatic rights with the REVOKE or DROP statements.
GRANT and REVOKE statements control explicit rights. The GRANT statement adds new rows to the AccessRights table. The REVOKE and DROP statements remove them.
Example
In the example, Security_Admin is the creator. The system automatically inserts rows in DBC.AccessRights for Security_Admin access rights over Personnel. The rights include CREATE/DROP DATABASE/USER. These rights can be revoked.
Personnel is the created object. Rows for the rights CREATE/DROP DATABASE/USER are not included.
Human_Resources is the immediate owner of Personnel. The system does not insert any rows in DBC.AccessRights, but Human_Resources has the owner's implicit right to grant itself rights over Personnel. You cannot revoke the right to GRANT (or re-GRANT) rights over owned objects.
Note: Security_Admin was explicitly GRANTED rights on Human_Resources.
Implicit, Automatic, and Explicit Rights

Hierarchy: SysDBA owns Human_Resources and Security_Admin; Human_Resources owns Personnel.

GRANT USER ON Human_Resources TO Security_Admin ;

CREATE USER Personnel FROM Human_Resources
AS PASSWORD = yyyyyy, PERM = 1000000 ;

Security_Admin is the CREATOR of Personnel; SysDBA and Human_Resources are its OWNERS.
Rights Generated Automatically

When you create a new user or database, the system automatically generates access rights for the created object and the creator of the object. The system inserts this rights information into the AccessRights table when you submit a CREATE request. You can remove these rights from the AccessRights table with the REVOKE statement.

Example

In the example on the facing page, user Security_Admin logs on to the system and creates a new user called Personnel. Both Security_Admin and Personnel have privileges over Personnel.
In addition, user Security_Admin has the rights over Personnel as its creator.
Rights Generated Automatically
Security_Admin
Personnel
Security_Admin is given these additional rights over Personnel :
CREATE Database CREATE User
DROP Database DROP User
Note: EXECUTE procedure is not granted automatically.
By issuing a CREATE USER statement, the CREATOR causes Automatic rights to be generated both for and on the Created Object:
But not CREATE or EXECUTE PROCEDURE
CREATE Table DROP Table
CREATE View DROP View
CREATE Macro DROP Macro
SELECT INSERT
UPDATE DELETE
EXECUTE CHECKPOINT
DUMP RESTORE
CREATE TRIGGER DROP TRIGGER
DROP PROCEDURE
CREATE PROCEDURE
EXECUTE PROCEDURE
The GRANT Statement

You can use the GRANT statement to give users or groups of users explicit privileges on a database, user, table, view, stored procedure, or macro.
The recipient of an explicitly granted privilege may be:
username The specific user(s) or database(s) named. Up to 25 can be specified in one GRANT statement.
PUBLIC Every user in the DBC system (same as ALL DBC).
ALL username The named user and ALL descendants.
Note: UPDATE and REFERENCE privileges have both table- and column-level options.
For detailed information on the GRANT statement, refer to the Teradata RDBMS SQL Reference manual.
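The three recipient forms can be sketched as follows. The object and user names are taken from this module's examples and are illustrative only; see the SQL Reference for the full syntax:

```sql
-- To a specific named user:
GRANT SELECT ON Pay_DB TO Payroll_Admin;

-- To a user and ALL of its current and future descendants:
GRANT SELECT ON Pay_DB TO ALL Payroll_Users;

-- To every user in the system:
GRANT SELECT ON Pay_DB TO PUBLIC;

-- With Grant Authority, so the recipient may re-grant the right:
GRANT SELECT ON Pay_DB TO Payroll_Admin WITH GRANT OPTION;
```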
The GRANT Statement

• To GRANT a privilege on an object, the grantor must be one of the following:
  – User DBC
  – Someone DBC has granted privileges to
  – An owner of the object
  – Someone with WITH GRANT OPTION for the privilege to be granted, plus all of the privileges that are to be conferred, on the object.
• The WITH GRANT OPTION confers on the recipient “Grant Authority.” That recipient may then grant the access right to other users or databases.
Granting Rights

The diagram on the facing page illustrates privileges granted at the database level. User Payroll_Admin logs on to the system and grants the SELECT privilege to user Payroll_Users and ALL of its descendants on the database Pay_DB.
Granting Rights
Hierarchy: Payroll_Admin and Payroll_Users (with User1, User2, User3); database PAY_DB contains TableA, TableB, and TableC.

.LOGON T/Payroll_Admin, ZZZZZZZZ
GRANT SELECT ON Pay_DB TO ALL Payroll_Users;
GRANT ACCEPTED.
GRANT PUBLIC

The PUBLIC option of the GRANT command allows privileges to be granted to all existing and future users.
With V2R5, the PUBLIC implementation (also works with the ALL DBC syntax) was changed from one dictionary row per PUBLIC right per user to one row per right. That is, a single row per access right is placed in the DBC.AccessRights table when the PUBLIC option is used.
The facing page shows the use of PUBLIC. The following example shows that ALL DBC effectively works the same as PUBLIC.
SELECT COUNT(*) FROM DBC.AllRights;
Result: Count(*)
4447
GRANT SELECT ON HR_VM.View6 TO ALL DBC;
SELECT COUNT(*) FROM DBC.AllRights;
Result: Count(*)
4448 (only one access right is added)
GRANT PUBLIC
The PUBLIC option of the GRANT command allows privileges to be granted to all existing and future users.
With V2R5, the PUBLIC implementation is changed – a single row per PUBLIC access right is placed in the DBC.AccessRights table.
Example:

SELECT COUNT(*) FROM DBC.AllRights;
Result: Count(*)
        4446

GRANT SELECT ON HR_VM.View5 TO PUBLIC;

SELECT COUNT(*) FROM DBC.AllRights;
Result: Count(*)
        4447   (only one access right is added)

GRANT SELECT ON HR_VM.View6 TO ALL Employees;

SELECT COUNT(*) FROM DBC.AllRights;
Result: Count(*)
        4453   (possibility of many rights added)
The REVOKE Statement

REVOKE is passive in that it:
• Does not add rows to DBC.AccessRights.
• Removes rows from the DBC.AccessRights table only if the privileges specified exist.
• Does not cascade through the hierarchy unless you specify the “ALL username” option.
• Is not automatically issued for privileges granted by a grantor dropped from the system.
The REVOKE statement removes rights inserted in the AccessRights table by a CREATE statement. It can also remove explicit rights inserted in the AccessRights table by the GRANT statement.
REVOKE Recipients
The REVOKE statement can remove privileges from one of the following:
username A specific named user(s)
PUBLIC Every user in the DBC system
ALL username The named user and ALL descendants
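The REVOKE recipient forms mirror the GRANT forms. A sketch, reusing names from this module's examples; the note on single-row PUBLIC rights reflects the V2R5 behavior described earlier:

```sql
-- From a specific named user:
REVOKE SELECT ON Pay_DB FROM Payroll_Admin;

-- From a user and ALL of its descendants (cascades through the hierarchy):
REVOKE SELECT ON Pay_DB FROM ALL Payroll_Users;

-- From every user (removes the single PUBLIC row for the right in V2R5):
REVOKE SELECT ON Pay_DB FROM PUBLIC;
```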
The REVOKE Statement

• To revoke a privilege, you must:
  – Have the right to grant the privilege.
  – Either own the database object, or someone must first grant the right to you using WITH GRANT OPTION.
Teradata Administrator Tools - Grant/Revoke Option

Selecting Grant / Revoke on the Tools pulldown menu displays the Grant / Revoke dialog that you use to grant database, table, view, or macro level privileges. Enter appropriate information and press Grant or Revoke.
Database Name Specify the database or user on which you want to grant or revoke privileges. Teradata Administrator displays all of the objects of the type that is specified by the currently selected Type Option button in the Objects list box. Teradata Administrator activates all of the check boxes for the privileges associated with that type of object.
Object Type Specify the type of object on which you want to grant privileges. When you select one of the types, Teradata Administrator activates the check boxes for the privileges that are available for that type of object.
Object Displays all of the database object names of the type specified by the currently selected Object Type button for the user or database selected in the Database field.
To/From User Displays all of the users and databases of the data source you are working with. Select the name of the user or database to which you want to grant privileges. Press the Control key to select multiple users.
ALL Adds the ALL specification to user or database you selected in the To User list box. If you selected multiple users, then the ALL specification is added only to the first user in the list.
Public Extends the general access to all users and databases on the currently selected RDBMS.
Normal Specify the general access privileges you want to grant or revoke: execute, select, insert, etc.
Create Specify the create privileges you want to grant or revoke: table, view, macro, etc.
Drop Specify the drop privileges you want to grant or revoke: table, view, macro, etc.
All Grants or revokes all of the privileges whose check boxes are active in the Normal, Create, Drop, or Privileges areas of the dialog.
All But Grants or revokes all of the privileges whose check boxes are active except those that have been selected.
Grant Grants, to the selected “To User,” the ability to Grant the selected privileges to others, in addition to granting the privileges themselves.
Teradata Administrator Tools - Grant/Revoke Option
• Select Database name from list.
• Select Object Type and Object.
• Select a user from the list and the privileges you wish to grant or revoke.
• Select Grant or Revoke.
Revoking Non-Existent Rights

A REVOKE statement at the object level cannot remove privileges from that object that were granted at the database or user level because there is no correlating row in the AccessRights table for the individual object.
Example

The diagram on the facing page illustrates privileges granted at the database level. User Payroll_Admin logs on to the system and grants the SELECT privilege to user Payroll_Users and ALL of its descendants on the database Pay_DB.

Later, Payroll_Admin REVOKES the SELECT privilege from ALL Payroll_Users only on TableC, which resides in Pay_DB. Although the system returns the message "Revoke Accepted," nothing actually happened. User Payroll_Users and its descendants still have the SELECT privilege on all tables residing in database Pay_DB because the AccessRights table does not have a row correlating to TableC. Since the row granting SELECT at the database level is still intact, all access rights remain in effect.
Revoking Non-Existent Rights
REVOKE SELECT ON TableC FROM ALL Payroll_Users;
REVOKE ACCEPTED.
REVOKE is passive. It doesn’t add rows to DBC.AccessRights, but removes existing rows.
No effect! You cannot remove rights from an object (Table C) if they were granted at the database level (PAY_DB).
Hierarchy: Payroll_Admin and Payroll_Users (with User1, User2, User3); database PAY_DB contains TableA, TableB, and TableC.

.LOGON T/Payroll_Admin, ZZZZZZZZ
GRANT SELECT ON Pay_DB TO ALL Payroll_Users;
GRANT ACCEPTED.
Inheriting Access Rights

You may inherit access rights by the placement of your user in the hierarchy. As an administrator, you can set up an object hierarchy so that any new object added to an existing user or database inherits specific access rights. Doing so saves time since you do not need to submit a GRANT statement each time you add a new user.
The immediate owner (user or database) of a view or table that is referenced by another must have the right on the referenced object that is specified (SELECT, EXECUTE, etc.) and must have that right with the GRANT option.
Example

The diagram on the facing page illustrates a user inheriting access rights. User Payroll logs on to the system. It grants the SELECT and EXECUTE privileges to user Pay_Prof and all of its current and future descendants on the database Pay_VMDB.
Later, Payroll creates a new user called Ann from the space owned by user Pay_Prof. Ann inherits the SELECT and EXECUTE privileges on database Pay_VMDB.
In the example, appropriate privileges are granted to ALL Payroll Users. Ann joins the department and INHERITS these privileges.
When rights are inherited, an actual row representing each inherited right is placed in the AccessRights table.
Inheriting Access Rights

Hierarchy: Human_Resources owns Payroll and Personnel. Payroll owns JAN, PAY_PROF (with BOB, TED, and the new user ANN), PAY_VMDB, and PAY_DB (TableA, TableB, TableC). Personnel owns PER_PROF (with JOE and KAY), PER_VMDB, and PER_DB.

LOGON Payroll, zzzzzzzz;

GRANT SELECT, EXECUTE ON Pay_VMDB TO ALL Pay_Prof;

CREATE USER Ann FROM Pay_Prof AS PERM = 0, PASSWORD = temp;
The GIVE Statement and Access Rights

GIVE transfers ownership of a database or user space to another user. The GIVE statement does not alter DBC.AccessRights. The database or user that you GIVE does not receive any access rights from its new owner. The new owner does gain implicit access rights over the transferred object, and the old owner loses them.
Example

In the diagram on the facing page, Human_Resources logs on to the system and gives user Ann to Per_Prof. Ann retains the privileges that she inherited from Pay_Prof when she was created. Ann does not inherit any access privileges from the new owner, Per_Prof, or from Personnel.
Per_Prof is Ann's new owner. It has ownership rights over Ann. Pay_Prof loses ownership rights over Ann when she is transferred.
The syntax of the GIVE statement is as follows:

GIVE database_or_user_name TO recipient_name ;
The GIVE Statement and Access Rights

Hierarchy after the GIVE: Human_Resources owns Payroll and Personnel. Payroll owns JAN, PAY_PROF (BOB, TED), PAY_VMDB, and PAY_DB (TableA, TableB, TableC). Personnel owns PER_PROF (JOE, KAY, and now ANN), PER_VMDB, and PER_DB.

LOGON Human_Resources, ssssssss;

GIVE Ann TO Per_Prof ;
Removing a Level in the Hierarchy

The example on the facing page demonstrates how to remove a level from an existing hierarchy. In the first diagram, user A is the owner of users B, C, and D. User A no longer needs user B. He wants to keep users C and D.

Transfer Ownership

The first thing user A needs to do is transfer ownership of user C to A. When user A submits the GIVE statement, both user C and user D will be transferred. That is because the GIVE statement transfers the named object and all of its children. Since user D is a child of user C, both objects are transferred under user A.

Delete User

In order to DROP user B, user A must first delete all objects from user B.

Drop User

After user A removes all objects from user B, user A can submit the DROP statement.

Access Rights

The privileges for user C and user D remain intact. Although user B, their original creator, no longer exists, the privileges granted or caused to be granted are not automatically revoked. Note that user A has recovered the perm space held by user B.
Removing a Level in the Hierarchy

Before: A owns B; B owns C; C owns D.

LOGON with the required privileges, and
1) GIVE C TO A ;
2) DELETE USER B ;
3) DROP USER B ;

After step 1: A owns B and C; C still owns D. After step 3: A owns C; C owns D.

Although B no longer exists as a user, the privileges granted or caused to be granted by B are not automatically revoked.
A Suggested Access Rights Structure

An access rights structure recommended for the Teradata database has the following characteristics:
• All users belong to a PROFILE and inherit their access rights.
• Users do not have direct access to data tables, unless they are performing batch operations.
• Users access databases that contain only views and macros.
• UPD_DB databases contain only views, macros, and stored procedures.
• TABLE databases contain only tables.
• Access rights are only extended at the database or user level, not at the individual table level.
Example

The diagram on the facing page illustrates an example of the suggested Teradata access rights scheme. This scheme has three user profiles:
INQ_PROFILE Users that belong to the Inquiry Profile inherit SELECT and EXECUTE privileges when you create them.
UPD_PROFILE Users that belong to the Update Profile inherit SELECT, EXECUTE, INSERT, DELETE and UPDATE privileges when you create them.
MAINT_PROF Users that belong to the Maintenance Profile inherit DROP and CREATE TABLE, CHECKPOINT, DUMP, and RESTORE privileges when you create them. These users also run the Load Utilities.
In addition to the access rights stored in each user profile, the Inquiry and Update databases also contain a set of access rights. Both are discussed below:
INQ_DB The Inquiry Database contains views and macros that give Inquiry Profile users access to information. The database has the SELECT privilege with GRANT OPTION.
UPD_DB The Update Database contains views and macros that enable Update Profile users to modify information. This database has the SELECT, INSERT, DELETE and UPDATE privileges with GRANT OPTION.
The GRANT option enables the Update Database to give the necessary privileges to the update profile.
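The grants behind this structure can be sketched roughly as follows. The database names (Table_DB for the TABLE database) are hypothetical stand-ins for the names in the diagram, and the statements are illustrative rather than a complete setup script:

```sql
-- The TABLE database grants its view databases rights WITH GRANT OPTION,
-- so those databases can pass the needed rights on to the profile users:
GRANT SELECT ON Table_DB TO INQ_DB WITH GRANT OPTION;
GRANT SELECT, INSERT, DELETE, UPDATE ON Table_DB TO UPD_DB WITH GRANT OPTION;

-- Profile users then work only through the view/macro databases:
GRANT SELECT, EXECUTE ON INQ_DB TO ALL INQ_PROFILE;
GRANT SELECT, EXECUTE, INSERT, DELETE, UPDATE ON UPD_DB TO ALL UPD_PROFILE;
```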
A Suggested Access Rights Structure (Before Roles)

Profiles and their inherited rights:
• INQ_PROFILE (INQ_USER_1, INQ_USER_2) – SELECT, EXECUTE on the INQ database of views and macros.
• UPD_PROFILE (UPD_USER_1, UPD_USER_2) – SELECT, EXECUTE, INSERT, DELETE, UPDATE on the UPD database of views, macros, and stored procedures.
• MAINT_PROF (MAINT_1, MAINT_2) – DROP and CREATE TABLE, CHECKPOINT, DUMP, RESTORE, SELECT, EXECUTE, INSERT, DELETE, UPDATE on the TABLE database.

The INQ database holds SELECT WITH GRANT OPTION on the TABLE database; the UPD database holds SELECT, INSERT, DELETE, UPDATE WITH GRANT OPTION.

• All users belong to a profile & inherit their access rights.
• You extend access rights at the database or user level.
• Table databases contain only tables.
• Users access databases that contain only views and macros.
• Users don’t have direct access to data tables, unless performing a batch operation.
Access Rights Issues (prior to Roles)

The role concept provides a solution to the following problem.

Prior to Teradata V2R5 and the concept of roles, there were typically two ways of granting rights to a large user base:
1. Use the ALL option of the GRANT statement to grant rights on the shared object(s) to a parent database. Sometimes this is referred to as a “profile” database or a “group” database in V2R4.1. Do not confuse the logical profile database with the Profile capability in V2R5.
GRANT SELECT ON database_object TO ALL profile_database;
Then, create users under the profile database. The system will automatically grant all rights held by the profile database to each user created under the profile database. This is frequently referred to as “inherited rights”.
2. Grant the rights to users individually – an administrative nightmare.
Access Rights Issues (prior to Roles)

The problems:

• Assume a customer has a large user base.
  Assume that different users require different access rights on different objects – probably located in different databases.
  – Example: 300 different access rights for 10,000 users; this results in over 3 million access rights in the AccessRights table.
• If users are not granted privileges to all of the objects within a database, then access rights have to be maintained for each object in the database.
• If a user changes job functions, changing access rights can become tedious.

Prior to Teradata V2R5, possible solutions were ...

1. Place users into different parent databases based on their access right requirements.
   – Use the ALL option of the GRANT statement to grant rights on the shared object(s) to a parent database.
2. Grant the rights to users individually – an administrative nightmare.
Advantages of Roles

Advantages of roles include:
• Simplify access rights administration
A database administrator can grant rights on database objects to a role and have these rights automatically applied to all users assigned to that role. When a user's function within his organization changes, changing his role is far easier than deleting old rights and granting new rights that go along with the new function.
• Reduce disk space usage
Maintaining rights on a role level rather than on an individual level makes the size of the DBC.AccessRights table much smaller. Instead of inserting one row per user per right on a database object, one row per role per right is placed in the DBC.AccessRights table.
• Better performance
Roles can improve performance and reduce dictionary contention for DDL.

If roles are fully utilized on a system, they will reduce the size of the AccessRights table and improve the performance of DDL commands that do full-file scans of this table.

- Faster DROP/DELETE USER/DATABASE and DROP TABLE/VIEW/MACRO due to shorter scans of the AccessRights table.
- Faster CREATE USER/DATABASE – no copy of hierarchical inherited rights is made.
- Less dictionary contention during DDL operations because the commands use less time.
Advantages of Roles
What are the advantages of “roles”?
• Simplify access rights management by allowing grants and revokes of multiple rights with one request.
  – useful when an employee changes job function (role) within the company.
  – if a job function needs a new access right, grant it to the role and it is effective immediately.
• The number of access rights in the DBC.AccessRights table is reduced.
  – Disk space usage is reduced when rights are managed on role level rather than individual level.
• Improves performance and reduces dictionary contention for DDL, especially CREATE USER.
  – Removal of hierarchical inherited rights improves DDL performance and reduces dictionary contention.
Access Rights without Roles

The facing page illustrates the following:

• If 10 users have the SELECT access right on each of 10 views, there would be 100 rows in the DBC.AccessRights table for these 10 users.
• What if there were 50,000 users in the system and there were 500 views needed by each user? The DBC.AccessRights table would have 25 million rows.
When a new user is added in this simple example, 10 rows have to be added to the DBC.AccessRights table.
Access Rights Without Roles

GRANT SELECT ON View1, View2, ... TO New_User;

When a new user is given the SELECT access right to these 10 views, 10 new access right rows are added to the DBC.AccessRights table.

In this simple example, these 10 views and 11 users would place 110 access right rows in the DBC.AccessRights table.

Diagram: New_User joins 10 existing users, each holding rights on 10 views (possibly in different databases).
Access Rights Using a Role

When creating a new user, only one right to use a role needs to be granted, as opposed to a right for every table/view/macro/stored procedure that the user needs to access.
As mentioned earlier, a role can be viewed as a pseudo-user with privileges on a number of database objects. Any user granted a role could then take on the identity of the pseudo-user and access all of the objects it has rights to.
A database administrator can create different roles for different job functions and responsibilities, grant specific privileges on database objects to these roles, and then grant these roles to users.
In the example on the facing page, the GRANT Role_X TO New_User statement places a row in the DBC.RoleGrants table, not the DBC.AccessRights table.

Note: When an access right is granted to a role, a row is placed in the DBC.AccessRights table. The DBC.AllRights system view only shows access rights associated with users, not roles. The DBC.UserRoleRights system view shows access right rows associated with roles.
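The role-related dictionary views can be queried much like the rights views. A hedged sketch (column names follow the standard dictionary views; verify the exact columns on your release):

```sql
-- Role memberships granted to the current user
-- (membership rows, not access rights):
SELECT RoleName, Grantor, WhenGranted
FROM   DBC.RoleMembersX;

-- Access rights held by the current user's roles:
SELECT RoleName, DatabaseName, TableName, AccessRight
FROM   DBC.UserRoleRights;
```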
Access Rights Using a Role

First, create a role and grant privileges to the role.

CREATE ROLE Role_X;
GRANT SELECT ON View1, View2, ... TO Role_X;

When creating a new user, only one right to use a role needs to be granted.

GRANT Role_X TO New_User;

This command places a row in the DBC.RoleGrants table, not the DBC.AccessRights table.

Diagram: Role_X sits between the 10 users (plus New_User) and the 10 views (possibly in different databases).
GRANT and REVOKE Commands (Role Form)

GRANT ROLE is used to grant role membership to users or other roles.

role_name
  One or more comma-separated names of the roles whose membership or administrative ability is being granted. The system ignores duplicate role names.

TO user_name or role_name
  The names of the role grantees. Grantees can be users or roles; however, a role cannot be granted membership to itself.

WITH ADMIN OPTION
  The role grantees have the right to use DROP ROLE, GRANT, and REVOKE statements to administer the roles to which they are becoming members. A GRANT statement that does not include WITH ADMIN OPTION does not revoke a previously granted WITH ADMIN OPTION privilege from the grantee.

REVOKE ROLE is used to revoke role membership from users or other roles.

ADMIN OPTION FOR
  The role members maintain membership status, but lose the right to use GRANT, REVOKE, and DROP ROLE statements to administer the roles to which they are members. If ADMIN OPTION FOR does not appear in the REVOKE statement, the system removes the specified roles or users as role members.

role_name
  One or more comma-separated names of roles from which membership or administrative ability is being revoked. The system ignores duplicate role names.

TO/FROM user_name or role_name
  The names of role members that are losing membership or administrative ability. Members can be users or roles.
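As a sketch of how these options combine (Role_X is from the facing example; the user name Sup01 is illustrative only):

```sql
-- Grant the role together with the ability to administer it.
GRANT Role_X TO Sup01 WITH ADMIN OPTION;

-- Later, remove only the administrative ability;
-- Sup01 remains a member of Role_X.
REVOKE ADMIN OPTION FOR Role_X FROM Sup01;

-- Finally, remove the membership itself.
REVOKE Role_X FROM Sup01;
```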
GRANT and REVOKE Commands (Role Form)

The syntax to grant a role to a user (or role) is:

  GRANT role_name [, role_name ...]
  TO user_name/role_name [, user_name/role_name ...]
  [WITH ADMIN OPTION] ;

The syntax to revoke a role from a user (or role) is:

  REVOKE [ADMIN OPTION FOR] role_name [, role_name ...]
  TO/FROM user_name/role_name [, user_name/role_name ...] ;

WITH ADMIN OPTION
  Gives the role grantee(s) the right to use DROP ROLE, GRANT, and REVOKE statements to administer the roles to which they are becoming members.

ADMIN OPTION FOR
  The role members maintain membership status, but lose the ability to administer the roles to which they are members. If this option is not used, the system removes the specified roles or users as role members.
Implementing Roles

The CREATE ROLE and DROP ROLE access rights are system rights; they are not tied to a specific database object. Note that the ROLE privileges can only be granted to a user, not to a role or database.

The example on the facing page explicitly identifies the CREATE ROLE and DROP ROLE rights for Sysdba. Another way to grant both the CREATE ROLE and DROP ROLE access rights to Sysdba is to use the following SQL:

  GRANT ROLE TO SYSDBA WITH GRANT OPTION;

The keyword ROLE gives both the CREATE ROLE and DROP ROLE access rights.

Note: If Sysdba is only given the CREATE ROLE access right, Sysdba can create new roles and can drop roles that he/she has created. Sysdba would not be able to drop roles created by other users (such as DBC).

The syntax to create a new role is simply:

  CREATE ROLE role_name;

When a role is first created, it does not have any associated rights until grants are made to it.
Implementing Roles
What access rights are used to create new roles?
• CREATE ROLE – needed to create new roles
• DROP ROLE – needed to drop roles
Who is allowed to create and modify roles?
• Initially, only DBC has the CREATE ROLE and DROP ROLE access rights.
• As DBC, give the “role” access rights to the database administrators (e.g., Sysdba).
GRANT CREATE ROLE, DROP ROLE TO Sysdba WITH GRANT OPTION;
How are access rights associated with a role?
• First, create a role.

    CREATE ROLE Inquiry_HR;

  The newly created role does not have any associated rights until grants are made to it.

• Use the GRANT (or REVOKE) command to assign (or take away) access rights to (or from) the role.

    GRANT SELECT, EXECUTE ON HR_VM TO Inquiry_HR;
Access Rights Validation and Roles

At any time, only one role may be the session’s current role. Enabled roles are the session’s current role plus any nested roles. At logon time, the current role will be the user’s default role.

Validation of rights for accessing a given database object is carried out in one or more steps. The first step verifies whether a right has been granted at the individual level. If no such right exists and there is a current role for the session, the second and third steps verify whether a right has been granted to a role. The actual search goes like this:

1) Search the AccessRights table for a UserId-ObjectId pair entry for the required right. In this step, the system checks for rights at the database/user level and at the object (e.g., table, view) level.
2) If the access right is not yet found and the user has a current role, search the AccessRights table for a RoleId-ObjectId pair entry for the required right.
3) If not yet found, retrieve all roles nested within the current role from the RoleGrants table. For each nested role, search the AccessRights table for a RoleId-ObjectId pair entry for the required right.
4) If not yet found, check if the right is a Public right.

Performance note: If numerous roles are nested within the current role, there may be a noticeable performance impact on “short requests”. A few more access right checks won’t be noticed on a 1-hour query.
Access Rights Validation and Roles
At any time, only one role will be the session's current or active role.
• Enabled roles are the current role plus any nested roles. To change roles, ...

    SET ROLE role_name ;

• At logon, the current role is the user’s default role.

Validation of access rights for accessing a given database object will be carried out in the following steps.

Order of access right validation is:

1) Check the DBC.AccessRights table for the required right at the individual level.
2) If the user has a current role, check the DBC.AccessRights table for the required right at the role level.
3) Retrieve all roles nested within the current role from the DBC.RoleGrants table. For each nested role, check the DBC.AccessRights table for the required right.
4) Check if the required right is a PUBLIC right.
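One way to observe this search order is to compare a session's individual rights with its role rights, using the system views described later in this module (the output depends on the grants in place):

```sql
-- Step 1 material: rights granted to the user individually.
SELECT DatabaseName, TableName, AccessRight
FROM   DBC.UserRights;

-- Steps 2 and 3 material: rights available through the
-- current role and its nested roles.
SELECT RoleName, DatabaseName, TableName, AccessRight
FROM   DBC.UserRoleRights;
```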
SQL Statements to Support Roles

Some miscellaneous rules concerning roles include:

• Roles may only be granted to users and other roles.
• There is no limit on the number of roles that can be granted to a grantee.
• The default role for a user will automatically be made the current role for the session when the user first logs on. The default role can be established with the CREATE USER or MODIFY USER commands.
• A role grantor can only be a user, but a role grantee can be a user or another role. A role may share the same name as a profile, table, column, view, macro, trigger, or stored procedure. However, a role name must be unique amongst users, databases, and roles.
• The role creator is automatically granted membership to the newly created role WITH ADMIN OPTION, which makes the role creator a member of the role who can grant membership to the role to other users and roles.
Dropping a Role

The following users can drop a role:

1. DBC
2. Any user given the system right DROP ROLE
3. Any user granted the role WITH ADMIN OPTION

The creator does not have the implicit right to drop a role. If the WITH ADMIN OPTION and DROP ROLE rights are revoked from him/her, he/she will not be able to drop the role.
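For example, assuming Sup05 holds Update_HR WITH ADMIN OPTION (as in the example later in this module), Sup05 could drop that role even though another user created it:

```sql
-- Executed by Sup05; allowed because of WITH ADMIN OPTION.
DROP ROLE Update_HR;
```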
Dropping a User

When a DROP USER command is issued, both individual rights and role rights granted to the user being dropped are deleted from the DBC.AccessRights and DBC.RoleGrants tables. Deletions of database objects within the user's space prior to the DROP USER command cause corresponding deletions of DBC.AccessRights rows for rights granted on these objects to roles and other users/databases.

However, rights granted by the dropped user that are not on objects within its space remain in the system. This includes role rights. Roles and profiles created by the dropped user remain in the system.
SQL Statements to Support Roles
Command Syntax:
CREATE ROLE role_name;
GRANT access_rights TO role_name;
GRANT role_name TO user_name / role_name [WITH ADMIN OPTION];
  – ADMIN OPTION allows the grantee the right to grant or drop the role.

SET ROLE role_name / NONE / NULL;
  – Assigns/changes the current role for the session.
  – The role must be granted to the user before the statement is valid.

CREATE/MODIFY USER u1 AS …, DEFAULT ROLE = role_name;
  – When the user logs on, the default role will become the session’s initial current role.

Other commands:

  REVOKE ROLE … ;
  DROP ROLE role_name ;
  SELECT ROLE ;
GRANT Command (SQL Form)

Once a new role is created, access rights can be added to or withdrawn from the role with GRANT/REVOKE statements. Roles may be granted privileges on the following database objects:

• Database
• Table
• View
• Macro
• Column
• Triggers
• Stored procedures
• Join and Hash indexes

Roles may not be granted the following functions (or access rights):

• CREATE ROLE and DROP ROLE
• CREATE PROFILE and DROP PROFILE
• CREATE USER and DROP USER

Exceptions

A role cannot have descendants, i.e., the ALL option of a GRANT/REVOKE statement cannot be applied to a role. The following statement is not allowed:

  GRANT <right> ON <database object> TO ALL <role name>;

ANSI also disallows a right to be granted to a role with the GRANT option. The following statement is also illegal:

  GRANT <right> ON <db object> TO <role name> WITH GRANT OPTION;
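The legal forms simply omit the ALL keyword and the GRANT option; for example, using a role from the examples later in this module:

```sql
GRANT SELECT ON HR_VM TO Inquiry_HR;                        -- allowed
-- GRANT SELECT ON HR_VM TO ALL Inquiry_HR;                 -- illegal: ALL with a role
-- GRANT SELECT ON HR_VM TO Inquiry_HR WITH GRANT OPTION;   -- illegal: GRANT option to a role
```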
GRANT Command (SQL Form)

The GRANT command has extensions to support granting access rights to roles.

  GRANT { ALL [PRIVILEGES] | privilege [, privilege ...] | ALL BUT privilege [, privilege ...] }
  ON { dbname | dbname.objname | objname | PROCEDURE identifier }
  TO { username [, username ...] | ALL username | PUBLIC | role_name [, role_name ...] }
  [WITH GRANT OPTION] ;
REVOKE Command (SQL Form)

The facing page shows the syntax for the REVOKE command.
REVOKE Command (SQL Form)

The REVOKE command has extensions to support revoking access rights from roles.

  REVOKE [WITH GRANT OPTION] { ALL [PRIVILEGES] | privilege [, privilege ...] | ALL BUT privilege [, privilege ...] }
  ON { dbname | dbname.objname | objname | PROCEDURE identifier }
  TO/FROM { username [, username ...] | ALL username | PUBLIC | role_name [, role_name ...] } ;
System Hierarchy (used in following examples)

A system structure for the Teradata database is shown on the facing page, and this hierarchy will be used in numerous examples. Keys to the hierarchy on the facing page are:
• Inquiry Users – users that require SELECT and EXECUTE access rights on the views and macros in the VM databases
• Update Users – users that require SELECT, EXECUTE, INSERT,
UPDATE, and DELETE access rights on the views and macros in the VM databases
• Batch Users – operational users that execute utilities that directly
access the tables (e.g., FastLoad) and need access rights on the tables.
The database HR_VM will have SELECT and EXECUTE privileges WITH GRANT OPTION on the database named HR_Tab. (Likewise for Payroll_VM and Payroll_Tab.)
System Hierarchy (used in following examples)
DBC
  CrashDumps
  SystemFE
  SysAdmin
  Sys_Calendar
  QCD
  SysDBA
    Spool_Reserve
    Human_Resources
      HR_VM        – View_1, View_2, Macro_1, Macro_2
      HR_Tab       – Table_1, Table_2, Table_3, Table_4
    Payroll
      Payroll_VM   – View_3, View_4, Macro_3, Macro_4
      Payroll_Tab  – Table_5, Table_6, Table_7, Table_8
    Employees
      Emp01, Emp02, Emp03, Emp04, Sup05
      (grouped as Inquiry Users, Update Users, and Batch Users)
Example - Using Roles

The facing page contains a simple example of creating roles, assigning access rights to them, and granting the roles to users. The default role for a user will automatically be made the current role for the session when the user first logs on. The role must be currently granted to the user (otherwise, it is ignored).

Only a partial listing of the access rights that would be assigned to roles is shown on the facing page. Additionally, these commands would also be executed to complete the example.

  GRANT CREATE TABLE, DROP TABLE, INSERT, UPDATE, DELETE ON HR_Tab TO Batch_HR_Pay;
  GRANT CREATE TABLE, DROP TABLE, INSERT, UPDATE, DELETE ON Payroll_Tab TO Batch_HR_Pay;

Role nesting (at this time) is limited to a single level.
Example - Using Roles
Create roles.

  CREATE ROLE Inquiry_HR;
  CREATE ROLE Update_HR;
  CREATE ROLE Batch_HR_Pay;

Assign access rights to the roles (partial listing).

  GRANT SELECT, EXECUTE ON HR_VM TO Inquiry_HR;
  GRANT INSERT, UPDATE, DELETE ON HR_VM TO Update_HR;

Grant users permission to use the roles.

  GRANT Inquiry_HR TO Update_HR;    /* nested role */
  GRANT Inquiry_HR TO Emp01, Emp02;
  GRANT Update_HR TO Emp03, Emp04;
  GRANT Update_HR TO Sup05 WITH ADMIN OPTION;

Modify the users to set the default role.

  MODIFY USER Emp01 AS DEFAULT ROLE = Inquiry_HR;
  MODIFY USER Emp02 AS DEFAULT ROLE = Inquiry_HR;
  MODIFY USER Emp03 AS DEFAULT ROLE = Update_HR;
  MODIFY USER Emp04 AS DEFAULT ROLE = Update_HR;
  MODIFY USER Sup05 AS DEFAULT ROLE = Update_HR;
Example - Using Roles (cont.)

The facing page continues the example.

Emp01 does not have UPDATE permission to update the Employee table via the Employee_v view. The error returned is:

  5315: The user does not have UPDATE access to HR_VM.Employee_v.Dept_Number.

Answer to question: Both SQL statements work for Emp03 because the access rights for Inquiry_HR are nested within Update_HR.
Example - Using Roles (cont.)
Emp01 – is granted the Inquiry_HR role; Inquiry_HR is the current role.
SELECT * FROM Employee_v ORDER BY 1; (success)
UPDATE Employee_v SET Dept_Number=1001 WHERE Employee_Number=100001; (fails)
Why does this statement fail for Emp01?
Emp03 – is granted the Update_HR role; Update_HR is the current role.
SELECT * FROM Employee_v ORDER BY 1; (success)
UPDATE Employee_v SET Dept_Number=1001 WHERE Employee_Number=100001; (success)
Why do both of these statements succeed for Emp03?
Example - Using Roles (cont.)

The facing page continues the example.

If a user tries to use the SET ROLE command to specify a role they have not been granted, the user will get the following error:

  5621: User has not been granted a specified role.

Another option with the SET ROLE command is to disable the current role for a session. The syntax is:

  SET ROLE NONE;

Answer to first question: The statement fails because Emp02’s current role is Inquiry_HR, and this role does not have update permission on Employee_v.

Answer to second question: The statement succeeds because Emp02’s current role is now Update_HR, and this role does have update permission on Employee_v.

Answer to third question: Assuming that the default role for Emp02 is Inquiry_HR, the statement will fail until Emp02 uses the SET ROLE command or uses a MODIFY USER command to change the DEFAULT ROLE. For example:

  MODIFY USER Emp02 AS DEFAULT ROLE = Update_HR;
Example - Using Roles (cont.)
Sup05 – is granted the Update_HR role WITH ADMIN OPTION.
GRANT Update_HR TO Emp02; (success)
Emp02 – is granted the Update_HR role; Inquiry_HR is the current role.
SELECT * FROM Employee_v ORDER BY 1; (success)
UPDATE Employee_v SET Dept_Number=1001 WHERE Employee_Number=100001; (fails)
Why does this statement fail for Emp02?
Emp02 – executes the following SET ROLE command
SET ROLE Update_HR;
UPDATE Employee_v SET Dept_Number=1001 WHERE Employee_Number=100001;
Will this UPDATE statement succeed this time?
Will this UPDATE statement succeed the next time Emp02 logs on?
SET ROLE ALL

When a user logs on to the system, the assigned default role is the initial current role for the session. This current role is used to authorize privileges after all checks against individually granted privileges have failed. Once the session is active, the user can submit a SET ROLE statement to change or nullify the current role.

For example, if a user is granted RoleA and RoleB, but logs on with RoleA as the current role, then the system checks RoleA and all nested roles for privileges. The user cannot use the privileges of RoleB. To use the privileges of both RoleA and RoleB, the user can activate all roles with the SET ROLE ALL statement.
The rules for using roles are as follows:
• You can grant one or more roles to one or more users or roles; thus:
– A role can have many members
– A user or role can be a member of more than one role
• Only single-level nesting is allowed; that is, a role that has a member role cannot also be a member of another role.
• A privilege granted to an existing role immediately affects any user or role that is specified as a recipient in the GRANT statement and is currently active within a session.
• The privileges of a role granted to another role are inherited by every user member of the grantee role.
• Using SET ROLE ALL allows all roles available to a user to be enabled within a session. Available roles are those that have been directly granted to the user as well as those that are nested within the granted roles. All available roles may be enabled either:
– dynamically by submitting a SET ROLE ALL statement or
– upon logon if the default role of a user was set to ‘ALL’ through a CREATE USER or MODIFY USER statement.
• Users may set their current session to ALL even if the users do not have any roles granted to them. No privilege is required for users to do so.
• When using CREATE USER…AS DEFAULT ROLE ALL, the creator does not have to be granted any roles.
• When using MODIFY USER…AS DEFAULT ROLE ALL, the user does not have to be granted any roles.
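These options can be sketched as follows (Emp02 is from the course example):

```sql
-- Enable every available role (granted plus nested) for
-- the current session:
SET ROLE ALL;

-- Or make ALL the default, so every new session starts
-- with all available roles enabled:
MODIFY USER Emp02 AS DEFAULT ROLE = ALL;
```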
SET ROLE ALL
The rules for using roles are as follows:

• You can grant one or more roles to one or more users or roles; thus:
  – A role can have many members
  – A user or role can be a member of more than one role
• Only single-level nesting is allowed; that is, a role that has a member role cannot also be a member of another role.
• A privilege granted to an existing role immediately affects any user or role that is specified as a recipient in the GRANT statement and is currently active within a session.
• The privileges of a role granted to another role are inherited by every user member of the grantee role.
• Using SET ROLE ALL allows all roles available to a user to be enabled within a session. Available roles are those that have been directly granted to the user as well as those that are nested within the granted roles. All available roles may be enabled either:
  – dynamically by submitting a SET ROLE ALL statement, or
  – upon logon if the default role of a user was set to ‘ALL’ through a CREATE USER or MODIFY USER statement.
Steps to Implementing Roles

The facing page identifies a sequence of steps to consider when implementing roles. A sample query and results are also provided.
Steps to Implementing Roles
1. Identify individual rights to be converted into role rights.
2. Create roles and grant appropriate rights to each role.
3. Grant roles to users and assign users their default roles.
4. Revoke from users individual rights that have been replaced by role rights.

Sample query to identify individual rights that may be good candidates for conversion to roles:

  SELECT    DatabaseName, TVMName, COUNT(*) AS RightsCount
  FROM      DBC.AccessRights AR, DBC.TVM TVM, DBC.DBase DBASE
  WHERE     AR.tvmid = TVM.tvmid
  AND       AR.databaseid = DBASE.databaseid
  AND       AR.fieldid = 0
  GROUP BY  DatabaseName, TVMName
  ORDER BY  3 DESC;

Results:

  DatabaseName   TableName   RightsCount
  DS             All         110
  QCD            All          86
  HR_Tab         All          72
  HR_VM          All          68
  Payroll_VM     All          67
  Payroll_Tab    All          67
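Step 4 might look like this for one of the inquiry users in the course example (a sketch; the actual rights to revoke come from the query results):

```sql
-- Replace Emp01's individual rights on HR_VM with role rights.
REVOKE SELECT, EXECUTE ON HR_VM FROM Emp01;

-- Emp01 still has SELECT/EXECUTE on HR_VM, now through the
-- Inquiry_HR role instead of individual AccessRights rows.
```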
Access Control Mechanisms

You can control user access by granting access to specific views and macros. Views limit user access to table columns or rows that may contain sensitive information. Macros limit the types of actions a user can perform on the columns and rows.

User Privileges (Access Rights)

An arrangement of predefined privileges or access rights controls the user’s activities during a session. Access rights are associated with a user, a database, and an object (e.g., table, view, macro, or stored procedure).

The system verifies a user’s access rights when the user attempts to access or execute a function that accesses an object. Teradata stores access rights information in the system table DBC.AccessRights. You can retrieve this information by querying the DBC.UserRights view.

Additional Controls

As the administrator, there are additional methods you can use to limit user access to the Teradata Database:
• Create views
• Create macros
• Create stored procedures
The facing page shows a diagram of access control mechanisms in Teradata.
Access Control Mechanisms
(Diagram: Teradata logon processing uses DBC.Dbase, DBC.LogonRuleTbl, DBC.SysSecDefaults, DBC.SessionTbl, and DBC.EventLog. User privileges are checked against DBC.AccessRights, access logging uses DBC.AccLogRuleTbl and DBC.AccLogTbl, and DBC.TVM defines the views, macros, and stored procedures that stand between users and the information in the data tables.)
Using Views to Limit Access

Another method of controlling user access to data is through the structure of views and macros. Views limit access to table columns or rows that contain sensitive information. Macros limit those actions a user can perform on table columns or rows.

Example

The example on the facing page demonstrates how to create a view that limits user access to data.

An existing table called Employee contains some sensitive information. As the administrator, you need to create a view that allows users to see only certain information. You create a new view called Sal_401 as shown on the following page.

After you create the view, you must GRANT SELECT privileges to all users that need to access the new restricted view. If a view is to be used to UPDATE values in a base table, the prospective users must also be given the UPDATE privilege.

Users submit the SELECT statement to access the new restricted view. The user only sees selected rows and columns as if they were looking at a complete table. The user does not know that the underlying table contains more information and does not realize he or she is looking at a restricted view.

WITH CHECK OPTION

When you create a view, you can use WITH CHECK OPTION to restrict the rows that can be accessed by an INSERT or UPDATE statement.
EMPLOYEE

EMPLOYEE  MANAGER   DEPARTMENT  JOB     LAST      FIRST    HIRE    BIRTH   SALARY
NUMBER    EMPLOYEE  NUMBER      CODE    NAME      NAME     DATE    DATE    AMOUNT
          NUMBER
PK        FK        FK          FK
1006      1019      301         312101  Stein     John     861015  631015  39450.00
1008      1019      301         312102  Kanieski  Carol    870201  680517  39250.00
1005      0801      403         431100  Ryan      Loretta  861015  650910  41200.00
1004      1003      401         412101  Johnson   Darlene  861015  560423  46300.00
1007      1005      403         432101  Villegas  Arnando  870102  470131  59700.00
1003      0801      401         411100  Trader    James    860731  570619  47850.00
Using Views to Limit Access
CREATE VIEW Payroll_VMDB.Sal_401 AS
SELECT  Employee_Number AS EmpNo
       ,Last_Name (FORMAT 'X(10)')
       ,First_Name (FORMAT 'X(10)')
       ,Hire_Date (FORMAT 'YYYY-MM-DD')
       ,Salary_Amount (FORMAT '$ZZZ,ZZ9.99') AS Salary
FROM    Payroll.Employee
WHERE   Department_Number = 401
AND     Salary_Amount > 39000
WITH CHECK OPTION;

GRANT UPDATE (Last_Name, Salary_Amount), SELECT
ON Payroll_VMDB.Sal_401 TO UserName;

SELECT * FROM Sal_401 ORDER BY Salary_Amount DESC;

UPDATE Sal_401
SET Salary = 35000
WHERE EmpNo = 1003;

Note: The UPDATE fails because WITH CHECK OPTION requires the new salary to satisfy the view's WHERE clause (> 39000):
Failure 3564 Range constraint: Check error in field employee.salary_amount.
Using Macros and Stored Procedures to Control Access

Macros

Teradata macros are SQL statements that the server stores and executes. You can control access to table data by granting a user the EXECUTE privilege on a macro.

Example

If the creator of a macro grants a personnel clerk the EXECUTE privilege on the macro on the facing page, the clerk can enter new employee data as parameters to the NewEmp macro rather than using the INSERT statement. Thus, the clerk need not be aware of the database being accessed, the tables affected, or even the result.

Stored Procedures

Like a Teradata macro, a stored procedure provides a way to combine a sequence of SQL statements to store and execute on the Teradata Database. In addition, stored procedures provide a procedural interface and are an ANSI SQL feature. You can control access to a table by granting a user the EXECUTE PROCEDURE privilege on a stored procedure.

A stored procedure is defined and stored as a database object, although unlike objects such as views and macros, whose DDL statement text is stored in the Data Dictionary, a stored procedure is created in the user’s database space as a table.

An example of a statement to create a stored procedure is:

  CREATE PROCEDURE spSample1 (ip INTEGER, OUT op INTEGER)
  BEGIN
    DECLARE var1 INTEGER;
    SELECT col1 INTO :var1 FROM tab1 WHERE col2 = :ip;
    SET op = var1 * 10;
  END;
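The procedure would then be executed with a CALL statement, for example from BTEQ (the input value 5 is illustrative):

```sql
CALL spSample1 (5, op);
```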
Using Macros and Stored Procedures to Control Access

• Macros are SQL statements stored and executed on the Teradata server.
• A stored procedure combines a sequence of SQL statements stored and executed on the Teradata server.
• Control access to table data by granting a user the EXECUTE privilege on a macro or stored procedure.
• Example: If you create the following macro, you can grant someone the EXECUTE privilege for entering employee data. That person need not be aware of the database or tables involved.

Macro example for entering new employee data:

  CREATE MACRO NewEmp ( number (INTEGER)
                       ,name (VARCHAR(12))
                       ,title (VARCHAR(12))
                       ,dept (SMALLINT) )
  AS ( INSERT INTO Employee (EmpNo, Name, JobTitle, DeptNo)
       VALUES (:number, :name, :title, :dept); );
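The clerk then runs the macro with an EXECUTE statement, supplying only the parameter values (the values shown are illustrative):

```sql
EXEC NewEmp (1099, 'Garcia', 'Analyst', 401);
```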
Access Rights and Nested Views
Views that reference other views are sometimes called nested views. Views may be nested up to 64 levels.
View names are fully expanded (resolved) at creation time.
The system checks access rights at creation time, and validates them again at execution time. Any database referenced by the view requires access rights on all objects accessed by the view.
The facing page shows an example of a nested view.
You can create a view with the intention of read-only access, or for controlled updates. For read access, the SELECT right is needed. For updates, the UPDATE right is needed.

For other users to access a view, those users must be granted the appropriate rights on the view, and the immediate owner of the view must have the appropriate rights on the objects referenced by the view WITH GRANT OPTION.
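Using the chain on the facing page, the grants would be along these lines (a sketch; object names are written with underscores for SQL):

```sql
-- DBX lets database VMDB reference Table A and pass the right on:
GRANT SELECT ON DBX.Table_A TO VMDB WITH GRANT OPTION;

-- VMDB lets User 2 reference View X and pass the right on:
GRANT SELECT ON VMDB.View_X TO User_2 WITH GRANT OPTION;

-- User 2 lets User 1 select from View Y:
GRANT SELECT ON User_2.View_Y TO User_1;
```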
Access Rights and Nested Views
If you REVOKE access rights from any user in the chain, the system issues the following message:
3523 An owner referenced by the user does not have [privilege] access to [databasename.tablename].
• Nested Views
  – Views that reference other views
  – May be nested up to 64 levels
  – Fully resolved at creation time

  View Y → View X → Table A

  User 1 accesses View Y
  User 2 owns View Y
  Database VMDB owns View X
  Database DBX owns Table A

  Rights checked are:
    User 1 rights on View Y
    User 2 rights on View X WITH GRANT OPTION
    Database VMDB rights on Table A WITH GRANT OPTION

• Access Rights
  – System checks access rights at creation time.
  – Validates access rights at execution time.
  – A database referenced by the view requires access rights on all objects accessed by the view.
System Views for Access Rights
There are several system views you can use to obtain information about access rights. (These views draw on the DBC.AccessRights, DBC.Roles, and DBC.RoleGrants system tables to obtain the needed information.) They are:
• DBC.AllRights
• DBC.UserRights
• DBC.UserGrantedRights
• DBC.RoleInfo
• DBC.RoleMembers
• DBC.AllRoleRights
• DBC.UserRoleRights
System Views for Access Rights
VIEW NAME              DESCRIPTION

DBC.AllRights          Provides information about all rights that have been automatically or explicitly granted.
DBC.UserRights         Provides information about all rights the user has acquired, either automatically or explicitly.
DBC.UserGrantedRights  Provides information about rights which the current user has explicitly granted to other users.
DBC.RoleInfo           Returns the names of the role creators corresponding to each role.
DBC.RoleMembers        Lists each role and all of its members.
DBC.AllRoleRights      Lists all rights granted to each role.
DBC.UserRoleRights     Lists all rights granted to the current role of the user and its nested roles.
AllRights and UserRights Views

Access rights are granted against the various database objects including: DATABASE, USER, TABLE, VIEW, MACRO, TRIGGER, and PROCEDURE.

Access Rights and Abbreviations

Type  Description                 Type  Description
AS ABORT SESSION E EXECUTE
CD CREATE DATABASE I INSERT
CG CREATE TRIGGER IX INDEX
CM CREATE MACRO MR MONITOR RESOURCE
CP CHECK POINT MS MONITOR SESSION
CT CREATE TABLE PC CREATE PROCEDURE
CU CREATE USER PD DROP PROCEDURE
CV CREATE VIEW PE EXECUTE PROCEDURE
D DELETE R RETRIEVE/SELECT
DD DROP DATABASE RF REFERENCE
DG DROP TRIGGER RO REPLICATION OVERRIDE
DM DROP MACRO RS RESTORE
DP DUMP SS SET SESSION RATE
DT DROP TABLE SR SET RESOURCE RATE
DU DROP USER U UPDATE
DV DROP VIEW
AllRights and UserRights Views
DBC.AllRights and DBC.UserRights

Provide information about the objects on which all users (DBC.AllRights), or the current user (DBC.UserRights), have automatically or explicitly been granted privileges.

DBC.AllRights columns:  UserName, DatabaseName, TableName, ColumnName, AccessRight, GrantAuthority, GrantorName, AllnessFlag, CreatorName, CreateTimeStamp
DBC.UserRights columns: DatabaseName, TableName, ColumnName, AccessRight, GrantAuthority, GrantorName, CreatorName, CreateTimeStamp

Example: All rights held by the user at the database level.

  SELECT    DataBaseName (FORMAT 'X(16)')
           ,AccessRight
           ,GrantorName (FORMAT 'X(12)')
  FROM      DBC.UserRights
  WHERE     TableName = 'All'
  ORDER BY  1, 2;

Results:

  DataBaseName      AccessRight  GrantorName
  ----------------  -----------  -----------
  Customer_Service  R            DBC
  DBC               R            DBC
  personnel         R            SYSADMIN
  resmacros         E            DBC
  SA_VMDB           R            SA
  SA01              CD           SA
DBC.UserGrantedRights View

The UserGrantedRights view provides information about objects on which the current user has explicitly granted privileges. When you submit a GRANT statement, the system stores explicit privileges as rows in the AccessRights table.
Column definitions in this view include:
Column Definition
Grantee The recipient of the access right.
Allness Flag Y (Yes) indicates the privilege was granted to all. N (No) indicates the privilege was not granted to all.
DBC.UserGrantedRights View
Provides information about objects on which the current user has explicitly granted privileges to other users.

DBC.UserGrantedRights columns: DatabaseName, TableName, ColumnName, Grantee, AccessRight, GrantAuthority, AllnessFlag, CreatorName, CreateTimeStamp

Example: List the rights granted by the current user.

  SELECT    DatabaseName (FORMAT 'X(12)')
           ,TableName (FORMAT 'X(10)')
           ,Grantee (FORMAT 'X(10)')
           ,AccessRight
           ,AllnessFlag
  FROM      DBC.UserGrantedRights
  ORDER BY  1, 2, 3, 4;

Results:

  DataBaseName  TableName  Grantee  AccessRight  AllnessFlag
  ------------  ---------  -------  -----------  -----------
  SA01          employee   SA01a    D            N
  SA01          employee   SA01a    I            N
  SA01          employee   SA01a    R            N
  SA01          employee   SA01a    U            N
RoleInfo[X] View

The DBC.RoleInfo view lists all roles in the system, their creators, and the creation timestamp. This information is taken from the DBC.Roles and DBC.Dbase tables. The DBC.RoleInfoX view returns rows for roles that the user has created. Users that create roles need the system access right CREATE ROLE.

Extension to the COMMENT command:

  COMMENT [ON] ROLE <role name> [ [AS] <comment string> ]

  – inserts or retrieves comments in the CommentString column of the DBC.Roles table for the named role.

Example:

  COMMENT ON ROLE Inquiry_HR
  AS 'SEL and EXE rights for HR_VM';
RoleInfo[X] View
Example: List role names that exist in the system.
SELECT RoleName, CreatorName, CreateTimeStampFROM DBC.RoleInfoORDER BY 1;
Provides information about roles.
DBC.RoleInfo[X]
RoleName CommentStringCreatorName CreateTimeStamp
RoleName CreatorName CreateTimeStamp
Batch_HR_Pay     Sysdba  2003-01-12 20:48:32
Inquiry_HR       Sysdba  2003-01-12 20:48:31
Inquiry_Payroll  Sysdba  2003-01-12 20:48:31
Update_HR        Sysdba  2003-01-12 20:48:31
Update_Payroll   Sysdba  2003-01-12 20:48:31
Example Results:
RoleMembers[X] View

The DBC.RoleMembers view lists each role and all of its members. The DBC.RoleMembersX view lists all roles, if any, directly granted to the user. For example, Emp02 executes the following statement:

  SELECT * FROM DBC.RoleMembersX ORDER BY 1;

The result is:

  RoleName    Grantor  WhenGranted          DefaultRole  WithAdmin
  Inquiry_HR  Sysdba   2003-01-12 22:30:00  Y            N
  Update_HR   Sup05    2003-01-12 23:16:02  N            N
RoleMembers[X] View
Example: List roles and the members.
SELECT RoleName, Grantee, GranteeKind, DefaultRole, WithAdminFROM DBC.RoleMembersWHERE RoleName IN ('Inquiry_HR' ,'Update_HR')ORDER BY 1, 2;
Provides information about roles and its members.
RoleName Grantee GranteeKind DefaultRole WithAdmin
Inquiry_HR DBC User N YInquiry_HR Emp01 User Y NInquiry_HR Emp02 User Y NInquiry_HR Sysdba User N YInquiry_HR Update_HR Role N NUpdate_HR DBC User N YUpdate_HR Emp02 User N NUpdate_HR Emp03 User Y NUpdate_HR Emp04 User Y NUpdate_HR Sup05 User Y YUpdate_HR Sysdba User N Y
Example Results:
DBC.RoleMembers[x]
RoleName, Grantee, GranteeKind, Grantor, WhenGranted, DefaultRole, WithAdmin
6-86 Access Rights
AllRoleRights and UserRoleRights Views

The DBC.AllRoleRights and DBC.UserRoleRights views provide information about roles and the access rights granted to roles in the system.

The DBC.UserRoleRights view lists all rights granted to the current role of the user and its nested roles.
Access Rights 6-87
AllRoleRights and UserRoleRights Views
Example: List current role rights.
SELECT   RoleName, DatabaseName, TableName, ColumnName, AccessRight
FROM     DBC.UserRoleRights
ORDER BY 1;
AllRoleRights - lists all rights granted to roles in the system.
UserRoleRights - lists all rights granted to the enabled roles of the user.
RoleName DatabaseName TableName ColumnName AccessRight
Inquiry_HR  HR_VM  All  All  R
Inquiry_HR  HR_VM  All  All  E
Example Results for Emp01:
DBC.AllRoleRights and DBC.UserRoleRights
RoleName, DatabaseName, TableName, ColumnName, AccessRight, GrantorName, CreateTimeStamp
RoleName DatabaseName TableName ColumnName AccessRight
Inquiry_HR  HR_VM  All  All  R
Inquiry_HR  HR_VM  All  All  E
Update_HR   HR_VM  All  All  I
Update_HR   HR_VM  All  All  D
Update_HR   HR_VM  All  All  U
Example Results for Emp03 - shows nested role:
6-88 Access Rights
Access Rights Summary

The facing page summarizes some important concepts in this module.
Access Rights 6-89
Access Rights Summary
– Access Rights (Privileges) are maintained in the data dictionary.
– Rows are inserted into or removed from DBC.AccessRights by:
  • CREATE or DROP statements and GRANT or REVOKE statements
– Creators are given automatic rights on created objects except for:
  • CREATE PROCEDURE and EXECUTE PROCEDURE
– Users and databases are given all rights on themselves except:
  • CREATE Database/User and DROP Database/User
– Owners have the right to grant privileges on their owned objects.
– The GIVE command affects ownership (and ownership/implicit rights), but not the information in the DBC.AccessRights table.
• Your role as System Administrator is enhanced by good access rights management.
  – Security rule enforcement
  – Data maintenance
  – Archive and Recovery

• Characteristics of a good database structure:
  – Users belong to a profile and inherit access rights.
  – Users do not have direct access to tables.
  – Access rights are given at the database or user level.
• A role is simply a collection of access rights.
– Rights are first granted to a role and the right to use the role is then granted to users.
– CREATE ROLE role_name;
• The GRANT and REVOKE commands have new extensions to support granting (or removing) access rights to roles.
• GRANT and REVOKE roles to users to simplify access rights management.
– GRANT role_name TO user1;
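The role workflow summarized above can be sketched end to end; the role, database, and user names below are illustrative, not from a real system:

```sql
-- Create a role and grant access rights to it (names are illustrative).
CREATE ROLE Inquiry_HR;
GRANT SELECT, EXECUTE ON HR_VM TO Inquiry_HR;

-- Grant the role to a user and make it the user's default role.
GRANT Inquiry_HR TO Emp01;
MODIFY USER Emp01 AS DEFAULT ROLE = Inquiry_HR;

-- Within a session, the user can switch among roles granted to them.
SET ROLE Inquiry_HR;
```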
6-90 Access Rights
Review Questions

Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Access Rights 6-91
Review Questions
Indicate whether each statement is True (T) or False (F).
1. There are only two types of access rights or privileges: explicit and implicit.
2. The statements you use to affect access rights are GRANT and REVOKE.
3. As the administrator, you can set up a hierarchy so that when new objects are added to the system, selected users can automatically gain appropriate rights on those objects.
4. If a user creates a table, he automatically has the rights SELECT, INSERT, DELETE.
5. If user SYSDBA creates the user Marketing, both users have CREATE/DROP Database/User rights on User Marketing.
6. If you want to remove a user but keep its children, you perform a MODIFY USER on the children.
7. You can GIVE both databases and tables.
8. A user can use the SET ROLE command to set their current role to any defined role in the system.
9. Roles may only be granted to users and other roles.
6-92 Access Rights
Lab 3

The Lab for this Module is in Appendix B. Please follow your Instructor’s directions for completing Lab assignments.
Access Rights 6-93
Lab 3
Please see Lab 3 in Appendix B.
6-94 Access Rights
References

For more information on access rights, refer to:
• Teradata RDBMS Database Design - (B035-1094-122A)
• Teradata RDBMS Security Administration Guide - (B035-1100-122A)
• Teradata RDBMS SQL Reference - (B035-1101-122A)
• Teradata RDBMS Data Dictionary Reference - (B035-1092-122A)
Teradata Utilities 7-1
Module 7
After completing this module, you should be able to:
• Explain the difference between host-based and AMP-based utilities.
• Describe multiple ways to initiate console-based utilities and list those utilities.
• List the three DBS Control parameter groups.
• Use the following utilities to recover a Teradata database:
  – SCANDISK (Ferret)
  – CheckTable
  – Table Rebuild
  – Recovery Manager
  – Showlocks
  – Abort Host
Teradata Utilities
7-2 Teradata Utilities
Notes:
Teradata Utilities 7-3
Table of Contents
TERADATA MANAGER .............................................................................. 7-4
DATABASE WINDOW (DBW) ................................................................... 7-6
DBW SUPERVISOR WINDOW .................................................................. 7-8
TERADATA MANAGER REMOTE CONSOLE ........................................ 7-10
THE UNIX TOOL CNSTERM ................................................................... 7-12
GENERAL GROUP PARAMETERS ......................................................... 7-14
FILE SYSTEM GROUP PARAMETERS .................................................. 7-16
PERFORMANCE GROUP PARAMETERS .............................................. 7-18
QUERY CONFIGURATION UTILITY ..................................................... 7-20
GET CONFIG ............................................................................................. 7-22
RECONFIG UTILITY ................................................................................ 7-24
VPROC MANAGER UTILITY .................................................................. 7-26
FERRET UTILITY ..................................................................................... 7-28
FERRET => SHOWSPACE COMMAND .................................................. 7-30
FERRET => SHOWSPACE - SUMMARY REPORT ................................ 7-32
FERRET => PACKDISK ............................................................................ 7-34
FERRET => SHOWBLOCKS ..................................................................... 7-36
FERRET => SCANDISK COMMAND ...................................................... 7-38
CHECKTABLE UTILITY .......................................................................... 7-40
RUNNING CHECKTABLE ........................................................................ 7-42
TABLE REBUILD ...................................................................................... 7-44
RECOVERY MANAGER ........................................................................... 7-46
RECOVERY MANAGER LIST STATUS COMMAND ........................... 7-48
RECOVERY MANAGER LIST LOCKS COMMAND ............................. 7-50
RECOVERY MANAGER PRIORITY COMMAND ................................. 7-52
ABORT_ROLLBACK ................................................................................ 7-54
SHOWLOCKS UTILITY REPORT ........................................................... 7-56
ABORT HOST UTILITY ........................................................................... 7-58
SUMMARY ................................................................................................. 7-60
REVIEW QUESTIONS ............................................................................... 7-62
LAB 4 .......................................................................................................... 7-64
REFERENCES ............................................................................................ 7-66
7-4 Teradata Utilities
Teradata Manager

Teradata Manager is a production and performance monitoring system that simplifies the tasks of monitoring, controlling, and administering one or more Teradata Relational Database Management Systems (RDBMS).
With Teradata Manager you can use a variety of specially designed tools and applications to gather, manipulate, and analyze information about each Teradata Database you need to administer.
• It provides a graphical interface that makes the tools and applications easy to use.
• You can query the Teradata Database status and utilization, present information about system performance in reports and graphs that are easy to read, administer Teradata Database users and sessions, and more.
Teradata Manager is both powerful and flexible because its structure can be customized so you can configure it to address a wide variety of issues that suit the needs of individuals or groups within your organization.
You will find it easy to run several applications simultaneously, to configure Teradata Manager to fit your specific needs, and to work with more than one instance of the Teradata Database at a time.
Teradata Manager runs on a PC that is network-connected to the Teradata Database platform(s).
Note: Teradata Manager and its included applications are covered in detail in the Teradata Warehouse Management course.
Teradata Utilities 7-5
Teradata Manager
• A production and performance monitoring system that lets you monitor, control, and administer one or more Teradata Database systems.
  – Provides applications to gather, manipulate, and analyze information about the Teradata Database.
  – Provides an easy-to-use Graphical User Interface.
  – Can query status and utilization statistics.
  – Presents information in reports and graphs.
  – Can be used to administer Teradata RDBMS users and sessions.
  – Can run several applications simultaneously.
  – Runs on a network-connected PC.
7-6 Teradata Utilities
Database Window (DBW)

The Database Window (DBW) is the console software for the Teradata Database. It runs in any environment that supports X windows.
The DBW software provides flexibility to the database administrator since you can start the console from virtually any workstation.
Database Window Icons

The DBW icon labeled "Supvr" opens the Supervisor window. You can start Teradata AMP-based utilities from the Supervisor window.
Once a utility is running, the DBW displays an application icon. You can have up to four applications running at a time. You move back and forth from one utility to another by returning to the DBW.
You can have multiple instances of the DBW window running at the same time. While you can have up to nine DBW windows open, you probably should not have more than seven open. Two windows should be reserved for remote support, if necessary.
Starting DBW for UNIX

Execute the following command from the UNIX command line:
/usr/ntos/bin/xdbw -display X-host:server.screen
X-host network name of the machine with the X display.
server server connection (typically 0).
screen screen number of the server (default 0). This value is optional.
Example 1

To run DBW on a server named tdbase, with the display appearing on a PC named ws369, log on to node tdbase and enter the command line:
/usr/ntos/bin/xdbw –display ws369:0.0
Example 2

To run the DBW from an MP-RAS PC named pc350 and connect to 17490, from the UNIX command line type:
xdbw –display pc350:0.0 –machine 17490
Note: The /ntos/rhosts file on the 17490 system needs an entry for pc350.
Starting DBW for Windows

To start the DBW on Windows, select Start > Programs > Teradata RDBMS > Database Window.
Teradata Utilities 7-7
Database Window (DBW)
• The Database Window is the x-windows console software for Teradata.
Database Window Icons
• The “Supvr” (Supervisor) icon opens the Supervisor window.
• Once a Teradata utility is running, the DBW displays a new icon which opens a window for the Teradata utility.
7-8 Teradata Utilities
DBW Supervisor Window

You must start the DBW before you can start the Supervisor program. To open the Supervisor window, click the "Supvr" icon in the DBW.
Sub-windows

The Supervisor window contains the following four sub-windows:
Output Displays the results of user commands. It displays Input Supervisor Command when the Supervisor window first opens. Use the scroll bars to review results from previous commands not currently visible in the Output sub-window.
Status Displays the current status message. The word Status: appears to the left of the sub-window and is used by CNS to indicate the state of the application running in this window. Current states include:
Blank, Running: and Reading:
Command History Displays a list of commands you previously entered. Use the scroll bars to review commands previously entered that are not currently visible in the Command History sub-window.
Input Area where you type commands. The phrase Enter a command: appears just above this sub-window.
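Commands typed in the Input sub-window drive the console utilities covered later in this module. A few representative Supervisor commands (the utility shown is just an example):

```
start qryconfig      (start a console utility in an application window)
stop 1               (stop the utility running in window 1)
get config           (run Get Config; results appear in the Output sub-window)
```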
Teradata Utilities 7-9
DBW Supervisor Window
Command history area
7-10 Teradata Utilities
Teradata Manager Remote Console

The Teradata Manager Remote Console runs in a standard application window. By default, it is configured as a menu start application under the Production Control menu on the Executive menu bar of the DEFAULT profile.
Remote Console allows you to run many of the Teradata Database console utilities from your Teradata Manager PC. Supported utilities include:
This Utility… Allows you to:
Abort Host Cancel all outstanding transactions running on a host that is no longer operating.
Check Table Check for inconsistencies in internal data structures and for corruption.
Configure Review and update the configuration of Teradata.
DBSControl Set tunable parameters for Teradata.
Gateway Global Inspect and modify the gateway operating parameters for any network attached to the associated Teradata system.
Lock Display (V2R4) Provide a snapshot capture of all real-time database locks.
Operator Console Run supervisor commands to manage the programs that perform Teradata Database operations.
Priority Scheduler (V2R4.1) Control resource allocation for Database sessions based on either session-related priority designations or system-level scheduling parameters.
Query Configuration Display the current Teradata Database configuration.
Query Session Monitor the state of the database sessions on logical host Ids attached to the Teradata Database.
Recovery Manager Monitor the progress of a Teradata recovery.
Show Locks Provide information about host utility locks.
Vprocmanager Obtain the status of vprocs, change vproc states, initialize and boot a specific vproc, initialize the vdisk associated with a specific vproc, or force a Database restart.
Ferret Display and set various disk space utilization attributes, and dynamically reconfigure the data on the disks to correspond with selections.
The Remote Console log contains the input and output for all console utilities started. The list of available console utilities is taken from the application-specific entry RCONSUTILS. When you select a utility to run, the output for the utility appears on the main window.
Teradata Utilities 7-11
Teradata Manager Remote Console
– Runs in a standard application window.
– Allows you to run Teradata console utilities from Teradata Manager, including:
  • Abort Host            • Check Table
  • Configure             • DBSControl
  • Gateway Global        • Operator Console
  • Lock Display (V2R4)   • Priority Scheduler (V2R4.1)
  • Query Configuration   • Query Session
  • Recovery Manager      • Show Locks
  • Ferret                • Vprocmanager
– Remote Console Log
  • Contains input and output for all console utilities started.
– Script Feature
  • Allows you to record commands associated with running utilities.
  • Can be played back at a later time.
  • May be run interactively from the main menu.
  • May be run in batch mode by setting an Autostart entry.
7-12 Teradata Utilities
The UNIX tool cnsterm

The cnsterm tool is the PDE Console Subsystem Terminal. You can use it to display console utility output without starting the xdbw (Database Window) program. The cnsterm tool is executed from the UNIX command line, and you do not need to have X Windows configured to use it, as you do with xdbw. The cnsterm tool displays a single window at a time.

When using cnsterm, the only command line option available is the Database Window partition number. Partition numbers 1 through 4 are the Database Window console utility windows, partition 5 is the Database I/O window, and partition 6 is the Database Window Supervisor screen. You will typically start cnsterm in window 6, and then move to that window to start any console utility programs.
You must be logged on as root to execute cnsterm. To use cnsterm from the UNIX command line:
1. Log on as root.
2. Enter cnsterm 6 on the UNIX command line. This will display the Database Window Supervisor screen.
3. To start a console utility, type in the start command syntax (e.g., start qryconfig). The display will tell you which partition the utility was started in (e.g., Window 1).

4. Press the Delete key and enter cnsterm 1. You will now see the contents of Window 1 (in this example, the qryconfig program) and you may now enter commands to execute the qryconfig utility.
Note: To move between windows, use the Delete key to get a prompt, and enter cnsterm n, where n is the number of the window you want.
5. Press the DELETE key and enter cnsterm 6 to move back to the Supervisor display.
6. Type stop 1 to terminate the utility that was running in Window 1 (i.e., qryconfig).
You may start as many utilities as there are partitions available in the Database Window.
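Putting the steps above together, a cnsterm session might look like the following transcript (the utility and window numbers are illustrative):

```
# cnsterm 6            open the Supervisor screen (window 6)
start qryconfig        Supervisor reports the utility started in Window 1
<Delete>               return to the shell prompt
# cnsterm 1            attach to Window 1 and interact with qryconfig
<Delete>
# cnsterm 6            back to the Supervisor
stop 1                 terminate the utility running in Window 1
```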
Teradata Utilities 7-13
The UNIX tool cnsterm
From the UNIX command line:
7-14 Teradata Utilities
General Group Parameters

The table describes the tunable parameters in the General Group.

Field What it is used for

Version Indicates the DBS Control Record version number.
SysInit Ensures the system has been initiated properly using the System Initializer utility. (Warning: Destroys all user and dictionary data.)
DeadLockTimeout Used for deadlock time-out detection cycles.
HashFuncDBC Defines the DBS hashing function that the RDBMS uses.
SessionMode Defines the system default transaction mode, case sensitivity, and character truncation rule for a session.
LockLogger Defines the system default for the locking logger.
RollbackPriority Defines the system default for the rollback priority.
MaxLoadTasks Controls the combined number of FastLoad, MultiLoad, and FastExport tasks allowed in the system.
RollForwardLock Defines default for the RollForward using Row Hash Locks option.
MaxDecimal Defines maximum number of decimal digits used in expression typing.
Century Break Defines how to interpret two-digit years in dates.
DateForm Defines whether Integer Date (0) or ANSI Date (1) is used for a session.
System TimeZone Hour Defines the System Time Zone Hour offset from Universal Coordinated Time (UTC).
System TimeZone Minute Defines the System Time Zone Minute offset from UTC.
RollbackRSTransaction Used when a subscriber-replicated transaction and a user transaction are involved in a deadlock.
RSDeadLockInterval Used to check for deadlocks between subscriber-replicated transactions and user transactions.
RoundHalfwayMagUp Indicates how rounding should be performed when computing values of DECIMAL types.
Default Date Format Allows you to specify a system default date format instead of using the default format YY/MM/DD.
Target Level Emulation Allows a test engineer to set the costing parameters considered by the optimizer for the system.
Export Width Table ID Controls the export width of a character in bytes.
EnableStepText TRUE - dispatcher step text includes names and costs. FALSE - no name and cost information will be available. The default value is FALSE.

EnableDBQM TRUE - validation of all SQL through the DBQM rules will be enforced. FALSE - no validation through DBQM will be done. The default value is FALSE.

Single Sign On Indicates whether Single Sign-Ons are enabled. Valid values are 0, 1 and 2. The default value is 0.
IdColBatchSize Indicates the size of the pool of numbers reserved for generating numbers for a batch of rows to be bulk-inserted into a table with an identity column. Valid range of values is 1-10000. Default is 1000.
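The fields in this table are inspected and changed with the DBS Control utility. A minimal sketch, assuming the field numbering shown on the facing page (the new value is illustrative):

```
start dbscontrol           (from the Supervisor window)
display general            (show the current General group settings)
modify general 3 = 300     (set field 3, DeadLockTimeout, to 300 seconds)
write                      (save the change to the DBS Control Record)
quit
```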
Teradata Utilities 7-15
General Group Parameters
1. Version = 4
2. SysInit = FALSE
3. DeadLockTimeOut = 240 (seconds)
5. HashFuncDBC = 5 (Universal)
8. SessionMode = 0 (Teradata)
9. LockLogger = FALSE
10. RollbackPriority = FALSE
11. MaxLoadTasks = 5
12. RollForwardLock = FALSE
13. MaxDecimal = 15
14. Century Break = 0
15. DateForm = 0 (IntegerDate)
16. System TimeZone Hour = 0
17. System TimeZone Minute = 0
18. RollbackRSTransaction = FALSE
19. RSDeadLockInterval = 0 (240)
20. RoundHalfwayMagUp = FALSE
22. Target Level Emulation = FALSE
23. Export Width Table ID = 0 (Expected Defaults)
24. EnableStepText = TRUE
25. EnableDBQM = TRUE
26. Single Sign On = 0 (On)
27. Idcol Batch Size = 1000 (Expected Defaults)
General Fieldswith defaults:
7-16 Teradata Utilities
File System Group Parameters

The tunable parameters in the File System Group include:
Field What it is used for
FreeSpacePercent Default percentage of free space to leave on disk cylinders during data load operations. Range of values is 0 - 75 (percent). Default value is 0. Note: Does not override value specified for a table on CREATE or ALTER TABLE request.
MiniCylPackLowCylProd Threshold at which system will perform MiniCylPacks. Default value is 10 (free cylinders).
PermDBSize Default maximum data block size for permanent tables. Range of values depends on cylinder size. Default value is 63 sectors (255 sectors in V2R4.1). Note: Effective only if not specified on a CREATE or ALTER TABLE request.
JournalDBSize Determines maximum size of Transient Journal and Permanent Journal Table data blocks. Range of values depends on cylinder size. Default value is 12 sectors.
DefragLowCylProd Threshold for system to perform “Cylinder defragmentation” operation. 0 disables “Cylinder defragmentation”. Default value is 100 (free cylinders).
PermDBAllocUnit Determines incremental allocation unit of data block up to maximum data block size. Maximum data block size is 127.5 KB. Range of values is 1-63 (sectors). Default value is 1 (sector). Note: Does not affect Transient Journal (TJ) data blocks or spool tables.
WriteDBsToDisk Forces writing data blocks directly to disk rather than committing to a backup Node. Write immediately to disk: TRUE/FALSE. Default value is FALSE. Note: Does not affect Cylinder Indexes, spool tables, or the Transient Journal.
Cylinders Saved for PERM Defines the number of cylinders to be saved for perm data only (cannot be used for spool). Range of values is 1-100 (cylinders). Default is 10 (cylinders).
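As the notes indicate, FreeSpacePercent and PermDBSize are only system defaults; both can be overridden per table. A sketch in Teradata SQL, with illustrative table and column names (32256 bytes = 63 sectors):

```sql
-- Override the system defaults for one table:
-- leave 15% free space per cylinder and cap data blocks at 63 sectors.
CREATE TABLE Payroll.Emp_History,
    FREESPACE = 15 PERCENT,
    DATABLOCKSIZE = 32256 BYTES
  ( Emp_Num  INTEGER NOT NULL,
    Pay_Date DATE )
UNIQUE PRIMARY INDEX (Emp_Num, Pay_Date);
```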
Teradata Utilities 7-17
File System Group Parameters
File System Fields:
1. FreeSpacePercent = 0%
2. MiniCylPackLowCylProd = 10 (free cylinders)
3. PermDBSize = 63 (sectors)
4. JournalDBSize = 8 (sectors)
5. DefragLowCylProd = 100 (free cylinders)
6. PermDBAllocUnit = 1 (sectors)
7. WriteDBsToDisk = FALSE
8. Cylinders Saved for PERM = 10 (cylinders)
7-18 Teradata Utilities
Performance Group Parameters

The following table describes the tunable parameters in the Performance Group:
Field What it is used for
DictionaryCacheSize Defines the size of the dictionary cache for each PE on the system. Range of values is 64 - 1024 KB.
DBSCacheCtrl Enables or disables Cache Control features associated with DBSCacheThr parameter. FALSE causes old caching rules to be used.
DBSCacheThr Specifies percentage of free memory to use for caching data tables. Range of values is 0 - 100 percent.
MaxParseTreeSegs Defines the maximum number of 64 KB tree segments the parser can allocate to parse a request. Range of values is 12 - 1000 (64 KB/segment).
ReadAhead Enables or disables Read-Ahead Sequential File operation. ReadAhead on: TRUE/FALSE. Note: Not as effective for very large row sizes.
StepsSegmentSize Defines the maximum size of the plastic steps segment. Range of values is 64 - 1024 KB.
RedistBufSize Determines the size of row redistribution buffers. Range of values is 1 - 32 KB.
DisableSyncScan Enables or disables synchronized full file scans. TRUE disables sync scan.
SyncScanCacheThr Specifies percentage of free memory for synchronized full file scans. Range of values is 0 - 100 percent.
HTMemAlloc Sizes the hash table (used for Hash Join) by defining the percentage of memory allocated to it. Valid range is 0 - 10 percent.
SkewAllowance Permits partition sizing by making the size of each partition smaller than the hash table.
Read Ahead Count Specifies the number of data blocks that will be preloaded in advance of the current file position while performing sequential scans.
PPICacheThrP Specifies % value to be used to calculate cache threshold used in operations dealing with multiple partitions. Valid range is 0 to 500. The default value is 10.
Teradata Utilities 7-19
Performance Group Parameters
Performance Fields and defaults:
1. DictionaryCacheSize = 128 (kilobytes)
2. DBSCacheCtrl = TRUE
3. DBSCacheThr = 10%
4. MaxParseTreeSegs = 32
5. ReadAhead = TRUE
6. StepsSegmentSize = 1024 (kilobytes)
7. RedistBufSize = 4 (kilobytes)
8. DisableSyncScan = FALSE
9. SyncScanCacheThr = 10%
10. HTMemAlloc = 0%
11. SkewAllowance = 75%
12. ReadAhead Count = 1
13. PPICacheThrP = 10
7-20 Teradata Utilities
Query Configuration Utility

The Query Configuration utility reports on the current database configuration. To start the utility, enter start qryconfig in the Supervisor window.
Configuration Options You can customize the amount of information displayed in this report, ranging from information for a complete configuration to just a part of it. The following list describes available display options:
All Displays all components in the configuration and their status. This option may not be the most desirable if the system is very large or if the information desired is for a specific device type.
Processors Displays the status of all processors.
AMPs Displays the status of all AMP vprocs.
PEs Displays the status of all PE vprocs.
The Online and Offline options further qualify the above options.
Query Configuration Output

The report on the facing page illustrates the output of the qryconfig command. Type Help or a question mark (?) to display the list of options once the Query Configuration utility is running.
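A session combining these options might look like the sketch below; the exact option keywords and prompts vary by release, so treat the command lines as an assumption based on the option list above:

```
start qryconfig        (in the Supervisor window; opens an application window)
?                      (in the qryconfig window: list the available options)
amps online            (show the status of online AMP vprocs - assumed syntax)
```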
Changing Configuration

When you need to change configuration on a UNIX system, such as mapping virtual elements (AMPs and PEs) to physical elements (e.g., disks), use the pdeconfig utility.
Teradata Utilities 7-21
Query Configuration Utility
qryconfig reports on current database configuration.
Note: To change configuration on a UNIX system, use pdeconfig.
7-22 Teradata Utilities
Get Config

Starting Get Config

To start the Get Config utility, enter get config (in upper or lowercase) in the command-input area of the Supervisor window. The results display in the output display area of the Supervisor window.
The report on the facing page displays the result of a get config command. The report shows the following columns:
Node Id The ID number of the Node associated with each vproc
Node State Online - a participating processor; Offline - a non-participating processor
Clique # Clique # of the node
# CPUs Number of processors in that node
Memory (MB) Amount of memory assigned to that node
# Channels Number of channel connections to the node
# LANs Number of LAN connections to the node
Node Name Host name assigned to the node
Stopping Get Config

The Get Config utility terminates on its own.

Help

Enter Help to display an option list.
Get Config is not available from HUTCNS.
Teradata Utilities 7-23
Get Config
7-24 Teradata Utilities
Reconfig Utility

After Config builds a new configuration map, Reconfig redefines the system configuration according to the new map. Reconfig copies the new configuration map to the current configuration map.

Typically, you use Reconfig to alter the number of AMPs in the database. The new configuration map includes the status of each AMP in the system. AMP status in a new configuration map is shown on the facing page.
As Reconfig runs, it performs these functions:
• Checks the RDBMS status to ensure that reconfiguration is possible.
− Disk storage capacity is checked to ensure that the system has sufficient storage to accommodate the redistributed data in the event of a delete AMP reconfiguration. Reconfiguration terminates if the system does not have sufficient storage capacity.
− After system status is verified, new hash bucket arrays are calculated based on current and new configuration maps.
• Table Redistribution (including stored procedures)—Redistributes primary and fallback data. Unique secondary index subtables, if any, are redistributed also.
• Deletes rows that were redistributed elsewhere from AMPs on which they formerly resided. Non-unique secondary indexes, if any, are rebuilt.
• Updates space accounting information, hash bucket arrays, and configuration maps.
To start Reconfig, open the Supervisor Window from the Database Window and type ‘start reconfig.’ To exit, enter ‘stop’ and confirm with Y.
When Reconfig reaches the Table Redistribution phase (the irreversible phase) and begins to change data in the current configuration, you must run the operation to completion.
If you interrupt the operation after the irreversible phase, Reconfig restarts automatically to complete the remaining phases when the system is restarted.
You can specify whether or not to pause before Reconfig enters the irreversible phase using the WITH PAUSE command.
Teradata Utilities 7-25
Reconfig Utility
– Use Reconfig to alter the number of AMPs in an RDBMS.
– Use the Config utility to first create a new configuration map.
– The configuration map includes the status of each AMP in a system:
  • Add - AMP is new and is to be added to the current configuration.
  • ChgClust - AMP's cluster assignment was modified.
  • Delete - AMP is to be deleted from the current configuration.
  • Online - AMP has not been modified.
– Disk capacity is checked to ensure that there is sufficient storage for redistributed data in a delete AMP reconfiguration. Reconfig terminates if the system does not have sufficient capacity.
– You can specify whether or not to pause before Reconfig enters the Table Redistribution or “irreversible phase.”
– During the reconfiguration process, the utility updates:
• Space accounting information
• Hash bucket arrays
• Configuration maps
7-26 Teradata Utilities
Vproc Manager Utility

The Vproc Manager utility is used to perform the following functions:
• Obtain the status of all or some of the vprocs
• Change vproc states
• Initialize and boot a specific vproc
• Initialize the vdisk associated with a specific vproc.
• Force a Teradata Database restart
Vproc Manager is also used with Table Rebuild to initialize and boot a specific vproc when it becomes necessary to rebuild all tables on the vdisk associated with the vproc.
Vproc Manager utility commands:

• HELP – provides help for using the Vproc Manager utility program.
• STATUS returns the following:
− RDBMS and PDE status tables in their entirety.
− Vproc status table row for the specified vproc(s).
− Current system RestartKind.
− Current RDBMS and PDE system state.
− RDBMS vprocs with a VprocState of ONLINE and a ConfigStatus of Online. In addition, it returns all PDE nodes that are ONLINE.
− RDBMS vprocs with a VprocState of not ONLINE and a ConfigStatus of not online. In addition, it returns all PDE nodes that are not ONLINE.
− Various information about the Database logical configuration.
− Various information about the PDE physical configuration
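The command names below follow the functions described above; the vproc number and state value are illustrative, and exact syntax may vary by release:

```
start vprocmanager      (Supervisor window)
status                  (RDBMS/PDE status tables and system state)
status amp 2            (status row for one vproc; number illustrative)
set 2 = offline         (change a vproc state; illustrative)
restart tpa             (force a Teradata Database restart)
quit
```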
Teradata Utilities 7-27
Vproc Manager Utility
Using Vproc Manager from a remote xterm window:
7-28 Teradata Utilities
Ferret Utility

To maintain data integrity, the Ferret utility (File Reconfiguration Tool) enables you to display and set various disk space utilization attributes associated with the Teradata Database.
When you select the Ferret utility attributes and functions, it dynamically reconfigures the data on the disks to correspond with the selections.
Depending on the functions, Ferret can operate at the vproc, table, subtable, disk, or cylinder level.
Start Ferret from the DBW connected to the Teradata Database. Note that the Teradata database must be in the Logons Enabled state.
The commands within the Ferret utility that we will discuss in this module include:
• SHOWSPACE
• PACKDISK
• SCOPE
• SHOWBLOCKS
Starting Ferret

To start the Ferret utility, enter the following command in the Supervisor screen of the DBW:
start ferret
You will be placed in the interactive partition where the Ferret utility was started.
Ferret Priority

You can use the PRIORITY command to set the priority of the Ferret process. This is most commonly used with SCANDISK and PACKDISK. The command is:
SET PRIORITY = <priority class>
The values for the priority class are:
• number between 0 and 7
• L (equivalent to 2)
• M (equivalent to 3)
• H (equivalent to 4)
• R (equivalent to 6)
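For example, to run a long SCANDISK or PACKDISK at a low priority, either of the equivalent forms below could be used (per the mappings above):

```
set priority = l      (letter form; equivalent to class 2)
set priority = 2      (numeric form)
```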
Teradata Utilities 7-29
Ferret Utility
7-30 Teradata Utilities
Ferret => SHOWSPACE Command

The SHOWSPACE command reports the amount of disk cylinder space currently in use and the amount of cylinder space that remains available. Use SHOWSPACE to determine if disk compaction or system expansion is required.
SHOWSPACE is a command you execute from within the Ferret utility. To start the utility, enter start Ferret in the Supervisor window. Within the Ferret application window, enter showspace. The Showspace command reports on physical disk utilization, reported as:
• Permanent space
• Spool space
• Lost disk space from disk flaws
• Free disk space
The facing page shows the results of a SHOWSPACE command. Notice the command displays the average utilization per cylinder for permanent space, spool space, etc. It displays the percentage of total available cylinders as well as the number of cylinders for all types of space.
The full report format (showspace /L) displays information separately for each of the pdisks used by an AMP vproc, as well as total space utilization for the vproc.
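The SHOWSPACE variants described above can be summarized as:

```
start ferret          (from the Supervisor window of the DBW)
showspace             (cylinder utilization report)
showspace /s          (summary: subtotals for all AMP vprocs)
showspace /L          (full report: per-pdisk detail for each AMP vproc)
```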
Teradata Utilities 7-31
Ferret => SHOWSPACE Command
– Reports amount of disk cylinder space in use and amount available.
– Use to determine if disk compaction or additional capacity is needed.
– To execute:
• Start Ferret from DBW.
• Enter showspace.
• Use /s for a summary report.
• Use /L for the full report.
7-32 Teradata Utilities
Ferret => SHOWSPACE Summary Report

Enter an S for a summary report that displays only subtotals for all AMP vprocs in the system. The facing page shows an example of a Showspace summary report. The utility displays the following request to determine if the report is for a single AMP or for all AMPs:
Enter a processor number, ALL for all processors, or Q to quit.
This report contains the following information:
Proc Num: AMP vproc number.

DSU: Disk Storage Unit number (this field is blank).

Total Avail Cyls: Number of cylinders available to the users. Does not include cylinders reserved for system use.

Perm Data Cyls
  Av % Util Per Cyl: Average percent utilization of these cylinders for permanent data.
  % of Total Avail Cyls: Percentage of the total number of available cylinders that these permanent data cylinders represent.
  #Cyls: Total number of data cylinders that contain one or more blocks of permanent data.

Spool Cyls
  Av % Util Per Cyl: Average percent utilization of these cylinders for spool tables.
  % of Total Avail Cyls: Percentage of the total number of available cylinders that these cylinders represent.
  #Cyls: Total number of cylinders allocated for spool tables.

Temp Cyls
  Av % Util Per Cyl: Average percent utilization of these cylinders for temporary data.
  % of Total Avail Cyls: Percentage of the total number of available cylinders that these cylinders represent.
  #Cyls: Total number of cylinders allocated for temporary data.

Journal Cyls
  Av % Util Per Cyl: Average percent utilization of these cylinders for journal tables.
  % of Total Avail Cyls: Percentage of the total number of available cylinders that these journal table cylinders represent.
  #Cyls: Total number of cylinders allocated for journal tables.

Bad Cyls
  % of Total Avail Cyls: Not used in a disk array configuration.
  #Cyls: Not used in a disk array configuration.

Free Cyls
  % of Total Avail Cyls: Percentage of the total number of available cylinders that these cylinders represent.
  #Cyls: Total number of free cylinders.
Teradata Utilities 7-33
Ferret => SHOWSPACE Summary Report

showspace /s
7-34 Teradata Utilities
Ferret => PACKDISK Command
The PACKDISK command reconfigures the contents of a disk, leaving a percentage of free space for cylinders within a scope defined by the SCOPE command (see below). PACKDISK uses the default Free Space Percent or a new percentage specified as part of the command to pack the entire disk or a single table.
Because of the method of packing, some cylinders may be fragmented. If this is the case, the DEFRAGMENT command may be used to defragment the cylinder. The allowable scope for PACKDISK is vprocs or tables, but not both.
The system will automatically pack mini-cylinders when the number of cylinders falls below a certain internal threshold value. The PACKDISK command can be used to force this situation.
Starting PACKDISK PACKDISK is a command within the Ferret utility. To start PACKDISK, enter packdisk fsp = nnn (where fsp = free space percent and nnn equals the percentage of cylinder free space) in the command window of the Ferret partition. Key the command in uppercase, lowercase or a combination of both. Note the interactive area where the utility has been started.
Other PACKDISK Commands

• To terminate the PACKDISK command, enter ABORT.
• To determine the progress of a PACKDISK command, type the word INQUIRE.
SCOPE Command

The SCOPE command defines the class of tables, the range of tables, the vprocs, and the cylinders that the user inputs as parameters for use with other Ferret commands.
After defining the SCOPE, you can reconfigure the disk using the PACKDISK command.
The following screen shows an example of entering a scope followed by a PACKDISK command.
Note: The SCOPE command requires internal Table IDs when selecting tables. Use the Ferret TableID command to display an internal TableID number. Examples include:
• 1024 is a primary subtable
• 2048 is a fallback subtable
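A sketch of a SCOPE followed by PACKDISK, using a placeholder TableID (internal IDs are required when selecting tables, per the note above):

```
scope table <tableid>     (limit subsequent commands to one table)
packdisk fsp = 25         (pack, leaving 25% free space per cylinder)
inquire                   (report PACKDISK progress)
abort                     (terminate PACKDISK early if necessary)
```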
Teradata Utilities 7-35
Ferret => PACKDISK
7-36 Teradata Utilities
Ferret => SHOWBLOCKS

The Ferret utility includes a SHOWBLOCKS command that displays the data block size and/or the number of rows per data block for a defined scope.
Teradata Utilities 7-37
Ferret => SHOWBLOCKS
Option Gives statistics about:
/S The Primary Data Tables defined by the SCOPE command.
/M All subtables defined by the SCOPE command.
/L Minimum, average, and maximum number of rows per data block size for all subtables.
7-38 Teradata Utilities
Ferret => SCANDISK Command

The SCANDISK command helps you determine if there is a problem with the AMP file system and assess its extent. SCANDISK is a diagnostic tool designed to check for inconsistencies between key file system data structures such as the master index, cylinder index, and data blocks.
As an administrator, you can perform this procedure as preventative maintenance to validate the file system, as part of other maintenance procedures, or when users report file system problems.
Execute the SCANDISK command in the Ferret utility while the system is operational.
The SCANDISK command:
• Verifies data block content matches the data descriptor.
• Checks that all sectors are allocated to one and only one of the following: the Bad sector list, the Free sector list, or a data block.
• Ensures that continuation bits are flagged correctly.
If SCANDISK discovers a problem with a disk, you must use the Table Rebuild utility to rebuild any tables it reports as having bad data for the particular AMP. (The Table Rebuild utility is discussed later in this lesson.) The output of the SCANDISK command is displayed on the screen directly after the command completes.
• To avoid potential tpa resets, run SCANDISK prior to running the CheckTable utility and to rebuilding tables with Table Rebuild.
Starting SCANDISK

Enter start ferret and, from within the Ferret utility window, enter the command SCANDISK. The SCANDISK command may be limited by the SCOPE command to scan only one table, a range of tables, or the whole AMP. You can use the PRIORITY command to specify a priority class.
Note: Ferret is case-insensitive.
Stopping SCANDISK

SCANDISK terminates itself after performing the scan.
For more information about the Ferret utility, see the Teradata RDBMS Utilities manual.
The INQUIRE command is used to check the status of SCANDISK and reports progress as a percentage of total time to completion and a list of errors that occurred since the last INQUIRE command.
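A typical preventative-maintenance scan, with an illustrative SCOPE and priority (exact SCOPE arguments vary; see the SCOPE command description earlier in this module):

```
start ferret            (Supervisor window)
scope vproc <n>         (illustrative: limit the scan to one AMP)
set priority = l        (run at low priority to limit impact on users)
scandisk                (scan; terminates itself when complete)
inquire                 (progress percentage and errors since last INQUIRE)
```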
Teradata Utilities 7-39
Ferret => SCANDISK Command
Identifies and determines the extent of any problems with AMP file system.
7-40 Teradata Utilities
CheckTable Utility

CheckTable is a diagnostic tool designed to check for inconsistencies in internal data structures such as table headers, row identifiers, and secondary indexes. CheckTable can help determine if there is corruption in your system. It allows checking of up to 25 tables in parallel.
Use the CheckTable utility as both a diagnostic and validation tool. As a diagnostic tool, you can identify problems with data integrity. As a validation tool, you can verify data integrity prior to a reconfiguration or archive. CheckTable only identifies inconsistencies; it does not correct them.
Always run SCANDISK before you run CheckTable. CheckTable assumes the underlying structure of the file system is intact. If there are structural errors, CheckTable could cause a tpa reset on the database.
The estimated run time for a CheckTable varies depending on the characteristics of the data. The more non-unique secondary indexes defined on the tables, the longer it takes to run CheckTable. If you invoke CheckTable when users are logged on, the time it takes to process the CheckTable will depend on the activity on the system and the amount of resource contentions that it encounters (for example, object locks). For this reason, it is recommended that you run CheckTable when the system is quiescent. Use PRIORITY and SKIPLOCKS options to minimize impact on non-quiescent systems.
The CHECK command supports wildcard syntax including % and ? and new special characters in table or database names. Pendingop-level checking checks field four of a table header and warns you if the table is pending any of the following activities: FastLoad, Restore, Reconfig, Rebuild, Replicate copy, or MultiLoad.
Starting CheckTable

To start the utility, enter start CheckTable in supervisor mode.
Stopping CheckTable

To stop CheckTable, enter QUIT;
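A sample CHECK command assembled from the options discussed in this section (option spellings may differ slightly by release):

```
start checktable                          (Supervisor window)
check all tables at level two
      skiplocks priority = l
      with no error limit in parallel;    (commands end with a semicolon)
quit;
```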
For more information about the CheckTable utility, see the Teradata RDBMS Utilities manual.
Teradata Utilities 7-41
CheckTable Utility
CHECK command syntax (simplified):

CHECK { dbname [, dbname ...]
      | dbname.tablename [, dbname.tablename ...]
      | ALL TABLES }
      [ EXCLUDE dbname [, dbname ...] | EXCLUDE dbname.tablename [, ...] ]
      [ AT LEVEL { ONE | TWO | THREE | PENDINGOP } ]
      [ { DATA | UNIQUE INDEXES | NONUNIQUE INDEXES | REFERENCE INDEXES
        | INDEX ID = nnn | REFERENCE ID = nnn } [ ONLY | BUT NOT ... ] ]
      [ WITH { ERROR LIMIT = nnn | NO ERROR LIMIT } ]
      [ SKIPLOCKS ] [ IN { SERIAL | PARALLEL } ] [ PRIORITY = pgname ] ;
7-42 Teradata Utilities
Running CheckTable

Checking Levels
The CheckTable utility provides three different levels of checking. Each level is a superset of the lower levels and runs all previous level checks.
Checking Level: Internal Data Structures Checked

Level-one checking:
• Data dictionary (if database DBC is checked)
• Table dictionary
• Table header
• Obsolete subtables
• Unique secondary indexes
• Nonunique secondary indexes
• ParentCount and ChildCount
• AMPs with reference indexes
• Reference indexes on target table and "buddy" table
• Subtables of a given table

Level-two checking:
• Data subtables
• Unique secondary indexes
• Nonunique secondary indexes
• Reference indexes

Level-three checking:
• Data subtables
• Unique secondary indexes
• Nonunique secondary indexes
• Reference indexes
Teradata Recommendations Teradata recommends that you perform the following maintenance routine once a month:
1. Run a SCANDISK diagnostic for all vdisks. SCANDISK performs intra-disk integrity checks by determining that the underlying file system is intact. Users may want their field support representative to start this task.
2. Run CheckTable at Level 2. The CheckTable utility completes the diagnostic analysis with inter-disk integrity checks, according to the rules of the database system.
Teradata Utilities 7-43
Running CheckTable
CheckTable provides three levels of checking:
Level 1: Checks specified system data structures, the data subtables, and unique and non-unique secondary indexes. Use only to isolate specific tables with errors, then perform a detailed check with level 2 or 3.

Level 2: Determines whether row IDs on any given subtable are consistent with row IDs on other subtables, by comparing lists of IDs in those objects. Also compares the checksum of primary and fallback rows. Recommended when checks by level 1 fail. Also verifies that hash codes reflect correct row distribution in each subtable.

Level 3: Provides the most detailed check and requires more system resources than other levels. Because of the cost in resources, use this checking level rarely and only for very specific diagnostic purposes.

If an AMP is unavailable and the table is no-fallback, all USI checks are bypassed. If an AMP is unavailable and the table is fallback, the fallback copies of index and data subtables (on the unavailable AMP) are used in place of the primary copies on the unavailable AMP.
Use the following function keys with CheckTable:

F2 – Displays current status.
F3 – Aborts the current table check and continues with the next.
F4 – Aborts execution of the CheckTable command.
NCR recommends that you schedule the following maintenance routine once per month:

1. A SCANDISK diagnostic run for all vdisks. This function performs intra-disk integrity checks. Your field support rep can start this diagnostic tool.

2. A CheckTable run at level 2. CheckTable completes the diagnostic analysis with inter-disk integrity checks.
7-44 Teradata Utilities
Table Rebuild

Table Rebuild is a utility that repairs data corruption. It does so by rebuilding tables on a specific AMP based on data located on the other AMPs in the fallback cluster.
Table Rebuild can rebuild data in the following subsets:
• The primary or fallback portion of a table
• An entire table (both primary and fallback portions)
• All tables in a database
• All tables that reside on an AMP
Note: Table Rebuild also handles stored procedures, which are stored internally as ordinary tables.
Table Rebuild performs differently when the table you are rebuilding is fallback or non-fallback.
Type of Table: Action

Fallback tables:
1. Delete the table header (a one-row table which defines a user data table).
2. Delete the specified portion of the table being rebuilt.
3. Rebuild the table header.
4. Rebuild the specified portion of the table.

Non-fallback tables and permanent journal tables:
1. Delete the table header.
2. Delete data for the table being rebuilt.
3. Rebuild the table header.
4. Lock the table ("pending rebuild").

Note: You must restore non-fallback data.
Note: Table Rebuild should be run by NCR Customer Support staff.
For more information about the Table Rebuild utility, see the Teradata RDBMS Utilities manual.
Teradata Utilities 7-45
Table Rebuild
Note: Table Rebuild should be run by NCR Customer Support staff.
7-46 Teradata Utilities
Recovery Manager

The Recovery Manager utility lets you monitor the progress of a Teradata Database recovery session. Recovery Manager only runs when the system is in one of the following states:
• Logon
• Logon/Quiet
• Logoff
• Logoff/Quiet
• Startup (if the system has completed voting for transaction recovery)
If the system is not in one of the above states, Recovery Manager will terminate immediately after you start it.
Starting RcvManager

To start Recovery Manager, enter start rcvmanager in the Supervisor screen of the Database Window. This command must be entered in lowercase.
Stopping RcvManager

You must use the RcvManager quit; command to stop the program. You cannot stop it with the Supervisor STOP command.
Note: All RcvManager commands end with a semicolon (;).
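A short RcvManager session using the commands covered on the following pages (every command ends with a semicolon):

```
start rcvmanager          (Supervisor window; lowercase)
list status;              (transaction recovery and down-AMP reports)
list locks;               (locks held by online transaction recovery)
quit;                     (the only way to stop the program)
```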
For more information about the Recovery Manager utility, see the Teradata RDBMS Utilities Reference manual.
Teradata Utilities 7-47
Recovery Manager
– Monitors the backing out of incomplete transactions.
– Shows the count of rows presently in the Down AMP Recovery Journal, which represents data rows an AMP must recover from the other AMPs in the cluster.
– Runs only when the system is in one of these states:
  • Logon
  • Logon/Quiet
  • Logoff
  • Logoff/Quiet
  • Startup
– Starting Recovery Manager: enter start rcvmanager in the Supervisor interactive area.
– Stopping Recovery Manager: enter quit; in the RcvManager window.
7-48 Teradata Utilities
Recovery Manager LIST STATUS Command

The LIST STATUS command displays information about online transaction recovery and offline AMP recovery. It generates two reports.
Online Transaction Recovery Journal

The Online Transaction Recovery Journal displays a list of all active recovery sessions and the maximum number of transaction rows remaining to be processed on the AMP with the maximum count. Online Transaction Recovery Journal counts are updated each time a checkpoint is taken. After a checkpoint, the count decreases by 1000. Issuing the LIST STATUS command again should show this count decreasing. If there are no recovery sessions active, the report displays the titles with no data.
Down AMP Recovery Status

This report pertains to offline AMP recovery and displays an entry for each offline AMP. It indicates when an AMP is ready to be brought online by showing an asterisk (*) next to that AMP when its recovery rows fall below a certain threshold: fewer than 3000 Changed Row Journal rows, zero Ordered System Changed Journal rows, and zero Transient Journal rows.
ONLINE TRANSACTION RECOVERY JOURNAL
Recovery Session ID of the active recovery session
Count Maximum number of transient journal rows remaining to be processed for a specific AMP
AMP w/Count The AMP to which the corresponding count applies
DOWN AMP RECOVERY STATUS
AMP to be caught up
Designates which AMP needs to be recovered. Asterisk (*) indicates the AMP would be brought into online catchup if a restart were to occur.
Recovery Action Recovery mode of the recovering AMP:
NOT IN RECOVERY - AMP hasn't come into recovery yet
OFFLINE CATCHUP - AMP is not part of the configuration
ONLINE CATCHUP - AMP should join the configuration on next Restart. (This is indicated by an asterisk (*) next to the AMP in the report.)
CJ Count Changed Row Journal Count - The number of rows that were updated in the cluster while an AMP was down.
OJ Count Ordered System Change Journal Count – The number of system or table level changes done in the cluster while an AMP was down.
Teradata Utilities 7-49
Recovery Manager LIST STATUS Command
7-50 Teradata Utilities
Recovery Manager LIST LOCKS Command

The LIST LOCKS command displays all locks currently held by online transaction recovery. The report displays the following information:
Lock Mode Mode of the lock held:
- Write
- Exclusive
Lock Object The object type locked:
- Database
- Table
- Row Range
- Row Hash
Object Name The name of the object
The report is sorted alphabetically by object name. The report does not display information for row range and row hash locks, but does display the table that the row is in. If Recovery Manager is unable to determine the database name associated with an object, then it displays the database ID in decimal and hexadecimal. The same is true if the table name cannot be determined.
Teradata Utilities 7-51
Recovery Manager LIST LOCKS Command
7-52 Teradata Utilities
Recovery Manager PRIORITY Command

The PRIORITY command enables you to specify priorities for:
• Table rebuild operations
• System recovery operations
Both operations are independent of each other. For either operation, if you do not explicitly set a recovery priority, the system uses the default priority. If you do not enter a new priority, the current priority setting displays. The system saves the priority settings for both operations in the Recovery Status system table.
The facing page shows the syntax for the PRIORITY command.
Priority command: Description (Default)

REBUILD PRIORITY: Enables you to set the table rebuild priority to low, medium, or high. Default: Medium.

RECOVERY PRIORITY: Enables you to set the system recovery priority to low, medium, or high. Default: Low.

DEFAULT PRIORITY: Sets both priorities back to the default (REBUILD is set to medium, and RECOVERY is set to low). Default: N/A.
Note: The REBUILD PRIORITY command applies to any Table Rebuild started from the console, automatic table rebuild due to disk error recovery and MLOAD rebuild of target tables for non-participant online AMPs.
Recovery Priority The RECOVERY PRIORITY command enables you to set a priority for the system recovery operation.
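For example (the old-priority values echoed back will vary):

```
rebuild priority high;      (set the Table Rebuild priority)
recovery priority medium;   (set the system recovery priority)
default priority;           (reset: REBUILD=MEDIUM, RECOVERY=LOW)
```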
Teradata Utilities 7-53
Recovery Manager Priority Command
Default Priority
Sets REBUILD PRIORITY to MEDIUM, and RECOVERY PRIORITY to LOW.

When you enter the DEFAULT PRIORITY command, the system displays the following messages:

YY/MM/DD HH:MM:SS RECOVERY priority changed to LOW; it was <old priority>
YY/MM/DD HH:MM:SS REBUILD priority changed to MEDIUM; it was <old priority>
REBUILD PRIORITY HIGH | MEDIUM | LOW
RECOVERY PRIORITY HIGH | MEDIUM | LOW
7-54 Teradata Utilities
Abort_Rollback

Recovery Manager provides a mechanism to cancel or skip the rollback of specified tables during a Teradata system restart or an aborted online transaction. When the CANCEL ROLLBACK ON TABLE command is executed for a table, the Teradata Database marks the related table header invalid. Only the rollback pertaining to the specified table in the transaction is cancelled; rollback processing for the rest of the transaction is not affected.
Cancelling the rollback of long-running transactions improves the availability of database resources and reduces the system startup time after a crash.
Use the CANCEL ROLLBACK ON TABLE command when one of the following occurs:
• The rollback of a table is likely to take longer than its restoration.
• The table, such as a temporary table, is unimportant.
List Cancel Rollback Tables
The LIST CANCEL ROLLBACK TABLES command displays a report containing the table-id, database name, and table name of the tables whose rollback processing is cancelled during an online, user-requested abort or during Teradata Database system recovery.
Note: The LIST ROLLBACK TABLES command report does not include invalid tables, that is, tables on which rollback is being cancelled. Therefore, if all the tables in a session have been specified for rollback cancellation, they appear only in the output of the LIST CANCEL ROLLBACK TABLES command. If no tables on which rollback is cancelled exist, then only the column headings are displayed.
Rollback Session…Performance Group
The ROLLBACK SESSION...PERFORMANCE GROUP command displays or sets the Performance Group of rollbacks for a specified session.
The host-id and the session-id specified with the ROLLBACK SESSION...PERFORMANCE GROUP command must be in the rollback tables list generated by the LIST ROLLBACK TABLES command from RcvManager. This indicates that rollback is in progress in that session. You can either display or change the Performance Group.
If you specify a host-id or a session-id that does not exist in the rollback list, RcvManager displays a message, and the command is ignored.
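A sketch of the commands above, using an illustrative database and table name:

```
list rollback tables;                  (identify sessions and tables in rollback)
cancel rollback on table DB1.Big_T;    (marks the table header invalid)
list cancel rollback tables;           (confirm the cancellation)
```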
Teradata Utilities 7-55
Abort_Rollback
Use the CANCEL ROLLBACK ON TABLE command with caution because the target table becomes invalid and unusable after executing this command. Also, NCR highly recommends that you perform a DELETE ALL operation on the table after canceling rollback on it.
The typical process for canceling rollback on a table is as follows:
1. The rollback is taking too long.
2. You identify a large table(s) that can be restored faster than the rollback will take.
3. You perform a LIST ROLLBACK TABLES to generate a list of rollback tables.
4. You perform a CANCEL ROLLBACK ON TABLE.
5. You perform a DELETE ALL and restore the table(s).
7-56 Teradata Utilities
Showlocks Utility Report

The Showlocks utility displays information about host utility locks the ARC utility places on databases and tables during backup or restore activities. This utility also displays information about locks placed during a system failure.
Host utility locks may interfere with application processing and are normally released after the utility process is complete.
If locks interfere with application processing, you can remove them by invoking the RELEASE LOCK statement. An individual session may be in a "blocked" state due to one of the following situations:
• An ARC operation failed and a database cannot be accessed.
• Locks were not implicitly or explicitly released after an ARC operation.
• A lock was not released by the user after a system failure occurred.
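To inspect and then clear a lingering host utility lock, you might combine Showlocks with an ARC RELEASE LOCK statement (the database name is illustrative, and exact ARC syntax varies by release):

```
start showlocks                  (Supervisor window: report HUT locks)

RELEASE LOCK (PayrollDB) ALL;    (from an ARC script, release the lock)
```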
Report Contents
The utility displays the following information for each utility lock:
• Database name that contains lock
• Table name that contains lock (if applicable)
• User name of user who placed the utility lock
• Lock mode (read, write, exclusive, access)
• Read = Dump
• Write = Roll
• Exclusive = Restore/Copy
• Access = Group read lock or Checkpoint
• ID of vproc (all AMPs when lock resides on all AMPs)
If an object has more than one utility lock, Showlocks provides information for the most restrictive lock placed on the object.
For more information about the Showlocks utility, see the Teradata RDBMS Utilities manual.
Teradata Utilities 7-57
Showlocks Utility Report
Displays information about host utility locks that ARC places on databases and tables.
7-58 Teradata Utilities
Abort Host Utility

The Abort Host utility aborts all outstanding transactions on behalf of a channel host that is no longer operating. The Transient Journal is used to roll back changes, spool files are released, and sessions are ended.
Starting the Abort Host Utility

To start the Abort Host utility, enter start aborthost in the Supervisor interactive area (enter the command in lowercase). Note the interactive area where the utility has been started.
The following screen shows that you enter the following command:
abort host nnn
In the above example, “nnn” is the host number. You can obtain the host number using Query Config.
For more information about the Abort Host utility, see the Teradata RDBMS Utilities manual.
Teradata Utilities 7-59
Abort Host Utility
Note: Obtain the Hostnumber using the QueryConfig utility.
Aborts outstanding transactions on behalf of a channel host that is no longer operating.
7-60 Teradata Utilities
Summary

The facing page summarizes some important concepts in this module.
Teradata Utilities 7-61
Summary
Teradata utilities are either host-based or AMP-based.
The host-based utilities run under the host operating system and support user activities such as loading, unloading, database administration, backup and restore of information in the database.
Examples include:
• BTEQ Export/Import
• FastLoad (FDL)
• FastExport
• TPump
• MultiLoad (MLOAD)
• Dump Unload/Load (DUL)
• Archive & Recovery

The AMP-based utilities run on the database and are accessed through the DBW, Teradata Remote Console, cnsterm, or HUTCNS.

Examples include:
• Query Session (Qrysessn)
• Configuration Display (Qryconfig)
• Get Config
• CheckTable
• Table Rebuild
• Recovery Manager
• Ferret utility: SHOWSPACE, SHOWBLOCKS, PACKDISK, PRIORITY, SCOPE, SCANDISK
• SHOWLOCKS
• Abort Host

Teradata Administrator is the Teradata Manager application that can be used to perform database administration tasks such as creating, modifying, or dropping users or databases, and granting or revoking access rights.
7-62 Teradata Utilities
Review Questions

Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Teradata Utilities 7-63
Review Questions
Indicate whether each statement is True (T) or False (F).
1. Most host-based utilities can run only on channel-attached systems.
2. You can initiate AMP-based utilities via the DBC Console (HUTCNS), Teradata Manager Remote Console, cnsterm, and Database Window (DBW).
3. You can only access Ferret through the DBW.
4. You can access Query Session through the DBC Console Interface (HUTCNS) from a VM terminal.
5. CheckTable features two levels of internal table checking.
6. Table Rebuild rebuilds tables differently depending on whether the table is Fallback, non-Fallback, or a Permanent Journal table.
7. You should run SCANDISK before running CheckTable.
8. How do you display the recovery status of a DOWN AMP?
7-64 Teradata Utilities
Lab 4

The lab for this module is in Appendix B.
Teradata Utilities 7-65
Lab 4
Please see Lab 4 in Appendix B
7-66 Teradata Utilities
References

For more information on the utilities discussed in this module, refer to:

• Teradata RDBMS Utilities (B035-1022-122A)

• Teradata RDBMS Database Window Reference (B035-1095-122A)
Meta Data Services 8-1
Module 8
Meta Data Services
After completing this module, you should be able to:
• Recognize MDS terminology.
• Identify Meta Data Services Utilities.
• Identify the MDS AIM and DIM models.
• Identify object relationships.
Metadata Services V5.0 8-2
Notes:
Meta Data Services 8-3
Table of Contents

What is Meta Data? ................................................... 8-4
MDS Features ......................................................... 8-6
MDS V5.0 - New Features .............................................. 8-8
Meta Data Services Architecture ..................................... 8-10
MDS V5.0 Application ................................................ 8-12
MDS Consulting Services ............................................. 8-14
MDS Consulting Services ............................................. 8-16
MDS Consulting Services ............................................. 8-18
Data Representation in the MDS Repository ........................... 8-20
The Application Information Model ................................... 8-22
More on the AIM ..................................................... 8-24
Object Relationships ................................................ 8-26
Adding a Super Class to an Existing Class ........................... 8-28
Modify Object Description ........................................... 8-30
Meta Data is Stored as Objects ...................................... 8-32
Database Information Model .......................................... 8-34
Review Questions .................................................... 8-36
Metadata Services V5.0 8-4
What is Meta Data?

Meta data is data about data. It describes your data so that all users work from the same definitions, reducing the need to make assumptions about what the data represents, what it means, or where it came from. Examples:
• The definition of which states constitute “Western Region Sales.” The definition could also include formulas used to calculate gross margin, predictive model scores, or indications that particular data is sensitive or private.
• Where does data come from (sources), and what transformations occur? The definition could include how tables are organized, updated, and accessed, as well as data relationships, including how sets of tables of various subject areas are linked together, how relational data is mapped to non-relational forms and vice versa, etc.
• How applications are integrated and kept consistent. The definition could include information about transformation.
Meta data is information. It gives meaning and context to the data in a database, providing for a common understanding of the data. Many businesses already have meta data regarding their data warehouse somewhere: a document, a spreadsheet, paper notes, or just "someone's head." Storing, locating, and maintaining this information can prove cumbersome.

A data warehouse meta data repository provides a central storage location, with on-line access, for information spanning the entire warehouse process, from loading the warehouse, to the architecture of the data in the warehouse, to the usage of that data. The goal of MDS is to provide the means to integrate the meta data of a Teradata Warehouse, and to offer the best solution for doing this in a Teradata data warehouse environment.

In a recent strategic planning article, Gartner described meta data as the collection of rules and definitions that give structure and consistency to databases, processes, and definitions. Data element edit rules, report layouts, data dictionaries, and object models are all examples of meta data.

Why is meta data so important? It provides the lever to shape data into a form that better represents the reality of the business it serves.
Meta Data Services 8-5
What is Meta Data?
Meta data is sometimes described as data about data.
Meta data provides the “fit and finish” regarding how a packaged application operates in a specific implementation.
When considering packaged applications, meta data use is crucial in three roles:
- Application configuration and customization.
- Application integration.
- Application upgrade and release migration.
Why is Meta data so important?
Meta data management provides the tools to organize voluminous application data details into a comprehensible form, and it provides the structure to better represent the reality of the business it serves.
Metadata Services V5.0 8-6
MDS Features

Features of MDS V5.0 include:
• Teradata Shared Repository – The MDS repository is stored in a Teradata warehouse. You can store meta data from one or more Teradata warehouses in a single repository.
• Tool to load Teradata meta data – MDS provides a tool to load Teradata meta data into the repository directly from the data dictionary.
• Loading of specified databases – You can specify which databases you want to load from Teradata into the repository.
• Views – The Database Information Model (DIM) shows which table columns are used by a view.
• Security – MDS has its own set of users and administration groups used for accessing meta data objects.
• Excel Import – You can import data from MS Excel into the MDS repository using Excel macros (sample scripts available).
• Administrative GUI (graphical user interface) – MDS provides MetaManager for administrative functions.
• GUI Browser – MDS also has MetaBrowse for looking at the meta data and meta models.
• Internationalization – The MDS GUIs are globalized and can be localized for specific areas.
• MDS V5.0 is included with Teradata V2R5.
• Windows 98/2000/XP/NT 4.0 – MDS runs on multiple Windows platforms. You can access the same repository from various clients running different operating systems.
• MetaSurf application to provide Web-based read-only access to the MDS repository. In MDS V5.0, using the admin profile, you can update object descriptions through MetaSurf.
• MetaXML application to import information from an XML (Extensible markup language) formatted file into the MDS repository.
• MetaClient application to import information from MultiLoad and FastLoad scripts and output files into the MDS repository.
• Automatic DIM Update, via tight integration with Teradata V2R4 and V2R5, provides a means of automatically updating the MDS repository when Teradata data dictionary changes occur.
• Audit trail maintains a record of the Teradata Data Dictionary changes that have occurred.
• COM interfaces, including support for Microsoft’s Visual Basic scripting.
Meta Data Services 8-7
MDS Features
Teradata Shared Repository
Tool to load Teradata metadata
Teradata View Support (DIM)
Security
Excel Import Utility
Administrative GUI
GUI Browser
Internationalization
Teradata V2R4 and V2R5
Windows 98/2000/XP/NT 4.0
MetaSurf enhanced Web-based access to repository
MetaXML allows for XML script import
MetaClient imports client load data
Automatic DIM update
Audit trail
Visual Basic scripting interface (Microsoft COM interface)
Metadata Services V5.0 8-8
MDS V5.0 - New Features

• Support for intermediate view information.
• MetaLoad performance enhancements.
• Locking enhancements.
• Ability to create, edit, or delete object data through MetaBrowse.
• LastAlterTimeStamp property for Tables, Views, Triggers, and Stored Procedures.
• Automatic DIM update feature to dynamically keep the MDS physical meta data repository synchronized with the Teradata system.
• Improved Business search.
• V2R5 support for Partitioned Primary Indexes and Identity columns.
• Enhanced MetaSurf interface.
• MetaSurf search by rank and relevance.
• Support for the MetaIntegration import bridge (MIMB).
Meta Data Services 8-9
MDS V5.0 - New Features
• Support for intermediate view information.
• MetaLoad performance enhancements.
• Locking enhancements.
• Ability to create, edit, or delete object data through MetaBrowse.
• LastAlterTimeStamp property for Tables, Views, Triggers, and Stored Procedures.
• Automatic DIM update feature to dynamically keep the MDS physical meta data repository synchronized with the Teradata system.
• Improved Business search.
• V2R5 support for Partitioned Primary Indexes and Identity columns.
• Enhanced MetaSurf interface.
• MetaSurf search by rank and relevance.
• Support for the MetaIntegration import bridge (MIMB).
Metadata Services V5.0 8-10
Meta Data Services Architecture

GUIs
MetaBrowse is a graphical user program that displays meta data application information models and user meta data stored in the MDS repository. MetaManager is the administration GUI used to create and load the repository using the MDS utilities.
Web

MetaSurf is an Active Server Pages Web-based application that allows end users to browse, search, and drill down into data in the MDS repository using Netscape or Internet Explorer browsers.
Programming Interfaces

Meta Data Services API (application program interface) Libraries are a set of APIs for applications to define and extend models and to store, administer, and retrieve meta data from the MDS Repository Database. The APIs run as part of the calling client application process. The supported APIs are C++ APIs (for Windows and UNIX) and a Microsoft COM interface supporting Visual Basic scripting (Windows only). The Non-Programmatic Interface is an XML scripting language that enables integration of customer applications and import of 3rd-party or pre-existing meta data.
MDS Engine
The Meta Data Services Engine is the heart of MDS; it performs the services to persist and retrieve meta data from the repository database. The Automatic DIM Gateway is a UNIX component that synchronizes the meta data in the MDS repository with the Teradata database.
MDS Utilities
We will be discussing the MDS utilities in detail throughout this course.

• Metacreate
• Metadelete
• Metaxml
• Metaclient
• Metaload
• Metamigrate
• MetaSurf
MDS Repository
The Meta Data Services Repository Database is the set of relational tables in the Teradata RDBMS in which the MDS repository is stored. The audit log tracks the audit trail for changes in the Teradata meta data.
Meta Data Services 8-11
Meta Data Services Architecture
Metadata Services V5.0 8-12
MDS V5.0 Application All MDS software is on its own CD-ROM.
• Windows install executable (98SE,2000,XP,NT4.0) • UNIX MP-RAS packages • Teradata RSG VPROC package
MDS CD-ROM is automatically shipped with Teradata V2R5.
• No additional cost associated with MDS V5.0 when upgrading to V2R5.
• Order Number (F574-0320-000)
Meta Data Services 8-13
MDS V5.0 Application
• MDS software delivered on CD-ROM
• Windows install executable (Windows 98SE, 2000, XP, and NT 4.0)
  • New for V5.1 (Windows 2000)
• UNIX MP-RAS packages
• Teradata RSG VPROC package included
• MDS CD-ROM is automatically shipped with Teradata V2R5 at no cost.
• No cost associated with MDS V5.0 when upgrading to V2R5 (or higher).
Metadata Services V5.0 8-14
MDS Consulting Services

The first MDS consulting service is a Meta Data Assessment. For this type of consulting engagement, your goals are to:
• Determine meta data needs
  What does your customer plan on doing with the meta data? Is it for admin use, business use, or both?
• Locate sources of meta data
  Where is the meta data now? In the warehouse (DDL), an Excel spreadsheet, a flat file, a note pad, an MDIS file, etc.?
• What are they using meta data for?
  Business users, administrators, or both?
• Who accesses the meta data?
  Business users, DBAs, etc.
• What tools are they using to access the meta data?
  Custom applications, 3rd-party tools, etc.
• What are the business questions?
  Determine up front what questions they need to answer. This will tell you if MDS is the correct solution or if you need to use another product or tool.
Meta Data Services 8-15
MDS Consulting Services
Determine metadata needs:
• Business vs. Technical.
• Do they have an existing meta-model?
• Do they have existing metadata standards?
Locate sources of metadata.
Determine users of metadata.
• How are they going to access the metadata?
This service is similar to the SDW Business Discovery and Information Discovery Services, rolled into one.
Meta Data Assessment
Metadata Services V5.0 8-16
MDS Consulting Services

Another MDS consulting service is Meta Data Design. For this type of consulting engagement, your goal is to determine the best methods to integrate the meta data into the repository. Based on where the meta data is, you need to recommend a strategy to get all the meta data into the MDS repository. You need to ask the following questions:
• Do you just need commentary on existing tables and columns?
• Do you need additional properties (e.g., transformation) describing columns?
• Do you need to add classes and/or relationships to the DIM (e.g., procedures and scripts)?
• Are you trying to model a part of the business that doesn’t relate to the warehouse?
For this engagement you need to:

• Design the extract and population process.
• Determine how the meta data will be moved from sources to the repository.
• Determine what customization is required.
• Identify what additional processes (e.g., batch schedules) need to be enhanced.
Meta Data Services 8-17
MDS Consulting Services
• Determine best methods for integration into repository.
• Use the existing DIM.
• Add fields to the existing DIM.
• Create new AIMs.
• Design the extract and population processes.
Meta Data Design
Metadata Services V5.0 8-18
MDS Consulting Services

The final MDS consulting service covers Meta Data Implementation Issues. For this type of consulting engagement, your goals are to determine:

• Installation of infrastructure and MDS
  How many PCs need the software? What is the configuration of the PCs being used (Windows 98, 2000, or NT; 32–128 MB RAM; 8–10 MB of hard drive space available)?
• Loading the Teradata meta data
  Loading of production vs. development. Partial versus all databases. Loading Teradata client load scripts (transfer of MVS scripts to Windows or UNIX for loading).
• MetaSurf installation and customization
  Loading the MetaSurf application. Configuring Internet Information Server (IIS) or Personal Web Server. Configuring MetaSurf.
• Automatic DIM Update setup and configuration
  Setup of Automatic DIM Update. Configuration of Automatic DIM Update.
• Integration of meta data from other sources
  Using the Excel import utility. Custom programs vs. MDIS import vs. XML sources.
• What are the performance issues?
  How long will it take to load the repository? How long will it take to refresh the repository? What impact will the repository have on warehouse users, and vice versa?
• What security precautions need to be taken?
  Who is responsible for granting and denying access to the repository data? How is the repository backed up? Get the security officers involved. How many users and groups will be needed? Are there multiple security profiles?
Meta Data Services 8-19
MDS Consulting Services
• Installation of infrastructure and Meta Data Services.
• Loading of Teradata metadata.
• MetaSurf installation and setup.
• Installation and setup of IIS.
• Integration of metadata from around the warehouse, source-by-source, with MDS.
• What are the performance issues?
• What security precautions need to be taken?
Meta Data Implementation Issues
Metadata Services V5.0 8-20
Data Representation in the MDS Repository

The meta data is stored in the repository in application information models (AIMs). The AIMs define the structures of and relationships between meta data objects. The repository itself has an AIM, the MDS Meta Model, which describes the structure of the repository.

MDS comes with an AIM specifically for Teradata meta data. This Teradata AIM is called the DIM; it has the structures to describe the meta data in the Teradata dictionary. Using the MetaBrowse GUI, you can extend the existing DIM and create new AIMs. You can also use the C++, Visual Basic (Microsoft COM), and XML interfaces to add structure to meet specific needs. This process is referred to as extending the DIM.

Using the client load model, the client load script information is added to the DIM with data from the client load scripts. You can also define AIMs for other specific needs, such as particular industries.
Meta Data Services 8-21
Data Representation in the MDS Repository
• The MDS repository contains a collection of Application Information Models (AIMs), which define how information is stored.
• The Database Information Model:
  • Defines how the Teradata metadata is stored.
  • Is provided with MDS.
  • Can be extended using the MDS APIs to add classes to provide additional information such as:
    • Transformation
    • Logical model relationships
• Using the client load model, load scripts update the DIM.
Metadata Services V5.0 8-22
The Application Information Model

An AIM consists of:

• Class Descriptions
  Class descriptions define a type of meta data in the repository.
• Property Descriptions
  Property descriptions are the data fields associated with class descriptions.
• Relationship Descriptions
  Relationship descriptions create an association between two class descriptions.
Meta Data Services 8-23
The Application Information Model
• An AIM describes how data is stored in MDS the same way a schema describes how data is stored in a database.
• MDS provides APIs for creating AIMs.
• An AIM consists of:
• Class Descriptions (e.g. table, column)
• Property Descriptions (e.g. column id, table id)
• Relationship Descriptions (e.g. database has table)
• Defining a Class Description with associated Property Descriptions is similar to defining a table with columns in a database.
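The analogy above (class description = table, property descriptions = columns) can be sketched in a few lines of generic Python. This is illustrative only, not the actual MDS API; the names `class_description` and `make_object` are invented for this example:

```python
# Hypothetical sketch, NOT the real MDS API: a class description with its
# property descriptions is analogous to a table definition, and an object
# of that class is analogous to a row inserted into the table.

class_description = {
    "name": "Column",                      # class description (like a table name)
    "properties": ["Name", "ColumnType"],  # property descriptions (like columns)
}

def make_object(cls, **values):
    """Create an object (instance) of a class description, like inserting a row."""
    unknown = set(values) - set(cls["properties"])
    if unknown:
        raise ValueError(f"unknown properties: {sorted(unknown)}")
    return {"class": cls["name"], **values}

# An "object" of the Column class, like a row in a Column table.
obj = make_object(class_description, Name="AccountID", ColumnType="INTEGER")
```

Just as a table rejects a value for a column it does not have, the sketch rejects a value for a property the class description does not define.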
Metadata Services V5.0 8-24
More on the AIM

The relationship descriptions describe connections between AIMs, classes, and objects.
Meta Data Services 8-25
More on the AIM
• Relationship Descriptions:
  • Create an association between two class descriptions.
  • Relate data of one type with associated data of another type.
  • Relationship descriptions are similar to using foreign keys in the database.
• Examples:
  • A Class Description is defined to store Teradata Table information.
  • A Class Description is defined to store Teradata Column information.
  • Need to know which Columns belong to a particular Table?
  • Define a Relationship Description called TableHasColumns which forms a relationship between the Table class and the Column class.
  • The columns for a table are physically related in the MDS Repository using the TableHasColumns relationship.
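The foreign-key analogy can be made concrete with a small sketch. This is generic Python for illustration, not the MDS API; the object ids and data values are invented:

```python
# Illustrative sketch only (not the MDS API): a relationship description
# such as TableHasColumns acts like a foreign key, linking objects of the
# Table class to objects of the Column class.

tables = {1: {"Name": "Customer"}}
columns = {
    10: {"Name": "AccountID", "Type": "INTEGER"},
    11: {"Name": "OrderID", "Type": "INTEGER"},
}

# Instances of the TableHasColumns relationship: (table id, column id) pairs.
table_has_columns = [(1, 10), (1, 11)]

def columns_of(table_id):
    """Follow the TableHasColumns relationship from a table to its columns."""
    return [columns[c] for (t, c) in table_has_columns if t == table_id]

names = [c["Name"] for c in columns_of(1)]
```

Answering “which columns belong to this table?” is then just a matter of following the relationship instances, exactly as a join follows foreign-key values.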
Metadata Services V5.0 8-26
Object Relationships

The following page shows the hierarchical relationships between repository objects.
Meta Data Services 8-27
Object Relationships
[Diagram: Object relationship hierarchy. A DatabaseSystem object (Name: Teradata1, LoadDate: 07-22-2000) is linked to a Database object (Name: Sales, Type: database) by the SysHasDBs relationship. The Database is linked to a Table object (Name: Customer) by DBHasTables and to a View object (Name: sales) by DBHasViews. The Table is linked to Column objects (Name: AccountID, Type: INTEGER; Name: OrderID, Type: INTEGER) by TableHasCols, and the View is linked to table columns by ViewHasTableColumns. The diagram labels the classes, properties, property values, and relationships.]
Metadata Services V5.0 8-28
Adding a Super Class to an Existing Class

Inheritance is a feature added in MDS 5.0 that allows a user to create a class (subclass) that inherits the properties and relationships of a previously defined class (superclass). The new subclass contains all the properties of the class it inherits from. In MDS 5.1 this feature was extended to allow a superclass to be dynamically defined for a subclass, instead of requiring the superclass to be defined before the subclass.

Creating a super class allows users to perform better database analysis by simplifying search criteria, letting them access the repository information more quickly and easily. Super classes are not required and would be created by the MDS administrator based on need. An example is the class hierarchy of tables, columns, and views: a super class, perhaps called TCV, could be created to represent all tables, columns, and views in a database. The relationships and objects could then be searched and grouped through this super class instead of on a table-by-table basis.
Business Value
Simplification of search criteria and better enterprise analysis is available to the MDS user taking advantage of this new functionality.
Technical Description
Class hierarchy is defined at the object creation level in a MDS repository. Using this feature the MDS administrator can create a Super class representing many individual classes. This allows the users to more effectively analyze the information about the objects stored in the repository.
Implementation
Installation of MDS V5.1 installs all of the components needed to take advantage of this feature. The actual creation of the super class is performed manually in MetaBrowse, using the Add a New Class panel.
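The super-class idea can be mimicked with ordinary Python inheritance. This is a sketch under the assumption that one search over the super class should cover all of its subclasses; the class name TCV comes from the example in the text, but the code is not the MDS mechanism:

```python
# Sketch only: Python inheritance standing in for the MDS super-class
# feature. A subclass inherits from the super class, so one search over
# the super class covers tables, columns, and views at once.

class TCV:
    """Hypothetical super class representing Tables, Columns, and Views."""
    def __init__(self, name):
        self.name = name

class Table(TCV): pass
class Column(TCV): pass
class View(TCV): pass

repository = [Table("Customer"), Column("AccountID"), View("sales")]

# One search over the super class replaces three per-class searches.
all_names = [obj.name for obj in repository if isinstance(obj, TCV)]
```

This is the simplification of search criteria the text describes: the administrator defines TCV once, and users query it instead of querying each class separately.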
Meta Data Services 8-29
Adding a Super Class to an Existing Class
Class hierarchy is defined at the object creation level in a MDS repository. Using this feature the MDS administrator can create a Super class representing many individual classes. This allows the users to more effectively analyze the information about the objects stored in the repository.
[Diagram: An existing class hierarchy containing Class A, Class B, and Class C, with a new super class added above the existing classes.]
Metadata Services V5.0 8-30
Modify Object Description

This feature was created in response to a request from the field to make the relationship descriptions used in MDS more intuitive. It adds two capabilities: the ability to modify the default description of a relationship (previously, only new relationship descriptions could be modified), and the ability to view the description by placing the cursor over the relationship (a pop-up bubble appears with the relationship description).
Business Value
This feature allows the user to customize the default relationship descriptions used in MDS to describe the relationships that exist between objects in the repository. You can modify the descriptions of MDS-defined relationships to better suit the user base. For example, the relationship “TableHasColumns” can now be described as “This is the relationship of tables having columns.” Previous versions of MDS allowed the user to modify only new relationships created by the MDS administrator, not the default descriptions. This feature also allows the user to quickly view a description by placing the cursor over the relationship; a pop-up bubble appears with the description.
Technical Description
MDS defines the relationships of all the objects stored in the repository. Each of these relationships has an MDS definition (e.g., ViewHasColumns, DatabaseHasViews), and these definitions may seem cryptic to the user. MDS V5.1 allows the user to modify the default descriptions to make the relationship definitions easier to understand. To view the description of a relationship, simply place the cursor over it; either the default description (the same as the MDS relationship title) or the customized description appears, if the user has modified the description in the relationship properties panel.
Implementation
Installation of MDS V5.1 installs all of the components necessary to take advantage of this feature. Prior to customization, the default descriptions appear in the pop-up bubbles when the cursor is placed over a relationship. The user must customize these descriptions manually: open the properties panel of any relationship and edit the description. Once saved, this is the description that appears when the cursor is placed over the relationship definition.
Meta Data Services 8-31
Modify Object Description
MDS defines the relationships of all the objects stored in the repository.

Each of these relationships has an MDS definition (e.g., ViewHasColumns, DatabaseHasViews); these definitions may seem cryptic to the user.

MDS V5.1 allows the user to modify those default descriptions to make the relationship definitions easier to understand.

To view the description of a relationship, simply place the cursor over it; either the default description (the same as the MDS relationship title) or the customized description will appear, if the user has modified the description in the relationship properties panel.
Metadata Services V5.0 8-32
Meta Data is Stored as Objects

Meta data is stored in the repository as objects.
Meta Data Services 8-33
Meta Data is Stored as Objects
Objects are:
• An instance of a type.
• Similar to a data record or a row in a database table.
• An instance of a class description.
• Same concepts as:
  Table Definition = Class Description
  Row = Object
Column Class Description

Name: char not null
Description: varchar null
ColumnId: smallint
TableID: BYTE(6)
DataBaseId: BYTE(4)
ColumnFormat: char null
ColumnTitle: varchar null
ColumnType: char null
ColumnLength: smallint null

Objects of the Class Type Column

Name: City
Description: City of the address for the customer
ColumnId: 1029
TableID: 000001350000
DataBaseId: 00001945
ColumnFormat: X(50)
ColumnTitle: NULL
ColumnType: CV
ColumnLength: 50

Name: State
Description: State of the address for the customer
ColumnId: 1030
TableID: 000001350000
DataBaseId: 00001945
ColumnFormat: X(2)
ColumnTitle: NULL
ColumnType: CV
ColumnLength: 2

Name: Zipcode
Description: Zipcode of the address for the customer
ColumnId: 1032
TableID: 000001350000
DataBaseId: 00001945
ColumnFormat: -(10)9
ColumnTitle: NULL
ColumnType: I
ColumnLength: 4
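The objects on this page can be pictured as records that all follow the same class description. A small illustrative sketch (generic Python, not the MDS API; values copied from the example above, with most properties omitted for brevity):

```python
# Illustrative only: three Column objects as records of one class
# description. Each record is like a row; the shared keys play the role
# of the class's property descriptions.

column_objects = [
    {"Name": "City",    "TableID": "000001350000", "ColumnType": "CV", "ColumnLength": 50},
    {"Name": "State",   "TableID": "000001350000", "ColumnType": "CV", "ColumnLength": 2},
    {"Name": "Zipcode", "TableID": "000001350000", "ColumnType": "I",  "ColumnLength": 4},
]

# All objects of the Column class that belong to the same table:
names = [o["Name"] for o in column_objects if o["TableID"] == "000001350000"]
```

Selecting objects by a shared property value (here, TableID) is the object-level counterpart of selecting rows from a table with a WHERE clause.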
Metadata Services V5.0 8-34
Database Information Model

The Database Information Model (DIM) provides the base model on which to store database meta data in the MDS repository. The DIM is a specific meta data model (AIM) that MDS provides to contain information about the structure and contents of a Teradata Database. Other warehouse applications can reference the Teradata meta data in the DIM instead of maintaining identical meta data of their own; this reduces meta data redundancy and the number of places where a change must be made. The DIM for MDS V5.0 is represented by the figure on the following page.
The MetaLoad Utility reads Data Dictionary information from one or more Teradata systems and populates the MDS Repository with that information.

• Information from multiple Teradata systems can all be stored in a single MDS repository.
• The repository does not have to reside on the Teradata system whose information is loaded into MDS.
• MetaLoad parses SQL text to store information not readily available (e.g., “Which table columns are contained in this view?”).
• MetaLoad runs on both Windows and MP-RAS UNIX platforms.
Meta Data Services 8-35
Database Information Model
Metadata Services V5.0 8-36
Review Questions

Please answer the review questions as directed by your instructor.
Meta Data Services 8-37
Review Questions
Match each of the following terms with the description that best defines it.
_AIM
_DIM
_Objects
_Classes
_Relationships
_Properties
A. Description of an association between two classes
B. Metadata in the repository
C. Relationships between specific objects
D. Structure for data in the MDS repository
E. Definition of a specific type of metadata.
F. Data fields of a class object.
G. Specific structure for Teradata metadata
Metadata Services V5.0 8-38
Notes:
Teradata Warehouse Miner 9 - 1
Module 9
After completing this module, you should be able to:
• Describe Teradata Warehouse Miner.
• List the system requirements for installing TWM.
• Define the Database Concepts in TWM.
• Set the DBS Control parameters for TWM.
• Locate TWM error and log files.
Teradata Warehouse Miner
9 - 2 Teradata Warehouse Miner
Notes:
Teradata Warehouse Miner 9 - 3
Table of Contents
TERADATA WAREHOUSE MINER OVERVIEW .............................. 4
SPACE REQUIREMENTS ................................................................ 6
ERROR LOG - TWMERRORS.LOG ................................................ 8
EVENT LOG - _TWM.LOG ........................................................... 10
CACHED XML FILES ................................................................... 12
TERADATA WAREHOUSE MINER DATABASES ....................... 14
REVIEW QUESTIONS ................................................................... 16
REFERENCES ................................................................................ 18
9 - 4 Teradata Warehouse Miner
Teradata Warehouse Miner Overview

Data Warehousing is becoming a required component of business information technology today. In recent years, data mining has become a key aspect of decision support and customer relationship management applications built on top of the data warehouse, and a crucial component in exploiting their inherent value. Teradata Warehouses span a wide range of system sizes, from entry-level servers to the largest massively parallel data warehouses in the world, and they all have unparalleled decision support performance and scalability.
In order to extend the power of Teradata in the area of data mining, Teradata Warehouse Miner data mining software was developed as a set of tools and application interfaces to provide high performance, scalable data mining to Teradata customers. Teradata Warehouse Miner complements traditional data mining software for Teradata customers by addressing the need to handle large volumes of data in a scalable manner. Teradata Warehouse Miner does this by providing data mining functions that operate directly on the data in Teradata via programmatically generated SQL. This facilitates data mining without moving the data, using as much of the data as desired, storing results directly in the database, and utilizing the parallel, scalable processing power of Teradata to perform data intensive operations.
When considering use of data mining technology, it is very important to understand that there is no silver bullet when it comes to this type of analysis – successful data mining can only be done within the context of a rigorous analytic process. In fact, NCR defines data mining as follows:
“Data Mining is the process of identifying and interpreting patterns in data to solve a specific business problem.”
No matter what the definition used, two key themes should be understood about data mining.
First, much like data warehousing, data mining is a process, not a specific product or algorithm that one buys.

Second, the process needs to be applied to solving a business problem, and the problem needs to be appropriate for data mining.
Teradata Warehouse Miner 9 - 5
Teradata Warehouse Miner - Overview
Data mining is a process, not a specific product or algorithm that one buys.
The process needs to be applied to solve a business problem and the problem needs to be appropriate for data mining.
Data Mining is the process of identifying and interpreting patterns in data to solve a specific business problem.
9 - 6 Teradata Warehouse Miner
Space Requirements

PERMSPACE available for Teradata Warehouse Miner

The amount of PERMSPACE required by Teradata Warehouse Miner is dependent upon the user. All functions can create tables and views, or the results can simply be selected out. Creating a User/Database with "PERM=1000000000" is considered minimal. This is not critical unless results are persisted in Teradata. The functions that persist results include the Descriptive Statistics, Transformation, Data Reorganization, and Scoring analyses.

SPOOLSPACE available for Teradata Warehouse Miner

The amount of SPOOLSPACE required by Teradata Warehouse Miner is dependent upon the size of the tables being operated on. Creating a User/Database with "SPOOL=1000000000" is considered minimal.
Teradata Warehouse Miner 9 - 7
Space Requirements
PERMSPACE

Dependent upon the user.
Creating a User/Database with "PERM=1000000000" is considered minimal.

SPOOLSPACE

Dependent upon the size of the tables being operated on.
Creating a User/Database with "SPOOL=1000000000" is considered minimal.
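The minimal sizes above could be set when the user is created. A sketch only: the user name and password are placeholders, and the exact option syntax should be checked against your Teradata release before use:

```sql
-- Illustrative only: create a Teradata user for TWM work with the
-- minimal PERM and SPOOL sizes mentioned above (roughly 1 GB each).
-- "twm_user" and "twm_pass" are hypothetical names.
CREATE USER twm_user FROM dbc AS
    PASSWORD = twm_pass,
    PERM  = 1000000000,
    SPOOL = 1000000000;
```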
9 - 8 Teradata Warehouse Miner
Error Log - TWMErrors.log

Teradata Warehouse Miner provides both event and error logging. Within the Teradata Warehouse Miner installation folder, in the Temp sub-folder, there are an error log file, an event log file, and cached XML files. These should be sent to the TSGSC when any incident is reported. All files are text and can be viewed with any text editor.
Whenever an error is encountered, the Teradata Warehouse Miner GUI writes to this log file. The format of TWMErrors.log is as follows:

-------------------------------------------------------------------------------------
Log Entry, <Date and Time>
Data Source: <Teradata Data Source Name>
MetaDatabase: <Teradata Warehouse Miner Result Database>
Analysis Id: <Teradata Warehouse Miner Internal Analysis Identifier>
Analysis Type: <Teradata Warehouse Miner Analysis Type; valid values are "Affinity", "Linear Regression", "Logistic Regression", "Factor Analysis", "Clustering", "Decision Tree", "Build Matrix", "Get Matrix", "Restart Matrix", "Scoring", "Histogram", "Frequency", "Overlap", "Scatter Plot", "Statistical Analysis", "Values Analysis", "Trigonometrics", "Mathematics", "Derive", "OLAP", "Statistics", "Bin Coding", "Design Coding", "Recoding", "Rescaling", "Denorm", "Sample", "Partition", "Join", "SQL Node">
Number: <Microsoft Error Number>
Source: <Microsoft Error Source>
Description: <Error Description in text format>
SQL: <For Database Errors, the SQL that was executing when the error occurred>
Teradata Warehouse Miner 9 - 9
Error Log - TWMErrors.log
• Teradata Warehouse Miner provides both event and error logging.
• Within the Teradata Warehouse Miner installation folder, there are:
  – an error log file
  – an event log file
  – cached XML files
• These should be sent to the TSGSC when any incident is reported. All files are text and can be viewed with any text editor.

TWMErrors.log
• When an error is encountered, the Teradata Warehouse Miner GUI writes to this log file.
9 - 10 Teradata Warehouse Miner
Event Log - _twm.log The Teradata Warehouse Miner event log is used for analyses that are invoked from the GUI. Whenever any Teradata Warehouse Miner Analysis is invoked from the front end, all the parameters passed to the analysis are written to this log file. The format of _twm.log is as follows:
<Teradata Warehouse Miner Analysis Type> -- <Date and Timestamp>
<Function Specific Parameters> : <Parameter Settings>
<Function Specific Parameters> : <Parameter Settings>
...
<Function Specific Parameters> : <Parameter Settings>
AnalysisId: <Teradata Warehouse Miner Analysis ID>
DataSource: <Teradata Data Source Name>
DBCName: <DBC Name - for File Data Sources Only>
Database: <Teradata Warehouse Miner Source Database>
ResultDatabase: <Teradata Warehouse Miner Result Database>
UserId: <Teradata UserID>
MaxSelectRecords: <Preference Setting on the Maximum Number of Rows to return on a Select>
MaxCreateRecords: <Preference Setting on the Maximum Number of Rows to return on a Create>
Teradata Warehouse Miner 9 - 11
Event Log - _twm.log
The event log is used for analyses that are invoked from the GUI.
When a Teradata Warehouse Miner Analysis is invoked from the front end, all the parameters passed to the analysis are written to this log file.
9 - 12 Teradata Warehouse Miner
Cached XML Files All Teradata Warehouse Miner metadata is cached to disk before it is written to Teradata. These files have pseudo-random names associated with them. When an analysis is executed or an existing analysis loaded, the XML for the analysis parameters, results, and any model created are cached to disk in three different XML files. These files reside in the Teradata Warehouse Miner Temp folder. When an analysis fails, or presents incorrect results/graphs, these files are very useful for debugging purposes.
To access the cached XML files, sort the Temp folder by modified date, and you'll see an HTML file followed by three XML files with about the same timestamp. The following naming conventions are used, where #### is a pseudo-random number:
• Model XML ####_####_m.xml
• Parameter XML ####_####_p.xml
• Results XML TWM####_####.xml (executed analysis)
####_####_r.xml (loaded analysis)
These files should be gathered and sent in along with any incident.
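A small helper makes the naming convention concrete. This is an illustrative sketch only; the function name and return labels are mine, not part of Teradata Warehouse Miner:

```python
import re

# Patterns for the cached-file naming conventions above, where #### is a
# pseudo-random number.
PATTERNS = [
    (re.compile(r"^\d+_\d+_m\.xml$"),  "model"),
    (re.compile(r"^\d+_\d+_p\.xml$"),  "parameters"),
    (re.compile(r"^TWM\d+_\d+\.xml$"), "results (executed analysis)"),
    (re.compile(r"^\d+_\d+_r\.xml$"),  "results (loaded analysis)"),
]

def classify_cached_file(name):
    """Return which kind of cached XML a Temp-folder file name is, or None."""
    for pattern, kind in PATTERNS:
        if pattern.match(name):
            return kind
    return None

print(classify_cached_file("1234_5678_m.xml"))   # model
print(classify_cached_file("TWM1234_5678.xml"))  # results (executed analysis)
```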
Teradata Warehouse Miner 9 - 13
Cached XML Files
TWM metadata is cached to disk before it is written to Teradata.
These files have pseudo-random names associated with them.
When an analysis is executed or an existing analysis loaded, the XML for the analysis parameters, results, and any model created are cached to disk in three different XML files. These files reside in the TWM Temp folder. When an analysis fails, or presents incorrect results/graphs, these files are very useful for debugging purposes.
To access the cached XML files, sort the Temp folder by modified date to find an HTML file followed by three XML files with about the same timestamp.

These files should be gathered and sent in along with any incident.
9 - 14 Teradata Warehouse Miner
Teradata Warehouse Miner Databases Two utility programs create the required Teradata Warehouse Miner metadata, as well as demonstration data used in the analysis tutorials. A program item is added as part of the TWM program group – Load Demonstration Data for the tutorial data, while the Metadata Wizard creates the Teradata Warehouse Miner metadata. Refer to the User’s Guide for a complete description of both. Teradata Warehouse Miner has three database “concepts” including a “Source Database,” “Result Database,” and “Metadata Database.” These can all refer to the same physical Teradata Database, or three distinct databases. These are defined below, along with necessary Access Rights:
Database Concept
Definition Access Rights
Source Database
This is the database where the tables to be analyzed exist. By default, this is equivalent to the “Default Database” defined in the Teradata ODBC data source, but it can be modified globally within the Tools > Preferences menu, Connection Properties menu, or changed for each analysis.
SELECT
Result Database This is the database where Teradata Warehouse Miner will build result tables/views. By default, this is equivalent to the “Userid” defined in the Teradata ODBC data source, but it can be modified globally within the Tools > Preferences menu, Connection Properties menu, or changed for each analysis.
CREATE and SELECT WITH GRANT if other analyses will be executed against VIEWs created in Result Database.
Metadata Database
This is the database where the Teradata Warehouse Miner metadata resides. By default, this is equivalent to the “Userid” defined in the Teradata ODBC data source, but it can be modified globally within the Tools > Preferences menu, Connection Properties menu, or changed for each analysis.
CREATE to use the Metadata Wizard; UPDATE otherwise. There is a potential issue with query timeouts on project and analysis saves as the metadata becomes large. See section 8.1 for a work-around.
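The rights in the table above might be granted with SQL along these lines. This is a sketch with hypothetical database and user names; the exact set of CREATE rights required should be confirmed against the Teradata Warehouse Miner User's Guide:

```sql
-- Source Database: read-only access to the tables being analyzed
GRANT SELECT ON source_db TO twm_user;

-- Result Database: create result tables/views; WITH GRANT OPTION if other
-- analyses will run against the views created here
GRANT CREATE TABLE, CREATE VIEW, SELECT
    ON result_db TO twm_user WITH GRANT OPTION;

-- Metadata Database: CREATE to use the Metadata Wizard, UPDATE otherwise
GRANT CREATE TABLE ON metadata_db TO twm_user;
```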
Teradata Warehouse Miner 9 - 15
Teradata Warehouse Miner Databases
Database Concept
Definition Access Rights
Source Database Tables to be analyzed exist here. This is equivalent to the “Default Database” defined in the Teradata ODBC data source, but it can be modified globally, or changed for each analysis.
SELECT
Result Database Database where TWM builds result tables/views. This is equivalent to the “Userid” defined in the Teradata ODBC data source, but it can be modified globally, or changed for each analysis.
CREATE and SELECT WITH GRANT if other analyses will be executed against VIEWs created in Result Database.
Metadata Database
This is the database where the Teradata Warehouse Miner metadata resides. By default, this is equivalent to the “Userid” defined in the Teradata ODBC data source, but it can be modified globally, or changed for each analysis.
CREATE to use the Metadata Wizard; UPDATE otherwise.
9 - 16 Teradata Warehouse Miner
Review Questions Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Teradata Warehouse Miner 9 - 17
Review Questions
1. Teradata Warehouse Miner is included with the Teradata RDBMS.  (T / F)

2. You must define one or more Teradata ODBC data sources to use TWM.  (T / F)

3. TWM requires additional Perm and Spool space.  (T / F)

4. TWM log files are not accessible by the DBA.  (T / F)
9 - 18 Teradata Warehouse Miner
References For more information on these topics, please refer to:
• Teradata Warehouse Miner User’s Guide – Release 3.02.00 – B035-2493-122A
• Teradata Warehouse Miner – Release 3.02.00 – Release Definition – B035-2494-073C
So You Need to do Recovery? 10- 1
Module 10
So You Need to do Recovery?
After completing this module, you will be able to:
• Describe the general architecture of a mainframe recovery environment.
• Describe the general architecture of a LAN-attached recovery environment.
• Identify key recovery issues.
• Identify operational scenarios you should plan for in your archive and recovery design.
10- 2 So You Need to do Recovery?
Notes:
So You Need to do Recovery? 10- 3
Table of Contents
TERADATA DATA PROTECTION OVERVIEW ... 4
OPEN TERADATA BACKUP ... 6
BAKBONE NETVAULT ... 8
VERITAS NETBACKUP ... 10
GENERAL ARCHITECTURE—MAINFRAME ... 12
GENERAL ARCHITECTURE—UNIX NODE ... 14
COMMON ALTERNATIVE RECOVERY STRATEGIES ... 16
COMMON USES OF ARCHIVED DATA ... 18
EXAMPLE TEMPLATE—DISASTER RECOVERY ... 20
EXAMPLE TEMPLATE—SINGLE AMP RECOVERY ... 22
RECONFIGURATION SCENARIO ... 24
MIGRATING ACROSS RELEASE LEVELS ... 26
COMMON MISTAKES ... 28
TYPICAL TUNING AREAS—MAINFRAME ... 30
TYPICAL TUNING AREAS—UNIX NODES ... 32
SUMMARY ... 34
10- 4 So You Need to do Recovery?
Teradata Data Protection Overview With the Teradata Database, there are multiple levels of data protection and recovery. The focus in this module is on backup and restore from off-line storage.
So You Need to do Recovery? 10- 5
Teradata Data Protection Overview
• Disk drive or H/W failure
  – RAID
  – Fallback
• Database or Application Failure
  – Transaction or “transient” journal
  – Permanent journals
• Backup to off-line storage
  – Mainframe (ARC)
  – UNIX: Open Tape Backup (OTB)
10- 6 So You Need to do Recovery?
Open Teradata Backup The goal of Open Teradata Backup is to provide superior protection of the Teradata Warehouse by integrating into customer environments, meeting new customer demands, and offering reliable, high-performance, scalable, and easy-to-use BAR solutions, while providing superior information availability.
Provide the customer with choices that best support their requirements and address a range of capabilities and tradeoffs.
• Totally integrated software, hardware and services.
• Focus on performance, scalability and data availability.
• Focus on investment protection as your system grows.
• Provide alternatives that allow the use of the customer's existing tape management software environment.
• New choices will be LAN based.
• Provide BAR Services for all levels of BAR implementation.
o BakBone NetVault as the advocated solution
§ New, Expansions or Upgrades
o ARC on the Mainframe (channel attached)
§ New, Expansions or Upgrades
o Veritas NetBackup
We are beginning the first phase of evaluations, leading to future development and release of LAN based tape management software options. These will require careful and clear communication as to operational expectations, configuration requirements, and performance characterizations.
The first of these will likely be IBM Tivoli Storage Manager (TSM). No dates have been committed at this time, but we are targeting either Q1 or Q2 of 2003.
We have not investigated what additional tape management software applications will follow at this time.
BakBone will be our advocated and integrated solution for MP-RAS. At this time there are no plans to port another tape management application to MP-RAS.
BakBone NetVault is continuing to become a well-integrated product. BakBone is also committing key resources to Teradata, which will result in usability, manageability and feature/functionality that meets or exceeds customer expectations.
So You Need to do Recovery? 10- 7
Open Teradata Backup
• All major operating systems and databases
• Direct attached: dedicated SCSI or Fibre Channel
• Shared: Automated Cartridge System Library Software (ACSLS)
• Enterprise: SAN, LAN (Gigabit Ethernet), Tivoli backup server
• VM/MVS mainframe backup: ESCON; shared: StorageTek ACSLS
• Excellent throughput/scalability: scale LAN, drives, servers, etc.
10- 8 So You Need to do Recovery?
Bakbone NetVault Bakbone's NetVault is the advocated Tape Management product for direct attached Teradata BAR solutions. It is part of the Open Teradata Backup (OTB) program and has been customized for Teradata. OTB is based on off-the-shelf products from leading third party vendors along with the development of a Teradata Extension.
NetVault provides:
The best integrated approach for Teradata
The best archive and restore performance
The best alignment for support of BCS – Disaster Recovery
Flexible design to support the entire enterprise
BakBone NetVault 6.5.3 was originally planned as a maintenance release, but many significant changes were incorporated by BakBone in response to specific customer, operational services, and engineering inputs. It was released on CD as a point release and brings approximately 30 new Teradata changes to the core NetVault offer for Teradata customers. BakBone NetVault is supported with Teradata Database V2R5 and TTU 7.0. It is the advocated solution.
Bakbone NetVault 7.0 (released June 2003)
This new release encompasses both a number of significant Teradata specific enhancements and a major core product release by BakBone. Some key new improvements and additions are:
• Advanced Policy Management
• Event Notification (SNMP Traps)
• User Level Access Controls
• Advanced Reporting
• Vaulting (automated, managed, scheduled, etc.)
So You Need to do Recovery? 10- 9
Bakbone NetVault
• Advocated Tape Management product for direct attached Teradata BAR solutions.
• Part of the Open Teradata Backup (OTB) program and has been customized for Teradata.
• OTB is based on off-the-shelf products from leading third party vendors along with a Teradata Extension.
• NetVault provides:
  – The best integrated approach for Teradata
  – The best archive and restore performance
  – The best alignment for support of BCS – Disaster Recovery
  – Flexible design to support the entire enterprise
10- 10 So You Need to do Recovery?
Veritas NetBackup NetBackup 3.4.1 has been certified with V2R5.0 and Teradata Tools and Utilities 7.0.
Veritas NetBackup 4.5
• This release of Veritas NetBackup allows for interoperability with a customer's NBU 4.5 environment. Although both this and the earlier release are certified, this release does not remove any of the feature/function restrictions.
So You Need to do Recovery? 10- 11
Veritas NetBackup
• NetBackup 3.4.1 has been certified with V2R5.0 and TTU 7.0.

• Veritas NetBackup 4.5 allows for interoperability with a customer's NBU 4.5 environment.

• Both releases are certified, but the feature/function restrictions remain.
10- 12 So You Need to do Recovery?
General Architecture—Mainframe The facing page shows the processing thread for backup. ARCMAIN establishes a virtual circuit to each AMP task involved in the dump using “directed requests”. Data is dumped in the raw AMP format (for example, no conversion to EBCDIC) and so is faster than FastLoad or FastExport for large data volumes.
Note that from a throughput perspective, channel speed and mainframe resources are the typical bottlenecks, but you have the advantage of being able to leverage an existing tape management system.
So You Need to do Recovery? 10- 13
General Architecture - Mainframe
[Diagram: mainframe backup processing thread — ARC on the “Host” mainframe connects through the TDP and channel(s) to a PE in the Teradata Database; the PE directs the dump to the AMPs across the BYnet, and output is written to the Archive and DBCLOG files.]
Multiple Sessions and Multiple Jobs can help to saturate the bandwidth of any of these components.
10- 14 So You Need to do Recovery?
General Architecture—UNIX node Since UNIX does not have built-in tape management facilities, one of Teradata’s Open Tape Backup (OTB) solutions must be employed.
So You Need to do Recovery? 10- 15
General Architecture – UNIX Node
[Diagram: LAN-attached OTB configuration — ARCMAIN runs on a master node and one or more client nodes; each node runs the Teradata Database and connects over the BYNET; disk arrays attach to the nodes, and the OTB software on each node drives the tape library.]
10- 16 So You Need to do Recovery?
Common Alternative Recovery Strategies The facing page identifies some recovery strategies.
So You Need to do Recovery? 10- 17
Common Alternative Recovery Strategies
• Keep Source Updates
  – Save enough input load files on GDGs to recover from weekly or monthly backups (no Permanent Journals).
• Track Changes as they Occur
  – Use Permanent Journals and ROLLFORWARD.
• Bypass ARC Except for Dictionary
  – Use DICTIONARY backups and FastLoad as the recovery mechanism.
• Archive in Manageable Steps
  – Partition the backups by database/table and spread the operation out over the week/month.
10- 18 So You Need to do Recovery?
Common Uses of Archived Data The facing page lists some scenarios you need to think about when planning your recovery strategies.
You should develop some templates for jobs, and then you can map them to database objects. You should also consider performance when planning your strategy.
So You Need to do Recovery? 10- 19
Common Uses of Archived Data
• You need to plan for the following scenarios:
  – Disaster Recovery Backup and Restore
  – Application Port
  – Single AMP Recovery
  – Cluster Backup/Restore
  – Reconfigured/Rehashed Recovery Issues
  – Migrating across major release levels
• Approach
  – Define base job templates.
  – Map job templates to DB objects.
  – Consider performance tuning.
10- 20 So You Need to do Recovery?
Example Template—Disaster Recovery In a disaster recovery mode, the Teradata database is assumed to be initialized with a SYSINIT operation followed by DIP. Since DIP will populate some databases outside of user DBC, the “DELETE DATABASE” is needed prior to the DBC restore. Next, any Permanent Journal definitions need to be restored if there are user tables that reference those PJs. If you do not need to rollforward those PJs, it is possible to perform a “RESTORE DICTIONARY TABLES...” operation where the FILE option references the PJ archive (this will merely restore the DDL for the PJ and not any of the journaling data).
In our template, we presume that we need to rollforward on the after image journals so the NO BUILD and PRIMARY DATA options are utilized to defer secondary index maintenance until all updates have been applied. RESTORE is different from ARCHIVE in that each parallel job stream must have a distinct userid on the logon. This is done to protect against one job inadvertently releasing another job’s locks.
If you performed cluster-level archives, the only thing that changes in this template is performing a “RESTORE DICTIONARY” statement before any “RESTORE DATA TABLES” statement that has the CLUSTER clause.
The ARCMAIN engine can restore selected tables off a database-level archive. Any number of individual tables can be restored without disturbing the other tables in the target database. Only one pass will be made through the archive and the operation will proceed as a sequence of individual table-level restore operations (that is, only the table being restored will be locked and not the entire database).
The facing page provides an example of a template for a disaster recovery job.
So You Need to do Recovery? 10- 21
Example Template – Disaster Recovery
• Perform SYSINIT and DIP to initialize Teradata Database
• Delete Database (DBC) All, Exclude (DBC)
• Restore Data Tables (DBC)
• Restore Journal Tables (DBC)
• Restore Data Tables (DBC) All
• Rollforward (DBC) All, Exclude (DBC)
• Build Data Tables (DBC) All
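The steps above can be sketched as an ARCMAIN command script. This is an illustration only, not a runnable job: the logon string and archive file names are placeholders, and the exact clause syntax should be verified against the Teradata Archive/Recovery (ARC) reference for your release.

```
LOGON tdpid/dbc,dbcpassword;

/* After SYSINIT and DIP, clear the DIP-populated databases */
DELETE DATABASE (DBC) ALL, EXCLUDE (DBC);

/* Restore the dictionary, then PJ definitions, then all data with NO BUILD */
RESTORE DATA TABLES (DBC), RELEASE LOCK, FILE=DICTARC;
RESTORE JOURNAL TABLES (DBC), RELEASE LOCK, FILE=PJARC;
RESTORE DATA TABLES (DBC) ALL, NO BUILD, RELEASE LOCK, FILE=ALLARC;

/* Apply after-image journals, then build secondary indexes */
ROLLFORWARD (DBC) ALL, EXCLUDE (DBC), USE CURRENT JOURNAL, RELEASE LOCK;
BUILD DATA TABLES (DBC) ALL, RELEASE LOCK;

LOGOFF;
```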
10- 22 So You Need to do Recovery?
Example Template—Single AMP Recovery The facing page shows an example of a template for single AMP recovery.
So You Need to do Recovery? 10- 23
Example Template – Single AMP Recovery
• Replace Disk drive and run DiskCopy/Table Rebuild
• Restore No Fallback Tables
• Restore Journal Tables
• Rollforward
• May need to restore sequence of PJs and ROLL each one.
• Rollforward
• Build No Fallback Tables
• Revalidate References
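The single-AMP steps above might be expressed as an ARC script along the following lines. The AMP number, database, journal, and file names are placeholders, and the exact syntax should be checked against the ARC reference.

```
LOGON tdpid/dbc,dbcpassword;

/* After the drive is replaced and DiskCopy/Table Rebuild has run */
RESTORE NO FALLBACK TABLES (DBC) ALL, AMP=3, NO BUILD, RELEASE LOCK, FILE=WEEKLY;
RESTORE JOURNAL TABLES (Payroll), RELEASE LOCK, FILE=DAILYPJ;
ROLLFORWARD (Payroll) ALL, USE RESTORED JOURNAL, AMP=3;
/* Repeat RESTORE JOURNAL / ROLLFORWARD for each saved PJ, in sequence */
ROLLFORWARD (Payroll) ALL, USE CURRENT JOURNAL, AMP=3;
BUILD NO FALLBACK TABLES (DBC) ALL, AMP=3;
REVALIDATE REFERENCES FOR (Payroll) ALL;

LOGOFF;
```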
10- 24 So You Need to do Recovery?
Reconfiguration Scenario If you are performing recovery after a system upgrade or reconfiguration, there are special issues to consider.
Space Management—After the DBC database restore, you will have all the database definitions for all databases and users in the system. If you configured down the number of AMPs or the number of AMP disk drives, you probably have more space allocated to user databases than you have available on the entire system (leading to negative perm space numbers for the DBC userid immediately after the DBC restore has completed). If you have increased the number of AMPs and you have some databases close to their limit on perm space, the restore process might run out of space if there is even a slight amount of hashing skew. In either case, there will likely be an extra step between the DBC restore and the restore of user databases wherein you manually adjust the perm space allocations.
Performance Differences—If the target and source systems have a different number of AMPs, the rows being restored need to be redistributed according to the new hash bucket assignment. This implies the following issues may increase the batch window for completing your restores.
1. BYnet bandwidth is reduced because of row redistribution. Since archived rows are stored in sorted row hash order, the worst case scenario is that every primary data row travels on the interconnect twice. Theoretically, it might get worse than that, but for all practical cases, you can estimate performance impact by assuming your BYnet bandwidth is effectively cut in half.
2. AMP I/O bandwidth is reduced because smaller blocks are written out as compared to the non-reconfigured case. It is unlikely that you will see more than twice as many physical I/Os occurring in the data transfer phase. Therefore, a rough estimate of performance impact is to assume the worst case is half your normal AMP bandwidth.
3. If you are used to restoring indices from the archive tape, you will likely see increased time due to building of NUSIs. USIs can still be restored.
4. Cluster archives will have to be restored to all AMPs and locks will cause single threading.
5. PJ restores will have the overhead of row redistribution, but also a sort phase to ensure the redistributed journal images maintain the proper update sequence. They will probably perform approximately the same as a FastLoad of an equivalent amount of data.
If re-hashing, the data restore algorithm is similar to FastLoad logic.
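The halved-bandwidth rules of thumb in points 1 and 2 lend themselves to a quick back-of-envelope estimate of the restore window. A sketch with illustrative numbers (the function and rates are mine, not from the manual):

```python
def restore_hours(data_gb, normal_rate_gb_per_hr, reconfigured=False):
    """Rough elapsed-time estimate for a restore.

    If the system was reconfigured (rows must be rehashed/redistributed),
    apply the worst-case rule of thumb above: effective bandwidth is halved.
    """
    rate = normal_rate_gb_per_hr / 2 if reconfigured else normal_rate_gb_per_hr
    return data_gb / rate

# A 400 GB restore at a normal 100 GB/hr:
print(restore_hours(400, 100))                     # 4.0 hours normally
print(restore_hours(400, 100, reconfigured=True))  # 8.0 hours worst case
```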
So You Need to do Recovery? 10- 25
Reconfiguration Scenario
– Space management on target system
– Possible reconfiguration of perm and spool space parameters after DBC restore but before data restore.
– Performance differences
  • BYNET bandwidth reduced due to row re-distribution or re-hashing.
  • AMP I/O bandwidth reduced due to smaller blocks.
  • NUSIs cannot be restored from archive—must be built.
  • Cluster archives need to be restored to all AMPs (lock implications).
  • PJ restores will require a sort phase to ensure proper update sequence.
  • If re-hashing, performance will be more like FastLoad.
– If migrating Teradata across major release levels, extra step to convert system catalog (DBC).
10- 26 So You Need to do Recovery?
Migrating Across Release Levels Another special situation is when you migrate data across software releases.
The essence of the migration strategy is to establish a working baseline of DDL on the new system by restoring a complete DICTIONARY archive along with the system catalog contained in database DBC. You do this whether you are doing a SNAPSHOT or INCREMENTAL migration because it sidesteps esoteric issues related to how tableids are assigned on different Teradata systems. After you get through the DICTIONARY restores, the target system will have essentially the same DDL image as the originating system.
Then, the data restores begin. Since this may take several days for really large systems, an INCREMENTAL migration approach would use the COPY statement to bring data over. SNAPSHOT migrations could rely upon either RESTORE or COPY since they presume the system is being brought over all at one time. INCREMENTAL migrations presume that you will be bringing over only selected tables and duplexing updates for a period of time before the rest of the system follows.
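For an INCREMENTAL migration, bringing a selected table over with the COPY statement might look like the following ARC sketch. The logon string, database/table, and archive file names are placeholders; verify the syntax against the ARC reference.

```
LOGON newtdpid/dbc,dbcpassword;

/* Copy a selected table from the source-system archive to the new system */
COPY DATA TABLES (Sales.Daily_Txn) FROM (Sales.Daily_Txn),
    RELEASE LOCK, FILE=SALESARC;

LOGOFF;
```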
So You Need to do Recovery? 10- 27
Migrating Across Release Levels
– Archive Database DBC after getting complete dictionary archive of source system.
– Restore Database DBC.
– Run Conversion Process for your release.
– Restore JOURNAL or DICTIONARY on PJ archives.
– Restore complete DICTIONARY archive.
– Restore/COPY User Databases and ROLLFORWARD if appropriate.
10- 28 So You Need to do Recovery?
Common Mistakes The facing page identifies some common mistakes that you should watch out for. Table-level archives do not restore RI constraints or Join Indices. Triggers cannot be COPY’ed, but rather need to be re-created.
So You Need to do Recovery? 10- 29
Common Mistakes
– Utility Locks left around (use RELEASE LOCK statement).
– Forgetting to archive something (query DBC.Events view).
– Using two “FILE=” clauses for large backups (archiving once followed by a tape-to-tape copy is better).
– Triggers cannot be copied—they need to be recreated.
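Clearing a leftover utility lock is a one-statement ARC job, sketched below with placeholder names (check the ARC reference for options such as OVERRIDE):

```
LOGON tdpid/dbc,dbcpassword;

/* Release a utility lock left behind by a failed archive/restore job */
RELEASE LOCK (Payroll) ALL;

LOGOFF;
```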
10- 30 So You Need to do Recovery?
Typical Tuning Areas—Mainframe Inspect the mainframe system logs for messages such as “100% of the 992 cells in use”. This indicates that the TDP ran out of statically allocated memory buffers for moving data between the channel device and user address space. By increasing the initial cell allocation you can get better throughput. Note that this is a function of the number of sessions and jobs you are running, so just because you tuned it a year ago doesn’t mean that it hasn’t become an issue again.
Another issue arises if you have jobs that define small blocksizes on the DBCLOG or archive files. Typically, the larger the blocksize, the better the throughput.
Exploit as many different channel paths/mainframe connections as you can. Scalable throughput advantages come from running mainframe backups on multiple channel connections; for example, use your development and test systems as well as the production system. From an operational standpoint, this works best if you have shared catalogs between the two mainframes (because then you don’t have to manually remember which tape vol-serials a job used or which mainframe the backup ran on when you want to restore).
It is not unusual for a shop to grow their warehouse to a point where lots of load jobs start competing with the backups—especially if you are performing database-level backups when there are a lot of tables in the database. If you see a long time lag between the “ARCHIVING …” message and the immediately preceding message in the ARCMAIN sysprint, that suggests a long wait to acquire the utility lock. Using negative dependencies in the scheduling system so the archive job and the associated load jobs aren’t running concurrently can solve this issue.
If there is a lot of mainframe background activity, it can slow down ARCMAIN. If you really have to run the job at peak throughput, making it non-swappable and giving it a higher CPU priority than most other work will make a difference.
Lastly, don’t assume that just because you have an automatic tape loader (ATL) that tape mount times aren’t an issue. Teradata backups tend to consume lots of tapes, and it is not unusual to have the ATL run out of scratch volumes. Even if the availability of scratch volumes is not an issue, some jobs might use 50 to 100 tapes even with compression. If your typical tape mount time with the ATL averaged one minute apiece, this amounts to an hour or an hour-and-a-half just devoted to mounting and un-mounting tapes! If time is an issue and you are willing to devote two tape drives to the job, the pre-mount of the next volume can help, but as a normal course of business it is probably better to partition the backup into multiple jobs that consume fewer tapes each. Study the syslog for the job to determine the average time to mount tapes and decide accordingly.
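The tape-mount arithmetic above is easy to sanity-check. A sketch with illustrative numbers:

```python
def mount_overhead_minutes(tape_count, avg_mount_minutes=1.0):
    """Total time one job spends mounting/un-mounting tapes."""
    return tape_count * avg_mount_minutes

# At one minute per mount, a 50-to-100 tape job spends roughly an hour to
# an hour and a half just on tape handling:
print(mount_overhead_minutes(50))   # 50.0
print(mount_overhead_minutes(100))  # 100.0
```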
So You Need to do Recovery? 10- 31
Typical Tuning Areas - Mainframe
– TDP Cell Allocation
– Blocksize of archive and DBCLOG files
– Utilize as many multiple paths through mainframe as possible
– Eliminate locking contention from concurrent load jobs
– “Host” priority levels and job contention
– Throughput of tape I/O device or excessive mount times
10- 32 So You Need to do Recovery?
Typical Tuning Areas—UNIX Nodes This is a short list of performance problems that occur frequently. Using local gateways dramatically decreases the number of “hops” the data takes across the BYNET. To do this, change scripts to use a logon string with a TDPID that forces the connection to the local gateway. (In the /etc/hosts file, include an alias asfcop1 for the local BYNET address; no other asfcop entries should appear. In the script, use the logon string “asf/user,password”. The hosts file on each node will differ, since the asfcop1 entry must be a local BYNET address.)
Generally, one job per node gives the best overall throughput.
Note: There are slot considerations for connecting the tape silo, etc. where running at least two jobs per node would be more cost-effective. This is true especially if you want to leverage the peaks and valleys in throughput introduced by many tables in the archive.
Ensure that the device is physically connected to the node on which the job is running.
The default is to checkpoint every 500 MB; this can be changed to as much as every 4 GB.
Collecting performance statistics may cause some overhead in a large system with many jobs running, as a small file is copied from client to master every minute. If traffic on the network is heavy, the delay for the file copy may cause a performance problem. (ASF2_PERF can be set to the number of seconds between collection intervals: 600 for every 10 minutes, for example.)
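For example, assuming ASF2_PERF is read from the environment by the performance collector (as the paragraph above suggests), lengthening the interval to 10 minutes might look like:

```shell
# Collect performance statistics every 10 minutes (600 seconds)
# instead of every minute, reducing client-to-master file-copy traffic.
export ASF2_PERF=600
echo "ASF2_PERF=${ASF2_PERF}"
```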
So You Need to do Recovery? 10- 33
Typical Tuning Areas – UNIX Nodes
– TDP local Gateways
– Utilize as many nodes as possible
– Device location and device type
– Number of sessions per job
– Checkpoint frequency
10- 34 So You Need to do Recovery?
Summary The facing page summarizes some important concepts in this module.
So You Need to do Recovery? 10- 35
Summary
– ARC is the utility for backing up and restoring data.
– Teradata Open Tape Backup (OTB) provides backup facilities in a non-mainframe environment.
– Establish standard job templates for types of scenarios appropriate for you.
– Tune the overall backup and recovery process and not just a single job.
– Train people to be aware of and avoid common mistakes.
10- 36 So You Need to do Recovery?
Notes:
Disaster Recovery 11- 1
Module 11
Disaster Recovery

After completing this module, you should be able to:

• Describe the components that provide Disaster Recovery.

• Identify the recovery journals that provide automatic data protection features.

• Explain the Fallback feature and how it offers data protection.

• Describe RAID levels you can use on a Teradata system and how they aid in protecting data.

• Describe the use of Hot Standby Nodes in system recovery.

• Discuss the benefits of large cliques in a Teradata configuration.
Table of Contents
DISASTER RECOVERY OVERVIEW ..................................................... 4
DUAL SYSTEMS ................................................................... 6
TERADATA QUERY DIRECTOR ........................................................ 8
ARCHIVE RECOVERY UTILITY (ARC) ................................................ 10
RESTORE OPERATIONS ............................................................ 12
DATA PROTECTION MECHANISMS .................................................... 14
TRANSIENT JOURNAL ............................................................. 16
FALLBACK PROTECTION ........................................................... 18
DOWN AMP RECOVERY JOURNAL ..................................................... 20
DISK ARRAYS AND RAID TECHNOLOGY ............................................... 22
HOT STANDBY NODES ............................................................. 24
HOT STANDBY NODES - EXAMPLE ................................................... 26
LARGE CLIQUES ................................................................. 28
PERMANENT JOURNALS—WHAT ARE THEY? ............................................. 30
PERMANENT JOURNAL SCENARIO .................................................... 32
TABLE X ....................................................................... 34
TABLE Y ....................................................................... 36
TABLE Z ....................................................................... 38
ARCHIVE POLICY ................................................................ 40
ARCHIVE SCENARIO .............................................................. 42
AFTER RESTART PROCESSING COMPLETES ............................................ 44
AFTER RESTART COMPLETES ....................................................... 46
TABLE X RECOVERY .............................................................. 48
TABLE Y RECOVERY .............................................................. 50
TABLE Z RECOVERY .............................................................. 52
AFTER RECOVERY ................................................................ 54
PERMANENT JOURNAL USAGE SUMMARY ............................................... 56
DATA PROTECTION SUMMARY ....................................................... 58
REVIEW ........................................................................ 60
REFERENCES .................................................................... 62
Disaster Recovery Overview

As Data Warehouse implementations become increasingly mission-critical, the requirements for availability approach those of operational systems.
Business Continuity is a comprehensive term that applies to all the things you do to respond to a large-scale catastrophic event, as well as to the things you do to ensure that your system does not go down at all. Let's take a holistic view: protection from the isolated failures that could occur, plus products and services to address those situations when a large-scale disaster outage happens. Teradata systems are architected to eliminate single points of failure. From dual power supplies to a redundant BYNET, there are no critical single points of failure in the hardware. The massively parallel processing system architecture is designed to reduce outages and respond to events. And in the unlikely event of an isolated outage, Teradata offers a variety of solutions to address it.
By the very nature of disasters, we are talking about large-scale catastrophic events that result in long term down time of the entire system or even the entire data center. These types of events require a multi-system approach to protection. So regardless of how highly available your system is, you need a Multi System Business Continuity plan to address disaster recovery.
Customers need multiple options to solve their business needs, factoring in multiple elements. There is no “one size fits all” solution. Consider:

• Time – How much downtime is tolerable?
• Cost – What are you willing to spend?
• Technology – System-wide recovery standards?
• Security – Do security policies dictate or eliminate a solution?
• Application Architecture – Does the architecture dictate or eliminate a solution?
• Data Synchronization – What are the data volume and data freshness requirements?
Disaster Recovery Overview
[Slide: Business Continuity Solutions. Elements shown: Hardware Architecture, System Architecture, and Enterprise System Support supporting Single System Availability; Performance Continuity through Hot Standby Nodes and Large Cliques; Data Management (Fallback, BAR, Journaling) with recovery in days; Dual Systems and Recovery Centers (London, Dayton, San Diego) with recovery in hours; Teradata Query Director; Implementation Services.]
Dual Systems

As the use of your Teradata Warehouse becomes more critical to the day-to-day operation of your business (Active Data Warehousing), there is a growing need for higher availability than can be achieved by a single system, and for disaster protection of the data.
Dual Systems configurations are implemented to support Business Continuity. There are three aspects of Business Continuity. They are:
• Recoverability: Products and services to restore systems or data after an outage.
• Availability: Products and services to prevent outages or the impact of outages.
• Performance Continuity: Products and services to maintain performance objectives.
Teradata Dual Systems (Active-Active) is the only Teradata offering that supports customer requirements in all three areas.
From a Recoverability standpoint, it is the requirement to survive a total disaster to a single system that drives the need for dual systems, so the two systems will typically be placed in separate locations. From an Availability standpoint, it is the requirement to minimize planned and unplanned downtime and maximize the use of available resources that puts requirements on the dual system solution. It is also the goal of many organizations to be able to use all available system resources to ride through periods of peak load on the system. Teradata Dual Systems, in Active-Active mode, supports all three business continuity requirements.
Some requirements more specifically define the desired level of disaster protection, e.g.:
• Critical applications must be operational within two hours after a disaster.
• Updates may be lost if they happened within a few minutes before the disaster.
• It must be possible to protect just a critical subset of all the data in the Teradata Database system.
These are business-driven requirements, which often influence the cost of the dual system solution. It is difficult, but possible, to quantify the value of meeting these requirements. Teradata offers a Business Impact Analysis engagement to assist customers in financially quantifying the impact of downtime and their specific recoverability and availability requirements.
Dual Systems
[Slide: Dual Systems configuration. Source Systems feed Teradata System A and Teradata System B through an ETL/Messaging Layer using Dual Apply and Cross Feed. Users and applications connect through Teradata Query Director. Also shown: a Subscriber Teradata system, a Dependent Data Mart, an Operational Data Store, Data Dictionary Maintenance & Synchronization, and Dual Systems Monitoring & Control.]

Supports:
• Recoverability: Products and services to restore systems or data after an outage.
• Availability: Products and services to prevent outages or lessen the impact of outages.
• Performance Continuity: Products and services to maintain performance objectives.
Teradata Query Director

Teradata Query Director (TQD) is a product that routes application requests to Teradata systems. It can therefore reroute application requests to the alternate system if the system where the user/application was logged on fails. In the normal situation, where both systems are working, Teradata Query Director may balance the query load across the Teradata systems. Teradata Query Director may also be used to force a user/application onto a specific data warehouse.
In this process Teradata Query Director may take many different constraints into consideration:
• The Teradata systems that Teradata Query Director knows about.
• The status of these Teradata systems: Are they available and running? What is the current (observed) load on each Teradata system?
• The status of the user/account that the application connects to on these Teradata systems: Does it exist? Is logon currently allowed?
• The preferences that a user/account may have with respect to Teradata systems.
• The state of the data being queried on each Teradata system.
The policies used by Teradata Query Director to route requests to working data warehouses may be extended with new releases of Teradata Query Director. It is not the objective of this chapter to describe all of these details. Consult the latest TQD documentation for specifics.
The first release of Teradata Query Director comes with a mandatory CS Implementation service. This service installs the software and customizes it to support the routing protocols required for each user or application. TQD runs on a Windows 2000 platform and should be located out on the network (as opposed to co-located with the Teradata server) in order to provide maximum failover protection during a disaster.
Teradata Query Director works with Teradata Database V2R4.1.3, V2R5, and V2R5.1. Each system must be one of these releases, but they do not need to be on the same release.
Systems with V2R5.1 come with logon encryption enabled by default. Since Teradata Query Director does not support logon encryption in the first release, all sessions routed through Teradata Query Director will have logon encryption disabled. All users or applications accessing the Teradata server directly will continue to have logon encryption enabled. Future releases of Teradata Query Director will resolve the logon encryption concern.
Teradata Query Director
[Slide: Teradata Query Director. The same dual-system configuration as on the Dual Systems slide, with Teradata Query Director positioned between the users/applications and Teradata Systems A and B.]

• Routes queries between multiple Teradata systems
• Provides support for Dual-Active, including Active-Active or Active-Passive
Archive Recovery Utility (ARC)

The Archive/Recovery (ARC) utility writes and reads sequential files on a Teradata client system to archive, restore, recover, and copy Teradata table data. ARC does the following:
• Archives a database or individual table from Teradata to a tape.
• Restores a database or individual table to Teradata from a tape.
• Restores an archived database or table to a Teradata Database other than the one from which it was archived (e.g., COPY).
• Places a checkpoint entry in a journal table.
• Recovers a database to an arbitrary checkpoint by rolling it back or forward using change images from a journal table.
• Deletes a change image row from a journal table.
How ARC Works

ARC creates files when you archive databases, individual data tables, or permanent journal tables from Teradata, and uses those files to restore databases, individual data tables, or permanent journal tables back to Teradata. ARC includes recovery with rollback and rollforward functions for data tables defined with a journal option. You can checkpoint these journals with a synchronization point across all AMPs, and you can delete selected portions of the journals.
The archive task archives information from the Teradata system onto some type of portable storage media. The restore function reverses the archive process and moves the data from the storage media back to the database. The recovery feature uses information stored in permanent journals to roll back or roll forward row information.
Invoking ARC

ARC runs in either online or batch mode under MVS, VM, or Windows NT. Although ARC is normally invoked in batch mode, it can be run interactively. ARC does not provide a user-friendly interface in online sessions. ARC is invoked by calling the program module ARCMAIN.
ARC usage and commands are covered in a later module.
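As a preview, a minimal ARC archive job script might look like the following sketch (the tdpid, user name, password, and database name are all hypothetical; the exact commands and options are covered in the later module):

```
LOGON tdpid/dbadmin,secret;
ARCHIVE DATA TABLES (payrolldb) ALL,
  RELEASE LOCK,
  FILE = ARCHIVE;
LOGOFF;
```

FILE = ARCHIVE refers to an output file defined in the job's runtime environment (for example, a DD statement on MVS).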
Archive Recovery Utility (ARC)
• Archive – Captures user data on portable storage media.
• Restore – Restores data from portable storage media.
• Recovery – Recovers changes to data from permanent journal tables.
• Common uses of ARC include recovery from:
  – Disaster recovery
  – Loss of an AMP's vdisk for non-fallback tables
  – Loss of multiple AMPs' vdisks in the same cluster
  – Failed batch processes
  – Accidentally dropped tables, views, or macros
  – Miscellaneous user errors

ARC resides on the host and backs up or restores to and from channel or network hosts.

Third-party products, Veritas NetBackup and BakBone NetVault, also provide an interface to tape backup from the nodes.
Restore Operations

A restore operation transfers database information from archive files backed up on portable storage media to all AMPs, clusters of AMPs, or specified AMPs.
Data Definitions

You can restore archived data tables to the database if the data dictionary contains a definition of the entity you want to restore.
For example, if the entity is a database, that database must be defined in the dictionary. Or, if the entity is a table, that table must be defined in the dictionary. You cannot restore entities not defined in the data dictionary.
A dictionary table archive contains all table, view, and macro definitions in the database. A restore of a dictionary archive restores the objects; however, it does not restore any data.
Restore usage and commands are covered in a later module.
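As a preview, a minimal ARC restore job script might look like this sketch (the tdpid, user name, password, and database name are all hypothetical; remember that the entity being restored must already be defined in the data dictionary):

```
LOGON tdpid/dbadmin,secret;
RESTORE DATA TABLES (payrolldb) ALL,
  RELEASE LOCK,
  FILE = ARCHIVE;
LOGOFF;
```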
Restore Operations
• Restore operations transfer information from archive files to AMPs.
• Data Definitions
  – Database archives contain dictionary definitions.
  – Dictionary table archives contain dictionary definitions.
• Replacing Objects
  – ALL AMP archives contain data and dictionary definitions.
  – Restore operations replace both.
• Copy Objects
  – Copy a table from one database to another.
Data Protection Mechanisms

The Teradata system offers a variety of methods to protect data. Some methods are automatically activated when particular events occur in the system. Other data protection methods require that you set options when you create tables. Each data protection technique offers different types of advantages under different circumstances.
Transient Journal

The transient journal maintains snapshots of rows in tables before you or other users make changes to them. If the transaction fails, or if you abort the request, the transient journal copies its snapshot back into the existing table, which rolls back any changes the failed transaction may have made to the table.
Note: Permanent Journals are described in another module.
Fallback Protection

Fallback protection is an optional data protection feature that you activate with the CREATE or MODIFY commands. Fallback provides data-level protection by automatically creating a copy of each row on a fallback AMP. If the primary AMP fails, the system can access the fallback copy. The fallback feature allows automatic recovery using the Down AMP Recovery Journal once the down AMP comes back online. Fallback-protected tables occupy twice the space in your system as non-fallback tables.
Down AMP Recovery Journal

The Down AMP Recovery Journal supports fallback protection. If a primary AMP fails, the fallback feature allows automatic data recovery using the Down AMP Recovery Journal (consisting of these two journals: DBC.ChangedRowJournal and DBC.OrdSysChngTable).
RAID 1

This technique creates data redundancy through disk mirroring, which means that data on one disk is identical to the information on another disk. If one disk fails, the alternate disk takes over. The user experiences no downtime.
RAID 5

RAID 5 protects data with a technique called "data striping and parity." Data is striped across multiple disks while the parity of each piece of data is preserved so that the system can determine whether any data is missing. This allows the system to rebuild any missing data if a single disk fails. As with RAID 1, the user experiences no downtime.
Data Protection Mechanisms
• Transient Journal
  – Takes a snapshot of a row before a change is made.
  – Copies the snapshot back to the table if the transaction fails.
• Fallback Protection
  – Optional data protection feature.
  – Creates a copy of each row on a fallback AMP.
• Down AMP Recovery Journal
  – Supports fallback protection.
  – Allows automatic recovery if an AMP fails.
• RAID 1
  – Data redundancy through disk mirroring.
  – Alternate disk takes over if one drive fails.
• RAID 5
  – Protects data using striping and parity.
  – System rebuilds data if a single disk fails.
Transient Journal

The transient journal provides protection against failures that may occur during a transaction. The transient journal is a system file, stored in user DBC in the form of a table, called DBC.TransientJournal.
Each time a user submits an INSERT, UPDATE or DELETE statement that changes the information in an existing table, the system inserts a new row into the transient journal. The new row is a snapshot of the existing row before any changes were made to it. This is referred to as a “before image.”
Normally, the change is successful and the before image is deleted from the transient journal as soon as you commit the transaction. If the transaction fails, or the user aborts the request, a before image is called up from the transient journal and applied to the existing table. The before image reverses, or rolls back, the undesired change made to the table.
The transient journal does not require any user input. It is always in effect. Each AMP maintains its own transient journal. Since the transient journal deletes entries that are committed, there is no user maintenance required to keep the size of the journal small. Disk space for the transient journal comes out of user DBC, and obsolete rows are periodically deleted by an AMP background task.
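The rollback behavior can be illustrated with an explicit transaction in Teradata (BTET) mode; the table and column names below are hypothetical:

```sql
BEGIN TRANSACTION;

UPDATE employee
SET salary = salary * 1.10
WHERE dept_no = 401;

/* If a failure or an ABORT occurs before the transaction ends, the
   before images held in DBC.TransientJournal are applied to the table,
   rolling the UPDATE back. */

END TRANSACTION;  /* On commit, the before images are discarded. */
```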
Contents of DBC.TransientJournal

In addition to the rows added for UPDATE and DELETE statements, the transient journal tracks other types of data changes as well. The list below describes all of the transactions written to the transient journal:
• Control records for DROPs and CREATEs
• Before-change images for UPDATEs and DELETEs
• RowIDs for INSERTs
• BEGIN TRANSACTION and END TRANSACTION images
Refer to the Teradata RDBMS Database Design manual for more information.
Transient Journal
[Slide: on AMP 1, transactions that change rows in Table A, Table B, and Table C write before images from the data table(s) into the transient journal.]

Each AMP maintains its own transient journal rows.
The journal automatically rolls back failed transactions.
An AMP background task periodically deletes obsolete journal rows.
Fallback Protection

Fallback protection is an optional data protection feature that you define with the CREATE or MODIFY commands. Fallback provides data-level protection by automatically creating a copy of each row on another AMP in the same cluster. If the primary AMP fails, the system uses the fallback copy of the data. The fallback feature allows automatic recovery using the Down AMP Recovery Journal once the down AMP comes back online. If a disk needs to be replaced, a Table Rebuild is required to build the table headers and any fallback tables on the failed AMP.

A cluster is a group of AMPs (anywhere from two to sixteen) that provide fallback capability for each other. A copy of each row is stored on a separate AMP in the cluster. A large system usually consists of many of these AMP clusters. A small cluster size reduces the chances of a down AMP causing a non-operational configuration, while a large cluster size causes less performance degradation while an AMP is down. Normally, there are four AMPs in each cluster.

If you activate RAID 1 or RAID 5, you may not want to use fallback protection for all of your data. It might be more cost-effective in terms of disk space to activate fallback protection only for those tables where an added measure of protection is needed, in case of a software failure or the loss of two disks in a rank, which RAID 5 and RAID 1 cannot protect you from.
Activating Fallback Protection

The following SQL statements demonstrate how to activate the fallback option using CREATE and MODIFY:

CREATE USER maxim
AS PERMANENT = 1000000,
PASSWORD = mxm,
FALLBACK;

MODIFY USER maxim AS FALLBACK;
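Fallback can also be specified at the table level, which supports protecting only selected tables rather than everything a user owns (the table definition below is hypothetical):

```sql
CREATE TABLE payroll, FALLBACK
  (emp_no  INTEGER
  ,salary  DECIMAL(10,2))
UNIQUE PRIMARY INDEX (emp_no);

/* Remove (or later re-add) fallback on an existing table: */
ALTER TABLE payroll, NO FALLBACK;
```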
AMPs are virtual processors (vprocs), so the AMPs themselves cannot experience a hardware failure. If an AMP loses two disks in a rank, it will be unable to access its data; this is the only situation in which an AMP will stay down. Two down AMPs in the same cluster cause the Teradata Database to halt. A software problem can cause a vproc to go down and the database to restart, but as long as the AMP can access its disk, it should come back up during the restart.
Refer to the Teradata RDBMS Database Design and Database Administration manuals for more information.
Fallback Protection
• Fallback rows go to a different AMP in the same cluster.
• Two AMP failures in the same cluster halt the database.
• Two disks in the same rank that fail cause an AMP failure.
• A software failure will not keep the AMP down if its disks are okay.
Cluster 1:
  DISK/AMP 0 – primary rows: 1, 9, 17; fallback rows: 2, 3, 4
  DISK/AMP 1 – primary rows: 2, 10, 18; fallback rows: 1, 11, 12
  DISK/AMP 2 – primary rows: 3, 11, 19; fallback rows: 9, 10, 20
  DISK/AMP 3 – primary rows: 4, 12, 20; fallback rows: 17, 18, 19

Cluster 2:
  DISK/AMP 4 – primary rows: 5, 13, 21; fallback rows: 6, 7, 8
  DISK/AMP 5 – primary rows: 6, 14, 22; fallback rows: 5, 15, 16
  DISK/AMP 6 – primary rows: 7, 15, 23; fallback rows: 13, 14, 24
  DISK/AMP 7 – primary rows: 8, 16, 24; fallback rows: 21, 22, 23
Down AMP Recovery Journal

The Down AMP Recovery Journal captures changes to fallback-protected tables while an AMP is out of service. An AMP is placed out of service if two physical disk failures occur in a single rank. The AMP remains out of service until the disks are replaced and the data is reconstructed using Table Rebuild.

The Down AMP Recovery Journal enables automatic recovery of the data from the other AMPs in the fallback cluster. It consists of two system files stored in system user DBC: DBC.ChangedRowJournal and DBC.OrdSysChngTable.
Each time a change is made to a fallback protected row whose copy resides on a down AMP, the Down AMP Recovery Journal stores the table ID and row ID of the committed changes. When the AMP comes back online, the system opens the Down AMP Recovery Journal to update, or roll forward, any changes made while the AMP was down.
The Down AMP Recovery Journal may update a primary AMP with changes previously made on a fallback AMP, or it may update a fallback AMP with changes previously made on a primary AMP.
The Down AMP Recovery Journal ensures that the information on fallback and primary AMPs is identical. Once the transfer of information is complete, the Down AMP Recovery Journal is discarded. Space for the Down AMP Recovery Journal comes from system user DBC.
Down AMP Recovery Journal
• When an AMP goes down, the Down AMP Recovery Journal is opened on the other AMPs in the cluster.
• Recovery is started on the recovering AMP using fallback rows to replace primary rows, and primary rows to replace fallback rows.
• The journal is automatically deleted when recovery is complete.

When an AMP is offline or out of service, the Down AMP Recovery Journal stores RowIDs of rows changed in fallback-protected tables.
[Slide: four AMPs (AMP 0 through AMP 3) connected by the BYNET. While one AMP is down, the other AMPs in the cluster record Table ID / Row ID (TID/RID) entries in the Down AMP Recovery Journal.]
Disk Arrays and RAID Technology

A disk array is a configuration consisting of a number of drives that use specialized disk controllers to manage and distribute data and parity across disks, while providing fast access and data integrity.
A disk array implementation is a parallel collection of disk drives connected through an array controller board (by a SCSI interface) to an SMP or MPP node. The processors do not directly access the disks, but instead issue requests for data on logical units maintained by the array controller.
There are numerous recording techniques that you can use with disk arrays. Each technique offers different degrees of data protection. Within the industry, these recording techniques are known as RAID. The two RAID levels supported by Teradata are RAID 1 and RAID 5. Both RAID levels provide data protection in the event of a single disk failure.
Benefits of RAID technology include:
• Reduced I/O bottleneck by distributing the I/O load across multiple drives.
• Improved performance by transferring data blocks concurrently using multiple controller processors.
• Offloaded workload from the host using sophisticated array controllers.
Note: Because of the low cost and large capacity of today’s disk drives, many shops are electing to implement RAID 1 because the disk savings of RAID 5 no longer outweigh the performance benefits of RAID 1.
Disk Arrays and RAID Technology
Example of WES 6288 (WorldMark Enterprise Storage) Disk Array (Logical View)

[Slide: two disk array controllers (DACs) manage Ranks 0 through 4, each rank containing drives 0 through 4. Drive Group 1 is configured as RAID 5; Drive Groups 2 & 3 are configured as RAID 1.]

RAID 1 - Data redundancy through disk mirroring.
RAID 5 - Protects data using striping and parity.
Hot Standby Nodes

A Hot Standby Node is an "extra" node configured in a system that is activated only if a node in the clique fails. Hot Standby Nodes provide performance protection in the event of a node failure. You can configure your system with Hot Standby Nodes and eliminate any performance degradation to end users due to the failure of a single node. If a node fails, the work assigned to that node is completely redirected to the Hot Standby Node. Once the failed node is recovered, it remains in a NULL state; a restart is required to bring the node back online and migrate vprocs back to their home node.
Hot Standby Nodes and Large Cliques enhance the solution with performance protection in the unlikely event of a node failure:

• They eliminate or reduce system performance degradation during recovery for only 5% - 10% of the overall system price.
• They are easy to deploy.
• They are available for new NCR 5350 systems.
Teradata survives node outages by migrating vprocs to the remaining nodes in a clique, but performance is degraded while the vprocs are migrated, and a restart is required to bring the node back online and migrate the vprocs back to their home node. Restarts are particularly disruptive to long-running queries and response-time-sensitive workloads:
• Nodes are becoming abundant.
• Adding Hot Standby Nodes:
  – Eliminates degradation after a node outage
  – Eliminates the restart to bring a node back into service
• A Hot Standby Node:
  – Is not normally part of the TPA
  – Belongs to a clique
  – Joins the TPA if a node in its clique goes down
  – Remains in the TPA until no longer needed
Hot Standby Nodes
• Not normally part of the TPA
• Belongs to a clique
• Joins the TPA if a node in its clique goes down
• Remains in the TPA until no longer needed
Hot Standby Nodes - Example

A Hot Standby Node eliminates the performance degradation associated with the loss of a node. Bringing a Hot Standby Node into the TPA keeps the ratio of vprocs to nodes constant.

A Hot Standby Node also eliminates the need to force a second TPA reset after a down node comes back up. Normally the second TPA reset would be desirable because system performance is degraded while there is a down node; a Hot Standby Node eliminates that performance degradation.
There are now two parameters defining the size of a clique:

• How many normal nodes are in the clique (2, 3, 4, …)
• How many Hot Standby Nodes are in the clique (0, 1, …)

By convention we talk about N + S node cliques. For example:

• A 4 + 0 node clique has 4 normal nodes and no Hot Standby Nodes.
• A 3 + 1 node clique has 3 normal nodes plus one Hot Standby Node.
The implementation allows more than one Hot Standby Node per clique, but it is expected that 1 Hot Standby Node per clique will suffice.
• When a Hot Standby Node is not needed, it is excluded from the TPA and goes to NULL (NULL/STANDBY). This is similar to the way a late arriver behaves.
Hot Standby Nodes - Example
Eliminates the need to force a second TPA reset after a down node comes back up
Large Cliques

With Fibre Channel switches, a clique can have more nodes and disk arrays. Clique configurations with up to 8 nodes and up to 8 arrays can be configured using a 16-port FC switch.
Large Cliques minimize the impact of a single node failure on system performance -- providing performance continuity to your end users. A hardware advantage of a switch is that it provides multiple paths to controllers for better fault resilience.
The smallest clique configuration for a system that uses Fibre Channel switches is 4 x 4 (4 nodes and 4 disk arrays); the largest is an 8 x 8 configuration. NCR supports different configurations and expansions up to an 8 x 8 configuration.

An existing system that does not use Fibre Channel switches (not enabled for Large Clique support) cannot be upgraded at a customer site. However, existing switched systems can be upgraded with more nodes and arrays if they are not already at the maximum 8 x 8 configuration.
Large Cliques
Minimize the impact of a single node failure on system performance -- providing performance continuity to end users.
A hardware advantage of a switch is that it provides multiple paths to controllers for better fault resilience.
Permanent Journals—What are They?

The purpose of a permanent journal (PJ) is to protect user data by maintaining a sequential history of all changes made to the rows of one or more tables. A permanent journal can capture a snapshot of rows before a change, after a change, or both.
When you create a new journal table, there are options you can use to control the type of information the table captures.
A permanent journal provides four options:
Single Image: Captures/stores one copy of the data.
Dual Image: Captures/stores two separate copies of the data, one on the primary AMP and one on the backup AMP.
Before Image: Captures/stores row values before a change occurs.
After Image: Captures/stores row values after a change occurs.
Unlike transient and recovery journals, permanent journal options capture and store all changes whether committed, uncommitted, or aborted. In addition, journal maintenance and activity are under user control. A PJ requires permanent space.
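These options map directly onto Teradata DDL. A minimal sketch, with hypothetical names (PJDB, Jrnl_X, Table_X) and sizes:

```sql
-- A journal table lives in a database that has PERM space and names it
-- as the default journal table.
CREATE DATABASE PJDB AS
  PERM = 20e6,
  DEFAULT JOURNAL TABLE = PJDB.Jrnl_X;

-- Table-level options select the image types captured in the journal.
CREATE TABLE PJDB.Table_X,
  FALLBACK,
  DUAL BEFORE JOURNAL,   -- two copies of pre-change images
  DUAL AFTER JOURNAL     -- two copies of post-change images
  (col1 INTEGER);
```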
Journaling Functions Journal tables can protect against:
• Loss of data caused by a disk failure in a table that is not fallback or RAID protected
• Loss of data if two or more AMPs fail in the same cluster. (This would mean the loss of two disks in a rank per failed AMP.)
• Incorrect operation of a batch or application program
• Disaster recovery of an entire system
• Loss of changes made after a data table is archived
• Loss of one copy of the journal table (Dual journal)
The permanent journal allows disaster recovery of an entire system.
Permanent Journals—What are They?

• Permanent journals
  – Provide protection for software and hardware failures.
  – Store committed, uncommitted, and aborted changes.
  – Require user management of journal tables.

• Permanent journal options
  – Single before-change image: BEFORE
    • Captures images before a change is made.
    • Protects against software failures.
    • Allows rollback to a checkpoint.
  – Single after-change image: AFTER
    • Captures images after a change is made.
    • Protects against hardware failures.
    • Allows rollforward to a checkpoint.
  – Dual image: DUAL BEFORE or DUAL AFTER
    • Maintains two image copies.
    • Protects against loss of journals.
  – Keyword JOURNAL with no other keywords captures single before and after images.
Permanent Journal Scenario

In the following scenario, assume that a user has three tables in a four-AMP system. Each table has its own data protection features and is stored across all four AMPs. The diagrams on the following pages illustrate the data protection features in effect for each table.
Permanent Journal Scenario
A Tale of Three Tables
A user has three data tables:
Table X: Fallback; Before and After Image Journals
Table Y: No Fallback; No Before and Dual After Image Journals
Table Z: No Fallback; Single Before and Single After Image Journals
Table X

Table X is defined as having fallback, dual before images, and dual after images.
Table X

[Diagram: AMPs 1–4. Each AMP holds its own primary rows of Table X (AMP 1 holds rows "1", AMP 2 rows "2", and so on), fallback rows for the other three AMPs, a dual set of after images (its own rows plus the other AMPs' rows), and a dual set of before images.]

✓ Fallback
✓ Dual before images
✓ Dual after images
Table Y

Table Y has no fallback protection, but has dual after image journaling defined.
Table Y

[Diagram: AMPs 1–4. Each AMP holds its own primary rows of Table Y plus two after-image journals: one for its own rows and one for a neighboring AMP's rows (AMP 1 holds AFTER 1 and AFTER 2, AMP 2 holds AFTER 2 and AFTER 3, and so on).]

✓ No fallback
✓ Dual after images
Table Z

Table Z has no fallback protection. This table has single before and after images.
Table Z

[Diagram: AMPs 1–4. Each AMP holds its own primary rows of Table Z, a single before-image journal for its own rows, and a single after-image journal for another AMP's rows (AMP 1 holds BEFORE 1 and AFTER 2, AMP 2 holds BEFORE 2 and AFTER 3, and so on).]

✓ No fallback
✓ Single before images
✓ Single after images
Archive Policy

The company established an archive policy to cover any data loss in the event of a site disaster. The archive policy has two components:
• Daily archive procedures
• Weekly archive procedures
Daily Archive Procedures

Each day the administrator submits a CHECKPOINT WITH SAVE command for each journal table. This appends any changes stored in the active journal subtable to the saved journal subtable and initiates a new active journal subtable. The administrator then archives each current journal and deletes the contents of the saved journal subtable.
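The daily procedure can be sketched as a single ARC script. Database, journal, and file names here are hypothetical, and exact statement syntax should be confirmed in the Teradata Archive/Recovery Reference:

```sql
LOGON tdp1/sysdba,password;
CHECKPOINT (SYSDBA.Jrnl_X), WITH SAVE;   /* active subtable -> saved subtable */
ARCHIVE JOURNAL TABLES (SYSDBA.Jrnl_X),
  RELEASE LOCK,
  FILE=JRNLMON;                          /* archive the current journal */
DELETE SAVED JOURNAL (SYSDBA.Jrnl_X);    /* empty the saved subtable */
LOGOFF;
```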
Weekly Archive Procedure

Each week the administrator submits an all-AMPs DUMP of all data tables. The command is set up so that only one table is dumped each day. By the end of the week, each table has been dumped once.
Archive Policy
DAILY
• CHECKPOINT ALL JOURNALS w/ SAVE.
• ARCHIVE JOURNAL TABLES.
• DELETE SAVED JOURNALS.
WEEKLY
• PERFORM ALL-AMPs DUMP of DATA TABLES
Archive Scenario

The company activated its archive policy and implemented daily and weekly backup procedures as scheduled. Each day the administrator archived journals X, Y, and Z.

On Monday, the administrator dumped data table X; on Tuesday, table Y; and on Wednesday, table Z. On Thursday, two drives failed in a rank.
Archive Scenario
Monday: Archive journals X, Y, and Z. Dump table X.
Tuesday: Archive journals X, Y, and Z. Dump table Y.
Wednesday: Archive journals X, Y, and Z. Dump table Z.
Thursday: Two disks fail in a rank on AMP 3.
After Restart Processing Completes

The administrator utilized restart procedures to replace the down AMP. The diagram on the following page outlines each restart step. Each restart procedure is explained below:
1. Replace the two disks.
2. Initialize the rank.
3. Format RAID 5.
4. Rebuild the AMP. (Use vprocmanager to format the disk and build the Teradata file system.)
5. Restart the database.
6. When the AMP is online, perform a Table Rebuild on AMP 3.
7. Once AMP 3 is online, the system automatically restarts the Teradata RDBMS.
After Restart Processing Completes

[Diagram: AMPs 1–4 after restart. AMPs 1, 2, and 4 retain their Table X primary, fallback, and dual journal images, their Table Y primary rows and after-image journals, and their Table Z primary rows and journals. AMP 3 has been cleared and rebuilt.]

1. Replace the 2 disks.
2. Initialize the rank.
3. Format RAID 5.
4. Rebuild the AMP (vprocmanager).
5. Restart the database.
6. When the AMP comes online, Table Rebuild AMP 3.
7. The AMP is placed online and a restart is requested.
After Restart Completes

The diagram on the facing page shows the row information that the administrator recovered after executing the REBUILD and RESTART commands.

Table X is fully recovered. All primary and fallback rows are restored, and all before and after journal images are recovered as well. The administrator needs to perform additional recovery measures on table Y and table Z.
After Restart Completes

[Diagram: Table X is fully restored on all four AMPs: primary rows, fallback rows, and dual before and after images. For Table Y, AMPs 1, 2, and 4 hold their primary rows and after-image journals; AMP 3 holds only a table header plus the AFTER 3 and AFTER 4 journals rebuilt from the dual copies on the other AMPs. For Table Z, AMPs 1, 2, and 4 hold their primary rows and single-image journals; AMP 3 holds only a table header, with its primary rows and journals lost.]
Table X Recovery

Table X has primary and fallback rows restored. All journal images are also recovered.
Table X Recovery

[Diagram: AMPs 1–4, each again holding its Table X primary rows, fallback rows for the other three AMPs, and dual before and after images.]

FULLY RECOVERED
✓ Fallback
✓ Dual before images
✓ Dual after images
Table Y Recovery

The diagram on the facing page illustrates table Y after REBUILD and RESTART procedures.

The system used the journal tables stored on AMP 2 and AMP 4 to restore the two permanent journal tables stored on AMP 3. The primary table rows are still missing. The administrator needs to perform some interactive recovery procedures to fully recover the missing data for table Y on AMP 3.
The users will be unsuccessful if they attempt to access the row information from table Y. The following message may appear in response to an attempted SQL statement:
2642 AMP Down: The request against non-fallback Table_Y cannot be done.
Recovery Action
The administrator must perform the following steps to fully recover table Y:
1. Perform a single-AMP RESTORE of AMP 3 using Tuesday's DUMP of table Y to restore all data rows stored in the archive file from table Y.
2. Do NOT release the utility locks.
3. Restore Wednesday’s DUMP of journal Y for AMP 3.
4. Perform a single-AMP ROLLFORWARD on AMP 3 using the RESTORED journal from table Y. Doing so replaces the existing rows in table Y with any after-change images made since the last backup on Tuesday.
5. Use the DELETE JOURNAL command to delete restored journal Y. This action deletes all stored images from the restored journal.
6. Perform a single-AMP ROLLFORWARD on AMP 3 using the CURRENT journal from table Y. This step replaces existing table rows with any after-change images stored in the active and/or saved subtables of the permanent journal.
7. RELEASE all utility locks.
Table Y is now fully recovered. All its contents are now available to users.
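The seven steps above can be sketched as an ARC script. Object and file names are hypothetical, and the option keywords (AMP=n, USE RESTORED/CURRENT JOURNAL) should be verified against the Teradata Archive/Recovery Reference for your release:

```sql
LOGON tdp1/sysdba,password;
RESTORE DATA TABLES (SYSDBA.Table_Y), AMP=3,
  FILE=TUEDUMP;                          /* 1: Tuesday's table archive   */
                                         /* 2: do NOT release the locks  */
RESTORE JOURNAL TABLES (SYSDBA.Jrnl_Y), AMP=3,
  FILE=WEDJRNL;                          /* 3: Wednesday's journal       */
ROLLFORWARD (SYSDBA.Table_Y), AMP=3,
  USE RESTORED JOURNAL;                  /* 4 */
DELETE JOURNAL (SYSDBA.Jrnl_Y);          /* 5: drops the restored journal */
ROLLFORWARD (SYSDBA.Table_Y), AMP=3,
  USE CURRENT JOURNAL;                   /* 6 */
RELEASE LOCK (SYSDBA.Table_Y), AMP=3;    /* 7 */
LOGOFF;
```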
Table Y Recovery

Before Recovery: [Diagram: AMPs 1, 2, and 4 hold their Table Y primary rows and after-image journals; AMP 3 holds only a table header plus the AFTER 3 and AFTER 4 journals.]

1. Specific-AMP RESTORE of Tuesday's DUMP of Table Y.
2. Do NOT release utility locks.
3. RESTORE of Wednesday's DUMP of journal Y.
4. Specific-AMP ROLLFORWARD of Table Y using its restored journal.
5. DELETE restored JOURNAL Y.
6. Specific-AMP ROLLFORWARD of Table Y using its current journal.
7. RELEASE utility locks.

After Recovery: [Diagram: all four AMPs hold their Table Y primary rows and both after-image journals.]
Table Z Recovery

The first diagram on the facing page illustrates table Z after REBUILD and RESTART procedures.

Neither of the permanent journal subtables stored on AMP 3 was restored. In addition, the primary table information is still missing. The administrator needs to perform some interactive recovery procedures to fully recover the missing data for table Z on AMP 3.
Recovery Action

The administrator must perform the following steps to fully recover table Z:
1. Perform single-AMP RESTORE of AMP 3 using Wednesday's DUMP of table Z to restore all data rows stored in the archive file from table Z.
The administrator does not restore the journal tables for table Z since a complete backup of the table was performed on the same day as the journal archive. All changes through Wednesday would be in the archive of the entire table.
2. The administrator does NOT release the utility locks.
3. Perform a single-AMP ROLLFORWARD on AMP 3 using the CURRENT journal from table Z. This action replaces existing table rows with any after-change images stored in the active and/or saved subtables of the permanent journal. Any changes in the current journal would have occurred on Thursday before the disk failure.

4. Perform an all-AMPs DUMP of table Z to protect against a second disk failure in the same cluster. The administrator is unable to restore the journal for AMP 3 because dual image journaling was not chosen; another disk failure in this cluster would leave the data unrecoverable. To correct this, the administrator dumps the entire table, deletes the saved journal, and starts a new journal.

5. Perform a CHECKPOINT WITH SAVE and DELETE SAVED JOURNAL. The CHECKPOINT step moves any stored images from the active subtable to the saved subtable of the current journal and initiates a new active subtable. The DELETE step erases the contents of the saved subtable since they are no longer needed.
6. RELEASE all utility locks.
Table Z is now fully recovered. All its contents are now available to users. Notice that the table is recovered but the journals are not.
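The Table Z procedure differs from Table Y's only in skipping the journal restore and re-archiving the table afterward. A sketch under the same naming and syntax assumptions (verify exact keywords in the Archive/Recovery Reference):

```sql
LOGON tdp1/sysdba,password;
RESTORE DATA TABLES (SYSDBA.Table_Z), AMP=3,
  FILE=WEDDUMP;                          /* 1: Wednesday's table archive */
                                         /* 2: keep the utility locks    */
ROLLFORWARD (SYSDBA.Table_Z), AMP=3,
  USE CURRENT JOURNAL;                   /* 3: Thursday's changes        */
ARCHIVE DATA TABLES (SYSDBA.Table_Z),
  FILE=NEWDUMP;                          /* 4: fresh all-AMP archive     */
CHECKPOINT (SYSDBA.Jrnl_Z), WITH SAVE;   /* 5a */
DELETE SAVED JOURNAL (SYSDBA.Jrnl_Z);    /* 5b */
RELEASE LOCK (SYSDBA.Table_Z);           /* 6 */
LOGOFF;
```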
Table Z Recovery

Before Recovery: [Diagram: AMP 1 holds Table Z primary rows 1 with BEFORE 1 and AFTER 2; AMP 2 holds primary rows 2 with BEFORE 2 and AFTER 3; AMP 4 holds primary rows 4 with BEFORE 4 and AFTER 1; AMP 3 holds only a table header.]

1. Specific-AMP RESTORE of Wednesday's DUMP of Table Z.
2. Do NOT release utility locks.
3. Specific-AMP ROLLFORWARD of Table Z using its CURRENT journal.
4. Perform all-AMPs DUMP of Table Z.
5. Run CHECKPOINT WITH SAVE and DELETE SAVED JOURNAL.
6. RELEASE utility locks.

After Recovery: [Diagram: all four AMPs hold their Table Z primary rows; the journals are gone.]
After Recovery

The diagram on the next page shows the three tables after recovery. The following summary outlines the effects of permanent journals on recovery from a single disk failure.

Fallback Tables, Dual Image Journals (Table X)
• Processing continues.
• Journals play no part in recovery.
No Fallback Tables, Dual Image Journals (Table Y)
• Limited processing continues.
• Data and journal tables are fully recovered.
No Fallback Tables, Single Image Journals (Table Z)
• Limited processing continues.
• Data is fully recovered.
• Journals are lost.
No Fallback Tables, No Journals
• Limited processing continues.
The administrator can only recover data to the point of the last archive.
After Recovery

[Diagram: Table X on all four AMPs with primary rows, fallback rows, and dual before and after images; Table Y on all four AMPs with primary rows and dual after-image journals; Table Z on all four AMPs with primary rows only, its journals lost.]
Permanent Journal Usage Summary

The facing page contains some useful concepts on how permanent journals operate during recovery.
Permanent Journal Usage Summary

Fallback Tables: Data is fully recoverable. Journals play no part in recovery.

No Fallback Tables, Dual Image Journals: Data is partially available. Data and journals may be fully recoverable.

No Fallback Tables, Single Image Journals: Data is partially available. Data may be recovered, but journals are lost.

No Fallback Tables, No Journals: Data is partially available. Data can be recovered only to the point of the last archive.
Data Protection Summary

The opposite page summarizes some important concepts in this module.
Data Protection Summary

– The Teradata Database offers automatic and system administrator-activated methods to protect data.
– The Transient Journal automatically protects data by taking a snapshot of a row before a change to a table is made. It copies the snapshot back to the table if a transaction fails.
– Fallback protection is an optional data protection feature that creates a copy of each row on another AMP in the same cluster.
– The Down AMP Recovery Journal supports fallback protection and automatically recovers data when an AMP is out of service or fails.
– Redundant array of independent disks (RAID) technology provides two types of data protection:
  • RAID 1
    – Pairs of disk drives contain mirrored data.
    – For critical fault-tolerant transaction processing.
  • RAID 5
    – Reconstructs missing data.
    – Requires less disk space than RAID 1, but reconstructing data takes longer than switching to a mirrored disk.
Review

Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Review Questions

1. True or False: The Transient Journal and Down AMP Recovery Journal provide automatic data protection.

2. True or False: It can be more cost effective in terms of disk space to activate fallback protection for only those tables where an added measure of protection is needed.

3. True or False: While RAID 5 requires less disk space than RAID 1, a tradeoff of using RAID 5 is that, in the event of a failure, it takes longer to reconstruct data than to switch to a mirrored disk.

4. What does the Transient Journal store?

5. Why does the database halt if two AMPs in the same cluster are out of service, even when there is fallback protection?
References

For more information on the topics covered in this module:

• Teradata Archive/Recovery Reference (B035-2412-122A)
• Teradata RDBMS Database Design (B035-1094-122A)
• Teradata RDBMS Database Administration (B035-1093-122A)
• What is Dual Systems? White Paper #905-0004877 Rev. B
Module 12

Archiving Data

After completing this module, you should be able to:

• Explain how to use the ARC utility to back up data on portable media storage.
• Identify the function of Archive and Recovery utility statements.
• Identify the kind of utility locks placed during archive and recovery procedures, and identify statements used to release the locks when appropriate to do so.
Table of Contents

ARCHIVE RECOVERY UTILITY (ARC) .......... 4
ARCHIVE AND RECOVERY PHASES ............. 6
ARC VERSUS FASTLOAD ..................... 8
SESSION CONTROL ......................... 10
MULTIPLE SESSIONS ....................... 12
ARCHIVING STATEMENTS .................... 14
ARCHIVE STATEMENT ....................... 16
ARCHIVE TYPES ........................... 18
ARCHIVE OBJECTS ......................... 20
ARCHIVE LEVELS .......................... 22
ARCHIVE OPTIONS ......................... 24
INDEXES OPTION .......................... 26
GROUP READ LOCK OPTION .................. 28
TYPES OF ARCHIVE ........................ 30
DATABASE DBC ARCHIVE .................... 32
DATA ARCHIVING SUMMARY .................. 34
REVIEW QUESTIONS ........................ 36
REFERENCES .............................. 38
Archive Recovery Utility (ARC)

The Archive/Recovery (ARC) utility writes and reads sequential files on a Teradata client system to archive, restore, recover, and copy Teradata RDBMS table data. ARC does the following:
• Archives a database or individual table from Teradata to a tape.
• Restores a database or individual table to Teradata from a tape.
• Restores an archived database or table to a Teradata RDBMS other than the one from which it was archived (e.g., COPY).
• Places a checkpoint entry in a journal table.
• Recovers a database to an arbitrary checkpoint by rolling it back or forward using change images from a journal table.
• Deletes a change image row from a journal table.
How ARC Works

ARC creates files when you archive databases, individual data tables, or permanent journal tables from Teradata, and uses those files to restore the same objects back to Teradata. ARC includes recovery with rollback and rollforward functions for data tables defined with a journal option. You can checkpoint these journals with a synchronization point across all AMPs, and you can delete selected portions of the journals.
The archive task archives information from the Teradata system onto some type of portable storage media. The restore function reverses the archive process and moves the data from the storage media back to the database. The recovery feature utilizes information stored in permanent journals to Rollback or Rollforward row information.
Invoking ARC

ARC runs in either online or batch mode under MVS, VM, or Windows NT. Although ARC is normally invoked in batch mode, it can be run interactively. ARC does not provide a user-friendly interface in online sessions. ARC is invoked by calling the program module ARCMAIN.
Archive Recovery Utility
• Archive: Captures user data on portable storage media.
• Restore: Restores data from portable storage media.
• Recovery: Recovers changes to data from permanent journal tables.

• Common uses of ARC include recovery from:
  – Loss of an AMP's vdisk for no fallback tables
  – Loss of multiple AMPs' vdisks in the same cluster
  – Failed batch processes
  – Accidentally dropped tables, views, or macros
  – Miscellaneous user errors
  – Disaster recovery

ARC resides on the host and backs up or restores to and from channel or network hosts.
Archive and Recovery Phases

Archive and recovery jobs always operate in two phases. The steps of each phase are described on the facing page.
The archive process is intensive. You may want to create a user just for archive activities so that you can use your user ID to perform other actions while archive is running.
Archive and Recovery Phases

Phase 1—Dictionary Phase

1. Allocate an event number.
2. Issue a BEGIN TRANSACTION statement.
3. Resolve object names.
4. Check access rights.
5. Place locks:
   • Utility locks on data dictionary rows.
   • Utility locks on data rows.
6. Delete existing tables prior to a RESTORE.
7. Issue an END TRANSACTION statement.

Phase 2—Data Phase

1. Issue a BEGIN TRANSACTION statement.
2. Insert rows into RCEVENT and RCCONFIG.
3. Perform the operation.
4. Update RCEVENT.
5. Release locks (if user specified).
6. Issue an END TRANSACTION statement.
ARC versus FastLoad

You could consider running a FastLoad job to restore the information to disk. This would mean that instead of archiving to tape, you have used BTEQ EXPORT, FastExport, or some other means to store the information in a host file. FastLoad requires an empty target table.
FastLoad Steps to Restore a Table
1. FastLoad uses a single session to send the INSERT statement to the PE and AMPs.
2. Multiple sessions are then used to facilitate sending rows to the AMPs.
3. Upon receipt, each AMP hashes each record and redistributes it over the BYNET. This is done in parallel.
4. The receiving AMP then writes these rows directly to the target table as unsorted blocks.
5. When loading completes, each AMP sorts the target table, puts the rows into blocks, and writes the blocks to disk.
6. Then, fallback rows are generated if required. FastLoad operates only on tables with no secondary indexes.
7. You have to create any required indexes when the FastLoad is complete.
Recovery Steps

Recovering to the same configuration includes:
• Recovery of data blocks to the AMP.
• The blocks are already in the appropriate format.
Recovering to a different configuration includes:
• The block is first sent to the AMP that owned it in the old configuration.

• That AMP then strips off its own rows and forwards (redistributes) the remainder of the block to the appropriate AMPs in the new configuration. Since the original rows were sorted in data blocks by ROWID, the result is usually much faster than a normal redistribution.
ARC is the easiest and fastest method for recovering a very large number of objects. FastLoad operates on a table-by-table basis, while ARC can restore an entire machine with one command.
ARC versus FastLoad
As an alternative to using ARC to archive and restore data, you can export a table's data and then reload it with FastLoad when you need to restore the table.

Back up the data: FastExport versus Archive.

Restore to a different database or system: FastLoad versus Copy.

Restore to the same system: FastLoad redistributes rows to the target AMPs, sorts them, and builds table blocks; ARC sends blocks directly to the target AMPs a block at a time, and the tables are built.

Procedure: FastLoad works table-by-table only; ARC can restore an entire system with one command.
Session Control
You must log on to a Teradata system before you can execute other ARC statements. The user ID with which you log on must have access rights for the ARC statements that you want to use.
The facing page shows a session control example.
Session Control

The LOGON statement:
– Logs on 2 sessions: one for SQL statements and one for control requests.
– At a DUMP or RESTORE command, ARC starts additional sessions.
– Identifies the account to charge for resources.
– Identifies the user to Teradata and verifies ownership and access rights.

Access rights:
• CHECKPOINT: Permits you to execute both the SQL and ARC utility checkpoint statements.
• DUMP: Permits you to execute the ARC Dump (Archive) statement.
• RESTORE: Permits you to execute the following ARC statements: Restore, Rollforward, Rollback, Delete Journal, Release Lock*, and Build.

The LOGOFF statement:
– Ends all Teradata sessions logged on by the task.
– Terminates the utility.

* To release a lock held by another user, you must specify Override and hold DROP privileges on the underlying objects.
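A minimal ARC session built from the statements above; tdpid, user, and password are placeholders, and the ARCHIVE statement is the module's own example:

```sql
LOGON tdp1/backup_user,secret;   /* logs on the SQL and control sessions */
ARCHIVE DATA TABLES (Database1, ALL),
  RELEASE LOCK,
  INDEXES,
  FILE=archdb1;                  /* ARC starts additional sessions here  */
LOGOFF;                          /* ends all sessions and exits ARC      */
```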
Multiple Sessions

You can specify the number of archive and/or recovery sessions with which to work, or use the default. To set the number, use the SESSIONS runtime parameter.
The optimum number of sessions is:
• One per AMP for archive.
• Two per AMP for recovery.
The number of sessions to use can vary based on a number of factors. Several are described below. Two or three sessions per AMP are a good starting point.
The description on the following page tells more about how the AMPs use the sessions.
If fewer than one session per AMP is specified for the archive:
• For AMP groups, archive/recovery will archive blocks from each group with each AMP completed before the next starts.
• In this case, a large number of sessions allocated to recovery will not help recovery performance.
For larger configurations, say over 100 AMPs, specifying one session per AMP will not increase performance because of other limiting component(s).
In this case, for maximum throughput, cluster level operation is recommended with one session per AMP for involved AMPs. For example, if the system has 50 clusters with 4 AMPs each, you can partition it into two jobs with 25 clusters each and 100 sessions per job provided that your site has two (or more) tape drives available and enough host resources to run two jobs in parallel.
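The session count is set when ARCMAIN is invoked. A sketch of a batch invocation for one of the two cluster-level jobs above (script and log file names are hypothetical; the exact parameter spelling is in the Archive/Recovery Reference):

```
arcmain SESSIONS=100 < cluster_job1.arc > cluster_job1.log
```

Running two such jobs in parallel, each covering half the clusters, requires two tape drives and enough host resources for both.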
Multiple Sessions
– Teradata assigns each session to an AMP. All sessions stay with that AMP until all required data is archived. Then it will be moved to another AMP if necessary.
– Archive attempts to build blocks from each AMP in turn. The blocks are composed of complete database blocks.
– Data blocks from different AMPs are never mixed within the same archive block.
Archiving Statements

The ARC utility contains a number of commands to perform archive, restore, and recovery tasks. The commands on the facing page enable you to perform archive-related tasks.
Archiving Statements
LOGON Begins a session.
LOGOFF Ends a session.
ARCHIVE Archives a copy of a database or table to a host-resident data set/file.
CHECKPOINT Marks a journal for later archive or recovery activities.
RELEASE LOCK Releases host utility locks on databases or tables.
Example Archive Statement
ARCHIVE DATA TABLES (Database1, ALL)
  ,RELEASE LOCK
  ,INDEXES
  ,FILE=archdb1;
ARCHIVE Statement

The ARCHIVE statement allows you to back up database objects to host media (usually magnetic tape). The format for this statement is shown on the following page.

Note: ARCHIVE is the preferred term; the equivalent DUMP command is supported only for backward compatibility.
The ARCHIVE control statement allows you to specify the archive:
• Type
• Objects
• Levels
• Options
Example Archive Statement

ARCHIVE DATA TABLES (Database1, ALL)
  ,RELEASE LOCK
  ,INDEXES
  ,FILE=archdb1;
ARCHIVE Statement
Archive Types

The archive statement can back up only one table type at a time: data, dictionary, no fallback, or journal. Users must submit a separate archive statement for each. Below is a description of each archive type:
DATA TABLES Archives fallback, non- fallback, or both types of tables from all AMPs or clusters of AMPs.
DICTIONARY TABLES
Backs up the DD rows that describe the databases or tables archived during a cluster-level archive. If you archive a database, the archive includes table, view, and macro definitions. If you archive a table, the backup includes only table definition rows. DD information for permanent journals is not included.
NO FALLBACK TABLES
Run this archive type only to back up no fallback tables on an AMP that was down during a DATA TABLE archive. It completes the previous ALL AMP or cluster archive.
JOURNAL TABLES
Archives the dictionary rows and selected contents of the journal tables.
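For illustration only (the database, journal, and file names below are hypothetical, and the processor designation follows the PN convention shown later in this module), one ARCHIVE statement per archive type might look like this:

    ARCHIVE DATA TABLES (Payroll) ALL, RELEASE LOCK, FILE=arch01;
    ARCHIVE DICTIONARY TABLES (Payroll) ALL, RELEASE LOCK, FILE=dict01;
    ARCHIVE NO FALLBACK TABLES (Payroll) ALL, PN = 001-2, RELEASE LOCK, FILE=nfarch01;
    ARCHIVE JOURNAL TABLES (Payroll.PayJrnl), RELEASE LOCK, FILE=jrnl01;

Each type must be submitted as its own ARCHIVE statement, as noted above.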
Archive Types
ARCHIVE DATA TABLES
• Fallback tables
• Non-fallback tables
• All-AMP or cluster archive

ARCHIVE DICTIONARY TABLES
• DD rows to complement a cluster-level archive

ARCHIVE NO FALLBACK TABLES
• Non-fallback tables
• Archives AMP data missed during a previous all-AMP or cluster-level archive

ARCHIVE JOURNAL TABLES
• Journal tables
Archive Objects

The information backed up in an archive operation varies depending upon the type of object you select:
• Single database or table
• Multiple databases or tables
• All databases
Single Database Archive

An ALL AMP database archive backs up a wide range of DD information. It archives all objects that belong to the database, including views, macros, stored procedures, and the data tables themselves. The information archived for the data tables includes table, column, and index information as well as table headers and data rows. A table header is a row of information about the table that is kept in the first block of the table.
Database ALL Archive

A Database ALL archive backs up the parent database and all of its children. The backed-up objects are identical to those archived in a single database archive.
Single or Multiple Table Archives

For each table specified in the ARCHIVE statement, the ARC utility backs up table, column, and index information along with table headers and the actual data rows.
EXCLUDE Option

The EXCLUDE option changes the range of objects that the ARC utility archives. You can leave out a single database, a database and all of its children, or a range of alphabetically sorted databases.
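As a sketch (the object and file names are hypothetical), an archive that leaves out one child database might look like this:

    ARCHIVE DATA TABLES (Sysdba) ALL,
        EXCLUDE (TestDB) ALL,
        RELEASE LOCK,
        FILE=arch02;

Here TestDB and all of its children are skipped; a range such as EXCLUDE (DbA) TO (DbM) would skip an alphabetical range of databases instead.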
Archive Objects
DATABASENAME
An all-AMP archive of data tables that identifies a database. It archives all DD information for the identified database, including views, macros, and stored procedures. The archive also includes all table, column, and index information, as well as table headers and data rows. The ALL option archives all items listed above for the specified database, as well as all its descendents.

DATABASENAME.TABLENAME
An all-AMP table archive. It archives table, column, and index information, as well as table headers and data rows.

The EXCLUDE option allows you to exclude:
• a single database
• a database and all its descendents
• a range of alphabetically sorted databases
Archive Levels

The default archive level for any archive operation is all AMPs.
Normally, you do not specify an archive level in your ARCHIVE statement since ALL is the default. When an AMP is off-line during an all-AMP archive, non-fallback tables may only be partially archived.
You need to perform a single-AMP back up of NO FALLBACK TABLES to obtain a complete back up. Fallback tables are always completely archived even if an AMP is down, because there is either a primary or fallback copy of the data on another AMP.
The first ARCHIVE statement illustrated on the next page performs a backup of ALL of the NO FALLBACK TABLES that reside in the entire system on AMP 2. This AMP was down during the all-AMP archive. The user issued the ARCHIVE statement after the AMP came back on-line.
Cluster Archives

As an alternative to archiving data tables from all AMPs into a single archive, you can partition the archive into a set of archive files called a cluster archive. A cluster archive backs up data tables by groups of AMP clusters so that the complete set of archive files contains all data from all AMPs.
You can run a cluster archive in parallel, or schedule it to run over several days. It may be faster to restore a single AMP since the system has fewer tapes to scan to recover lost data.
In general, cluster archiving improves the archive and recovery performance of very large tables. In addition, it simplifies the restore process of non-fallback tables for a specific AMP.
A cluster archive does not contain any dictionary information. You must perform a DICTIONARY TABLE archive before you run a cluster archive for the first time, because Database DBC is automatically excluded for this kind of archive operation. You must run the dictionary table archive again any time there is a change in the structure of the tables in the cluster archive.
Cluster archives have two restrictions:
1. You cannot create a cluster archive of journal tables.
2. You cannot set up cluster archives when you are archiving Database DBC.
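As a sketch (the database, cluster numbers, and file names are hypothetical), a first-time cluster archive might therefore be run as a dictionary archive followed by per-cluster data archives:

    ARCHIVE DICTIONARY TABLES (Payroll) ALL, RELEASE LOCK, FILE=dict01;
    ARCHIVE DATA TABLES (Payroll) ALL, CLUSTER = 0, FILE=clus00;
    ARCHIVE DATA TABLES (Payroll) ALL, CLUSTER = 1, FILE=clus01;

The two CLUSTER jobs could run in parallel (given two tape drives and sufficient host resources) or on successive days.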
Archive Levels
• The system performs an all-AMP archive unless you specify a processor-level or cluster-level archive.

• Non-fallback tables may be only partially archived if an AMP is offline.

• Fallback tables are always completely archived, regardless of the configuration.

• Single-processor archives are only used to complete the archive of no fallback tables after a processor is restored to service:

    ARCHIVE NO FALLBACK TABLES (DBC) ALL, PN = 001-2;

• Cluster-level archives group data from one or more clusters into separate archive data sets. They either run in parallel or are scheduled to run over several days. A single AMP may be recovered in less time. Dictionary information must be dumped separately:

    ARCHIVE DATA TABLES (DBC) ALL, CLUSTER = 2;
Archive Options

The ARCHIVE statement includes a number of options. Each option is described below:
RELEASE LOCK Automatically releases Utility Locks if the operation completes successfully.
INDEXES For all-AMP archives only, this option specifies to include secondary indexes with the archive. You will need more time and media to archive objects with their secondary indexes.
ABORT Causes all-AMP or cluster archives to fail with error messages if an AMP is off-line and the objects to be archived include:
− No fallback tables
− Single image journals
NONEMPTY DATABASES Instructs the ARC utility to exclude users/databases without tables, views, etc., from the archive.
USE GROUP READ LOCK Permits you to archive while transactions update locked rows. You must define after-image journaling for the table during the time the archive is taking place.
Archive Options
• Release Lock
  – Utility locks are automatically released upon successful operation completion.

• Indexes
  – Restricted to all-AMP archives.
  – Includes secondary indexes with the archive.
  – Requires more time and media.

• Abort
  – Fails all-AMP or cluster archives and provides error messages if:
    • an AMP is off-line, AND
    • archived objects include no fallback tables, OR
    • archived objects include single-image journals.

• Nonempty Databases
  – Excludes users/databases without tables, views, etc., from the archive operation.

• Use Group Read Lock
  – Permits concurrent table archiving and transaction updates on locked rows.
  – Requires after-image journaling of the table.
Indexes Option

Archive operations do not automatically archive secondary indexes. The INDEXES option enables you to archive secondary indexes as part of the archive process.
The INDEXES option archives both unique and non-unique secondary indexes on all data tables. However, if an AMP is off-line, the utility only archives unique secondary indexes on fallback tables. It ignores the non-unique indexes. In addition, it does not archive any secondary indexes for non-fallback tables. For this option to be the most effective, it is best to use it when all AMPs are on-line.
The reverse process is true for restoring data that was archived with the INDEXES option. All indexes are restored if all AMPs are on-line. If an AMP is down, only unique secondary indexes are restored and only for fallback tables. No non-unique secondary indexes are restored. No indexes are restored for non-fallback tables.
Restrictions

You can only use the INDEXES option with all-AMP data table archive operations. The INDEXES option does not apply to dictionary, no fallback, and journal table archive operations. It is ignored in cluster or single-processor archive operations, as well as in an ARCHIVE statement that includes the GROUP READ LOCK option.
Recommendations

If you specify the INDEXES option, the time and media required to perform an archive increase. It will also take you longer to restore an archive created with the INDEXES option than to restore one created without it. However, it is usually quicker to restore secondary indexes than to rebuild them. In most cases, archive and restore without INDEXES.
The following do not archive index subtables:
• Dictionary, no fallback, or journal table archives
• Cluster or single processor archives
• Archives made using a group read lock
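As a sketch (the database and file names are hypothetical), archiving with secondary indexes simply adds the INDEXES option to an all-AMP data table archive:

    ARCHIVE DATA TABLES (Payroll) ALL,
        INDEXES,
        RELEASE LOCK,
        FILE=arch03;

Remember that the option is honored only for an all-AMP archive and is ignored when GROUP READ LOCK is used.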
Indexes Option
This option applies only to ARCHIVE DATA TABLES ALL AMP.

ARCHIVE PROCEDURE
• If all AMPs are online, then all indexes are archived.
• Otherwise, if the table is fallback, then only unique secondary indexes are archived.
• Otherwise, no indexes are archived.

RESTORE OPERATION
• If all AMPs are online, then all indexes are restored.
• Otherwise, if the table is fallback, then only unique secondary indexes are restored.
• Otherwise, no indexes are restored.
Group Read Lock Option

The group read lock option allows an archive operation to proceed while you and other users make changes to the table. Only tables that have an after-image journal associated with them can use the group read lock option. This option is only valid during an all-AMP archive.
The ARC utility normally places a read lock on tables during archive operations, which prevents users from updating a table during the process. The archive must complete and the lock be released before update processing on the table resumes. You can use the keyword GROUP with the READ LOCK option to circumvent this limitation using the following steps:
1. The utility places an access lock on the entire table.
2. A group of table rows (about 32,000 bytes) is read-locked.
3. The locked rows are archived.
4. The lock is released on that group of rows.
5. Another group of rows is locked... etc.
The access lock prevents anyone from placing an exclusive lock on the data table while the archive is in process. By placing a read lock which disables writing on a small group of rows within the table, users can continue to make updates directly to the rows not being archived. In the event that someone attempts to update a row that is under a read lock, the change is written to the after-image journal but the data row remains unchanged until the read lock is removed. The after-image journal must be backed up to have a complete archive of all data.
Example The diagram on the facing page illustrates an archive process with the group read lock option. The shaded rectangle indicates the rows containing a read lock. Any changes submitted to rows 011 through 100 will not be written to the data table until after the group read lock is removed. Three transactions occurred during the archive process. The first transaction affected row 001. This change is not reflected in the archive file since it occurred after that row was already archived. The second transaction affected row 080. This change is not in the archive file either because it had a read lock on it when the transaction occurred. The third transaction affected row 101. This transaction will appear in the archive file since it took place before row 101 was archived. All three transactions are written to the after-image journal table. Once the archive is complete, the user will archive the after-image journal. The data table archive along with the journal table archive represent a complete archive of the data.
The backup must be an all-AMP or cluster-level archive, and you can’t archive system User DBC with GROUP READ LOCK.
The table must have after-image journal and the journal must be archived to complete the archive.
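As a sketch (the database, journal, and file names are hypothetical), a complete archive set under a group read lock consists of the data table archive followed by an archive of the after-image journal:

    ARCHIVE DATA TABLES (Payroll) ALL,
        USE GROUP READ LOCK,
        FILE=arch04;
    ARCHIVE JOURNAL TABLES (Payroll.PayJrnl),
        RELEASE LOCK,
        FILE=jrnl02;

Restoring the data table archive alone would miss the updates captured only in the journal.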
Group Read Lock Option
The facing-page diagram shows a data table of rows 001 through 999, with the group read lock currently held on rows 011 through 100 (including row 080). Three transactions occur during the archive:

1. UPDATE ROW 001
2. UPDATE ROW 080
3. UPDATE ROW 101

• All transactions are included in the after-image journal (rows 001, 080, and 101).
• The journal must be archived.
• The completed archive set includes the data table archive and the journal table archive.
Types of Archive

There are four types of archives, each of which writes different information to removable media. (See the list on the facing page.)
ALL AMP Database ARCHIVE

An ALL AMP database archive contains:
• Data rows from all the Tables in the Database(s) being archived.
• The Data Dictionary rows of the object(s) being archived.
• All table, view, macro, and stored procedure information.
• Information about the structure of all the Tables in the Database(s).
You can restore a table from an archive only if the table ID in the archive matches that in the DBC.TVM table. To restore a database or user, the database ID in the archive must match the database ID in the DBC.Dbase table. You can only restore database DBC to an otherwise empty Teradata database.
ALL-AMP Table Archive

An all-AMP table archive contains:
• Data rows from the table
• Dictionary information for the table
• All table, column, and index definitions
• Table structure information
Specific AMP or Cluster ARCHIVE

Specific-AMP or cluster archives include:
• Data rows from the table or tables within the database(s)
• No data dictionary rows residing on that AMP or within that cluster
• Table structure information
• A supplemental data dictionary archive to provide necessary information (for dictionary archives only)
Dictionary ARCHIVE

A dictionary archive contains:
• Dictionary rows from the database DBC for the archived object(s)
• No permanent journal information
Types of Archive
• ALL AMP Database ARCHIVE includes:
  – Data rows from the tables in the specific database(s)
  – Table structure information
  – All table, column, and index definitions
  – All view, macro, and stored procedure definitions
  – Permanent journal information is not included

• ALL AMP Table ARCHIVE includes:
  – Data rows from the table
  – All dictionary information for the table
  – All table, column, and index definitions

• Specific AMPs or Cluster ARCHIVE includes:
  – Data rows from the table or tables within the specific database(s)
  – No dictionary rows

• Dictionary ARCHIVE includes:
  – Dictionary rows for the object being archived (Tables: TVM, TVFields, Indexes, IndexNames)
  – Permanent journal information is not included

• Since a Cluster ARCHIVE does not contain dictionary information, you must maintain a Dictionary archive to restore the database or table.
Database DBC Archive

Archive the information in database DBC every time DDL changes the definitions stored in the dictionary. Examples of the types of commands that make these changes are:
• CREATE DATABASE/USER
• MODIFY DATABASE/USER
• CREATE/ALTER TABLE
• CREATE/REPLACE VIEW
• CREATE/REPLACE MACRO
• CREATE INDEX
• DROP TABLE/VIEW/MACRO
• DROP INDEX
• GRANT
• REVOKE
You can only restore Database DBC to an initialized Teradata RDBMS.
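As a sketch (the file name is hypothetical), a backup of the dictionary information in database DBC might look like this:

    ARCHIVE DATA TABLES (DBC) ALL,
        RELEASE LOCK,
        FILE=dbcarch;

Scheduling such an archive after each batch of DDL changes keeps the dictionary backup current.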
Database DBC Archive

Tables archived in a Database DBC archive:
AccessRights Specifies all granted rights.
AccLogRuleTbl Defines access logging rules generated by executing BEGIN/END LOGGING statements.
Accounts Lists all authorized account numbers.
CollationTbl Defines MULTINATIONAL collation.
DBase Defines each database and user.
Hosts Defines information about user-defined character sets used as defaults for client systems.
LogonRuleTbl Defines information about logon rules generated by a GRANT LOGON statement.
Next Generates table and database identifiers (internal table).
Owners Defines all databases owned by another.
Parents Defines all parent/child relationships between databases.
RCEvent Records all archive and recovery activities.
RCConfiguration Records the configuration for RCEvent rows.
RCMedia Records all removable devices used in archive activities.
Translation Defines hexadecimal codes that form translation tables for non-English character sets.
Data Archiving Summary

The facing page summarizes some important concepts in this module.
Data Archiving Summary
– Archive and Recovery (ARC) is a command-line utility that performs three operations: archive, restore, and recovery.
– Veritas NetBackup and BakBone NetVault provide interfaces to perform archive, restore, and recovery operations.
– Archive and recovery jobs operate in two phases: a dictionary phase and a data phase.
– The optimum number of sessions for archive and recovery operations is:
  • One per AMP for archive
  • Two per AMP for recovery
– An archive operation can back up a single database or table, multiple databases or tables, or all databases.
– Use COPY to move backed-up table data to another system.
– Available archive levels are all-AMP, specific-AMP, and cluster archives.
– The four types of archives are all-AMP database archive, all-AMP table archive, specific-AMP or cluster archive, and dictionary archive.
Review Questions

Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Review Questions
Indicate whether each statement is True (T) or False (F).

1. Since the archive process can be intensive, you may want to create a user just for archiving, freeing your user ID for other processes while an archive is running.  T  F

2. The Archive and Recovery utility protects against more types of potential data loss than the automatic data protection features.  T  F

3. Recovery and FastLoad provide the same ease and speed to recover data.  T  F
References

For more information on the topics covered in this module:
• Teradata Archive/Recovery Reference - (B035-2412-060A)
Restoring Data 13- 1
Module 13

Restoring Data

After completing this module, you should be able to:

• Describe how to use the ARC utility to replace existing data on a Teradata system with information stored on portable storage media.

• Describe how to execute RESTORE, COPY, BUILD, REVALIDATE REFERENCES FOR, and RELEASE LOCK statements.
Notes:
Table of Contents
RESTORE-RELATED STATEMENTS .......... 13-4
ANALYZE STATEMENT .......... 13-6
THE RESTORE STATEMENT .......... 13-8
RESTORING TABLES .......... 13-10
COPY STATEMENT .......... 13-12
COPYING TABLES .......... 13-14
BUILD STATEMENT .......... 13-16
REVALIDATE REFERENCES .......... 13-18
RELEASE LOCK STATEMENT .......... 13-20
RESTORING DATA SUMMARY .......... 13-22
REVIEW QUESTIONS .......... 13-24
LAB 5 .......... 13-26
REFERENCES .......... 13-28
Restore-Related Statements

The Archive and Recovery utility provides several recovery control statements that you use during restore-related operations. Each command is described on the facing page.
You can invoke the Archive and Recovery utility from a channel-attached MVS or VM host system, the node, or a LAN attached workstation.
Restore-Related Statements
ANALYZE  Reads an archive tape to display information about its contents.

BUILD  Builds indexes for fallback and non-fallback tables. It also builds fallback rows for fallback tables, and can build journal tables by sorting the change images. (This statement causes rehashing of V1 data restored to a V2 system.)

COPY  Restores a copy of an archived file to a specified Teradata database system.

DELETE DATABASE  Deletes objects from a database. Does not remove journal tables.

LOGOFF  Ends a session and terminates the utility.

LOGON  Begins a session.

RELEASE LOCK  Releases host utility locks from specific databases or tables.

RESTORE  Restores a database or table from an archive file to specified AMPs.

REVALIDATE REFERENCES FOR  Validates inconsistent constraints against a target table, thereby allowing users to execute UPDATE, INSERT, and DELETE statements on the tables.
ANALYZE Statement

The ANALYZE statement reads data from an archive tape and displays information about the tape contents. When you invoke the statement, you can choose a specific database or a range of databases from which to display information. This information helps you if you are trying to restore a specific database instead of the entire archive set. This statement does not require a prior logon.
The ANALYZE statement provides the following information about the database(s) you specify:
• Time and date of the archive operation
• The archive level: all-AMPs; clusters of AMPs; or specific AMPs
• The name of each database and of each data table, journal table, or other object in each database, plus the fallback status of the tables. This information appears only if you use the keyword LONG with the DISPLAY option.
DISPLAY Option

If no option is listed, DISPLAY is the default. It shows the time, date, and level of the archive. If you use the LONG option, the display also includes the names of all data tables, journal tables, and other objects.
VALIDATE Option

This option reads each archive record in the specified database. It checks that each data block in the file can be read, but it does not examine the contents of the block or verify that the block contains valid rows.
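As a sketch (the archive file name is hypothetical), the two options might be used as follows:

    ANALYZE (Payroll), DISPLAY LONG, FILE=arch01;
    ANALYZE ALL, VALIDATE, FILE=arch01;

The first statement lists the object names in the Payroll portion of the archive; the second checks that every block in the file is readable.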
ANALYZE Statement
The ANALYZE statement instructs the ARC utility to read an archive file (created by ARC) and display information about its content.

The LONG option displays all table or object names.

The VALIDATE option reads each record to check that each block on the archive file is readable.
ANALYZE [ * | ALL | (Databasename) | (Databasename) TO (Databasename) ] [,...]
    [ , DISPLAY [LONG] | , VALIDATE ]
    , FILE = name ;
The RESTORE Statement

The RESTORE statement allows you to replace database objects from an archive tape on the same system or on another system. Teradata features the four types of RESTORE or RECOVER operations described below:
Data Tables The DATA option restores fallback, non fallback, or both types of data tables to all AMPs or clusters of AMPs.
Dictionary Tables The DICTIONARY option restores data dictionary rows that describe the databases or tables archived during a cluster-level archive. If you restore a database, the object definitions from the data dictionary are included. If you restore a table, only table definition rows are included.
No Fallback Tables Use the no fallback option to restore a single processor.
Journal Tables This option restores an archived journal for subsequent use in a roll operation.
Restore Fallback This option applies only to the restore of fallback tables, and allows the utility to restart the restore without returning to the first row (in the event of a processor failure).
No Build NO BUILD prevents secondary indexes on non-fallback tables from being restored or built. On fallback tables, it prevents the creation of secondary indexes and fallback table rows. It allows cluster archives to be restored to a reconfigured machine and prevents rehashing of V1 data transferred to a V2 system.
Release Lock This option causes ARC to automatically release the utility locks when a restore completes successfully.
Abort This option causes an all-AMP restore to abort with error messages if an AMP is offline and the restore includes a non fallback table. It does not affect a specific-AMP restore.
The RESTORE Statement
A simplified form of the RESTORE syntax (reconstructed from the syntax diagram):

RESTORE { DATA | DICTIONARY | NO FALLBACK | JOURNAL } TABLES
    { (dbname) [ALL] | (dbname.tablename) } [, ...]
    [ , EXCLUDE { (dbname) [ALL] | (dbname) TO (dbname2) } ]
    [ , RESTORE FALLBACK ] [ , NO BUILD ] [ , ABORT ] [ , RELEASE LOCK ]
    [ , AMP = n | , PN = ccc-p | , CLUSTERS = nnn ]
    [ , USE { ASCII | EBCDIC } COLLATION ]
    , FILE = name ;
Use the RESTORE statement to replace tables and/or databases from an archive.
Restoring Tables

Two RESTORE statements are shown on the facing page.
The first example restores with all AMPs online. The restore type is data, and the restore object is the database Payroll and all databases belonging to it. Since no restore level (such as a specific AMP number) is mentioned, the system assumes all AMPs. The RELEASE LOCK option removes the utility lock after completing the restore operation. The name of the archive file is ARCHIVE.
The second example has a narrower scope. This statement is only restoring non fallback tables on AMP 5. The administrator has already performed an all-AMPs restore on the rest of the system. The release lock option removes the utility lock after completion of the restore operation. The archive filename is ARCHIVE2.
Any databases or users created since the Archive of the dictionary or any table, view, or macro created since the archive of a database will be dropped when you restore the DBC database or a user database.
Restoring Tables
Restore all AMPs with all AMPs online:

LOGON Sysdba,xxxxxxxx ;
RESTORE DATA TABLES (Payroll) ALL,
    RELEASE LOCK,
    FILE=ARCHIVE,
    ABORT ;
LOGOFF ;

Perform a restore on the AMP that was corrupted:

LOGON Sysdba,xxxxxxxx ;
RESTORE DATA TABLES (Payroll) ALL,
    AMP=5,
    RELEASE LOCK,
    FILE=ARCHIVE2 ;
LOGOFF ;
COPY Statement

Use the COPY statement to recreate tables and/or databases that have been dropped, or to restore them to the same system or to a different system.
The options for the COPY statement are:
NO FALLBACK Copies fallback tables into non-fallback tables. This option applies during the COPY of an all-AMP or DICTIONARY archive.
NO JOURNAL Copies all tables with journaling disabled. This option applies during the COPY of an all-AMP or DICTIONARY archive.
WITH JOURNAL TABLE = Overrides the default journal table of the receiving database for tables that had journaling enabled. This option applies during the COPY of an all-AMP or DICTIONARY archive.
APPLY TO Specifies to which tables in the receiving system change images apply. This option is required when copying journal images.
NO BUILD Prevents secondary indexes on non-fallback tables from being copied or built. On fallback tables, it prevents the creation of secondary indexes as well as fallback table rows. There is no rehashing of V1 to V2 data.
ABORT Aborts ALL AMP copies with error messages if an AMP is offline and the restore includes a non-fallback table.
RELEASE LOCK Causes ARC to release utility locks when a copy completes successfully.
REPLACE CREATOR The ARC COPY syntax includes the REPLACE CREATOR option, which replaces the creator name of the tables in the target database with the current username (specified in the LOGON command).
COPY Statement
Copying Tables

The COPY statement has two uses:
• It uses an archived file to recreate tables and/or databases that have been dropped.
• It copies archived files to a different system.
The COPY statement can perform one of the following tasks:
• Copy an object that has been dropped back into the original system.
• Copy an object from one system to another.
• Copy an object back to the same system.
Examples

There are two examples on the next page. The first example copies an archived data table called Personnel.Department from an archive file to a different RDBMS system.
The second example copies the same archived data table from its old database, OldPersonnel, to a new database. The no fallback option indicates that the new table is to be non fallback on the receiving system even though it may have been fallback on the original one. The NO JOURNAL option indicates that you do not want permanent journaling on this table in the receiving database.
Copying Tables
Copy a data table to a different system:

COPY DATA TABLE (Personnel.Department),
    FILE=ARCHIVE ;

Copy a data table to a new database:

COPY DATA TABLE (Personnel.Department)
    (FROM (OldPersonnel), NO JOURNAL, NO FALLBACK),
    FILE=ARCHIVE ;
BUILD Statement

The BUILD statement recreates unique and non-unique secondary indexes on non-fallback and fallback tables. This statement also builds fallback rows for fallback tables when the RESTORE statement was performed with the NO BUILD option, and generates journal tables by sorting the change images. It also rehashes data that is restored from a V1 system to a V2 system.
You must rebuild indexes for non-fallback tables after a restore operation if any of the following situations occur:
• An AMP is offline during an archive or restore.
• The restore operation is not an all-AMP restore.
• The archive did not include the INDEXES option.
• The restore included the NO BUILD option.
Examples

The example on the next page illustrates the BUILD statement. It builds unique and non-unique secondary indexes for all tables on the archive tape. The RELEASE LOCK option removes the utility lock after successful completion of the build operation.
BUILD DATA TABLES (Personnel) ALL
,RELEASE LOCK;
BUILD Statement
For non-fallback data tables, BUILD:
• Recreates unique secondary indexes.
• Recreates non-unique secondary indexes.

For fallback data tables, BUILD:
• Recreates unique secondary indexes.
• Recreates non-unique secondary indexes.
• Builds fallback rows.

A simplified form of the BUILD syntax (reconstructed from the syntax diagram):

BUILD { DATA | JOURNAL | NO FALLBACK } TABLES
    { (dbname) [ALL] | (dbname.tablename) } [, ...]
    [ , EXCLUDE { (dbname) [ALL] | (dbname1) TO (dbname2) } ]
    [ , ABORT ] [ , RELEASE LOCK ] ;
Revalidate References When either the referenced (parent) or the referencing (child) table is restored, the reference is marked inconsistent in the data dictionary definitions. As a result, the system does not allow application users to execute UPDATE, INSERT, or DELETE statements on such tables.
The REVALIDATE REFERENCES FOR statement validates the constraints, thereby allowing users to execute UPDATE, INSERT, and DELETE statements on the tables.
The REVALIDATE REFERENCES FOR statement:
• Validates the reference index on the target table and its buddy table.
• Creates an error table.
• Inserts rows that fail the referential constraint specified by the reference index into the error table.
If inconsistent references remain after you execute the statement, you can use the ALTER TABLE DROP INCONSISTENT REFERENCES statement to remove them.
Required Privileges To use the REVALIDATE REFERENCES FOR statement, the username you have specified in the LOGON statement must have one of the following privileges:
• RESTORE privileges on the table you are revalidating
• Ownership privileges on the database or table
Example The facing page shows the syntax for the REVALIDATE REFERENCES FOR statement.
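For instance, a revalidation request for all tables in a database might look like the following sketch. Personnel and Sysdba are hypothetical names, and the exact option syntax should be verified against the Archive/Recovery Reference:

```sql
REVALIDATE REFERENCES FOR (Personnel) ALL,
  ERRORDB Sysdba,      /* hypothetical database to receive the error tables */
  RELEASE LOCK;        /* drop the utility lock on successful completion    */
```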
Revalidate References
Syntax (simplified):

REVALIDATE REFERENCES FOR
    (dbname) [ALL] | (dbname.tablename) [, ...]
    [, EXCLUDE (dbname) [ALL] | (dbname1 TO dbname2)]
    [, RELEASE LOCK]
    [, ERRORDB dbname] ;
RELEASE LOCK Statement The ARC utility places locks on database objects while it performs archive and restore activities. These locks are referred to as utility-level locks.
The ARC utility does not automatically release these locks upon successful completion of an ARC command. In fact, these locks remain intact even when an AMP goes down and comes back online. You must submit the RELEASE LOCK statement to remove the locks.
Not everyone can issue the RELEASE LOCK statement. You must have either the DUMP or the RESTORE privilege on the locked object. You can also release a utility-level lock if you are the owner of the locked object.
You may include the RELEASE LOCK option when you issue the ARCHIVE, ROLLBACK, ROLLFORWARD, RESTORE, and BUILD commands. This accomplishes the same purpose as issuing a separate RELEASE LOCK statement.
The release lock syntax is shown on the facing page.
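In its simplest form, the statement just names the locked object. A sketch, using a hypothetical database name:

```sql
RELEASE LOCK (Personnel) ALL;
```

Options such as OVERRIDE and BACKUP NOT DOWN, described on the facing page, can be appended when needed.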
RELEASE LOCK Statement
Syntax (simplified):

RELEASE LOCK
    (dbname) | (dbname.tablename) [, ...]
    [, EXCLUDE (dbname) | (dbname1 TO dbname2)]
    [, CLUSTERS = nnn [ALL]]
    [, AMP = n]
    [, ALL]
    [, OVERRIDE]
    [, BACKUP NOT DOWN] ;

RELEASE LOCK — You must have ARCHIVE or RESTORE privilege on the object, or be the owner.

ALL — Releases locks on offline AMPs. (Locks release when the AMP is returned to service.)

OVERRIDE — Allows locks to be released by a user other than the one who set them. The user must have the DROP DATABASE privilege on the object or be an owner.

BACKUP NOT DOWN — Allows locks to remain on non-fallback tables (with single after-image journaling) for those AMPs where the permanent journal backup AMP is offline. The utility releases all other locks requested.
Restoring Data Summary The opposite page summarizes some important concepts in this module.
Restoring Data Summary
– Restore operations transfer database information from archive files stored on portable media to all AMPs, AMP clusters, or specified AMPs.
– Archive and Restore (ARC) is a command-line utility you can use to restore data.
– Veritas NetBackup and BackBone NetVault are utilities you can use to restore data.
– You can restore archived data tables to the database if the data dictionary contains a definition of the entity you wish to restore.
– The primary statements that you use in recovery operations are:
   • ANALYZE
   • RESTORE
   • COPY
   • BUILD
   • REVALIDATE REFERENCES FOR
   • RELEASE LOCK
Review Questions Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Review Questions
Indicate whether each statement is True (T) or False (F).

1. You cannot restore entities that are not defined in the data dictionary. T F
2. When you execute restore operations, any databases or users created since the archive of the database are dropped when you restore the DBC and user databases. T F
3. You can only COPY information to the system on which it was originally archived. T F
4. You can restore triggers. T F
Lab 5 The lab for this module is in Appendix B. Please follow your instructor’s directions for completing lab assignments.
Lab 5
Please see Lab 5 in Appendix B.
References For more information about the topics in this module:
• Teradata Archive/Recovery Reference - (B035-2412-060A)
• Teradata Client Command Summary - (B035-2401-099A)
Permanent Journals 14- 1
Module 14

Permanent Journals

After completing this module, you should be able to:

• Describe journaling options and the type of recovery each option provides.
• Determine when to use permanent journals instead of or in addition to Fallback to provide data integrity.
• Create, modify and delete permanent journals for databases and tables.
Notes:
Table of Contents
PERMANENT JOURNALS—WHERE ARE THEY? .... 4
BEFORE-IMAGE JOURNALS .... 6
AFTER-IMAGE JOURNALS .... 8
JOURNAL SUBTABLES .... 10
PERMANENT JOURNAL STATEMENTS .... 12
LOCATION OF CHANGE IMAGES .... 14
CREATING A PERMANENT JOURNAL .... 16
ASSIGNING A PERMANENT JOURNAL .... 18
JOURNALS[X] VIEW .... 20
PERMANENT JOURNALS SUMMARY .... 22
REVIEW QUESTIONS .... 24
REFERENCES .... 26
Permanent Journals—Where Are They? You can use permanent journaling to protect data. Unlike the transient journal, using a permanent journal is partially a “manual” process. Existing data tables can write to a journal table defined in their parent database or user, or to a journal table located in another database or user. Journal tables require permanent space. Each database or user space can contain only one journal table.
You create permanent journal tables with the CREATE USER/CREATE DATABASE statement or the MODIFY USER/MODIFY DATABASE statement.
Permanent journal tables exist within a database or user space. Only one permanent journal can be assigned to that user or database. The journal may be located in the same database or user as the tables that use the journal or in a different database.
Permanent Journals—Where Are They?
[Graphic: four configurations (#1–#4) showing where a journal can live. A journal may serve a single table or multiple tables in its own database, or serve tables (TABLE_A, TABLE_B, TABLE_C, TABLE_4) located in other databases.]
Before-Image Journals Before images are used for ROLLBACK recovery as shown on the following page. Once a before-image journal is created, a snapshot of an existing row is stored in the journal table before any data is modified. In the event of a software failure, the before-image journal can roll back any unwanted changes. Permanent journals roll back all transactions from a table to a checkpoint. They may not be used to roll back specific transactions.
Before-Image Journals
Allows rollback of changes to one or more tables by returning the data to a previous consistent state.
APPLICATION
DATA TABLE(S)
BEFORE data is modified, a copy is placed in the...
APPLICATION
BEFORE IMAGE JOURNAL
After-Image Journals After you create an after-image journal, a snapshot of a row value is stored in the permanent journal after a change is committed. If a hardware failure occurs, the after-image journal can roll forward any changes made to data tables since the last full system backup.
Site Disaster To protect against the loss of data in the event of a site disaster, many applications require that data archives be kept off-site at all times. Ideally, users dump the database to magnetic tape daily and store the tape off-site.
Daily archives may not be practical for very large databases. To solve this problem, you can activate after-change journals and take a daily archive of the journal itself which provides archived copies of all changes made since the last full database archive. The full backup tapes along with the journal backup tapes could restore the entire database.
The facing page shows how after images in the permanent journal are used for ROLLFORWARD recovery.
After-Image Journals
Allows application of changes that have been made since the last full backup.
DATA TABLE(S)
AFTER data has been modified, a copy is placed in the ...
To recover data that must be restored, use the after-image journal to roll forward users’ changes made since the restored backup was taken.
AFTER IMAGE JOURNAL
MONDAY UPDATES
TUESDAY UPDATES
WEDNESDAY UPDATES
Journal Subtables Each journal table consists of three subtables:
• Active subtable
• Saved subtable
• Restored subtable
The active and saved subtables together are referred to as the Current Journal. The restored subtable is called the Restored Journal. The contents and purpose of each subtable are discussed below.
Current Journal Each time you update a data table that has an associated journal table, a change image is appended to the active subtable. You cannot archive journal tables while the change images are in the active subtable. Instead, you must move the images to the saved subtable.
To move images from active to saved areas, you must submit the Checkpoint With Save statement. A checkpoint places a marker at the chronological end of the active subtable. The database assigns an event number any time a user submits the checkpoint statement. The With Save option of the checkpoint statement inserts a checkpoint in the active subtable and then appends the contents of the active subtable to the end of the saved subtable.
After the database appends the contents, it initiates a new active subtable automatically. You can now submit an ARCHIVE JOURNAL TABLE statement. Archiving the journal saves it to tape.
Restored Journal To restore a journal, move the journal table contents from the portable storage media back to the restored subtable using the Archive utility. The information stays there until you invoke roll operations.
Permanent journals are maintained in an internal Teradata database format. They are not accessible by SQL statements and cannot be used for audit trail purposes.
Journal Subtables
Each permanent journal table consists of three subtables:
• Active subtable
• Saved subtable
• Restored subtable
Current Journal:
  • Active Subtable
  • Saved Subtable

Restored Journal:
  • Restored Subtable

• A checkpoint with save creates a logical division in the current journal.
• Subsequent journal images append to the active subtable.
• You can dump and delete saved rows only.
• Restored journals replace the contents of the restored subtable.
Permanent Journal Statements Use the ARC (Archive and Recovery) utility on a channel-attached host, or Open Teradata Backup on a network-attached system to perform backup and recovery functions associated with permanent journals. The archive and recovery functions include:
ROLLFORWARD Replaces a data row by its after change image from the beginning of the journal, to either a checkpoint or to the end of the journal.
ROLLBACK Replaces a data row by its before change image from the end of the journal, to a checkpoint or to the beginning of the journal.
DELETE Deletes the contents of either the saved or restored journal areas.
Backing up tables on a Teradata system:
1. Archive the data tables onto portable storage media.
2. Submit a CHECKPOINT WITH SAVE statement to move change images from the active journal to the saved journal.
3. Archive the journal tables onto portable storage media.
4. Submit the DELETE JOURNAL statement to erase the saved journal rows.
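The four steps above can be sketched as an ARC command stream. This is only a sketch: Payroll and Pay_Journal are hypothetical names, the FILE names are arbitrary, and the exact statement syntax should be verified against the Teradata Archive/Recovery Reference.

```sql
ARCHIVE DATA TABLES (Payroll) ALL, RELEASE LOCK, FILE=DATA1;   /* 1. archive the data tables      */
CHECKPOINT (Payroll.Pay_Journal), WITH SAVE;                   /* 2. move active images to saved  */
ARCHIVE JOURNAL TABLES (Payroll.Pay_Journal), FILE=JRNL1;      /* 3. archive the saved journal    */
DELETE SAVED JOURNAL (Payroll.Pay_Journal);                    /* 4. erase the saved journal rows */
```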
Permanent Journal Statements
First, back up data tables.

UPDATE / INSERT / DELETE → DATA TABLE(S) → CURRENT JOURNAL (ACTIVE JOURNAL → SAVED JOURNAL)

ARC UTILITY: DUMP DATA TABLE
ARC UTILITY: CHECKPOINT WITH SAVE
ARC UTILITY: DUMP JOURNAL TABLE

• You cannot archive or delete rows in the active subtable.
• You can archive and delete saved journal rows.
Location of Change Images Tables that include fallback and journaling options automatically receive dual image journal protection. Tables with no-fallback protection can request either single or dual permanent journals.
The chart on the following page illustrates the location of change-image journals. The placement of permanent journals depends on the requested image type (before or after) and the protection type (fallback or no-fallback).
AMP Definitions
Primary AMP Holds before- and/or after-images for any table with fallback protection. Holds single before images and dual after-images for non-fallback protected tables.
Fallback AMP Contains before- and/or after-images for tables with fallback protection. The system distributes duplicate data rows to fallback processors by assigning the row's hash code to a different AMP in the cluster.
Backup AMP Holds three types of images: single after images, dual after images, and dual before images. Does not use a hashing algorithm for row distribution. All images for one AMP go to a single backup AMP, which is always in the same cluster. For example, if AMPs 1, 2, 3, and 4 are in the same cluster, 1 backs up 2, 2 backs up 3, 3 backs up 4, and 4 backs up 1. There is no way to predict the backup AMP.
After-Image Journals Save Storage Space If fallback protection is too costly in terms of storage space, after-image journals offer alternative data protection with minimal space usage. After-image journals write changes to the backup AMP. Since the system only duplicates changed rows rather than all of the rows, storage space is minimized.
Since changes are written to the backup AMP, a primary AMP failure does not cause a loss of data. You can recover all table data by restoring the appropriate archive tape and rolling forward the rows stored in the after-image journal.
Location of Change Images
• Dual images are always maintained for Fallback tables.
• To determine the fallback AMP for a journal row, use the Fallback hash maps.

Fallback Tables:
  Journal Option    Change Image Location
  After images      Primary AMP and Fallback AMP
  Before images     Primary AMP and Fallback AMP

• For non-Fallback tables, you may request either single or dual journal images.
• The location of journal rows depends on the image type requested (before or after) and the protection type of the journaled tables.

Non-Fallback Tables:
  Journal Option       Change Image Location
  After images         Backup AMP
  Before images        Primary AMP
  Dual after images    Backup AMP and Primary AMP
  Dual before images   Primary AMP and Backup AMP

• A backup AMP is another AMP in the same cluster as the primary AMP assigned to journal rows. These rows are not distributed using hash maps, but are directed to a specifically assigned backup AMP.
Creating a Permanent Journal You create permanent journals when you create a user or database. To create permanent journals within an existing user or database, use the MODIFY statement. The facing page shows examples of using these statements.
The following restrictions apply to the use of permanent journals:
• If a journal table in another user/database is specified as the default, that other journal table must already exist.
• You can change a DEFAULT JOURNAL for a user or database only if no tables or other databases journal into it.
• Permanent journals are not supported across an AMP configuration change. Rollforward or Rollback operations terminate if there is a change in the hash maps for primary, fallback, or backup rows.
• Permanent journals are not supported across certain Data Definition (DDL) statements. Statements that may prevent a rollforward or rollback operation from passing that point in the journal include:
− ALTER TABLE
− RENAME TABLE
− MODIFY USER or MODIFY DATABASE
− COMMENT
Deleting a Permanent Journal Use the MODIFY USER or MODIFY DATABASE statement to delete a permanent journal. Before you delete the journal, you must use the ALTER TABLE statement to stop the journaling being done to that journal.
SYNTAX:

ALTER TABLE tablename
  , WITH JOURNAL TABLE = journaltablename
  , NO BEFORE JOURNAL
  , NO AFTER JOURNAL ;

MODIFY DATABASE databasename AS
  DROP DEFAULT JOURNAL TABLE = journaltablename ;
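Putting the two steps together, the sequence might look like the following sketch (Pay_DB, Payroll_Tab, and Pay_Journal are hypothetical names):

```sql
/* 1. Stop journaling for each table that writes to the journal. */
ALTER TABLE Pay_DB.Payroll_Tab,
  NO BEFORE JOURNAL,
  NO AFTER JOURNAL;

/* 2. Drop the now-unused default journal from the database. */
MODIFY DATABASE Pay_DB AS
  DROP DEFAULT JOURNAL TABLE = Pay_Journal;
```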
Creating a Permanent Journal
Create permanent journals at the user/database level when you define a new user/database:

CREATE DATABASE Pay_DB AS PERM = 10000000
  DEFAULT JOURNAL TABLE = Pay_Journal;

Or you can create them in an existing user or database:

MODIFY DATABASE Per_DB AS
  DEFAULT JOURNAL TABLE = Per_Journal;

They are identified in the DD as TableKind ‘J’:

SELECT DatabaseName, TableName, TableKind
FROM DBC.Tables
WHERE TableKind = 'J';

DatabaseName   TableName     TableKind
------------   -----------   ---------
Pay_DB         Pay_Journal   J
Per_DB         Per_Journal   J
Assigning a Permanent Journal Permanent journals are optional. You can specify journal options at the database/user level or at the individual table level.
You can define a DEFAULT JOURNAL TABLE associated with a user or database. You can associate an individual table within the database with the DEFAULT JOURNAL (by default) or another journal table by specifying that on the CREATE or ALTER TABLE statement.
Users activate permanent journaling by including the JOURNAL option in the CREATE or MODIFY statements for users or databases.
Rules and Limitations You must allocate sufficient permanent space to a database or user that will contain permanent journals. If a database or user that contains a permanent journal runs out of space, all table updates that write to that journal abort.
DBC.Tables The DBC.Tables view can display the names of existing journal tables. The TableKind field displays the letter J for any table set up as a permanent journal.
Assigning a Permanent Journal
NOT LOCAL and LOCAL specify whether single after-image journal rows for non-fallback data tables are written on the same virtual AMP (LOCAL) as the changed data rows, or on another virtual AMP in the cluster (NOT LOCAL).
CREATE TABLE uses WITH JOURNAL TABLE.
CREATE DATABASE uses DEFAULT JOURNAL TABLE.
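At the table level, the journaling options might be combined as in this sketch (the table, columns, and journal names are hypothetical):

```sql
CREATE TABLE Pay_DB.Payroll_Tab,
  NO FALLBACK,
  BEFORE JOURNAL,                            -- single before images (rollback protection)
  DUAL AFTER JOURNAL,                        -- two copies of after images (rollforward protection)
  WITH JOURNAL TABLE = Pay_DB.Pay_Journal    -- journal may be in this or another database
 (emp_id  INTEGER NOT NULL,
  salary  DECIMAL(10,2))
UNIQUE PRIMARY INDEX (emp_id);
```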
Journals[x] View The Teradata system provides a system view called DBC.Journals that displays links between journal tables and the data tables that journal into them. The restricted version of the view, DBC.JournalsX, displays only those objects that you own or to which you hold access rights.
The example on the next page uses the SELECT statement to list all of the tables in the system that use a permanent journal. In addition, it requests to see a list of the journal names.
The response displays the table names first. Two data tables appear that have journal tables associated with them, department and employee. Both data tables journal into the same permanent journal, payroll_jnl. All three tables belong to the same database, Payroll_Test.
Columns Defined The Journals view has four different columns:
Tables_DB Displays the name of a database where a data table resides that has the journal option activated.
TableName Displays the name of a data table that records changed images in a journal table.
Journals_DB Displays the name of a database where a journal table resides.
JournalName Displays the name of a journal table associated with a listed data table.
Note: The source tables are Dbase and TVM.
Journals[X] View
DBC.Journals[X] — Associates journals with the tables assigned to them.

Columns: Tables_DB, TableName, Journals_DB, JournalName

EXAMPLE: List all tables in the system that use a journal and list the names of the journals.

SELECT TRIM(Tables_DB) || '.' || TableName (TITLE 'Tables', CHAR(26)),
       'Assigned to' (TITLE ' '),
       TRIM(Journals_DB) || '.' || JournalName (TITLE 'Journals', CHAR(26))
FROM DBC.Journals
ORDER BY 1, 2;

Tables                     Journals
-------------------------  -------------------------
Payroll_Test.department    Assigned to Payroll_Test.payroll_jnl
Payroll_Test.employee      Assigned to Payroll_Test.payroll_jnl
Permanent Journals Summary The opposite page summarizes some important concepts in this module.
Permanent Journals Summary
– Permanent journals maintain a sequential history of all changes made to the rows of one or more tables (whereas RAID and fallback duplicate images of all table rows).
– You create a permanent journal when you create a user or a database.
– Permanent journal image options:
   • Single before-change images — capture images before a change is made and allow rollback to a checkpoint. Protect against software failures.
   • Single after-change images — capture images after a change is made and allow rollforward to a checkpoint. Protect against hardware failures.
   • Dual images — maintain two copies of before or after images. Protect against loss of journals.
– Use the ARC or OTB utilities to perform backup and recovery operations associated with permanent journals.
– The location of changed images depends on the types of recovery options you have activated and the image type you are capturing.
– The Journals[X] view provides information about links between journal tables and the tables that journal to them.
Review Questions Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Review Questions
Indicate whether each statement is True (T) or False (F).

1. A permanent journal stores committed, uncommitted, and aborted changes to a row in a table. T F
2. A database or user space can have many permanent journals. T F
3. A permanent journal is a good substitute for RAID technology or fallback protection. T F
4. You can use the Permanent Journal for a daily archive (in conjunction with tape backup for disaster recovery). T F
5. You can have DUAL BEFORE and DUAL AFTER images stored by the Permanent Journal. T F
6. You use the CREATE JOURNAL statement to create a Permanent Journal. T F
7. Tables that use the Permanent Journal must be in the same database as the Permanent Journal. T F
References For more information on the topics covered in this module:
• Teradata RDBMS Database Design - (B035-1094-122A)
• Teradata RDBMS SQL Reference - (B035-1001-122A)
• Teradata Archive/Recovery Reference - (B035-2412-122A)
Data Recovery Operations 15- 1
Module 15

Data Recovery Operations

After completing this module, you should be able to:

• Describe how to use the following statements to recover archived data back to the Teradata database:
   – CHECKPOINT
   – DELETE JOURNAL
   – ROLLBACK
   – ROLLFORWARD
• Use Recovery Control views to obtain ARC event information.
Notes:
Table of Contents
DATA RECOVERY USING ROLL OPERATIONS .... 4
THE CHECKPOINT STATEMENT .... 6
CHECKPOINT WITH SAVE STATEMENT .... 8
THE ROLLBACK STATEMENT .... 10
USING THE ROLLBACK COMMAND .... 12
THE ROLLFORWARD STATEMENT .... 14
USING THE ROLLFORWARD COMMAND .... 16
ROLLFORWARD RESTRICTIONS .... 18
DELETE JOURNAL STATEMENT .... 20
RECOVERY CONTROL DATA DICTIONARY VIEWS .... 22
ASSOCIATION VIEW .... 24
EVENTS[X] VIEW .... 26
EVENTS_CONFIGURATION[X] VIEW .... 28
EVENTS_MEDIA[X] VIEW .... 30
DATA RECOVERY OPERATIONS SUMMARY .... 32
REVIEW QUESTIONS .... 34
LAB 6 .... 36
REFERENCES .... 38
Data Recovery Using Roll Operations The restore statement allows you to move information from archive files back to the Teradata Database. The restore operation can restore data or journal tables.
After you execute a RESTORE statement, data tables are ready to use.
When you restore a journal table, the system restores the information to a permanent journal subtable. Before you can use the tables, you must perform a rollback or rollforward operation to apply the journal contents to the data tables.
Roll operations can use either the current journal or the restored journal. If you specify the current journal, then the ARC utility uses information stored in both the active and saved subtables.
A permanent journal is checkpoint-oriented rather than transaction-oriented. The goal of the journals is to return existing data tables to some previous or subsequent checkpoint. For example, if a batch program corrupted existing data, the rollback operation would return the data to a checkpoint prior to the running of the batch job.
A rollforward operation might occur after an all-AMP restore. After you move the data and journal archive files back to the database, the data tables would only include changes committed since the last full backup. Any intermediate changes would reside in the journal tables. The rollforward operation would replace the existing data with changes from the journal table.
Data Recovery Using Roll Operations
[Timeline graphic: Restored → Saved → Active subtables, ordered from past to present; RESTORE fills the restored subtable.]
• The RESTORE function copies journal archive files to the restored subtable of the permanent journal.
• ROLLBACK and ROLLFORWARD statements apply journal table contents to data tables.
• Roll operations can use:
– Current journal (active and saved subtables)
– Restored journal (restored subtable)
The CHECKPOINT Statement Use the CHECKPOINT statement to indicate a recovery point in the Journal.
The CHECKPOINT statement places a marker row after the most recent change image row in the active subtable of a permanent journal. The database assigns an event number to the marker row and returns the number in response. You may assign a name to the CHECKPOINT command rather than use the event number in subsequent ARC activities.
Use the following options with the CHECKPOINT statement:
WITH SAVE Archives saved journal images to a host media. After you archive the saved area of the journal, you can delete this section of the current journal to make space for subsequent saved journal images. The saved journal subtable has no fixed size and can grow to the limit of the database.
USE LOCK By default, the system acquires a read lock on all tables assigned to the journal being checkpointed. A checkpoint with save may optionally use an access lock.
• The read lock suspends update activity for all data tables that might write changes to the journal table during checkpoint. This lock provides a clean point on the journal.
• The access lock accepts all transactions that insert change images to the journal, but it treats them as though they were submitted after the checkpoint was written. The access lock option requires that you also use the WITH SAVE option. Since users do not know how the database treats particular transactions, a checkpoint with save under an access lock is only useful for coordinating rollforward activities from the restored journal, and then from the current journal.
NAMED checkpointname Checkpoint names may be up to 30 characters long and are not case-specific. Teradata software always supplies an event number for each checkpoint. Use the number to reference a checkpoint if a name is not supplied.
If there are duplicate checkpoint names in the journal and an event number is not specified:
• Rollforward uses the first (oldest) occurrence.
• Rollback uses the last (latest) occurrence.
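A named checkpoint might be taken as in the following sketch (Payroll and Pre_Batch_Run are hypothetical names; the assigned event number is returned in the response):

```sql
CHECKPOINT (Payroll) ALL,
  NAMED Pre_Batch_Run,
  WITH SAVE;
```

Later ROLLBACK or ROLLFORWARD statements can then reference the checkpoint name instead of the event number.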
The CHECKPOINT Statement
Syntax (simplified):

CHECKPOINT
    (dbname) [ALL] | (dbname.tablename) [, ...]
    [, EXCLUDE (dbname) [ALL] | (dbname1) TO (dbname2)]
    [, NAMED chkptname]
    [, WITH SAVE]
    [, USE READ LOCK | , USE ACCESS LOCK] ;

CURRENT JOURNAL: ACTIVE AREA | SAVED AREA
RESTORED JOURNAL: RESTORED AREA

Checkpoint With Save allows you to archive and delete saved journal images.
CHECKPOINT WITH SAVE Statement The CHECKPOINT WITH SAVE option inserts a marker row and appends any stored images preceding the marker row from the active to the saved subtable. The database automatically initiates a new active subtable. You can archive the contents of the saved subtable to an archive file.
Example The facing page shows two different current journals, before and after a checkpoint operation. The active subtable before checkpoint contains five change image rows. After checkpoint with save, the active subtable is empty, and the saved subtable contains the five change rows and a marker row.
Checkpoint with Offline AMPs An individual AMP may be off-line when you issue the checkpoint command. In this case, the utility automatically generates a system log entry that marks the checkpoint as soon as the AMP comes back on-line. The system startup process generates the checkpoint and requires no user input.
CHECKPOINT WITH SAVE Statement

CHECKPOINT Personnel.Salaries_Jnl, WITH SAVE;

[Diagram: before the checkpoint, the ACTIVE subtable of the current journal holds change images 090, 135, 367, 007, and 189. After CHECKPOINT WITH SAVE, those five rows plus a checkpoint marker reside in the SAVED subtable, and the ACTIVE subtable is empty.]
The ROLLBACK Statement To recover from one or more transaction errors, use the ROLLBACK statement. To use this statement, you must define the table with a before-image journal table. The ROLLBACK is performed to a checkpoint or to the beginning of the current or restored journal.
The system uses the before images to replace any changes made to the table or database since a particular checkpoint was taken.
The following page shows the format of the ROLLBACK statement.
TO checkpointname, eventno Checkpoint names need to match existing names used with a previous CHECKPOINT statement. An eventno is the software-supplied event number of a previous checkpoint. You can supply either one of these or both. To find the checkpoint names or event numbers, select information about the checkpoint from the DBC.Events view.
If there are duplicate checkpoint names in the journal and an event number is not supplied, rollback stops at the first one encountered with a matching name.
NO DELETE Option By default, the rollback procedure automatically deletes the contents of the restored journal subtable after successfully completing the command. The NO DELETE option overrides the default, enables you to recover selected tables first, and then later recovers other tables that may have changes in the journal.
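As a sketch of the NO DELETE option (the database and checkpoint names are hypothetical, and the clause order should be verified against the ARC syntax diagram):

```sql
/* Roll back to a named checkpoint but keep the restored       */
/* subtable contents, so other tables can be recovered from    */
/* the same restored journal later.                            */
ROLLBACK (Payroll) ALL, TO Nightly_Ckpt, USE RESTORED JOURNAL, NO DELETE;
```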
The ROLLBACK Statement
NO DELETE option:
• Overrides automatic deletion of restored journal subtable
• Used only for restored journal subtables
• Never used with current journal subtables
Using the ROLLBACK Command The ROLLBACK command helps you recover from one or more transaction errors. It reverses changes made to a database or table. To accomplish this reversal, it replaces existing data table rows with before-change images stored in a permanent journal. The before-change images must reside in either the restored or current subtables of a permanent journal. If you choose the current subtables for rollback procedures, the database uses the contents of both the active and saved subtables.
When you use the restored subtables for rollback procedures, you need to verify that they contain the desired journal table. If they do not, submit the RESTORE JOURNAL TABLE command with the appropriate removable storage media. This process ensures that you restore the correct subtable contents. The Teradata database does not have any simple tools for examining journal subtables to determine that they contain the desired data.
Example The example on the facing page illustrates a rollback procedure. First, (step 1), activate the ROLLBACK CURRENT JOURNAL statement to rollback any changes made since the journal table was archived. This statement rolls back the saved subtable first followed by the active subtable. Next (step 2), run the RESTORE JOURNAL TABLE command to load the appropriate archive file into the restored subtable of the permanent journal.
Finally (step 3), submit the ROLLBACK RESTORED JOURNAL command to reverse the changes by replacing any changed rows with their before-image rows stored in the restored journal. Repeat Steps 2 and 3 as necessary.
Using the ROLLBACK Command

[Diagram: recovering data tables A, B, and C from the permanent journal.]

1. ARC utility: ROLLBACK (DBC) ALL, USE CURRENT JOURNAL
   (current journal: saved and active subtables)
2. ARC utility: RESTORE JOURNAL TABLE
   (loads the archive into the restored subtable)
3. ARC utility: ROLLBACK (DBC) ALL, USE RESTORED JOURNAL
   (restored journal: restored subtable)

Use ROLLBACK to recover from a transaction error.
The ROLLFORWARD Statement Use the ROLLFORWARD statement to recover from a hardware error. Before you can roll forward, you must have a backup copy of the table rows and the after-image journal rows captured since the last archive.
The format of the ROLLFORWARD statement is shown on the next page. A description of some of the options follows:
PRIMARY DATA During a rollforward operation, this option instructs the software to ignore secondary index and fallback row updates. A BUILD operation will rebuild the invalidated fallback copy and indexes.
TO checkpointname, eventno
Checkpoint names need to match existing names used with a previous CHECKPOINT statement. An event number is the software-supplied event number of a previous checkpoint. You can supply either one or both of these. To find the checkpoint names or event numbers, select information about the checkpoint from the DBC.Events view.
If there are duplicate checkpoint names in the journal and an event number is not supplied, rollforward stops at the first one encountered with a matching name.
The ROLLFORWARD Statement

[Diagram: a journal timeline with checkpoints (CHKPT); ROLLBACK moves backward through the journal, ROLLFORWARD moves forward.]

Use ROLLFORWARD to recover from a hardware error.
Using the ROLLFORWARD Command The ROLLFORWARD command helps you recover from a hardware error and changes existing rows in data tables by replacing them with after-change images stored in a permanent journal. The after-change images must reside in either the restored or current subtables of a permanent journal.
When you use the restored subtable for rollforward procedures, you need to verify that it contains the desired journal table. If it does not, submit the RESTORE JOURNAL TABLE command with the appropriate portable storage media. This process ensures that you restore the correct subtable.
Example The example on the facing page illustrates a rollforward procedure. First, the administrator runs the RESTORE DATA TABLE command. Then, she runs the RESTORE JOURNAL TABLE command to load the appropriate archive files into the restored permanent journal subtable. Next, she submits the ROLLFORWARD RESTORED JOURNAL command to replace existing data table rows with their after-image rows stored in the restored journal.
Lastly, she activates the ROLLFORWARD CURRENT JOURNAL statement to roll forward any changes made since the journal table was archived. This statement rolls forward the saved subtable first, followed by the active subtable.
PRIMARY DATA Option This option replaces only primary row images during the rollforward process. It ignores secondary index and fallback rows.
If you use this option with a rollforward operation, you can reduce the amount of I/O. It also improves the rollforward performance when recovering a specific AMP from disk failure.
Unique indexes are invalid when recovering a specific AMP. Always submit a BUILD statement when the rollforward command includes the PRIMARY DATA option.
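A sketch of that pairing, with a hypothetical database name (verify the exact clause order against the ARC syntax diagram):

```sql
/* PRIMARY DATA skips fallback and secondary index rows, so a  */
/* BUILD must follow to regenerate them.                       */
ROLLFORWARD (Payroll) ALL, USE RESTORED JOURNAL, PRIMARY DATA;
BUILD DATA TABLES (Payroll) ALL;
```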
Using the ROLLFORWARD Command

[Diagram: recovering data tables A, B, and C from the permanent journal.]

1. ARC utility: RESTORE DATA TABLE
2. ARC utility: RESTORE JOURNAL TABLE
   (loads the archive into the restored subtable)
3. ARC utility: ROLLFORWARD (DBC) ALL, USE RESTORED JOURNAL
4. ARC utility: ROLLFORWARD (DBC) ALL, USE CURRENT JOURNAL

PRIMARY DATA Option
• Ignores fallback and secondary index rows.
• Reduces amount of I/O.
• Improves performance when recovering a single AMP.
• Always use the BUILD statement with this option.
ROLLFORWARD Restrictions The diagrams on the facing page illustrate several important restrictions in using the ROLLFORWARD statement.
AMP-specific Restore If you perform a restore operation on a specific AMP rather than on all AMPs, the ROLLFORWARD command does not permit you to use the TO CHECKPOINT NAME option. Following an AMP-specific restore, the system permits a rollforward only to the end of the journal. You must follow up the restore process with a rollforward of the entire journal table.
All-AMP Restore When you perform an all-AMP restore, you choose whether to submit the ROLLFORWARD command with the TO CHECKPOINT NAME option, or to the end of the journal.
The PRIMARY DATA option of the ROLLFORWARD statement indicates that the operation should ignore secondary index and fallback rows, which reduces the amount of I/O during rollforward. If you use this option, follow up with the BUILD statement.
Note: Use the DBC.Events view to determine event numbers and/or checkpoint names.
Example:
SELECT EventNum FROM DBC.Events WHERE CreateDate = 940819;
SELECT CheckpointName FROM DBC.Events WHERE CreateDate = 940819;
ROLLFORWARD Restrictions

AMP-SPECIFIC RESTORE
[Diagram: archive tape restored to a single AMP; permanent journal with checkpoints (CHKPT).]
Following an AMP-specific restore, a rollforward is permitted only to the end of the journal.

ALL-AMP RESTORE
[Diagram: archive tape restored to all AMPs; permanent journal with checkpoints (CHKPT).]
After an all-AMP restore, rollforward may be done to a checkpoint, or to the end of the journal.
DELETE JOURNAL Statement The DELETE JOURNAL command enables you to erase the contents of either the restored subtable or the saved subtable of a permanent journal. You cannot delete the contents of the active subtable. You must have the RESTORE privilege to execute this command.
The illustrations on the following page show the DELETE JOURNAL statement.
Restrictions You cannot delete a saved subtable when all the following conditions are true:
• A CHECKPOINT statement in the archive utilized an access lock, and
• The journal is not dual image, and
• One or more AMPs are off-line.
Transactions between an all-AMP archive and a single-AMP archive may not be consistent when a journal archive meets all three of the above conditions. You cannot delete a saved subtable while an AMP is off-line unless the journal is dual image.
The command does not delete the rows in the active journal.
The facing page shows the format of this statement.
DELETE JOURNAL Statement
To use the DELETE JOURNAL statement you must have the RESTORE privilege or own the database that contains the journal.
Note: You cannot delete rows from an active journal.
Recovery Control Data Dictionary Views There are four system views that contain information about ARC utility events. The name, purpose, and dictionary table name of each view are listed below.
DBC.Association View Provides information about objects that have been imported from another RDBMS system or otherwise created using the ARC COPY statement. Table name: DBC.DBCAssociation.
DBC.Events [X] View Provides a row for each archive and recovery activity. Table name: DBC.RCEvent.
DBC.Events_Configuration [X] View Provides information about archive and recovery activities that did NOT affect all AMPs. Table name: DBC.RCConfiguration.
DBC.Events_Media [X] View Provides information about archive and recovery activities that involved removable media. Table name: DBC.RCMedia.
Recovery Control Data Dictionary Views

VIEW NAME                     DESCRIPTION
DBC.Association               Provides information about objects you import from another database system.
DBC.Events [X]                Provides an audit trail of all archive and recovery activity.
DBC.Events_Configuration [X]  Provides information about archive and recovery activities that did not affect ALL AMPs.
DBC.Events_Media [X]          Provides information about archive and recovery activities that involve removable media.
Association View The Association view allows you to retrieve information about an object imported from another Teradata RDBMS.
An existing object created with the ARC utility COPY statement also displays in the Association view. If you later drop a copied object from its new destination, the information is deleted from the Association table and is no longer available.
Example The example on the facing page uses the Association view to list all objects that were copied into the payroll database. The result of the query displays imported table names. The object column displays the current name of each table. The Source column provides the name of the original table. The event column shows the number assigned to the restore operation.
Association View

Retrieves information about an object imported from another Teradata database.

EXAMPLE: List all tables, views, or macros that were copied into the Payroll database.

SELECT TRIM(DatabaseName)||'.'||TableName (NAMED Object, FORMAT 'X(25)'),
       TRIM(Original_DatabaseName)||'.'||Original_TableName (NAMED Source, FORMAT 'X(25)'),
       EventNum (NAMED Event, FORMAT '9(5)')
FROM   DBC.Association
WHERE  DatabaseName LIKE '%Payroll%'
ORDER BY Object;

Object                     Source                     Event
-------------------------  -------------------------  -----
Payroll_Prod.DEPARTMENT    PAYROLL_TEST.department    00014
Payroll_Prod.DEPT          PAYROLL_TEST.dept          00014
Payroll_Prod.EMP           PAYROLL_TEST.emp           00014
Payroll_Prod.EMPLOYEE      PAYROLL_TEST.employee      00014
Payroll_Prod.NEWHIRE       PAYROLL_TEST.newhire       00014

DBC.Association columns: DatabaseName*, TableName, EventNum, Original_DatabaseName, Original_TableName, Original_TableKind, Original_Version, Original_ProtectionType, Original_JournalFlag, Original_CreatorName, Original_CommentString.

*DatabaseName: The name of the database or user where the imported object now resides.
Events[x] View The Events view tracks ARC activity. The ARC utility inserts a new row in the Events system table each time another ARC activity begins. The Events view returns a row for each activity tracked. Each event type is listed below:
Checkpoint Event Row Created for each journal checkpointed
Delete Event Row Created for each journal deleted
Dump Event Row Created for each database or table archived
Restore Event Row Created for each database or table restored
Rollback Event Row Created for each database or table rolled back
Rollforward Event Row Created for each database or table rolled forward
Example The SQL statement on the next page requests a list of all ARC activity that took place March 28th. The results display two ARC activities, one archive, and one restore.
Events[X] View

Provides an audit trail of all archive and recovery activities for objects visible to you.

EXAMPLE: List all ARC activity that took place on March 28.

SELECT EventNum,
       UserName (CHAR(12)),
       EventType (CHAR(12)),
       DatabaseName (CHAR(12))
FROM   DBC.Events
WHERE  CreateDate = 990328
ORDER BY EventNum;

EventNum  UserName  EventType  DatabaseName
--------  --------  ---------  --------------
     180  BRM       Dump       Payroll_Test
     181  RPK       Restore    Personnel_Test

DBC.Events[X] columns: CreateDate, CreateTime, EventNum, EventType, UserName, DatabaseName, TableName, ObjectType, CheckpointName, DataSetName, LockMode, AllAMPsFlag, RestartSeqNum, OperationInProcess, JournalUsed, JournalSaved, IndexPresent, DupeDumpSet, LinkingEventNum*.

*LinkingEventNum: The terminating event number specified for a rollforward or rollback operation.
Events_Configuration[x] View The Events_Configuration view contains rows for each archive activity that does not affect all AMPs in the database configuration. If the ARC command specifies all AMPs and there are one or more AMPs offline, a row is inserted in the system table for each off-line AMP. If the statement is for specific AMPs, a row is inserted for each specified and online AMP.
Example The example on the next page submits an SQL statement to find out which user did not release the utility locks on AMP (i.e., vproc) 2. Query results show three different users, AMT, ALK, and JLR.
Events_Configuration[X] View

Provides information about archive and recovery activities that did not affect a configuration of ALL AMPs for objects visible to you.

EXAMPLE: Who left the utility locks on processor 2?

SELECT CreateTime,
       EventNum,
       EventType (CHAR(12)),
       UserName (CHAR(12)),
       Vproc
FROM   DBC.Events_Configuration
WHERE  Vproc = '2'
ORDER BY 2;

CreateTime  EventNum  EventType  UserName  Vproc
----------  --------  ---------  --------  -----
14:06:22       1,153  Dump       AMT           2
16:06:39       1,159  Dump       ALK           2
18:12:09       1,164  Restore    JLR           2

DBC.Events_Configuration[X] columns: CreateDate, CreateTime, EventNum, EventType, UserName, LogProcessor, PhyProcessor, Vproc, ProcessorState, RestartSeqNum.
Events_Media[x] View The Events_Media view provides information about ARC activities that used removable storage media. This information includes the volume serial numbers assigned to portable devices.
Example The example on the facing page submits an SQL statement that requests the volume serial number of a restore tape. The query results show two restore operations, each with their own serial number and dataset name.
Events_Media[X] View

Provides archive and recovery activity information about events that used removable media, for objects visible to you.

EXAMPLE: What was the volume serial number of the tape used for the restore?

SELECT EventNum,
       EventType (CHAR(12)),
       UserName (CHAR(12)),
       VolSerialID,
       DataSetName (CHAR(12))
FROM   DBC.Events_Media
ORDER BY EventNum;

EventNum  EventType  UserName  VolSerialID  DataSetName
--------  ---------  --------  -----------  ------------
     179  Restore    PJ        MPC001       LDR.DMP1.JNL
     180  Restore    PJ        MPC002       RAN.DMP2.JNL

DBC.Events_Media[X] columns: CreateDate, CreateTime, EventNum, EventType, UserName, VolSerialID, VolSequenceNum, DataSetName, DupeDumpSet.
Data Recovery Operations Summary The opposite page summarizes some important concepts in this module.
Data Recovery Operations Summary

– As with archive and restore operations, you use the ARC utility for recovery operations.
– Roll operations can use either current journals (active and saved subtables) or restored journals (restored subtable).
– The CHECKPOINT statement indicates a recovery point in a journal.
– The CHECKPOINT WITH SAVE statement saves stored images before a row marker in the active subtable and appends them to the saved subtable.
– ROLLBACK commands help you recover from one or more transaction errors and reverse changes made to a database or table.
– ROLLFORWARD commands help you recover from hardware errors. These commands replace existing row data with after-change images.
– The DELETE JOURNAL command erases the contents of either the restored subtable or the saved subtable in the permanent journal.
– Teradata features several recovery control system views that contain information about ARC utility events.
Review Questions Check your understanding of the concepts discussed in this module by completing the review questions as directed by your instructor.
Review Questions

Indicate whether a statement is True (T) or False (F).

1. The purpose of a permanent journal is to return existing data tables to a previous (rollback) or subsequent (rollforward) checkpoint.  T F
2. In general, rollback operations help you recover from software failures and rollforward operations help you recover from hardware failures.  T F
3. All AMPs must be online in order to issue the CHECKPOINT statement.  T F
Lab 6 The lab for this module is in Appendix B. Please follow your instructor’s directions for completing lab assignments.
Lab 6
Please see Lab 6 in Appendix B.
References For more information on the topics covered in this module:
• Teradata RDBMS Database Design - (B035-1094-122A)
• Teradata Archive/Recovery Reference - (B035-2412-122A)
Administrative Tasks and Tools 16- 1
Module 16
Administrative Tasks and Tools
Administrative Tasks and Tools Recap
Table of Contents
TERADATA DATABASE SYSTEM ADMINISTRATION .......................... 4
DICTIONARY TABLES TO MAINTAIN .................................... 6
DATABASE QUERY LOG – TABLES MAINTENANCE .......................... 8
A RECOMMENDED STRUCTURE .......................................... 10
   A NOTE ABOUT CAPACITY PLANNING ................................ 10
ACCESS CONTROL MECHANISMS ........................................ 12
PLAN AND FOLLOW-UP ............................................... 14
Teradata Database System Administration On the facing page is a list of topics covered in the other modules of this course. The list reflects Teradata administrative tasks and tools available to use to perform these tasks.
Teradata Database System Administration
Access Rights
Access Control
Client Software Overview
TDP Flow and Exits
Resource Monitoring
Host Accessible Utilities
Console Accessible Utilities
System Restarts
Permanent Journals
Data Archiving
Data Recovery
Hierarchies, Owners and Parents
The Data Dictionary
Space Allocation and Usage
Users and Accounts
Dictionary Tables to Maintain You need to maintain some dictionary tables. The following pages list these tables and describe the maintenance.
Dictionary Tables to Maintain

Acctg (resource usage by Acct/User) and DataBaseSpace (database and table space accounting):
Reset accumulators and peak values using the DBC.AccountInfo view and the ClearPeakDisk macro installed by the DIP program.

AccessRights (users' rights on objects) and Accounts (account codes by user):
Teradata automatically maintains these tables, but good administrative practices can reduce their size.

RCConfiguration (archive/recovery configuration), RCMedia (VolSerial for archive/recovery), and RCEvent (archive/recovery events):
Purge these tables when the associated removable media is expired and over-written.

ResUsage (resource monitor log table), SW_Event_Log (database console log), AccLogTbl (logged user-object events), and EventLog (session logon/logoff history):
Archive these logging tables (if desired) and purge information 60-90 days old. Retention depends on shop requirements.
Database Query Log – Tables Maintenance The Database Query Log tables could get very large, depending on the type of logging you have set up at your site. Be sure to monitor them, and manually purge them when the data is no longer needed and/or the tables get too large.
Table Name         Description
DBQLExplainTbl     Contains the EXPLAIN of the query
DBQLObjTbl         Populated if object info is requested for the query
DBQLLogTbl         The main table for DBQL
DBQLRuleCountTbl   Reserved for internal use
DBQLRuleTbl        The rule table for DBQL
DBQLSQLTbl         The SQL for the query
DBQLStepTbl        Step-level information
DBQLSummaryTbl     Populated if summary info is requested
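A hedged sketch of such a purge (the 90-day cutoff is an arbitrary example, and the table and column names should be confirmed against your release's Data Dictionary):

```sql
/* Remove DBQL log rows older than 90 days; assumes the main   */
/* log table carries a CollectTimeStamp column.                */
DELETE FROM DBC.DBQLLogTbl
WHERE CollectTimeStamp < CURRENT_TIMESTAMP - INTERVAL '90' DAY;
```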
Database Query Log – Table Maintenance

DBQLExplainTbl:    The EXPLAIN of the query.
DBQLObjTbl:        Object info for the query.
DBQLLogTbl:        The main table for DBQL.
DBQLRuleCountTbl:  Reserved for internal use.
DBQLRuleTbl:       The rule table for DBQL.
DBQLSQLTbl:        The SQL for the query.
DBQLStepTbl:       Step-level information.
DBQLSummaryTbl:    Populated if summary info is requested.
A Recommended Structure As the administrator, you have responsibility for access control. You may need to define the following to control the function of each user of the Teradata database system:
• Users
• User profiles
• Macros
• Views
• Databases
On the facing page is a recommended structure.
For more information on sample structures, refer to “System-Level Considerations” in Database Design.
For more information on profiles, refer to “Appendix C” in Database Design.
A note about capacity planning In addition to setting up and maintaining structures, you should keep up with activity and workload changes so that you can plan for additional capacity when needed. When analyzing workloads for capacity planning, be sure to look at:
• Batch windows
• Backup windows
• Maintenance windows
• Ad-hoc decision support queries.
A Recommended Structure

[Diagram: a recommended structure. Profiles INQ_PROFILE (users INQ_USER_1, INQ_USER_2), UPD_PROFILE (users UPD_USER_1, UPD_USER_2), and MAINT_PROF (users MAINT_1, MAINT_2). Inquiry users access views and macros in INQ_DB with SELECT and EXECUTE; update users access UPD_DB with SELECT, EXECUTE, INSERT, DELETE, and UPDATE. The view/macro databases hold privileges (SELECT; SELECT, INSERT, DELETE, UPDATE; GRANT) on the underlying TABLE database, and the maintenance users hold DROP and CREATE TABLE, CHECKPOINT, DUMP, and RESTORE on it.]
Access Control Mechanisms The Teradata software provides mechanisms you can use to define and control access through the logon process and the defining of user privileges.
Privileges can be further filtered by the definition of views, macros, and stored procedures. Users can then be given access to a database object through a view, macro, or stored procedure. Privileges are limited by what has been defined in these views, macros, and stored procedures.
A well-designed system protects the data resource from corruption caused by careless or destructive user access.
Access Control Mechanisms

[Diagram:
 • Logon processing: DBC.Dbase, DBC.LogonRuleTbl, DBC.SessionTbl, DBC.EventLog
 • User privileges: DBC.AccessRights, DBC.TVM, DBC.AccLogRuleTbl, DBC.AccLogTbl
 • Information (data tables) reached through Views, Macros, and Stored Procedures]
Plan and Follow-up Establish a set of procedures that will help you administer the Teradata Database. Document these procedures and periodically refer to them.
Plan and Follow-up
Review the material you have learned and experienced in this course. From it, develop a checklist of tasks. For example:

1. Set up a job that periodically checks the size of your dictionary tables.
2. Set up a job that periodically checks the size of your application databases. Evaluate them for even data distribution on AMPs. Ensure that users' permanent space is being used efficiently. Reallocate space if necessary.
3. Verify adequate Spool_Reserve.
4. Set up and document the definition of users, privileges, and an accounting system (if you don't have one already).
5. Check the allocation of the Crashdumps database.
6. Install and run the ResUsage macros at regular intervals. Evaluate and review reports for even distribution of processing.
Appendix A
Review Questions/Solutions
Table of Contents

Module 1 – Getting to Teradata ........................... 3
Module 2 – Building the Database Environment ............. 4
Module 3 – Databases, Users and the Data Dictionary ...... 5
Module 4 – Space Allocation and Usage .................... 6
Module 5 – Teradata Accounting ........................... 7
Module 6 – Access Rights ................................. 8
Module 7 – Teradata Utilities ............................ 10
Module 8 – Meta Data Services ............................ 12
Module 9 – Teradata Warehouse Miner ...................... 13
Module 11 – Disaster Recovery ............................ 14
Module 12 – Archiving Data ............................... 15
Module 13 – Restoring Data ............................... 16
Module 14 – Permanent Journals ........................... 17
Module 15 – Data Recovery Operations ..................... 18
Module 1: Getting to Teradata
Review Questions

Indicate whether a statement is True (T) or False (F).

1. The Teradata Director Program (TDP) facilitates communication between LAN clients and the Teradata database.
   False. TDP facilitates communication between mainframe clients and the Teradata database.
2. Channel processor (CP) refers to the database server hardware that connects directly to the channel.  T F
3. TDP commands entered from the MVS or VM console are not executed until you execute the RUN command.  T F
4. If your mainframe communicates with your database via a "PBSA," this means that the host-channel adapter uses a PCI Bus ESCON Adapter.  T F
5. You use two physical LAN connections per node to support concurrent sessions.
   False. There are two LAN connections for redundancy.
6. CLI is an API that provides control over Teradata connectivity.  T F
Module 2: Building the Database Environment
Review Questions

Indicate whether a statement is True (T) or False (F).

1. You should use system user DBC to create application users and databases.
   False. You should create and log on as an administrative user to perform these tasks.
2. An owner or parent is any object (user or database) above the current/selected object in the hierarchy.  T F
3. A child object can have only one owner.  T F
Module 3: Databases, Users and the Data Dictionary
Review Questions

1. You can give the authority to use the CREATE DATABASE and CREATE USER statements only to system administrators.
   False. You can give CREATE DATABASE and CREATE USER statement authority to any application user.
2. All Profile designations are effective immediately.
   False. Spool and Temp space are immediate. Others are effective upon next logon.
3. System views have been created to provide data dictionary data to users of the system.  T F
4. What is an advantage of using Profiles?
   Simplifies user management.
5. In which two places are password security information defined?
   A:
   B:

Match the view name with its purpose.

___ Children    A. Data about tables, views, macros.
___ Databases   B. Information about hierarchical relationships.
___ Tables      C. Information about databases, users, and immediate parents.
Module 4: Space Allocation and Usage
Review Questions

1. Space limits are enforced at the table level.
   False. Space limits are enforced at the database level.
2. When you use the GIVE statement to transfer a database or user to a new owner, all space limits assigned to the transferred object remain the same.  T F
3. You should reserve anywhere from 35-40% of total available space for spool.  T F
Module 5: Teradata Accounting
Review Questions

1. What does AMPUsage monitor?
   Logical I/Os and CPU time explicitly requested by the database software.
2. If you use Account String Expansion, from what table must you be sure to delete rows?
   The table is DBC.Acctg. Do this using the view DBC.AMPUsage.
3. If you wanted to know the AMP CPU time and logical disk I/O for a particular user, how would you find out?
   Use the DBC.AMPUsage view.
8
Module 6: Access Rights Review Questions

Indicate whether each statement is True (T) or False (F).

1. There are only two types of access rights or privileges: explicit and implicit.
   False. There are three types of access rights; automatic access rights are missing from the list.
2. The statements you use to manage access rights are GRANT and REVOKE.
3. As the administrator, you can set up a hierarchy so that when new objects are added to the system, selected users can automatically gain appropriate rights on those objects.
4. If a user creates a table, that user automatically has the rights SELECT, INSERT, and DELETE.
5. If user SYSDBA creates the user Marketing, both users have CREATE/DROP DATABASE/USER rights on user Marketing.
   Normally, only SYSDBA would have those rights.
6. If you want to remove a user but keep its children, you do a MODIFY USER on the children.
   False. You would GIVE the children to another parent.
7. You can GIVE both databases and tables.
   False. You cannot use the GIVE command on tables.
Module 6: Access Rights Review Questions - Continued

Indicate whether each statement is True (T) or False (F).

8. A user may use the SET ROLE command to set their current role to any defined role in the system.
9. Roles may only be granted to users and other roles.
Module 7: Teradata Utilities Review Questions

Indicate whether each statement is True (T) or False (F).

1. Most host-based utilities can run only on channel-attached systems.
   False. There are versions for LAN-attached hosts as well.
2. You can initiate AMP-based utilities via the DBC Console (HUTCNS), Teradata Manager Remote Console, cnsterm, and Database Window (DBW).
3. You can only access Ferret through the DBW.
4. You can access Query Session through the DBC Console Interface (HUTCNS) from a VM terminal.
Module 7: Teradata Utilities Review Questions - Continued

5. The CheckTable utility features two levels of internal table checking.
   False. The CheckTable utility features three levels of checking.
6. The Table Rebuild utility rebuilds tables differently depending on whether the table is a fallback, non-fallback, or permanent journal table.
7. You should run SCANDISK before running CheckTable.
8. How can you display the recovery status of a DOWN AMP?
   Use the Recovery Manager LIST STATUS command.
Module 8: Meta Data Services Review Questions

Match each of the following terms with the description that best defines it.

_D_ AIM
_G_ DIM
_B_ Objects
_C_ Classes
_A_ Relationships
_F_ Properties

A. Description of an association between two classes
B. Metadata in the repository
C. Relationships between specific objects
D. Structure for data in the MDS repository
E. Definition of a specific type of metadata
F. Data fields of a class object
G. Specific structure for Teradata metadata
Module 9: Teradata Warehouse Miner Review Questions

Indicate whether each statement is True (T) or False (F).

1. Teradata Warehouse Miner is included with the Teradata Database.
2. You must define one or more Teradata ODBC data sources to use TWM.
3. TWM requires additional Perm and Spool space.
4. TWM log files are not accessible by the DBA.
Module 11: Disaster Recovery Review Questions

1. The Transient Journal and Down-AMP Journals provide automatic data protection.
2. It can be more cost effective in terms of disk space to activate fallback protection for only those tables where an added measure of protection is needed.
3. While RAID 5 requires less disk space than RAID 1, a tradeoff of using RAID 5 is that in the event of a failure, it takes longer to reconstruct data than to switch to a mirrored disk.
4. What does the Transient Journal store?
   A "BEFORE" image of rows to be changed; the image is discarded once the change is committed.
5. Why does the database halt if two AMPs in the same cluster are out of service, even when there is fallback protection?
   Fallback rows go to a different AMP in the same cluster, so fallback does not provide protection in this case.
Module 12: Archiving Data Review Questions

Indicate whether each statement is True (T) or False (F).

1. Since the archive process can be intensive, you may want to create a user just for archiving to free your user ID for other processes while the archive is running.
2. The Archive and Recovery utility protects against more types of potential data loss than automatic data protection features.
3. Recovery and FastLoad provide the same ease and speed to recover data.
   False. ARC can recover a large number of objects with one command. FastLoad operates on a table-by-table basis.
Module 13: Restoring Data Review Questions

Indicate whether each statement is True (T) or False (F).

1. You cannot restore entities that are not defined in the data dictionary.
2. When you execute restore operations, any databases or users created since the archive of the database are dropped when you restore the DBC and user databases.
3. You can only COPY information to the system on which it was originally archived.
   False. You can COPY information to another system.
4. You can restore triggers.
Module 14: Permanent Journals Review Questions

Indicate whether each statement is True (T) or False (F).

1. A permanent journal stores committed, uncommitted, and aborted changes to a row in a table.
2. A database or user can have many permanent journals.
   False. A database or user can have only one permanent journal.
3. A permanent journal is a good substitute for RAID technology or fallback protection.
   False. Permanent journals maintain images for only those rows that have changed. RAID and fallback provide greater protection because they store duplicate images for all rows in a table.
4. You can use the Permanent Journal for a daily archive (in conjunction with tape backup for disaster recovery).
5. You can have DUAL BEFORE and DUAL AFTER images stored by the Permanent Journal.
6. You use the CREATE JOURNAL statement to create a Permanent Journal.
   False. You create the Permanent Journal when you define a user or database using CREATE/MODIFY USER/DATABASE.
7. Tables that use the Permanent Journal must be in the same database as the Permanent Journal.
   False. A table can use a Permanent Journal in any database.
Module 15: Data Recovery Operations Review Questions

Indicate whether each statement is True (T) or False (F).

1. The purpose of a permanent journal is to return existing data tables to a previous (rollback) or subsequent (rollforward) checkpoint.
2. In general, rollback operations help you recover from software failures and rollforward operations help you recover from hardware failures.
3. All AMPs must be online in order to issue the CHECKPOINT statement.
   False. If an AMP is offline when you issue the statement, ARC generates a system log entry that marks the checkpoint as soon as the AMP is back online.
Appendix B
Lab Exercises
Lab 1 (Follows Module 3)
1. Use Teradata Administrator to find out how many levels are in your hierarchy.
2. What password attributes are in effect in the system?
Number of days to expire password: ________
Minimum number of characters required: ______
Maximum number of characters required: ______
Are digits allowed? Yes _____  No _____
Are special characters allowed? Yes _____  No _____
Maximum failed logons permitted: ______ (0 = never lock)
Hours to elapse before unlocking: ______
Days to expire before password reuse: ______
3. How many Users and how many Databases are in your system?
4. How many Profiles have been defined in your system?
5. What Profile is your userid assigned to?
6. Create a user as a child of your user (name it your userid_A).
7. Create a Profile called your userid_P and assign your child to it.
8. Modify the Profile userid_P to have a spool limit that causes a query executed by userid_A to fail. (Hint: you may have to GRANT the child access to one of your tables.)
Lab 2 – (Follows Module 5)
1. Using the DBC.DiskSpace view, find the total disk storage capacity of the system to which you are logged on:
   Total capacity ___________________
2. Using the same view, find how much of the space is currently in use:
   Current space utilization ________________
   OPTIONAL: Write a query to show what percentage of system capacity is currently in use.
   OPTIONAL: Write a query to show what percentage of system capacity is currently in use by the user DBC.
3. Using the DBC.AMPUsage view, find the number of AMPs (hardware or vproc) defined on your system.
   Number of AMPs (real or virtual) ______________
4. Using the DBC.AccountInfo view, list all of your valid account codes.
   ___________________ ___________________
   ___________________ ___________________
5. Using the DBC.AMPUsage view, write a query to show the number of AMP CPU seconds and logical disk I/Os that have been charged to your User ID and Account.
Lab 3 – (Follows Module 6)
1. Use Teradata Administrator to determine which database objects you have access rights on.
2. Using the DBC.UserRights view, project the DatabaseName, TableName, and AccessRight columns for the privileges you now hold. How many access rights do you have?
3. Create a view called "Active Sessions" that selects the session name and session number from the session_info view.
   Optional: run an EXPLAIN of the CREATE VIEW statement you just submitted. (You may need to change the view name to make the EXPLAIN run.) Were Access Rights generated automatically? (Use abbreviations.)
4. Create a Role that allows userid_A access to Accounts and Trans in your user. Test this by logging on as userid_A and trying to select from the three tables.
5. Determine what role you are assigned to.
6. Determine the Access Rights available to that Role and the members assigned to it.
Lab 4 – (Follows Module 7)

1. Using the DBC.DiskSpace view, find your per-AMP limits of:
   Perm space ________________
   Spool space ________________
2. Using the DBC.TableSize view, locate the tables in the system that use the most perm space and are not in user DBC.
   1._____________________ 2._______________________
   3._____________________ 4._______________________
   5._____________________ 6._______________________
3. Using the DBC.TableSize view, locate the tables in the system that use the most perm space and are in user DBC. Show both the current perm usage and the peak perm usage.
   Table                Current          Maximum
   1._________________  ______________  ______________
   2._________________  ______________  ______________
   3._________________  ______________  ______________
   4._________________  ______________  ______________
   5._________________  ______________  ______________
4. How much space is available for spool in this system at this time?
5. Using the DBC.Children view, list the ownership hierarchy for your user's parents from DBC down to your user.
6. Using the view DBC.Databases, find your:
   Immediate parent's name ____________________
   Default account name ____________________
   Perm space limit ____________________
   Spool space limit ____________________
Determine the number of users in your system: __________
7. Using the DBC.Tables view, find the number of tables in the Data Dictionary that are :
Fallback Protected _______________ Not Fallback Protected _______________
8. Using the DBC.Indices view, find:
   The number of tables that have non-unique primary indexes: ___________
   The number of join indexes: ___________
   The number of value-ordered indexes: ___________
9. Start the Database Window or cnsterm or Teradata Manager => Remote Console. Execute the Query Configuration utility. Execute the Get Config utility. Execute the Ferret utility and do a Showspace in summary mode. Set the Scope for the 'resusagespma' table and do a Showblocks command. Execute it again using the /M option.
10. Open the Database Window and start the RcvManager utility.
    Are there any recovery sessions? _________________
11. Log on to the database and enter the Help Database Crashdumps command.
    Are there any crashdumps saved? _________________
12. Start Teradata Administrator. Expand the hierarchy to find your User ID. (You may need to run dbc.children to help.) Look at the database SysDBA and find the space used by all of its children. Find the space used by all of SysDBA's tables. Find the definition and details for SysDBA's largest table. (Include distribution, row count, size, and columns.)
Lab 5 – (Follows Module 12 - Restoring Data)
1. Populate your Accounts, Customer, and Trans tables (reference the database AU).
2. Create a simple ARCHIVE script to archive the database to your C drive and store the script on your PC in the home directory of drive C.
3. Execute your ARC script and direct the output to a file for viewing. (e.g. arcmain.exe<c:\arctest.txt>c:\arctestout.txt)
4. Delete all from both your Accounts and Customer tables and drop your Trans table.
5. Create a simple RESTORE script to Restore your database from the Archive file created in Step 3.
6. Execute your RESTORE script (e.g. arcmain.exe<c:\restoretest.txt>c:\restoreout.txt)
7. Check your results: there should be 10K Accounts rows, 7K Customer rows, and 15K Trans rows.
Lab 6 – (Follows Module 14 - Data Recovery Ops)
We will use the Customer table to test Permanent Journaling.
1. Modify your user ID to create a permanent journal. You may leave the defaults set to NO BEFORE, NO AFTER images. This means you must start permanent journaling at the table level; it will not default.
2. Clean out your Customer table by deleting all of the rows. ALTER your Customer table to activate SINGLE BEFORE and SINGLE AFTER image journaling assigned to the journal you created in question one.
3. Now insert a pair of rows into the Customer table and place a checkpoint in the permanent journal before AND after you execute the INSERTs.
4. Select the rows you added to the Customer table.
Note: Steps 5-7 are optional and are used only if ARC is available.
5. Using ARCMAIN, roll back your two inserts.
6. Now select the rows you added to the Customer table.
7. The rows you added to the Customer table are gone because you rolled them out of the table. Again using ARCMAIN, roll the rows back in with a ROLLFORWARD request. Verify your results as you did in question six.
8. Remove the table assignment to the journal, and drop the journal from your user ID.
Appendix C
Lab Solutions
Lab 1-1
Use Teradata Administrator to find out how many levels are in your hierarchy.
Lab 1-2
What password attributes are in effect in the system?
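One way to answer this is with a query, sketched below. It assumes the password rules are exposed through the DBC.SecurityDefaults view (built over DBC.SysSecDefaults); the column names shown are typical but should be verified against your release's Data Dictionary reference.

```sql
-- Hedged sketch: column names assume the DBC.SecurityDefaults view;
-- verify them against your release before relying on this.
SELECT ExpirePassword      -- days until a password expires
     , PasswordMinChar     -- minimum password length
     , PasswordMaxChar     -- maximum password length
     , PasswordDigits      -- are digits allowed?
     , PasswordSpecChar    -- are special characters allowed?
     , MaxLogonAttempts    -- failed logons before lock (0 = never lock)
     , LockedUserExpire    -- time before an automatic unlock (verify units)
     , PasswordReuse       -- days before a password can be reused
FROM   DBC.SecurityDefaults;
```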
Lab 1-3a
How many Users and how many Databases are in your system?
Lab 1-4
How many Profiles have been defined in your system?
Lab 1-5
What Profile is your userid assigned to?
Lab 1-6
Create a user as a child of your user (name it your userid_A).
Lab 1-7
Create a Profile called your userid_P and assign your child to it.
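A minimal sketch for Labs 1-6 and 1-7. All object names, the password, and the space values are placeholders; substitute your own user ID and site-appropriate values.

```sql
-- Lab 1-6: create a child user under your own user (placeholder names).
CREATE USER userid_A FROM userid AS
    PASSWORD = secret1,
    PERM = 0,
    SPOOL = 500000;

-- Lab 1-7: create a profile and assign the child user to it.
CREATE PROFILE userid_P AS SPOOL = 500000;
MODIFY USER userid_A AS PROFILE = userid_P;
```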
Lab 1-8
Modify the Profile userid_P to have a spool limit that causes a query executed by userid_A to fail. (Hint: you may have to GRANT the child access to one of your tables.)
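One way to set up the failure, as a sketch. The spool value that actually forces the error depends on your data volumes, and the table name is a placeholder.

```sql
-- Shrink the profile's spool limit so a spool-hungry query run by
-- userid_A exceeds it (the value here is illustrative only).
MODIFY PROFILE userid_P AS SPOOL = 10000;

-- Let the child read one of your tables, then log on as userid_A and
-- run a large query; it should abort with a "no more spool space" error.
GRANT SELECT ON userid.Accounts TO userid_A;
```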
Lab 2
2-1 Using the DBC.DiskSpace view, find the total disk storage capacity of the system to which you are logged on:
Total capacity ___________________
2-2 Using the same view, find how much of the space is currently in use:
Current space utilization ________________
2-2, continued
OPTIONAL: Write a query to show what percentage of system capacity is currently in use.
2-2, continued
OPTIONAL: Write a query to show what percentage of system capacity is currently in use by user DBC.
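A sketch of both optional queries, assuming the standard DBC.DiskSpace columns MaxPerm (capacity) and CurrentPerm (space in use):

```sql
-- Percent of total system capacity currently in use.
SELECT CAST(SUM(CurrentPerm) AS FLOAT) * 100 / SUM(MaxPerm) AS PctUsed
FROM   DBC.DiskSpace;

-- Percent of total system capacity in use by user DBC alone.
SELECT CAST(SUM(CASE WHEN DatabaseName = 'DBC'
                     THEN CurrentPerm ELSE 0 END) AS FLOAT)
       * 100 / SUM(MaxPerm) AS DBC_PctOfSystem
FROM   DBC.DiskSpace;
```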
2-3 Using the DBC.AMPUsage view, find the number of AMPS (hardware or Vproc) defined on your system.
Number of AMPs (real or virtual) ______________
2-4 Using the DBC.AccountInfo view, list all of your valid account codes.
___________________ ___________________
___________________ ___________________
2-5 Using the DBC.AMPUsage view, write a query to show the number of AMP CPU seconds and logical disk I/Os that have been charged to your User ID and Account.
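A sketch of the query, assuming the standard DBC.AMPUsage columns CpuTime and DiskIO:

```sql
-- CPU seconds and logical I/Os accumulated for your user, by account.
SELECT UserName
     , AccountName
     , SUM(CpuTime) AS TotalCPUSec
     , SUM(DiskIO)  AS TotalDiskIO
FROM   DBC.AMPUsage
WHERE  UserName = USER
GROUP  BY 1, 2;
```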
Lab 3
3-1 Use Teradata Administrator to determine which database objects you have access rights on.
3-2 Using the DBC.UserRights view, project the databaseName, TableName, and AccessRight columns for the privileges you now hold.
3-3 Create a view called “Active Sessions” that selects the session name and session number from the session_info view.
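A sketch of the view definition. The underlying session_info view and its column names are assumptions about the course environment; adjust them to the columns the view actually exposes.

```sql
-- Hypothetical: assumes a course view "session_info" exposing a
-- session name and a session number column.
CREATE VIEW active_sessions AS
SELECT sessionname
     , sessionno
FROM   session_info;
```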
3-3, continued
Select from the view active_sessions.
3-3, continued
Optional: run an EXPLAIN of the CREATE VIEW statement you just submitted. (You may need to change the view name to make the EXPLAIN run.)
3-3, continued
Were Access Rights generated automatically? (Use abbreviations.)
3-4 Create a Role that allows userid_A access to Accounts and Trans in your user.
3-4, continued
3-4, continued
3-4, continued
Test this by logging on as userid_A and trying to select from the three tables.
3-5 Determine what role you are assigned to.
3-6 Determine the Access Rights available to that Role and the members assigned to it.
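One way to answer 3-5 and 3-6, sketched against the role views introduced with Teradata roles (DBC.AllRoleRights and DBC.RoleMembers; verify the view names on your release). MyRole is a placeholder for the role found in 3-5.

```sql
-- 3-5: the current role for this session.
SELECT ROLE;

-- 3-6: rights held by the role (MyRole is a placeholder).
SELECT RoleName, DatabaseName, TableName, AccessRight
FROM   DBC.AllRoleRights
WHERE  RoleName = 'MyRole';

-- 3-6: users (or roles) granted the role.
SELECT RoleName, Grantee
FROM   DBC.RoleMembers
WHERE  RoleName = 'MyRole';
```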
Lab 4
4-1 Using the DBC.DiskSpace view, find your per-AMP limits of:
Perm space ________________
Spool space ________________
SELECT *
FROM   dbc.diskspace
WHERE  databasename = USER;
4-2 Using the DBC.Tablesize View, locate the tables in the system that use the most perm space, and are not in user DBC.
1._____________________2._______________________
3._____________________4._______________________
5._____________________6.________________________
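A sketch using DBC.TableSize, where QUALIFY/RANK keeps the six largest tables; CurrentPerm must be summed across AMPs to get each table's total size.

```sql
-- Six largest tables outside user DBC, by total current perm.
SELECT   DatabaseName
       , TableName
       , SUM(CurrentPerm) AS TotalPerm
FROM     DBC.TableSize
WHERE    DatabaseName <> 'DBC'
GROUP BY 1, 2
QUALIFY  RANK() OVER (ORDER BY SUM(CurrentPerm) DESC) <= 6;
```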
4-3 Using the DBC.Tablesize View, locate the tables in the system that use the most perm space, that are in user DBC. Show both the current perm usage and the peak perm usage.
Table Current Maximum
1._________________ ______________ ______________
2._________________ ______________ ______________
3._________________ ______________ ______________
4._________________ ______________ ______________
5._________________ ______________ ______________
4-4 How much space is available for spool in this system at this time?
4-5 Using the DBC.Children view, list the ownership hierarchy for your user’s parents from DBC down to your user.
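A sketch using DBC.Children. Each row pairs your user with one of its owners; arranging them top-down from DBC is then done by inspection.

```sql
-- Every owner (ancestor) of your user, one row per (parent, child) pair.
SELECT Parent, Child
FROM   DBC.Children
WHERE  Child = USER
ORDER  BY Parent;
```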
4-6 Using the view DBC.Databases, find your:
    Immediate parent's name ____________________
    Default account name ____________________
    Perm space limit ____________________
    Spool space limit ____________________
Determine the number of users in your system: __________
4-7 Using the DBC.Tables view, find the number of tables in the Data Dictionary that are :
Fallback Protected _______________
4-7 Using the DBC.Tables view, find the number of tables in the Data Dictionary that are :
Not Fallback Protected _______________
4- 8 Using the DBC.Indices view, find the number of tables that have non-unique primary indexes: ___________
4-8, continued
Using the DBC.Indices view, find the number of tables that have join indexes: ___________
4-8, continued
Using the DBC.Indices view, find the number of value-ordered indexes: ___________
4-9 Start the Database Window or cnsterm or Teradata Manager => Remote Console.
Execute the Query Configuration utility.
4-9, continued - Execute the Get Config utility.
4-9, continued - Execute the Ferret utility and do a Showspace in the summary mode.
4-9, continued - Set the Scope for the 'resusagespma' table and do a Showblocks
4-9, continued - Set the Scope for the 'resusagespma' table and do a Showblocks
4-9, continued
Execute it again using the /M option.
4-10 Open the Database Window and start the RcvManager utility.
Are there any recovery sessions?_________________
4-11 Log on to the database and enter the Help Database Crashdumps command.
Are there any crashdumps saved?_________________
4- 12 Start Teradata Administrator.
4-12, continued - Expand the hierarchy to find your User ID.
You may need to run dbc.children or use the Teradata Administrator search facility.
4-12, continued - Look at database SysDBA and find the space used by all its children.
4-12, continued - Find the space used by all of SysDBAs tables.
4-12, continued - Find the definition and Details for SysDBAs largest table. (Include distribution, row count, size, and columns.)
4-12, continued - Find the definition and Details for SysDBA's largest table. (This shows the table choices that allow the various distribution, row count, and size displays.)
Lab 5
5-1 Populate your Accounts, Customer, and Trans tables (reference the database AU).
5-2 Create a simple ARCHIVE script to archive the database to your C drive, and store the script on your PC in the home directory of drive C.
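A minimal ARCMAIN script sketch. The tdpid, logon credentials, database name, and FILE name are placeholders, and exact ARC syntax varies by release; check the Archive/Recovery reference before running it.

```
LOGON tdpid/userid,password;
ARCHIVE DATA TABLES (userid) ALL,
        RELEASE LOCK,
        FILE = ARCHIVE;
LOGOFF;
```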
5-3 Execute your ARC script and direct the output to a file for viewing.
(e.g. arcmain.exe<c:\arctest.txt>c:\arctestout.txt)
5-3 Execute your ARC script and direct the output to a file for viewing.
(e.g. arcmain.exe<c:\arctest.txt>c:\arctestout.txt) (continued)
5-4 - Delete all from both your Accounts and Customer tables and drop your Trans table.
5-5 Create a simple RESTORE script to restore your database from the Archive file created in Step 3.
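A matching RESTORE script sketch, under the same caveats: tdpid, credentials, database name, and FILE name are placeholders, and the exact RESTORE syntax should be verified against the Archive/Recovery reference.

```
LOGON tdpid/userid,password;
RESTORE DATA TABLES (userid) ALL,
        RELEASE LOCK,
        FILE = ARCHIVE;
LOGOFF;
```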
5-6 Execute your RESTORE script (e.g. arcmain.exe<c:\restoretest.txt>c:\restoreout.txt).
5-7 Check your results: there should be 10K Accounts rows, 7K Customer rows, and 15K Trans rows.
Lab 6
6-1 Modify your user ID to create a permanent journal. You may leave the defaults set to NO BEFORE, NO AFTER images. This means you must start permanent journaling at the table level; it will not default.
6-2 Clean out your Customer table by deleting all of the rows.
6-2, continued
ALTER your Customer table to activate SINGLE BEFORE and SINGLE AFTER image journaling assigned to the journal you created in question one.
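A sketch of steps 6-1 and 6-2. The journal name PJ and the user name are placeholders, and the exact MODIFY USER and ALTER TABLE journal clauses should be verified against your release's DDL reference.

```sql
-- Step 6-1: add a permanent journal table to your user; the database-level
-- defaults remain NO BEFORE / NO AFTER (placeholder names).
MODIFY USER userid AS DEFAULT JOURNAL TABLE = userid.PJ;

-- Step 6-2: empty the table, then point it at the journal with
-- single before- and after-image journaling.
DELETE FROM Customer ALL;
ALTER TABLE Customer,
      WITH JOURNAL TABLE = userid.PJ,
      BEFORE JOURNAL,
      AFTER JOURNAL;
```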
6-3 Now insert a pair of rows into the Customer table and place a checkpoint in the permanent journal before AND after you execute the INSERTs.
6-3, continued - Now insert a pair of rows into the Customer table and place a checkpoint in the permanent journal before AND after you execute the INSERTs.
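A sketch of step 6-3. The checkpoint names and journal name are placeholders, and the INSERT value lists are deliberately elided; supply column values for your Customer table.

```sql
-- Named checkpoints bracketing the two inserts (placeholder names).
CHECKPOINT userid.PJ NAMED before_ins;

INSERT INTO Customer VALUES (...);  -- supply real column values
INSERT INTO Customer VALUES (...);

CHECKPOINT userid.PJ NAMED after_ins;
```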
6-4 Select the rows you added to the Customer table.
6-5 Using ARCMAIN, rollback your two inserts.
6-5, continued - Using ARCMAIN, rollback your two inserts.
6-5, continued - Using ARCMAIN, rollback your two inserts.
6-6 Now select the rows you added to the Customer table.
6-7 The rows you added to the Customer table are gone because you rolled them out of the table.
Again, using ARCMAIN, roll the rows back with a ROLLFORWARD request.
6-7, continued - Using ARCMAIN, roll the rows back with a ROLLFORWARD request.
6-7, continued - using ARCMAIN, roll the rows back with a ROLLFORWARD request. Verify your results as you did in question six.
6-7, continued - Verify your results as you did in question six.
6-8 Remove the table assignment to the journal, and drop the journal from your user ID.
6-8, continued - Remove the table assignment to the journal, and drop the journal from your user ID.
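A sketch of step 6-8. The user name and journal name PJ are placeholders, and the DROP DEFAULT JOURNAL TABLE clause should be verified against your release's syntax.

```sql
-- Detach the Customer table from the journal...
ALTER TABLE Customer,
      NO BEFORE JOURNAL,
      NO AFTER JOURNAL;

-- ...then drop the journal table from your user.
MODIFY USER userid AS DROP DEFAULT JOURNAL TABLE = userid.PJ;
```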
Appendix D 1
Appendix D
Session Pools
Tuning with Teradata
Available Values for Compose Graph
Starting a Session Pool

The sessions in a pool are logged on to the Teradata server once using the START POOL operator command and remain logged on until they are explicitly logged off by the STOP POOL operator command.
Starting a Session Pool

Start Pool Num 8 LOGON Orders, xxxxxxxxx
16:51:24 T: TDP0865 STARTING POOL ID: POOL0002
16:51:24 T: TDP0866 ADDING 0008 SESSIONS TO POOL ID: POOL0002
16:51:25 T: TDP0899 SESSION 1288 STARTED, JOB POOL0002
16:51:25 T: TDP0899 SESSION 1289 STARTED, JOB POOL0002
16:51:26 T: TDP0899 SESSION 1290 STARTED, JOB POOL0002
16:51:26 T: TDP0899 SESSION 1291 STARTED, JOB POOL0002
:
[Diagram: TDP pool POOL0002 holds session numbers 1288-1295, assigned two per PE: 1288/1292, 1289/1293, 1290/1294, 1291/1295.]
Using a Session Pool

When a pool is established, all sessions are not in use. When an application sends a logon request whose string matches that of the session pool, the application is assigned an available session from the pool. The session is marked "in use" and cannot be reassigned to another application until the current application logs off. The logoff returns the session to the pool, where it becomes available for use by another application.
Using a Session Pool
.LOGON ORDERS xxxxxxxxx          .LOGON ANOTHER, yyyyyyyyy

[Diagram: the first logon matches pool POOL0002 (sessions 1288-1295) and is assigned an available pool session; the non-matching logon is given a new session-control session, 1904, outside the pool.]
• Using a session pool typically saves 2 to 3 seconds of logon time.
• If all the sessions in the pool are active, additional logons fail.
• If sessions in the pool are disabled, logons fail.
Ending a Session Pool

The following commands may be used in conjunction with ending a session pool:

DISABLE POOL   Disables logons but does not end the pool.
STOP POOL      Logs off unused sessions and disables new logons. When all sessions have logged off, the pool is ended.
LOGOFF POOL    Logs off active sessions and ends the pool.
Ending a Session Pool
Stop Pool ID Pool0002
20:02:24 T: TDP0867 STOPPING POOL ID: POOL0002
20:02:24 T: TDP0861 POOL ID: POOL0002 IS NOW: STOPPED
20:02:25 T: TDP0898 SESSION 1295 ENDED, JOB POOL0002 RC = 0000
20:02:25 T: TDP0898 SESSION 1294 ENDED, JOB POOL0002 RC = 0000
20:02:26 T: TDP0898 SESSION 1293 ENDED, JOB POOL0002 RC = 0000
20:02:26 T: TDP0898 SESSION 1292 ENDED, JOB POOL0002 RC = 0000
20:02:27 T: TDP0898 SESSION 1291 ENDED, JOB POOL0002 RC = 0000
[Diagram: after STOP POOL, the pool's sessions have ended; only session-control session 1904 remains on its PE.]
UNIX Tunable Parameters

The UNIX tunable parameters must be set the same on all TPA nodes, just as all TPA nodes must have the same kernel image. Other nodes (non-TPA) can be tuned independently of the TPA nodes and of each other; however, all nodes connected via the BYNET must have the same version of the BYNET and PDE packages. If nodes are not in sync, PDE will work but XTP won't. You should use the rallsh command when sending commands to all nodes at once.
Do NOT Touch

The following table describes the UNIX tunable parameters that should not be altered without the involvement of the LSGSC. Modifying these parameters could have a serious impact on the performance of your system, to the point of rendering the system non-functional.
Tunable      Description                                    What could happen
NCALL        Number of system call-out table entries        UNIX panic
MAXUP        Maximum number of child processes              TPA reset or UNIX panic
SEMMNU       Maximum number of semaphore undo structures    TPA reset or UNIX panic
KDBSYMSIZE   Maximum size of kdb symbol table               Kernel will not link; system will not come up.
UFSNINODE    Maximum number of open ufs files system-wide   Site may increase, but do not decrease. If this is set too low, the kernel will not link and the system will not come up.
SEGMAPSZ     Number of pages usable by file I/O             Jobs will go very slowly. The system will appear to be hung.
SFNOLIM      Soft limit on number of open files. The soft limit can be less than the hard limit, but not more. Only superuser can change the hard limit.   TPA reset or UNIX panic
NUMTIM       Maximum number of timod STREAMS modules        You will not be able to log on any sessions.
NUMTRW       Maximum number of tirdwr STREAMS modules       You will not be able to log on any sessions.
NPROC        Maximum number of processes                    Database fails to come up if set too low. Teradata sets it to 5000, which is fine for most sites, but it may need to be raised if there is a large number of vprocs in a clique (more than ~48-50) that could migrate to a node, or if other applications require an unusually large number of tasks.
Likely to Affect Teradata RDBMS Performance

The following table describes the UNIX tunable parameters that are likely to affect the performance of the Teradata RDBMS; however, we have not yet determined their full impact. MODIFY THESE PARAMETERS AT YOUR OWN RISK.

Tunable       Description                                                Usage Notes
AFFIN_ON      Specifies whether affinity scheduling is enabled or disabled    We generally get better performance with this enabled, but one benchmark with very high merge activity found that it caused significant performance degradation.
BUFHWM        Maximum amount of memory (in KB) used by block I/O buffers. Teradata sets it to 2048.
DMAABLEBUF    The number of pages to reserve exclusively for DMA I/O. Teradata uses the default value.
FASTBUF       Size of STREAMS buffers for fast allocation                Used for systems with 512 MB or more memory; this is the maximum (512 bytes) by default. Systems with 256 MB of memory may want to increase this value.
NPBUF         Number of physical I/O buffer headers (20-4000, default 20). Teradata sets it to 1024.
STRTHRESH     Maximum number of bytes in STREAMS buffers used by host and Teradata Gateway sessions.   Recommended to be 25% to 50% of total system memory. Other systems frequently set this to 0 (no limit).
TRW_HIWATER   Number of bytes in queues to enable flow control. Teradata uses the default value.
TRW_LOWATER   Number of bytes to remove from queues to disable flow control. Teradata uses the default value.
Commonly Set by Other Databases

The following table describes the UNIX tunable parameters that are commonly set by other databases but are not applicable to the Teradata RDBMS. Generally, you can use these parameters to tune other applications without affecting the Teradata RDBMS, as long as you use reasonable or appropriate values for large systems. These parameters should be changed only if you have a specific need to do so.

Tunable    Description
DESFREE    Desired minimum number of free pages
FLCREC     Maximum number of record locking regions
HCORLIM    Hard core file size limit
HDATLIM    Hard data segment size limit
HSTKLIM    Hard stack segment size limit
HVMMLIM    Hard virtual memory limit
LOTSFREE   Amount of free memory that is considered plenty
MAXAIOS    Number of asynchronous I/O system daemons
MAXRAIO    Maximum raw asynchronous I/O buffers
MINFREE    Minimum number of free pages
MSGMAP     Maximum number of entries in message control map
MSGMAX     Maximum size of a message
MSGMNB     Maximum size of a message queue
MSGMNI     Maximum entries in a message queue
MSGSEG     Maximum number of message segments
MSGTQL     Number of system message headers
NUMSP      Maximum number of STREAMS pipes
SCORLIM    Soft core file size limit
SDATLIM    Soft data segment size limit
SEMMAP     Number of entries in the semaphore control map
SEMMNI     Number of semaphore identifiers
SEMMNS     Maximum number of semaphores in the system
SEMMSL     Maximum number of semaphores per set
SEMOPM     Maximum number of semaphore operations per system call
SEMUME     Maximum number of undo entries per undo structure
SHMMAX     Maximum shared memory segment size
SHMSEG     Maximum shared memory segments per task
SSTKLIM    Soft stack segment size limit
SVMMLIM    Soft virtual memory limit
May Change Based on Site’s Desires The following table describes the UNIX tunable parameters that do notaffect the performance of the Teradata RDBMS. These parameters can beset to site requirements. There are lots of others that fall into this class;these are ones that seem particularly useful and benign.
ARG_MAX
  Description: Maximum number of characters in a command line.
  Usage: Consider increasing this if you run complex commands with very long argument lists, such as commands whose arguments take multiple substituted values.

DSTFLAG
  Description: Indicates use of daylight saving time.
  Usage: Set this flag if you want the system time to convert between daylight saving and standard time.

PUTBUFSZ
  Description: Size of the kernel common error message buffer.
  Usage: Determines the amount of kernel message history available in a panic dump.

TIMEZONE
  Description: Difference in minutes from GMT.
  Usage: Set appropriately for your specific time zone.
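Because TIMEZONE is expressed in minutes rather than hours, the value is easy to miscompute. A small arithmetic sketch follows; the U.S. Pacific example is ours, not from the course text, and on classic SVR4 systems the convention is minutes west of GMT (verify the sign convention on your release).

```shell
#!/bin/sh
# TIMEZONE is a minute count relative to GMT (conventionally minutes west
# of GMT on classic SVR4 systems). Example: U.S. Pacific Standard Time
# is 8 hours behind GMT.
hours_west=8
timezone=$(( hours_west * 60 ))
echo "TIMEZONE = $timezone"    # TIMEZONE = 480
```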
For further information on UNIX tunable parameters with SVR4 MP-RAS, see the following manuals:

  NCR UNIX SVR4 MP-RAS General Administration, Command Line Interface, Volume 1
  NCR UNIX SVR4 MP-RAS Devices and Networks, Command Line Interface, Volume 2
  NCR UNIX SVR4 MP-RAS System Configuration, Command Line Interface, Volume 3
Compose Graph—Available Values

The following tables list the values you can monitor with xperfstate and describe what each value indicates.
Value                       Indicates
Available Memory            The amount of memory not currently in use. Not particularly interesting, because systems tend to leave segments in memory until more space is needed.
Paging                      The number of page faults encountered over the sampling interval. (A page fault means a needed page was not in memory and had to be read in.)
Process Switches            The number of times the kernel switched between executing different tasks.
Run Queue Length            The number of tasks waiting to execute.
Broadcast Queue Length      The number of messages in the broadcast queue, for each of BYNET 0 and 1.
HP Broadcast Queue Length   The number of messages in the high-priority broadcast queue, for each of BYNET 0 and 1.
Pt to Pt Bytes Sent         The number of bytes of data sent point-to-point over the BYNET, for each of BYNET 0 and 1.
Pt to Pt Bytes Received     The number of bytes of data received point-to-point over the BYNET, for each of BYNET 0 and 1.
Broadcast Bytes Sent        The number of bytes of data broadcast over the BYNET, for each of BYNET 0 and 1.
Broadcast Bytes Received    The number of bytes of broadcast data received from the BYNET, for each of BYNET 0 and 1.
BYN Blocked Services        The number of times BYNET services were blocked, for each of BYNET 0 and 1.
BYN Unblocked Services      The number of times BYNET services were unblocked, for each of BYNET 0 and 1.
BYN Currently Blocked       The number of service requests currently blocked, for each of BYNET 0 and 1.
BYN Rx Bytes                The number of bytes received by the BYNET driver, for each of BYNET 0 and 1.
BYN Rx Channel Not Ready    The number of times the "channel not ready" condition was encountered, for each of BYNET 0 and 1.
BYN Rx No RCBs Available    The number of times the "no RCBs available" condition was encountered, for each of BYNET 0 and 1.
BYN Rx No Pages Available   The number of times the "no pages available" condition was encountered, for each of BYNET 0 and 1.
BYN Rx Flow Controlled      The number of times the "flow controlled" condition was encountered, for each of BYNET 0 and 1.
BYN Rx Bit Bucket Count     The number of times received data was discarded, for each of BYNET 0 and 1.
BYN Rx Bit Bucket Bytes     The number of bytes of received data that were discarded, for each of BYNET 0 and 1.
BYN Tx Channel Not Ready    The number of times the "channel not ready" condition was encountered during transmission, for each of BYNET 0 and 1.
BYN Tx No RCBs Available    The number of times the "no RCBs available" condition was encountered during transmission, for each of BYNET 0 and 1.
BYN Tx No Pages Available   The number of times the "no pages available" condition was encountered during transmission, for each of BYNET 0 and 1.
BYN Tx No Channel Prog Available   The number of times the "no channel program available" condition was encountered during transmission, for each of BYNET 0 and 1.
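Several of the values above (Paging, the BYNET byte counts, and the condition counts) are counts accumulated over the sampling interval. When working instead from raw cumulative counters, a per-second rate is obtained by differencing two samples and dividing by the interval. The sketch below illustrates the arithmetic; the counter values and 30-second interval are invented for illustration and are not from the course text.

```shell
#!/bin/sh
# Hedged sketch: turn two samples of a cumulative counter into a rate.
# All numbers below are made up for illustration.
prev_faults=12000   # page-fault counter at the start of the interval
curr_faults=12450   # page-fault counter at the end of the interval
interval=30         # sampling interval in seconds

rate=$(( (curr_faults - prev_faults) / interval ))
echo "page faults/sec: $rate"    # page faults/sec: 15
```

The same delta-over-interval computation applies to any of the cumulative BYNET byte or condition counters.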
Appendix E
Acronyms
E-2 Acronyms
List of Acronyms

ACM       Alert Control Module
AMP       Access Module Processor
ANSI      American National Standards Institute
AWS       Administration Workstation
BTEQ      Basic Teradata Query facility
BYNET     Banyan Network (high-speed interconnect)
CLI       Call-Level Interface
CNS       Console Subsystem
CPU       Central Processing Unit
DB        Database
DBA       Database Administrator
DBS       Database System
DBW       Database Window
DMTEQ     Database Manager for Teradata Queries
DUC       Dynamic Utilization Charting
ELA       Error Log Analyzer
GSC       Global Support Center
GUI       Graphical User Interface
LAN       Local Area Network
LDV       Logical Disk Vproc
NUPI      Non-Unique Primary Index
NUSI      Non-Unique Secondary Index
ODBC      Open Database Connectivity
PC        Personal Computer
PDA       Performance Data Analyzer
PDE       Parallel Database Extension
PE        Parsing Engine
PM        Performance Monitor (data collector)
PMON      Performance Monitor (application)
PN        Processor Node
RC        Remote Console
RDBMS     Relational Database Management System
RDC       ResUsage Data Collector
RSS       Resource Sampling Subsystem
RTF       Rich Text Format
SI        Session Information
SNMP      Simple Network Management Protocol
SQL       Structured Query Language
TEQTALK   Teradata Query Talk
TMCLIENT  Teradata Manager Client Application
TMSERVER  Teradata Manager Server Application
TVM       Tables, Views, and Macros
USI       Unique Secondary Index
VPROC     Virtual Processor
WinDDI    Windows Data Definition Interface