
TECHNICAL MANUAL
for the
POSITION ANALYSIS QUESTIONNAIRE
(PAQ)

    Ernest J. McCormick, Ph.D.

    Robert C. Mecham, Ph.D.

    P.R. Jeanneret, Ph.D.


3rd Edition
1989, 1998, 2001 PAQ Services, Inc. All rights reserved.
Published by PAQ Services, Inc., 1911 C Street, Bellingham, WA 98225.

The Position Analysis Questionnaire (PAQ) is copyrighted by the Purdue Research Foundation, West Lafayette, IN 47907.

PAQ is a registered trademark of the Purdue Research Foundation.


    CONTENTS

FOREWORD

THE POSITION ANALYSIS QUESTIONNAIRE (PAQ)
    The Job Elements of the PAQ
    The Organization of the PAQ
    Response Scales for Use with PAQ Job Elements

SCORING OF THE PAQ

DEVELOPMENT OF THE POSITION ANALYSIS QUESTIONNAIRE (PAQ)

RELIABILITY OF JOB ANALYSES WITH THE PAQ
    Reliability of PAQ Analyses for Individual Jobs
    Influences on Reliability

PAQ JOB DIMENSIONS
    Sample of 2200 Jobs
    Factor Analyses
    Job Dimension Scores
    Reliability of Job Dimension Scores

DERIVING PERSONNEL REQUIREMENTS OF JOBS WITH THE PAQ
    Discussion of Test Validity
    Methods of Test Validation in Personnel Selection
        Traditional Methods of Determining Test Validity
        Job Component Validity
        Validity Generalization
    Job Component Validity Based on the PAQ
        Job Component Validity Analyses with GATB Tests
        Job Component Validity with Commercially Available Tests
        Identification of Commercial Tests that Measure GATB Constructs
        Identification of Commercial Tests That Measure Personality Variables
    Comparative Effectiveness of Situationally Specific, Job Component, and Generalized Validation Methods
    PAQ Computer Outputs of Job Component Validity Data
    Attribute Ratings of PAQ Job Elements

USE OF THE PAQ IN JOB EVALUATION AND IN SETTING COMPENSATION RATES
    Criterion for Determining Job Values
    The Initial Studies of PAQ-based Job Evaluation
    Organization Specific PAQ-based Job Evaluation Studies
        Insurance Company
        Utility Companies
        Public Sector
    The Comparable Worth Issue


PREDICTION OF EXEMPT STATUS UNDER THE U.S. FAIR LABOR STANDARDS ACT

DEVELOPMENT OF JOB FAMILIES WITH THE PAQ

THE PAQ IN PERFORMANCE APPRAISAL

THE PAQ AND JOB PRESTIGE

COMPARATIVE EVALUATION OF VARIOUS JOB ANALYSIS METHODS

SUMMARY

REFERENCES

APPENDIX

INDEX


    LIST OF TABLES

TABLE 1 - Frequency Distributions of Reliability Coefficients for "Pairs" of PAQ Analyses

TABLE 2 - Job Dimensions Based on Principal Components Analyses of PAQ Data for 2200 Jobs

TABLE 3 - Illustration of the Basis for Deriving Job Dimension Scores from PAQ Ratings on Four Job Elements for Three Hypothetical Jobs

TABLE 4 - High, Mid-range, and Low Dimension Reliability Coefficients and Corresponding Standard Errors of Measurement from 43 Studies

TABLE 5 - Multiple Correlations of PAQ Overall Job Dimension Scores with GATB Test Criteria and Percentage of Agreement with Test Inclusion in a Specific Aptitude Test Battery (SATB)

TABLE 6 - Correlations Between Predicted and Actual Criteria for Five Constructs as Measured by Various Commercial Tests

TABLE 7 - Multiple Correlations Between PAQ Overall Dimensions and Median Occupational Scores on the Wonderlic Personnel Test

TABLE 8 - Multiple Correlations Between PAQ Overall Dimension Scores and the Percentage of Persons in Occupations by Personality Indices

TABLE 9 - Correlation of Estimated Values with Values in Holdout Samples by Validation Method Used to Make Estimates

TABLE 10 - Correlations Between Percentiles of Ratings of Selected Attribute Requirements of Occupations and of Test Data on Certain Attributes of Incumbents on Similar Occupations

TABLE 11 - Correlations Between Percentiles of Selected Attribute Requirements and of Percentages of Incumbents in Similar Occupations with High Scores on Specified Indices of the Myers-Briggs Type Indicator (MBTI)

TABLE 12 - Categorization of PAQ Job Dimensions by Skill, Effort, Responsibility, and Working Conditions

TABLE 13 - Evaluation of Seven Job Analysis Methods by Experienced Job Analysts

TABLE 14 - Conversion of Scores of Tests Used in Study to Standard Scores


    LIST OF FIGURES

FIGURE 1 - Inter-Analyst Reliability Coefficients

FIGURE 2 - Standard Error of Measurement

FIGURE 3 - Reliability and Zero Responses


    FOREWORD

This manual presents an overview of the Position Analysis Questionnaire (PAQ), some background information regarding its development, a summary of some of the research carried out with it, and a discussion of certain of its potential applications. Other available materials include the Position Analysis Questionnaire and Answer Sheet, the PAQ Job Analysis Manual, the PAQ Users Manual, the Pre-Interview Job Description Form, the PAQ Interview Guide, the PAQ Workbook, and manuals for PAQ-related software, including the On-line Users Manual and the Enter-Act Users Manual.


    THE POSITION ANALYSIS QUESTIONNAIRE (PAQ)

The Position Analysis Questionnaire (PAQ) is a structured job analysis instrument that consists of 187 job elements (items) of a generic nature that provide for analyzing jobs in terms of work activities and work-situation variables. (There are also eight additional items that deal with compensation.)

    The Job Elements of the PAQ

The job elements are of a worker-oriented nature in that they characterize, or strongly imply, the generic human behaviors that are involved in jobs, as contrasted with elements of a job-oriented or task-oriented nature that deal more with the technological processes of jobs or with the specific objectives or results of work (McCormick, 1959). The nature of the job elements of the PAQ makes it possible for virtually any type of position or job to be analyzed with it. At the same time, the PAQ is not intended to serve as a substitute for all other job analysis methods nor to meet all of the purposes they serve. (For example, it cannot replace a job description in characterizing the tasks, technical processes, or operations that are performed by job incumbents, nor can it specify the role or operational objective of the job in the organization.) Consequently, the PAQ is often used in conjunction with other techniques when a job analysis study is performed.

    The Organization of the PAQ

The basic organization of the job elements in the PAQ is predicated on a worker-job interaction frame of reference, with elements being separated into divisions representing various types of such interaction. Elements in the first division (Information Input) are concerned with where and how workers obtain the information to perform their jobs; elements in the second division (Mental Processes) describe the mental activities required to perform jobs; and elements in the third division (Work Output) document the various types of responses or actions involved in jobs. The other divisions are: Relationships with Other Persons, Job Context, and Other Job Characteristics. An example of a job element in each of the six divisions is given below:

Division of PAQ                          Example of Job Element
1. Information Input                     1.   Use of Written Materials
2. Mental Processes                      42.  Coding/Decoding
3. Work Output                           65.  Use of Keyboard Devices
4. Relationships with Other Persons      103. Interviewing
5. Job Context                           136. Working in High Temperature
6. Other Job Characteristics             165. Irregular Hours

    Response Scales for Use with PAQ Job Elements

When analyzing a job with the PAQ, the analyst rates the relevance of each element to the job using one of six different types of response scales, such as the Importance of the element to the job. Scales typically involve eleven scale points (six whole-number points from 0 to 5 and five intermediate or mid-points). Several elements use a 0/1 (Does Not Apply/Does Apply) scale.


    SCORING OF THE PAQ

When the analysis of a job with the PAQ is complete, a response will have been made for each of the job elements discussed above. These responses usually are entered directly into the Enter-Act program on the Internet. Although some computer outputs of PAQ data are expressed in terms of these individual element responses, for most purposes the computer outputs are based on scores on various job dimensions. These job dimensions are actually statistically derived factors (combinations of several elements into a single score), and can be thought of as reflecting the basic structure of human work as measured with the PAQ job elements. These job dimensions are presented and discussed in a later section of this manual.


    DEVELOPMENT OF THE POSITION ANALYSIS QUESTIONNAIRE (PAQ)

The current form of the Position Analysis Questionnaire (PAQ), Form C (1989), is the result of an evolutionary process covering several decades, in which the primary intent was to develop a structured job analysis questionnaire that would generally apply across the spectrum of jobs. The ancestors of the present PAQ (Form C) include the following: (1) the Checklist of Work Activities by McCormick and Palmer (Palmer, 1958); (2) the Worker Activity Profile (McCormick, Gordon, Cunningham, & Peters, 1962); and (3) the Position Analysis Questionnaire (Forms A and B) developed by McCormick, Jeanneret, and Mecham (1967, 1969). A summary of the research based on Form A of the PAQ is reported by McCormick, Jeanneret, and Mecham (1972). Forms B and C of the PAQ are substantially the same in their basic nature, content, and format as Form A. Further information on the development of the PAQ may be found in McCormick (1979) and McCormick & Jeanneret (1988).


    RELIABILITY OF JOB ANALYSES WITH THE PAQ

Reliability generally refers to the stability or consistency of some measurement. The basic concept of reliability as related to the actual analyses of jobs with the PAQ is concerned with the consistency of the responses to the PAQ job elements as made by those individuals analyzing the jobs. (The reliability of job dimension scores is discussed later.)

The measurement of reliability of PAQ job analyses typically is based on two or more sets of analyses. (Although it is possible to derive measures of reliability for three or more sets of responses, the discussion here applies to pairs of responses, usually measured with a correlation.) These sets can be of either of two types. One consists of a comparison of the responses on the job elements made by two or more individuals, herein referred to as analysts; this is inter-analyst reliability. The other consists of a comparison of responses by the same analyst at two or more different times; this is called rate-rerate reliability.

In the case of either of these types of reliability, however, it is possible to compute the reliability on either of two bases, namely for individual job elements (across a sample of jobs) or for individual jobs (across all job elements). The following discussion deals with the reliability for individual jobs across all job elements.

    Reliability of PAQ Analyses for Individual Jobs

The basis for the reliability calculations is the two sets of responses independently made by two analysts for the same job across all job elements. In such an approach there usually would be a correlation for each job; the average of the correlations for several or many jobs usually would be accepted as an index of the reliability of the analyses.

An illustration of such an approach was reported by Taylor and Colbert (1978), who arranged for independent PAQ analyses by different individuals (typically incumbents) of 325 jobs in the administrative offices of an insurance company in various parts of the country. The frequency distribution of the responses for pairs of analysts is given in Table 1. The average reliability coefficient was .68.

In addition, some jobs were analyzed by the same individuals 90 days after the initial analysis. A total of 427 pairs of responses resulted from this procedure, each pair consisting of the rate-rerate analysis by the same individuals. The frequency distribution of these reliability coefficients is also given in Table 1. The average of these reliability coefficients is .78.
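The per-job approach described above (correlating two analysts' element responses for a job, then averaging the correlations across jobs) can be sketched as follows; the response vectors are hypothetical, not actual PAQ data:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length response vectors."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 0-5 element responses from two analysts for two jobs.
jobs = {
    "job_a": ([5, 3, 0, 4, 2, 0], [4, 3, 1, 4, 2, 0]),
    "job_b": ([0, 1, 5, 2, 0, 3], [0, 2, 4, 2, 1, 3]),
}

# One correlation per job, averaged into an index of reliability.
per_job_r = {job: pearson(a, b) for job, (a, b) in jobs.items()}
average_r = statistics.mean(per_job_r.values())
```

In practice the average would be taken over many jobs, as in the 325-job study cited above.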


TABLE 1
Frequency Distributions of Reliability Coefficients for Pairs of PAQ Analyses

Class Intervals of          Pairs of Analyses          Pairs of Analyses
Reliability Coefficients    by Different Analysts      by the Same Analyst
                            Number    % of Total       Number    % of Total
.90 to 1.00                     7         .6               8        1.9
.80 to .89                    103        8.7             159       37.2
.70 to .79                    411       34.5             176       41.2
.60 to .69                    400       34.4              66       15.5
.50 to .59                    198       16.6              16        3.7
.40 to .49                     52        4.4               2         .5
.30 to .39                     10         .8
TOTAL                        1190      100.0             427      100.0
Average Reliability
Coefficient                   .68                        .78

Source: Taylor and Colbert (1978a)


Reliability studies have been conducted on a routine basis with the PAQ and are recommended as part of the PAQ job analysis procedure. Reliability studies (Mecham, 1989a) from 35 organizations were combined into a sample of 1116 jobs involving 3156 pairs of analysts (some jobs had been analyzed by more than 2 analysts). The raw (solid line) and cumulative (dotted line) frequencies of reliability coefficients at different levels are shown in Figure 1.


FIGURE 1
Inter-Analyst Reliability Coefficients
(N = 1116 Jobs, 3156 Analyst Pairs)
[Figure: raw and cumulative percentage distributions of inter-analyst reliability coefficients, plotted from 0.0 to 1.0.]


As part of this same study, the standard error of measurement (Nunnally, 1978, pg. 241) for each of the 3156 analyst pairs was also computed. (In general, this statistic shows how much the responses to the items differ between the analysts in each pair. For example, a standard error of measurement of .5 means that 68% of the responses were within .5 of a scale point of each other.) The results of this analysis are presented in Figure 2. (The solid line is the raw percentage; the dotted line is the cumulative percentage.)


FIGURE 2
Standard Error of Measurement
(N = 1116 Jobs, 3156 Analyst Pairs)
[Figure: raw and cumulative percentage distributions of standard errors of measurement, plotted from 0.0 to 1.6.]

    Influences on Reliability

In interpreting PAQ reliability coefficients, one should be aware of certain statistical properties of such indices, such as the effect of the number of 0 (Does Not Apply) responses to job elements. In this regard, Harvey and Hayes (1986) point out that inflated coefficients can result when jobs with many 0 responses are analyzed. To determine the relationship between the size of reliability coefficients and the number of elements which received 0 responses in a large sample, the number of elements which received average responses of 0 for each job/position in the Mecham (1989a) study was correlated with the reliability coefficients for paired analysts for that job, across all jobs in the sample. A moderate correlation (r = .45) was found, as reported in Figure 3, indicating that in general, reliability coefficients are higher with increasing numbers of 0 responses. [The standard error of measurement was also found to be related (correlation = -.59) to the number of zero responses. Additionally, the reliability coefficients and standard errors of measurement were found to correlate -.91.] While the inclusion of 0 responses in




reliability analyses apparently tends to inflate estimates of reliability (perhaps because (1) the presence or absence of a work behavior can be more reliably rated than its level when present, (2) the most-used PAQ items are better constructed, or (3) the response pattern alters the item variability present), it should be noted that analyst agreement that an element does not apply to a job is also an important and valid measure.

FIGURE 3
Reliability and Zero Responses
(N = 1116 Jobs)
[Figure: scatter of inter-analyst reliability coefficients (0.0 to 1.0) against the number of items with 0 responses (0 to 150).]


From a more general perspective, the size of reliability coefficients is influenced by response variability. Such variability can result not only from errors of measurement, but also from actual (true) differences in the nature of the jobs analyzed. Such differences in response variability are very likely to occur when jobs of different types (e.g., clerical vs. shop jobs) are analyzed with the PAQ. For example, in the study done by Mecham (1989a), the number of PAQ items given a response of 0 ranged from about 15 to 155 (of 194 possible responses), with 72 being about average. This variance occurs primarily because the PAQ is a standardized questionnaire designed for use on jobs of all types. Because many jobs focus on a


few activities, it is not surprising that many items will be given responses of 0 (Does Not Apply), while other jobs, being more general in nature, receive non-zero responses on many items. Differences in reliability may therefore be expected because of real differences in the nature of the jobs in question as well as errors of measurement. This can be seen by noting that the reliability coefficient is the ratio of true score variance to observed score (true + error) variance, as shown below:

    rxx = σ²t / (σ²t + σ²e)

    Where: rxx = reliability coefficient
           σ²t = true score variance
           σ²e = error score variance

As can be seen, the obtained reliability coefficient can be altered by changing the true score variance as well as the error score variance.
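A numerical illustration of this ratio, with hypothetical variances, shows how shrinking the true differences among jobs lowers the coefficient even when rating error is unchanged:

```python
def reliability(true_var, error_var):
    """Reliability as the ratio of true-score variance to
    observed (true + error) variance."""
    return true_var / (true_var + error_var)

# Same rating error, different true variability among the jobs rated.
r_diverse = reliability(true_var=4.0, error_var=1.0)  # heterogeneous jobs
r_narrow = reliability(true_var=1.0, error_var=1.0)   # similar jobs
# r_diverse == 0.8, r_narrow == 0.5
```

With identical measurement error, the more homogeneous sample of jobs yields the lower coefficient, which is the point made in the surrounding text.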

For this reason, the Standard Error of Measurement (SEM), which considers both the reliability and the variability of the responses to determine the degree of agreement (Nunnally, 1978, pg. 241), has been made part of the standard reliability report as described in the PAQ Users Manual. The SEM may be a better indicator of analyst agreement, especially when jobs having about the same number of zero responses are compared.


    PAQ JOB DIMENSIONS

During the development and experimental use of the PAQ, several factor analyses were carried out with different samples of jobs that had been analyzed with the PAQ. The statistically derived factors from these analyses are usually called job dimensions. The analysis reported here was carried out with a sample of 2200 jobs. The resulting job dimensions are those that are used with the current computer analysis package.

    Sample of 2200 Jobs

The 2200 jobs used in the analysis represent a reasonably representative sample of jobs in the U.S. labor force in terms of the proportions of employed persons in various major occupational groups of the 1970 U.S. Census. [A comparison of the item responses weighted by the numbers of persons employed in the 1970 and 1980 censuses indicated little change in the average level of work behaviors between the two decades. Additionally, in a confirmatory factor analysis (Jeanneret, 1987) of PAQ data organized by the 1980 detailed census codes, very similar factors emerged. These two studies led to the conclusion that the little that was to be gained by making the minor changes identified would not offset the confusion that would result from changing the dimensions. Hence, the factor analyses based on the 2200 jobs, as described below, were retained as the basis for the dimensions used in the current system.]

    Factor Analyses

Eight factor analyses (technically, principal components analyses) were carried out with the data from the 2200 jobs. First, a separate principal components analysis was carried out with the data on the job elements (items) within each of the first five divisions of the PAQ. Two analyses were also performed on division six items: one on the dichotomous (Does Not Apply/Does Apply) items, and one on the remaining items. The resulting factors were called Divisional job dimensions. Finally, data for most of the PAQ items were pooled together for another principal components analysis. The resulting factors are called Overall job dimensions.

The title of each job dimension was based on what was considered to be the predominant construct or type of behavior represented by the job elements that dominate the dimension (specifically, those that had the highest statistical loadings on the dimension). It should be recognized, however, that some such titles cannot be completely descriptive of the predominant construct. The titles of the job dimensions are given in Table 2. This table includes the original title for each dimension; these are listed as Technical Titles. The table also includes an Operational Title for each dimension. In the case of dimensions for which these two titles are different, the Operational Title is intended to be somewhat simplified. A brief description of each dimension is given in the PAQ Users Manual, along with examples of several jobs with different scores on the dimension to illustrate the various levels of each dimension.


TABLE 2
Job Dimensions Based on Principal Components Analyses of PAQ Data for 2200 Jobs

#   Technical Title | Operational Title

DIVISION DIMENSIONS

Division 1: Information Input
1.  Perceptual interpretation | Interpreting what is sensed
2.  Input from representational sources | Using various sources of information
3.  Visual input from devices/materials | Watching devices/materials for information
4.  Evaluating/judging sensory input | Evaluating/judging what is sensed
5.  Environmental awareness | Being aware of environmental conditions
6.  Use of various senses | Using various senses

Division 2: Mental Processes
7.  Decision making | Making decisions
8.  Information processing | Processing information

Division 3: Work Output
9.  Using machines/tools/equipment | Using machines/tools/equipment
10. General body vs. sedentary activities | Performing activities requiring general body movements
11. Control and related physical coordination | Controlling machines/processes
12. Skilled/technical activities | Performing skilled/technical activities
13. Controlled manual/related activities | Performing controlled manual/related activities
14. Use of miscellaneous equipment/devices | Using miscellaneous equipment/devices
15. Handling/manipulating/related activities | Performing handling/related manual activities
16. Physical coordination | General physical coordination

Division 4: Relationships With Other Persons
17. Interchange of judgmental/related information | Communicating judgments/related information
18. General personal contact | Engaging in general personal contacts
19. Supervisory/coordination/related activities | Performing supervisory/coordination/related activities
20. Job-related communications | Exchanging job-related information
21. Public/related personal contacts | Public/related personal contacts

Division 5: Job Context
22. Potentially stressful/unpleasant environment | Being in a stressful/unpleasant environment
23. Personally demanding situations | Engaging in personally demanding situations
24. Potentially hazardous job situations | Being in hazardous job situations

Division 6: Other Job Characteristics
25. Non-typical vs. typical day work schedule | Working non-typical vs. day schedule
26. Businesslike situations | Working in businesslike situations
27. Optional vs. specified apparel | Wearing optional vs. specified apparel
28. Variable vs. salary compensation | Being paid on a variable vs. salary basis
29. Regular vs. irregular work schedule | Working on a regular vs. irregular schedule
30. Job demanding responsibilities | Working under job-demanding circumstances
31. Structured vs. unstructured job activities | Performing structured vs. unstructured work
32. Vigilant/discriminating work activities | Being alert to changing conditions

OVERALL DIMENSIONS
33. Decision/communication/general responsibilities | Having decision, communicating, and general responsibilities
34. Machine/equipment operation | Operating machines/equipment
35. Clerical/related activities | Performing clerical/related activities
36. Technical/related activities | Performing technical/related activities
37. Service/related activities | Performing service/related activities
38. Regular day schedule vs. other work schedules | Working regular day vs. other work schedules
39. Routine/repetitive work activities | Performing routine/repetitive activities
40. Environmental awareness | Being aware of work environment
41. General physical activities | Engaging in physical activities
42. Supervising/coordinating other personnel | Supervising/coordinating other personnel
43. Public/customer/related contact activities | Public/customer/related contacts
44. Unpleasant/hazardous/demanding environment | Working in an unpleasant/hazardous/demanding environment
45. Non-typical schedule/optional apparel style | Having a non-typical schedule/optional apparel style

Source: Mecham (February 1977)

    Job Dimension Scores

The score for a job on a particular job dimension is derived with an equation that is based on the sum of the cross-products of standardized responses for the job on the individual job elements multiplied by the weights for the job elements. The weight for any given job element is a value that reflects the statistically derived importance of the element (i.e., its statistical contribution) to the dimension. The generic form of the equation to calculate a job dimension score follows:


    Job Dimension Score = (we-1 × re-1) + (we-2 × re-2) + . . . + (we-n × re-n)

In this equation: w = weight of a job element; r = response to a job element; e-1, e-2, . . ., e-n refer to elements 1, 2, . . ., n.

Simplified, hypothetical examples of the method of deriving job dimension scores are presented in Table 3 for three jobs (A, B, and C) using four job elements.

TABLE 3
Illustration of the Basis for Deriving Job Dimension Scores from PAQ Ratings on Four Job Elements for Three Hypothetical Jobs (A, B, and C)

        Element 1      Element 2      Element 3      Element 4      Job Dimension
JOB     w  r  wxr      w  r  wxr      w  r  wxr      w  r  wxr      Score
A       7  5   35      2  1    2      9  4   36      1  2    2        75
B       7  3   21      2  2    4      9  2   18      1  3    3        46
C       7  1    7      2  5   10      9  0    0      1  2    2        19

Legend: w = weight; r = rating on job element; wxr = cross-product of w and r

Note: The job dimension score is the sum of the cross-products (wxr) of the four job elements.
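The computation in Table 3 reduces to a weighted sum per job; a minimal sketch follows, using raw ratings to match the simplified table (operationally, the PAQ standardizes responses before weighting):

```python
def dimension_score(weights, ratings):
    """Job dimension score: sum of weight x rating cross-products."""
    return sum(w * r for w, r in zip(weights, ratings))

weights = [7, 2, 9, 1]  # element weights from Table 3
ratings = {"A": [5, 1, 4, 2], "B": [3, 2, 2, 3], "C": [1, 5, 0, 2]}

scores = {job: dimension_score(weights, rs) for job, rs in ratings.items()}
# scores == {"A": 75, "B": 46, "C": 19}, as in Table 3
```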

    Reliability of Job Dimension Scores

The reliability of job dimension scores is a measure of the consistency of the scores on a given job dimension resulting from two or more PAQ analyses of the jobs in a sample of jobs. There are two primary ways for deriving estimates of such reliability. The first, inter-analyst reliability, would apply when there are responses for the same jobs as analyzed by different analysts. The second type of reliability, rate-rerate reliability, would apply when the same analyst has analyzed the same sample of jobs on two or more separate occasions. Different statistical indices of reliability can be calculated, but the most common index is the correlation between scores on a dimension derived from pairs of analyses performed across a sample of jobs.


When interpreting such reliability data, certain considerations should be kept in mind. In the first place, inter-analyst reliability would be expected to be somewhat lower than rate-rerate reliability (analyses by the same analyst at different times). Further, the possible restriction of range of the job dimension scores for a given sample of jobs results in a lower coefficient than would be the case if the range of scores were greater. A coefficient based on a restricted range, however, can be adjusted to provide an estimate of what the coefficient would have been had the range been unrestricted. The formula for this adjustment is as follows (adapted from Nunnally, 1978, pp. 241-242):

    rxx(adjusted) = 1 - σ²meas / σ²xu

    Where: σ²meas = standard error of measurement, squared
           σ²xu = variance of cases in an unrestricted sample
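A minimal sketch of this adjustment, assuming the squared standard error of measurement is obtained from the restricted sample as sd^2 * (1 - r); all numbers are hypothetical:

```python
def adjusted_reliability(r_restricted, sd_restricted, var_unrestricted):
    """Range-restriction adjustment: 1 - SEM^2 / unrestricted variance."""
    sem_squared = (sd_restricted ** 2) * (1.0 - r_restricted)
    return 1.0 - sem_squared / var_unrestricted

# r = .75 observed in a narrow sample (sd = 5), projected onto a
# broader unrestricted pool with variance 100.
r_adjusted = adjusted_reliability(0.75, 5.0, 100.0)
# r_adjusted == 0.9375
```

The SEM is fixed by the restricted sample, so dividing it by a larger unrestricted variance yields the higher coefficient expected for the wider range of scores.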

The reliability of job dimension scores, as summarized from studies from forty-three organizations, is given in Table 4 (Mecham, 1988a). Although it was not clearly evident from the original data that the separate analyses of the same job were by different analysts (i.e., inter-analyst reliability), this was undoubtedly the situation in most cases.

TABLE 4
High, Mid-range, and Low Dimension Reliability Coefficients and Corresponding Standard Errors of Measurement from 43 Studies (19,961 Analyst Pairs)

            RELIABILITY COEFFICIENTS                    STANDARD ERROR OF MEASUREMENT
DIMENSION   High   High Qtl.  Median  Low Qtl.  Low     Low    Low Qtl.  Median  High Qtl.  High
 1          1.00     .98       .94      .83     .00     .050     .168     .255     .413     1.113
 2           .99     .94       .87      .77     .41     .109     .259     .374     .484      .771
 3          1.00     .96       .90      .82     .31     .067     .223     .320     .434      .832
 4           .99     .94       .90      .82     .00     .117     .246     .322     .426     1.048
 5          1.00     .98       .93      .88     .00     .084     .159     .275     .355     1.004
 6          1.00     .97       .94      .80     .38     .076     .185     .247     .450      .791
 7          1.00     .96       .91      .87     .69     .086     .221     .303     .374      .559
 8          1.00     .93       .88      .78     .34     .019     .277     .352     .474      .814
 9          1.00     .95       .92      .75     .21     .061     .230     .293     .504      .889
10           .99     .94       .87      .80     .40     .119     .264     .365     .457      .778


    TABLE 4 (CONTINUED)

                    RELIABILITY COEFFICIENTS2             STANDARD ERROR OF MEASUREMENT
    DIMENSION   High   High    Median  Low     Low       Low    Low     Median  High    High
                       Qrtl.           Qrtl.                    Qrtl.           Qrtl.
    11          1.00   .98     .94     .84     .51       .040   .160    .264    .402     .703
    12          1.00   .97     .90     .78     .22       .059   .200    .320    .472     .885
    13          1.00   .97     .94     .85     .07       .080   .191    .254    .389     .968
    14          1.00   .99     .97     .87     .06       .047   .110    .198    .362     .971
    15           .99   .94     .88     .78     .00       .123   .252    .360    .474    1.015
    16           .99   .93     .86     .76     .27       .132   .278    .383    .497     .859
    17          1.00   .97     .95     .91     .60       .068   .184    .237    .312     .635
    18          1.00   .98     .96     .88     .27       .077   .162    .213    .352     .857
    19          1.00   .98     .95     .88     .00       .057   .159    .241    .355    1.295
    20           .99   .92     .84     .66     .11       .137   .289    .406    .584     .946
    21          1.00   .96     .92     .78     .00       .064   .205    .297    .478    1.012
    22          1.00   .99     .93     .79     .00       .033   .103    .277    .469    1.032
    23          1.00   .98     .94     .89     .39       .062   .168    .256    .337     .786
    24          1.00   .96     .90     .61     .00       .028   .207    .328    .632    1.384
    25          1.01   .99     .88     .60     .00       .000   .128    .360    .634    1.140
    26          1.00   .98     .96     .90     .69       .023   .159    .222    .317     .563
    27          1.00   .98     .86     .71     .33       .018   .159    .380    .542     .823
    28          1.00   .99     .99     .83     .00       .025   .101    .126    .414    1.179
    29          1.00   .99     .95     .76     .00       .020   .117    .225    .497    1.296
    30          1.00   .95     .90     .83     .54       .098   .234    .318    .423     .684
    31          1.00   .95     .86     .79     .28       .072   .233    .377    .465     .851
    32          1.00   .98     .93     .86     .02       .063   .168    .265    .380     .991
    33          1.00   .99     .97     .95     .76       .049   .120    .184    .229     .497
    34          1.00   .98     .96     .89     .00       .045   .148    .202    .335    1.132
    35          1.00   .94     .88     .82     .52       .064   .253    .354    .428     .696
    36          1.00   .96     .89     .81     .00       .078   .210    .336    .446    1.132
    37          1.00   .98     .94     .89     .46       .066   .169    .262    .343     .740
    38          1.00   .98     .91     .77     .00       .035   .164    .304    .483    1.050
    39          1.00   .96     .92     .84     .52       .069   .222    .292    .405     .699
    40          1.00   .99     .96     .88     .54       .043   .141    .220    .358     .679
    41          1.00   .94     .90     .82     .43       .097   .252    .328    .431     .758
    42           .99   .96     .91     .84     .19       .102   .213    .307    .404     .904
    43          1.00   .95     .89     .82     .42       .079   .241    .337    .428     .767
    44          1.00   .97     .91     .74     .01       .060   .178    .306    .510     .998
    45          1.00   .97     .89     .79     .43       .049   .183    .334    .467     .755

    1 Studies taken from the PAQ Master Data Base representing the following industry sectors: Chemicals and Petroleum, Communications, Financial and Real Estate Services, Government, Health Care, Manufacturing, Processing, Public Utilities, Social Services, Trade and Distribution, and Transportation.

    2 Corrected for restriction of range.

    Source: Mecham, 1988a


    DERIVING PERSONNEL REQUIREMENTS OF JOBS WITH THE PAQ

    The nature of the job-related data that can be obtained with the PAQ suggests certain potential applications. One application is for deriving estimates of the personnel requirements of jobs, i.e., the nature of the human attributes required for jobs. Personnel requirements for any job certainly need to be valid, in that they should make it possible to select job candidates who, in general, would be expected to be successful on the job in question. The subsequent discussion of methods of test validation applies to virtually all types of human attributes covered by personnel requirements, including aptitudes, achievements, personality characteristics, biographical data (bio-data), etc.

    Discussion of Test Validity

    The concept of validity is complicated and cannot be defined in a single statement; there are many facets of this concept. The Society for Industrial and Organizational Psychology (1987) sets forth three types of validity, as follows:

    1. Content Validity (the extent to which a test measures some domain of knowledge or behavior).
    2. Construct Validity (the extent to which a test measures some basic, underlying human attribute or quality, i.e., a construct).
    3. Criterion-Related Validity (the extent to which test scores are related to a separate criterion of interest, such as job performance).

    Methods of Test Validation in Personnel Selection

    Test validation in personnel selection and placement often involves some form of criterion-related validity. Three general approaches are discussed below.

    Traditional Methods of Determining Test Validity. The traditional methods of determining the criterion-related validity of tests are as follows: (1) concurrent validity procedures (in which tests are administered to present employees on a job, with the test scores then being correlated with an appropriate criterion); or (2) predictive validity procedures (in which tests are administered to candidates for a job, with the test scores being correlated with a criterion subsequently obtained).

    Job Component Validity. The traditional methods discussed above have been criticized for two basic reasons: (1) such procedures frequently are not practical because of such factors as limited sample sizes, the time required, and the costs involved; and (2) it seems that there should be some general basis for determining the types of human attributes (as measured by tests) required for each of various types of work activities. Addressing these criticisms served as a primary objective in the development of the PAQ, and led to the crystallization of the concept of job component validity. This is essentially a variant of the concept of synthetic validity proposed by Lawshe (Lawshe & Balma, 1966).

    The job component validity model is based on the hypothesis that, when different jobs have in common a given job component, the human requirements for fulfilling that component would be the same in all jobs in which that component exists.


    The development of a procedure for establishing the job component validity of tests or other measures of human attributes would consist of the following: (1) some method of identifying the constituent components of jobs; (2) a method for determining, for an experimental sample of jobs, the human attribute(s) required for successful job performance when a given job component is common to several jobs; and (3) some method of combining the estimates of human attributes required for individual job components into an over-all estimate of the human attribute requirements for an entire job. The background research dealing with job component validity as based on the PAQ is discussed in a later section of this manual.
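    The three steps just listed can be sketched in miniature. All component names, weights, and attribute values below are hypothetical, chosen only to illustrate the combining logic of step (3).

```python
# Minimal sketch of the job component validity logic (hypothetical data).
# Step 1: a job is profiled on its constituent components.
# Step 2: each component carries an estimate of the attribute level it demands
#         (derived, in practice, from an experimental sample of jobs).
# Step 3: component estimates are combined into an overall job requirement.

component_profile = {"clerical_activities": 0.8, "equipment_operation": 0.1}   # hypothetical weights
attribute_per_component = {"clerical_activities": 110.0, "equipment_operation": 95.0}  # hypothetical

def overall_requirement(profile, estimates):
    """Weighted average of component-level attribute estimates."""
    total_weight = sum(profile.values())
    return sum(w * estimates[c] for c, w in profile.items()) / total_weight

print(round(overall_requirement(component_profile, attribute_per_component), 1))  # 108.3
```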

    Validity Generalization. The concept of validity generalization is predicated, as is job component validity, on the idea that jobs having similar characteristics will likely have similar personnel requirements. Accordingly, if a test can be shown to be valid for one job, it should, by inference, be valid for another job with similar characteristics. This reasoning can be extended to families of similar jobs, so that if a test is valid for one or several jobs within the family, it should be valid (to the degree that jobs within the family are similar) for other jobs in the family. Furthermore, it is expected that increasing the sample size by combining results across studies within a family will stabilize the validity estimates and increase their accuracy.

    Schmidt and Hunter (1977) developed a theoretical and statistical framework for validity generalization which has led them to conclude that test validities can be generalized across broad groups of jobs. In one analysis, Hunter (U.S. Department of Labor, 1983) summarized test validity data published by the U.S. Employment Service for a set of 515 jobs. The results of his study implied that validity generalization is so broad that almost all jobs can be grouped into just five categories based on job complexity (as determined from job analyst ratings of the degree to which the worker must function with data, people, and things), with tests of only three types of abilities being necessary to tap the human aptitudes required for jobs in all five categories (these being cognitive, perceptual, and psychomotor abilities). This formulation provides for the use of a cognitive test for jobs in all five categories, along with some combination of perceptual and/or psychomotor tests for jobs in each of the five categories.

    The subsequent use of Hunter's formulation by the U.S. Employment Service in many locations within the United States has led to a critique of its appropriateness by the National Academy of Sciences (Hartigan & Wigdor, 1989). The use of this approach by an agency of the Federal Government, and the support given it by the National Academy report (Hartigan & Wigdor, p. 281), also appears to have done much to legitimize it in the minds of personnel selection specialists. Continuing research and legal findings might be expected to clarify the appropriate ways in which to apply the generalized validity methodology.

    Job Component Validity Based on the PAQ

    As indicated earlier, the PAQ consists of job elements of a worker-oriented type which reflect the types of human behaviors required in the performance of jobs. Because of the human behavior orientation of the PAQ job elements, it is possible that these elements might be used as the basis for a job component validity approach (McCormick, 1959). Thus, it is believed that the PAQ meets the first requirement of the job component validity concept discussed earlier, namely the need for identifying relevant constituent components of jobs.


    The second requirement involves the identification of the human attributes that presumably are required for the successful performance of jobs that have any given job component in common. In the simplest case, such aptitude requirements would be expressed in terms of test scores. There are different possible procedures for determining the attribute requirements of job components that are in common to a number of different jobs. Perhaps the most direct approaches involve the collection and analysis of test data for individuals on each of many different jobs whose job components can be quantified. The use of such data generally is predicated upon the assumption that people tend to gravitate (drift or selectively migrate) into those jobs that are commensurate with their own levels of some given attribute. Abundant evidence of the differences between distributions of test scores by occupation is found in Yerkes (1921), Strong (1943, Chap. 7 & 8), Harrell and Harrell (1945), Stewart (1947), Schaie (1958), Thorndike and Hagen (1959, pp. 27-34), Tyler (1965, pp. 335-340), U.S. Department of Labor (1970, Table 92), E.F. Wonderlic and Associates, Inc. (1983), and Myers and McCaulley (1985, Appendix D). Furthermore, an analysis of the sample means and standard deviations from 742 validity studies performed using the nine aptitudes of the General Aptitude Test Battery (GATB) revealed that the distributions of reported mean scores for jobs had a low probability (p < .001) of resulting from a random selection of scores found in the general working population. Additionally, the reported standard deviations were smaller for each aptitude (except manual dexterity) than those expected using random selection from within the general working population (p < .001) (Mecham, McCormick, & Jeanneret, 1984).

    Taken together, this evidence has led the authors to conclude that each job has associated attribute bands indicated by characteristic means and standard deviations. Such bands are related to the nature of the job itself and the processes by which individuals get into such jobs. In this regard, however, the presence of employees who possess a high degree of a given attribute on a given job does not necessarily mean that the attribute is required for satisfactory job performance. A given attribute might not itself be required for a given job, but might be correlated with some other attribute that is required by the job. The converse, that persons might have lower than minimally required levels of an attribute, is less likely, as they would not be expected to remain on the job for any length of time. Although such possibilities exist, their effects have not been found to materially limit the potentialities of the job component validity approach.

    When some estimate of the attribute(s) required for any given job component is known, it is then possible to build up the total requirements for the job by knowing what components exist in any given job. This would fulfill the third requirement of the job component validity strategy.

    The use of the PAQ as the basis for testing the job component validity approach is primarily rooted in the use of job dimension scores for jobs. The basic approach has consisted of determining the relationship between the job dimension scores for a sample of jobs and certain sets of test data for incumbents on such jobs. The test data for the job incumbents are used as a criterion of the importance to the respective jobs of the attributes measured by the tests. The PAQ job dimension scores may then be used as predictors of the related test data for the jobs for which PAQ data have been collected.
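    In statistical terms, this basic approach amounts to regressing a test-based criterion on the job dimension scores across a sample of jobs. A minimal sketch with fabricated data follows (the real analyses used PAQ dimension scores and GATB criteria for hundreds of jobs; the dimension count, coefficients, and noise level here are arbitrary):

```python
# Sketch: regress a test criterion (e.g., incumbents' mean test score for each
# job) on job dimension scores via ordinary least squares. Data are fabricated.

import numpy as np

rng = np.random.default_rng(0)
n_jobs, n_dims = 50, 5
X = rng.normal(size=(n_jobs, n_dims))                 # job dimension scores
true_weights = np.array([4.0, 2.0, 0.0, 1.0, 0.5])    # arbitrary "true" relation
y = X @ true_weights + 100 + rng.normal(scale=2.0, size=n_jobs)

# Fit OLS with an intercept term
X1 = np.column_stack([np.ones(n_jobs), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

# The multiple correlation R between predicted and actual criterion values
predicted = X1 @ beta
r = float(np.corrcoef(predicted, y)[0, 1])
print(round(r, 2))
```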

    Job Component Validity Analyses with GATB Tests. An important analysis of the job component validity concept has involved the use of the nine tests of the General Aptitude Test Battery (GATB) of the United States Employment Service (USES), these data being available for samples of job trainees or incumbents in hundreds of different jobs. The test data available for each job include the mean test score of the incumbents, the standard deviations of the incumbent scores, and validity coefficients. In addition,


    for many of the USES studies, specific tests were recommended for use in selection because they met some combination of the following criteria: the aptitude was judged to be important by job analysts; the test scores were relatively high in comparison with scores on the other tests for the sample studied; the standard deviation was low; a significant validity coefficient was found; and the test, in combination with other tests evaluated, aided in distinguishing the more from the less productive workers (United States Department of Labor, 1970, pp. 47-94). Tests so identified are part of the Specific Aptitude Test Battery (SATB) recommended for the job studied. (In recent years, since the adoption of Hunter's validity generalization methodology by the USES, SATBs are no longer determined, nor are job-specific validity studies conducted. Instead, jobs are assigned on the basis of complexity to one of the five categories mentioned before, and the category regression equations are used to estimate the proficiency of job applicants in all job categories. Applicants, classified by ethnic group membership, who have the highest percentile scores within ethnic group and job category are then referred for available openings.)

    With the continued growth in both the GATB and PAQ databases, it has been possible to conduct successive studies using samples of increasing sizes. Three major studies have now been conducted (McCormick et al., 1972; McCormick, Mecham & Jeanneret, 1977, pp. 11-14; and the study reported in this manual). In these different analyses, various criteria have been used as indices of the relative importance of the test construct to individual jobs. These have included, for each of the nine GATB constructs (tests) for each job studied: (1) the mean test score of the incumbents on the job; (2) the test score one standard deviation below the mean (referred to as 1 SD below the mean, or Mean - 1 SD, or as a possible cutoff score); (3) the validity coefficient; (4) the standard deviation of the test scores; and (5) whether the test was selected to be part of the SATB for the job.

    The rationale for the use of mean test scores of job incumbents as the criterion of the relative importance of one or more tests for personnel selection for various jobs is predicated on the assumption, mentioned earlier, that people tend to gravitate into those jobs that are commensurate with their own aptitudes or other attributes. (This is sometimes called a natural selection theory.) Thus, for a given test, high mean test scores of people on various jobs could imply that those jobs require high levels of the attribute measured, and low scores would imply the opposite.

    The rationale for the use of the 1 Standard Deviation below the mean criterion is essentially the same as with the use of the mean test score criterion, except that the 1 Standard Deviation below the mean criterion would generally more nearly approximate typical test cutoff scores for personnel selection by the U.S. Employment Security Office. (It is, of course, recognized that test cutoff scores vary considerably, depending upon labor market conditions, but at the same time these values are more typical of cutoff scores that have been used by the U.S. Employment Security Office. The use of such cutoff scores for the combination of tests found in SATBs would have typically resulted in not hiring about one-third of those presently employed in the jobs studied.) The possible rationale for the use of coefficients of validity as the criterion of the relative importance of tests for personnel selection is predicated on the assumption that the magnitude of validity coefficients indicates the relative importance to job success of the attribute measured by the test. The fact that a test was included as part of the SATB is likewise an indication of the usefulness of the test on both judgmental and empirical grounds. Additionally, in some studies, the standard deviation was included as a criterion, as it indicates the range of scores typical of those individuals holding the job.

    In the most recent job component validity analyses with the GATB (Mecham, 1987), PAQ job analyses that matched a total of 460 jobs for which test data were available from the USES were also found in the PAQ data bank on the basis of common 9-digit Dictionary of Occupational Titles (DOT) (U.S.


    Department of Labor, 1977) numbers. For many of these jobs several independent PAQ analyses were available, and in such instances a single composite PAQ analysis was derived from all of the PAQ analyses having the same 9-digit DOT code.

    These composite or averaged PAQ analyses, as well as individual analyses (where only a single PAQ record was available for a DOT code), were then used as predictors of the test-related criteria given in Table 5. The table presents shrunken multiple correlation coefficients for all nine of the GATB tests. (Shrunken correlation coefficients are adjusted downward from the original correlation coefficients to adjust for possible chance inter-relationships that might produce spuriously high correlations, and estimate the level of prediction that one would expect within the population.)
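    One standard shrinkage adjustment of this kind is the Wherry formula, which penalizes the squared multiple correlation for the number of predictors relative to the sample size. The manual does not state which formula was used in these analyses, so the sketch below (with an arbitrary observed R and predictor count) is illustrative only.

```python
# Illustrative shrinkage ("shrunken R") adjustment via the Wherry formula.
# n = number of cases (jobs), k = number of predictors (job dimensions).
# The input values below are arbitrary examples, not figures from the studies.

def shrunken_r(r: float, n: int, k: int) -> float:
    """Adjust R downward for capitalization on chance in multiple regression."""
    r2_adj = 1.0 - (1.0 - r ** 2) * (n - 1) / (n - k - 1)
    return max(0.0, r2_adj) ** 0.5

# e.g., an observed R of .80 with 32 dimension predictors across 460 jobs:
print(round(shrunken_r(0.80, 460, 32), 3))  # 0.783
```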

    TABLE 5

    Multiple Correlations of PAQ Overall Job Dimension Scores with GATB Test Criteria
    and Percentage of Agreement with Test Inclusion in a
    Specific Aptitude Test Battery (SATB)
    (N = 460 Validity Studies)

    GATB Test                   Correlation1     Correlation      Percentage Agreement
                                with Mean        with Validity    with Inclusion
                                Test Scores      Coefficients     in SATB2
    G:  Intelligence            .77              .41              80
    V:  Verbal Aptitude         .78              .31              82
    N:  Numerical Aptitude      .75              .29              69
    S:  Spatial Aptitude        .72              .38              78
    P:  Form Perception         .61              .22              69
    Q:  Clerical Perception     .70              .19              72
    K:  Motor Coordination      .67              .20              74
    F:  Finger Dexterity        .45              .22              79
    M:  Manual Dexterity        .24              .33              73

    Source: Mecham, 1987

    1 The results reported use data remaining after an initial set of regression equations was calculated and cases were removed that yielded large differences between actual and predicted results (i.e., there were extreme residuals of 3 or more Z-score units outside of the predicted value on one or more of the test predictions). This procedure was followed to reduce the chance that the operational regression weights would be unduly influenced by extreme cases (which might have occurred because of misclassification of PAQ data, data entry errors, etc.). The results from the first study of unrestricted data averaged .02 lower than those reported here, but in no case were in excess of .04. The sample of validity studies represents 298 different SATBs; that is, some of the studies were replications of earlier studies, or several different jobs were classified under the same SATB study for which more than one PAQ profile was available. Combining test validation data and PAQ data within the same SATB designations and computing a new set of equations produced very similar results. The non-combined data, however, involved more cases and were used because more predictors were typically brought into the equation, which was thought to add some stability to predictions from PAQ data submitted by the typical user.

    2 Multiple discriminant analysis was used to predict the probability that a test would be chosen as part of the Specific Aptitude Test Battery (SATB) for the job. SATB and PAQ data were available for 236 validity studies. Reported are the percentages of cases from those studied that were correctly classified.

    Note: A study (Mecham, 1988b) was done to determine the equivalence of the predictions from the 1977 equations and those derived in 1987. Predictions of mean test scores from the two sets of equations were correlated across 2,485 average PAQ profiles taken from the master database. The correlations were highest for the cognitive tests (.90 to .94), high for the perceptual tests (.80 to .91), and moderate for the dexterity tests (.27 to .85). Somewhat lower correlations between the 1977 and 1987 data sets were found when the validity coefficients were the criteria.

    It is clear from Table 5 that the prediction of mean GATB test scores is much better than the prediction of the size of the validity coefficients. Further, prediction of the mean scores is best for the cognitive tests (G, V, and N), followed by the perceptual tests (S, P, and Q); prediction for the dexterity tests (F and M) was lowest. (It should be added that the correlations with scores 1 Standard Deviation below the mean were very similar to those for the mean test scores. Thus, results for 1 Standard Deviation below the mean are not given.)

    The use of combinations of job dimension scores to identify the tests used by the USES in their Specific Aptitude Test Battery (SATB) is also given in Table 5. The percentages of agreement are generally quite high, being in the 70s and 80s.

    Job Component Validity with Commercially Available Tests. With the viability of the job component strategy demonstrated with the GATB data, the field application of such information has much potential value to both employers and job applicants. However, because of a restricted-use policy of the USES, private employers in the U.S. are not permitted to administer the GATB or to receive the necessary test scores from job applicants to apply GATB predictions directly. (The GATB is, however, available for purchase by qualified users in Canada.) Consequently, two strategies were followed to make the results useful to employers who do not use the services of the USES. First, commercially available tests that measured some GATB constructs were identified; second, data on commercially available tests which could be used directly in a job component validity study were obtained.

    Identification of Commercial Tests that Measure GATB Constructs. McCormick, DeNisi & Shaw (1979) obtained test data for incumbents on 202 jobs from various sources, including data furnished directly by a number of private and public organizations, and some data from published test validation studies. Wherever PAQ analyses could not be obtained for the jobs from the collaborating organizations, PAQ analyses for the same jobs in other organizations were drawn from the PAQ data bank.


    There was admittedly a problem in matching certain of the commercially available tests with the GATB tests, but in general terms this was accomplished by the researchers on the basis of the judged similarity of test content. Given the test(s) that were so matched with any given GATB test, it was then necessary to convert the scores on each such test to a common metric. The metric used was a standard score system with a mean of 100 and a standard deviation of 20 (the same as the metric used with the GATB tests). A major problem in this conversion was that of obtaining an appropriate set of norms (the type of sample on which the GATB test norms are based). Normative data for such populations were not available for most of the commercially available tests, so it was necessary in some cases to build up such norms from two or more different samples.
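    The conversion to the common metric is a linear standard-score transformation, which can be sketched as follows; the raw score and normative mean and standard deviation in the example are hypothetical, not values from the studies.

```python
# Convert a raw score on a commercial test to the GATB-style common metric
# (mean 100, SD 20) using the test's own normative mean and SD.
# The normative values below are hypothetical, for illustration only.

def to_common_metric(raw, norm_mean, norm_sd, target_mean=100.0, target_sd=20.0):
    """Linearly rescale a raw score onto the common standard-score scale."""
    return target_mean + target_sd * (raw - norm_mean) / norm_sd

# A raw score of 27 on a test whose normative mean is 21 and SD is 6:
print(to_common_metric(27, 21, 6))  # 120.0
```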

    It must be recognized that the matching of commercially available tests to the GATB tests involved a certain amount of judgment, and that the subsequent step of converting scores on such tests to a common metric also involved judgment. Recognizing these limitations, however, the results of these processes are given in Table 14 in the Appendix. That table shows, for each test construct, the common metric that was used (a mean of 100 and a standard deviation of 20) and the scores on the tests which were considered as matching that construct. Further, it gives the raw scores on each such test which were equated to the scores of the common metric.

    The conversion of scores on the various commercially available tests to the standard score system was necessary in order to derive the criteria of mean test scores and 1 Standard Deviation below the mean for the incumbents on the various jobs in the sample. Once the standardized criterion values were derived for the individual jobs in the sample, the regression equations developed for each of the GATB tests (McCormick, et al., 1977) for the three criteria (mean test score, the value 1 Standard Deviation below the mean, and validity coefficient) were applied to the jobs in the sample to obtain predicted criterion values. Those predicted criterion values were then correlated with the actual criterion values. Such analyses were carried out for five of the constructs represented by the GATB tests, since adequate data were not available for the other constructs. The resulting correlations are given in Table 6.


    TABLE 6

    Correlations Between Predicted and Actual Criteria for Five
    Constructs as Measured by Various Commercial Tests

                                        CRITERION
    Construct            Mean Test     1 SD Below     Validity        N+
                         Scores        the Mean       Coefficients
    G:  Intelligence     .74***        .66***         --              33/--
    V:  Verbal           .71***        .71***         .30             50/36
    N:  Numerical        .67***        .63***         .29**           64/76
    S:  Spatial          .74***        .76***         .27             26/43
    Q:  Clerical         .53*          .60**          -.02            15/29
    Average              .66***        .68***

    * Significant, p < .05
    ** Significant, p < .01
    *** Significant, p < .001
    + The N at the left applies to the first two criteria; the N at the right applies to the third. (There were not enough jobs with validity coefficients for the G construct to report.)

    In an additional study conducted by Mecham (1988c), tests from the Differential Aptitude Tests (DAT) which correlated well with certain of the GATB tests were identified. Combining correlation coefficients from between 5 and 11 studies (N = 801 to 1,772 persons) yielded the following results: GATB Verbal with DAT Verbal Reasoning (average r = .71), Language Usage - Spelling (.65), and Language Usage - Sentences (.71); GATB Numerical with DAT Numerical Ability (.63); GATB Spatial with DAT Space Relations (.65); GATB Clerical with DAT Clerical Speed and Accuracy (.56).
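    Correlations from studies of different sizes are commonly combined by weighting Fisher z-transformed coefficients by sample size. The manual does not state the exact pooling method used, so the sketch below, with made-up study values, is illustrative only.

```python
# Illustrative pooling of correlations across studies: sample-size weighted
# average of Fisher z-transformed coefficients. The (r, n) pairs are made up.
import math

def pooled_r(studies):
    """studies: list of (r, n) pairs; returns the pooled correlation."""
    weighted_z = sum(math.atanh(r) * (n - 3) for r, n in studies)
    total_weight = sum(n - 3 for _, n in studies)
    return math.tanh(weighted_z / total_weight)

# Two hypothetical studies of the same test pair:
print(round(pooled_r([(0.70, 300), (0.72, 500)]), 2))  # 0.71
```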

    In the case of cognitive tests used to predict GATB constructs, some confirmatory data can be found in a job component validity study involving the Wonderlic Personnel Test (WPT). The WPT is a test of general intelligence. The study (Mecham, 1988d) involved the correlation of PAQ and WPT test data from over 100 jobs and over 200,000 people. The WPT data were those reported for the 1970 and 1983 normative samples (Wonderlic Personnel Test Manual, 1983, Table 4, p. 13). The multiple correlations of relevant PAQ overall job dimension scores with the median WPT test data for these 100+ occupations are given in Table 7.


    TABLE 7

    Multiple Correlations Between PAQ Overall Dimensions
    and Median Occupational Scores on the Wonderlic Personnel Test

    Date of Normative Study     Correlation
    1970                        .81
    1983                        .79
    Combined                    .82

    Source: Mecham, 1988d

    Identification of Commercial Tests That Measure Personality Variables. Although the PAQ has been used primarily to derive estimates of the aptitude requirements of jobs, some data reflect the possibility of deriving estimates of other human attributes, such as personality variables.

    One such study (Mecham, 1988e) involved the use of the Myers-Briggs Type Indicator (MBTI). The MBTI is a self-report personality instrument that measures basic preferences regarding perception and judgment. It yields scores on four bi-polar indices: EI (Extraversion vs. Introversion); SN (Sensing vs. Intuitive perception); TF (Thinking vs. Feeling judgment); and JP (Judgment vs. Perception). People are usually classified by indicating the poles on each index for which they received the highest scores. For example, the designation ESTP indicates preferences for extraversion, sensing, thinking, and perception.

    These scores tend to be associated with occupational choice, and the percentages of persons within occupations who report a preference for each of the dichotomous poles of each index, as well as for the various possible combinations of the indices, have been published (Myers & McCaulley, 1985, Appendix D).

The procedures used in this study will not be described in detail, but they involved the use of MBTI test data for 26,298 persons in 95 occupations. The multiple correlation coefficients of relevant overall PAQ job dimension scores with test data from the MBTI are given in Table 8. (Specifically, these are correlations of the PAQ data with the percentages of persons in specific occupations who obtained their highest scores on specified poles of the MBTI indices.)


    TABLE 8

    Multiple Correlations Between PAQ Overall Dimension Scores and

    The Percentage of Persons in Occupations by Personality Indices

MBTI Index                       Correlation

Extraversion vs. Introversion        .51
Sensing vs. Intuitive                .55
Thinking vs. Feeling                 .57
Judgment vs. Perception              .57

Source: Mecham, 1988e

Although the MBTI multiple correlation coefficients are not as high as those found with most aptitude tests, they do support the contention that the PAQ job dimension scores have considerable potential for deriving estimates of personality requirements for jobs.

Comparative Effectiveness of Situationally Specific, Job Component, and Generalized Validation Methods

How effective is the job component validation method for identifying tests to be used in personnel selection and the test values associated with high levels of employee job performance? One practical way to answer this question is to identify existing acceptable validation methods, obtain results using them, and then compare those results with job component validity results. This approach was used in a study (Mecham, 1985) that drew data from among 742 criterion-related validity studies conducted by the U.S. Employment Service (USES) using the GATB.

In the study, normative test data (averages and standard deviations of scores around the averages) and validity data were identified which permitted the effectiveness of the three validation methods to be estimated and compared. The first method to be researched was the criterion-related, situationally specific validation (SV) method. As this method has a history of professional and legal acceptance (when certain measurement assumptions are met), it was considered to be the standard against which the other methods could be compared.

To determine the effectiveness of SV, studies from the USES validity study database were identified for which normative and validity data were available from two or more samples of employees/trainees on the same or a very similar job. This search resulted in finding 175 jobs for which there were two or more such studies.

The rationale behind this part of the study was that when an SV study is conducted in the practical situation, the results are gathered on one sample of persons and then applied to another sample (those who are then selected, based on the earlier results). It could be expected, therefore, that by comparing the results from one sample with those of another sample one could determine the approximate effectiveness of this approach in the practical setting.


To determine the effectiveness of the first validity studies for jobs in predicting the results of the second studies, the respective values from the first and second studies for each job were correlated across all 175 jobs. (For example, if the average score on test G was 100 in the first study and 97 in the second study, these values, along with those obtained for all 175 jobs, would be correlated together.) This procedure was used for the averages of each of the nine tests of the GATB, for the unadjusted validity coefficients, and for the standard deviations of scores about the averages. The results are reported under the SV columns in Table 9.

Next, multiple regression equations were developed using the job component validity (JCV) method on a sample (A) of 194 jobs and then applied to a holdout sample (B) of 193 jobs. This process was also reversed, and the equations developed on sample B were used to predict values in sample A. Averaging the correlation coefficients obtained between predicted and actual values resulted in the coefficients reported in the JCV columns of Table 9.

Finally, the validity generalization (VG) approach developed by Hunter (U.S. Department of Labor, 1983) and applied by the U.S. Employment Service in a number of states was used to predict the values in 226 validity studies added to the GATB database after Hunter had formed his original job families and developed estimates of average validity coefficients for those families. Additionally, mean test averages and standard deviations were calculated within the original families and used as estimates of values in the holdout sample. The results are found in the VG columns of Table 9.

    TABLE 9

    Correlation of Estimated Values with Values in Holdout Samples by

    Validation Method Used to Make Estimates

                          Averages          Validity Coef.        Std. Dev.s
Tests                   SV   JCV   VG      SV   JCV   VG       SV   JCV   VG

G-Intelligence         .77   .77  .65     .15   .24  .17      .16   .09  .08
V-Verbal               .80   .75  .60     .14   .17  .19      .10   .09  .08
N-Numerical            .77   .75  .61     .08   .19  .15      .23   .26  .38
S-Spatial              .74   .75  .65     .06   .19  .03      .12   .26 -.01
P-Form Perception      .63   .62  .45     .11  -.08 -.02      .15   .07  .32
Q-Clerical Perception  .66   .66  .39     .07  -.05 -.06      .16   .16  .23
K-Motor Coordination   .73   .70  .29     .12   .14 -.03      .07   .19  .06
F-Finger Dexterity     .54   .49  .02     .16   .08  .11      .06  -.05 -.03
M-Manual Dexterity     .40   .30  .09     .26   .21  .24      .15   .01  .09

Note: SV = Situationally specific validity; JCV = Job component validity; VG = Hunter's validity generalization method. This table is from data found in Tables 1, 4, and 6 of Mecham, 1985.

While somewhat different samples were used by necessity in order to test each of the validation methods, thereby precluding a definitive statement about which methods are most effective, the data do seem to indicate that the SV and JCV methods were basically comparable in effectiveness. None of the methods, however, was very effective at predicting variations between jobs in validity coefficients or standard deviations for specific tests. It should be noted, however, that the means of the validity coefficients for all tests were positive (rather than zero), indicating that the tests were generally useful predictors of employee job performance. Additionally, while there was little to be gained by predicting standard deviations, the variation of the standard deviations was small across jobs for particular tests (although the variations were relatively large between tests).

From a practical standpoint, these results suggest that JCV can be used about as effectively as the traditional situationally specific validation method, and with as good or better effect than the VG method in use in a number of states by the Employment Security Office. Given this comparability, and the fact that it can be used on jobs with small numbers of employees (for which SV studies cannot be performed because of sample size limitations), the JCV method holds the promise of effective, job-related, and scientific personnel selection. The same can be said, but with somewhat less confidence, about VG as it was implemented by Hunter and the USES. The use of more sensitive measures of job content (such as the PAQ) to yield more homogeneous groupings of jobs into families may well improve VG predictions, however.

Research by Gutenberg, Arvey, Osborn and Jeanneret (1983) has offered some support, as well as raised some questions, regarding the Hunter recommendations. These researchers found that the information-processing/decision-making PAQ dimensions moderated the validities of various ability tests. In particular, information-processing/decision-making (Hunter's complexity dimension) moderated validities for mental ability tests (with positive correlations) and for finger and manual dexterity tests (with negative correlations). However, physically and manually oriented PAQ dimensions had no moderating effects for cognitive, perceptual, or psychomotor test validities. Thus, validity studies of jobs that involve a considerable amount of information-processing and decision-making will likely show higher validity coefficients for tests of mental ability (i.e., verbal and numerical reasoning) than for tests of psychomotor abilities (i.e., dexterity). Further, jobs that require little in the way of information-processing and decision-making would be expected to have lower validity for mental ability tests. If such jobs are more manual or physical in nature, then tests of psychomotor abilities should be expected to have stronger validity relationships. This latter finding by Gutenberg et al. supports the situational specificity hypothesis regarding test validities and is somewhat contrary to arguments by Hunter that a cognitive test is valid for jobs at all levels of complexity.

    PAQ Computer Outputs of Job Component Validity Data

Computer-generated estimates of predicted test data are provided by using the regression equations developed in the studies previously described. When PAQ data are scored, the resulting predictions may be in hardcopy form, recorded on a computer-readable medium, or transmitted over a communication carrier (such as telephone lines) from one computer to another. While the amount of information varies to some extent as a function of the method of transmittal, for each test of the GATB an estimate is given in terms of a low (1 standard deviation below the average), average (mean), and high (1 standard deviation above the average) score. This "attribute band" (going from the low to the high score) is where approximately 68% of job incumbent GATB scores are expected to be found. (The accuracy of such estimates is, of course, a function of several factors, including the reliability and validity of the PAQ ratings and the accuracy with which the regression equations capture and weight the appropriate PAQ dimensions.)
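The attribute band is simply the mean plus and minus one standard deviation, the interval that covers about 68% of scores under a normal distribution. A minimal sketch, using invented predicted values rather than actual PAQ output:

```python
# Minimal sketch of the "attribute band" for one GATB test, assuming a
# hypothetical predicted mean and standard deviation (toy values).
predicted_mean = 100.0   # hypothetical predicted average score
predicted_sd = 12.0      # hypothetical predicted standard deviation

low = predicted_mean - predicted_sd    # 1 SD below the average
high = predicted_mean + predicted_sd   # 1 SD above the average
# About 68% of incumbent scores are expected between low and high.
print(f"Attribute band: low={low:.0f}, average={predicted_mean:.0f}, high={high:.0f}")
```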


Also provided are estimates of the raw (uncorrected) validity coefficients (which are almost always lower than the true correlations between test scores and job performance). Additionally, the probability that each test would have been useful in selecting persons for jobs with content of the type reported in the scored PAQ is given. The three tests having the highest probability of use are marked.


Predicted MBTI data take the form of predictions of the percentage of workers on a job who will score highest on each of the four bi-polar indices. By multiplying together the percentages found for the pole on each index for which the applicant has the highest score, an estimate of the percentage of persons on the job with that MBTI profile can be obtained.
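The profile calculation can be illustrated with a worked example. The percentages below are invented, not actual PAQ-predicted values; the example simply multiplies the four pole percentages for a hypothetical ESTP applicant.

```python
# Worked example of the MBTI profile-percentage estimate: multiply the
# predicted proportion of incumbents scoring highest on each pole of the
# applicant's four-letter type (here ESTP). Proportions are hypothetical.
predicted = {"E": 0.60, "S": 0.70, "T": 0.55, "P": 0.40}

profile_pct = 1.0
for pole in "ESTP":
    profile_pct *= predicted[pole]

print(f"Estimated share of incumbents with an ESTP profile: {profile_pct:.1%}")
```

Note that multiplying the four percentages assumes the indices are independent, which is an approximation.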

    Attribute Ratings of PAQ Job Elements

In addition to the use of the job component validity approach for deriving personnel requirements for jobs, PAQ-based data can be used in another manner for this purpose. This procedure is based on the use of attribute ratings of the PAQ job elements. After the PAQ had been developed, arrangements were made for obtaining ratings of 76 human attributes that were judged to be relevant for the performance of the job activities or for tolerance of the conditions indicated by the individual PAQ job elements. Twenty-seven of the attributes were of a personality or temperament nature, and the remaining 49 were of an aptitude nature (Marquardt & McCormick, 1972). The following rating scale was used for the purpose of linking the attributes with the PAQ job elements:

Rating    Degree of Relevance

0         No relevance
1         Very limited relevance
2         Limited relevance
3         Moderate relevance
4         Substantial relevance
5         Very substantial relevance

Ratings were obtained from persons who were considered to be experts in personnel psychology, primarily industrial psychologists. Eight or more experts rated each attribute in terms of its relevance to each PAQ job element. The median rating of each attribute for each job element was derived. These medians served as the basis for a matrix of 76 attributes and 187 job elements.

While there are many possible ways in which attribute relevance ratings can be combined with PAQ item responses to arrive at a composite attribute rating for a job (Shaw & McCormick, 1976), the following method (suggested by Jeanneret) has both logical appeal and empirical support. First, only PAQ items having a response of 3 or more are considered (on the premise that these are the items which define the core nature of the job). Second, a weighted average of the relevance ratings for such items is calculated on each attribute using the following formula:

Attribute Score = Σ(A × I) / ΣI

Where: A = Attribute relevance rating
       I = PAQ item response (if 3 or above)


This formula, when applied across all 76 attributes, yields a weighted average relevance score for each. By comparing the score on each attribute with the distribution of such scores found in the normative sample of 2200 jobs used for the factor analysis, a percentile score is also determined for each attribute as it applies to each job.
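The weighted-average score and its percentile can be sketched as follows. The ratings, responses, and normative distribution are invented for illustration; the real computation spans all 187 PAQ items, all 76 attributes, and the 2200-job normative sample.

```python
# Sketch of the weighted-average attribute score and percentile, using
# hypothetical data for one attribute. Only items with a response of 3
# or more enter the average; relevance ratings are weighted by response.
from bisect import bisect_left

# (attribute relevance rating A, PAQ item response I) pairs.
items = [(4, 5), (3, 3), (5, 4), (2, 1)]  # last item dropped: response < 3

used = [(a, i) for a, i in items if i >= 3]
score = sum(a * i for a, i in used) / sum(i for _, i in used)  # Σ(A·I)/ΣI

# Percentile against a (toy) normative distribution of scores for this attribute.
norm = sorted([2.1, 2.8, 3.0, 3.4, 3.6, 3.9, 4.2, 4.5])
percentile = 100 * bisect_left(norm, score) / len(norm)
print(f"score={score:.2f}, percentile={percentile:.0f}")
```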

To check the value of such attribute estimates, they have been correlated with other measures of the degree to which the attributes were present in incumbents on several occasions on various jobs (Mecham & McCormick, 1969b; Shaw & McCormick, 1976; Carter & Biersner, 1987).

Recent research has examined the predictive power of scores and percentiles derived in the manner described above. Carter and Biersner (1987), in a study of 25 U.S. Navy jobs, found that the attribute ratings of mental abilities correlated significantly with mean test scores on the Armed Services Vocational Aptitude Battery (ASVAB) in 11 of the expected 15 cases (mean r = .36). They also found that the physical strength attribute ratings correlated well with strength requirements determined using a different methodology (biserial rs = .69 to .87).

In the largest study of this type to date (Mecham, 1989b), attribute ratings and percentiles from the master PAQ database were matched, on the basis of common 9-digit DOT numbers, with test data from incumbents in each of the occupations in the databases used for the recent job component validity studies (described earlier). Correlations between attribute scores and percentiles were computed with the test data judged to measure the same or similar attributes. Specifically, attribute data for each occupation were correlated with General Aptitude Test Battery (GATB) mean test scores, validity coefficients, and whether the test was included as part of a Specific Aptitude Test Battery (SATB) for the job. Correlations were also computed between attribute data and occupational median applicant scores on the Wonderlic Personnel Test (WPT) and the percentages of persons in each occupation with high scores on each of four indices of the Myers-Briggs Type Indicator (MBTI).

Because the correlation coefficients involving the attribute scores and those involving the percentiles were very similar, only the correlations involving the percentiles are presented. Table 10 shows the results obtained for attributes of an aptitude type. Table 11 presents the results for attributes of an interest or temperament nature.


    TABLE 10

    Correlations Between Percentiles of Ratings of Selected Attribute Requirements

    of Occupations and of Test Data on Certain Attributes

    of Incumbents on Similar Occupations

    Mean Aptitude Test Scores

                                                       Tests
Attributes                    G    V    N    S    P    Q    K    F    M   WPT

28. Verbal comprehension    .63  .73  .64  .44  .51  .63  .59  .23  .07  .73
31. Numerical computation   .69  .68  .70  .59  .55  .57  .49  .27  .14  .75
32. Arithmetic reasoning    .73  .73  .72  .61  .56  .59  .52  .25  .14  .84
33. Convergent thinking     .66  .73  .67  .48  .52  .62  .58  .23  .09  .77
34. Divergent thinking      .67  .74  .67  .50  .52  .61  .57  .23  .09  .76
35. Intelligence            .70  .75  .69  .53  .54  .61  .57  .24  .12  .79
39. Visual form perception -.16 -.32 -.20  .06 -.12 -.32 -.33 -.03  .10 -.27
42. Perceptual speed       -.01 -.16 -.03  .16  .00 -.16 -.20  .02  .12 -.15
45. Spatial visualization  -.34 -.50 -.37 -.11 -.29 -.48 -.47 -.13  .03 -.49
57. Finger dexterity       -.39 -.46 -.39 -.21 -.17 -.31 -.28  .08  .10 -.41
61. Manual dexterity       -.48 -.57 -.48 -.27 -.31 -.45 -.43 -.05  .04 -.55
65. Rate of arm movement   -.60 -.67 -.60 -.40 -.43 -.55 -.51 -.14 -.03 -.68
66. Eye-hand coordination  -.54 -.62 -.54 -.34 -.37 -.51 -.47 -.08  .01 -.63
69. Simple reaction time   -.52 -.60 -.51 -.38 -.47 -.53 -.53 -.28 -.12 -.65

    Validity Coefficients

Attributes                    G    V    N    S    P    Q    K    F    M

28. Verbal comprehension    .24  .27  .20 -.10 -.10  .07 -.10 -.20 -.26
31. Numerical computation   .35  .29  .27  .12 -.05  .08 -.15 -.19 -.30
32. Arithmetic reasoning    .33  .29  .25  .05 -.08  .06 -.14 -.20 -.29
33. Convergent thinking     .25  .26  .19 -.06 -.10  .06 -.10 -.20 -.24
34. Divergent thinking      .26  .27  .20 -.06 -.10  .05 -.11 -.21 -.26
35. Intelligence            .28  .28  .22 -.04 -.11  .04 -.13 -.21 -.28
39. Visual form perception  .02 -.07 -.04  .25  .05 -.09 -.06  .07  .05
42. Perceptual speed        .11 -.00  .04  .27  .05 -.04 -.08  .02 -.02
45. Spatial visualization  -.06 -.15 -.08  .22  .08 -.07 -.01  .13  .13
57. Finger dexterity       -.11 -.17 -.12  .17  .09 -.09 -.01  .12  .13
61. Manual dexterity       -.13 -.20 -.13  .17  .09 -.09  .02  .15  .17
65. Rate of arm movement   -.20 -.24 -.17  .11  .10 -.07  .07  .18  .21
66. Eye-hand coordination  -.18 -.23 -.16  .14  .09 -.09  .05  .17  .20
69. Simple reaction time   -.19 -.22 -.14  .04  .04 -.07  .05  .14  .19


    TABLE 10 (continued)

    Inclusion in a Specific Aptitude Test Battery (SATB)

Attributes                    G    V    N    S    P    Q    K    F    M

28. Verbal comprehension    .53  .41  .24 -.13 -.23  .30 -.10 -.36 -.49
31. Numerical computation   .33  .22  .32  .14 -.14  .23 -.18 -.33 -.46
32. Arithmetic reasoning    .40  .28  .31  .08 -.14  .23 -.18 -.36 -.47
33. Convergent thinking     .50  .36  .26 -.06 -.19  .26 -.12 -.36 -.48
34. Divergent thinking      .51  .38  .27 -.05 -.19  .25 -.13 -.38 -.49
35. Intelligence            .50  .37  .29 -.01 -.18  .24 -.16 -.38 -.49
39. Visual form perception -.41 -.36 -.04  .38  .24 -.22 -.07  .12  .22
42. Perceptual speed       -.29 -.28  .02  .34  .16 -.09 -.10  .03  .10
45. Spatial visualization  -.48 -.40 -.09  .32  .24 -.26 -.04  .20  .35
57. Finger dexterity       -.43 -.38 -.16  .18  .27 -.19  .09  .28  .27
61. Manual dexterity       -.49 -.41 -.18  .18  .25 -.24  .09  .31  .38
65. Rate of arm movement   -.50 -.40 -.22  .13  .21 -.27  .09  .36  .45
66. Eye-hand coordination  -.50 -.41 -.20  .16  .22 -.26  .08  .33  .41
69. Simple reaction time   -.37 -.24 -.16  .05  .09 -.18  .03  .26  .39

Source: Mecham, 1989b.

Note: Mean scores and validity coefficients were from 459 studies; SATB data were from 236 studies. WPT refers to the Wonderlic Personnel Test, a general measure of intelligence, with median applicant scores reported for 108 studies. The underlined correlations are those of corresponding constructs represented by the attributes and the tests. Tests from the General Aptitude Test Battery (GATB) are identified as follows: G = Intelligence; V = Verbal; N = Numerical; S = Spatial; P = Form Perception; Q = Clerical Perception; K = Motor Coordination; F = Finger Dexterity; M = Manual Dexterity.

As can be seen, the correlations between mean cognitive test scores (G, V, N, WPT) and the attribute percentiles were generally moderate to high, and near zero or negative for the perceptual and psychomotor tests. The correlations with validity coefficients, while low, were generally in the expected direction and compare favorably to those found earlier when validity coefficients from an initial study were used to predict those in a follow-up study (see Table 9, column 4). The correlations with whether a test was included in a SATB were generally moderate and in the expected direction. Of special interest was the fact that the attribute data were indicative of whether the finger and manual dexterity tests were found to be useful in a selection battery (even though the mean scores correlated at a near zero level with attribute data).


    TABLE 11

    Correlations Between Percentiles of Selected Attribute Requirements and of

    Percentages of Incumbents in Similar Occupations with High Scores

    on Specified Indices of the Myers-Briggs Type Indicator (MBTI)

                                                             Indices

                                            Extraversion   Sensing   Thinking   Judgment
                                                 vs.          vs.       vs.        vs.
Attributes                                  Introversion   Intuition  Feeling   Perception

 6. Dealing with people                          .21         -.29       .09        .40
 7. Social welfare                               .17         -.28       .11        .38
 8. Influencing people                           .19         -.30       .11        .42
 9. Directing/controlling/planning               .12         -.35       .20        .40
10. Empathy                                      .18         -.31       .11        .41
12. Conflicting/ambiguous information            .08         -.37       .24        .42
14. Sensory alertness                           -.35          .16       .03        .05
15. Attainment of set standards                 -.41          .11       .04        .04
18. Separation from family/home                  .24         -.32       .17        .34
19. Stage presence                               .20         -.27       .07        .42
21. Tangible/physical end products              -.17          .28      -.15       -.36
22. Sensory/judgmental criteria                 -.18         -.40       .34        .44
23. Measurable/verifiable criteria              -.16         -.45       .36        .37
24. Interpretation from personal viewpoint       .17         -.34       .15        .39
26. Dealing with concepts/information            .08         -.42       .21        .40

    Note: Sample consisted of 96 occupations (r = .24 is significant at the .01 level using a one-tailed test).

    Source: Mecham, 1989b

From Table 11 it can be seen that while most correlations with personality variables are modest, a number are significant and tend to be in the expected direction.

Taken as a whole, the attribute scores seem to be indicative of many of the human attributes associated with various occupations. While they are usually not as highly correlated with specific scores of job incumbents on tests presumed to measure the same attributes as are most estimates based on job component validity studies (compare Tables 10 and 11 with Table 5), they broaden the number of constructs addressed and generally have some empirical support. They have also been applied in a number of practical situations [see, for example, Jeanneret's (1988a) development of requirements for production operators and space mission specialist functions (Jeanneret, 1988b)].


    USE OF THE PAQ IN JOB EVALUATION AND IN SETTING COMPENSATION RATES

One of the important practical applications of the PAQ is in job evaluation and in the setting of compensation rates for jobs. Support

