Human Resource Management Review 16 (2006) 416–430 www.socscinet.com/bam/humres

Public sector use of technology in managing human resources

Brian Cronin a,⁎, Ray Morath a, Pat Curtin a, Michael Heil b

a Caliber, An ICF International Company, 9300 Lee Highway, Fairfax, VA 22031, USA
b Monster Government Solutions, 8280 Greensboro Drive, Suite 900, McLean, VA 22102, USA

Abstract

The use of technology in human resources has increased dramatically and is now a vital aspect of many personnel-related decisions such as collecting job information, recruitment, and employee selection. This paper will describe a number of large-scale technology interventions within the public sector. Through these descriptions, the paper will also describe how technology can successfully be used to improve human resource processes and examine the unique obstacles that technology can sometimes present.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Technology; Recruitment; Human Resources; Selection; Public sector; Job analysis; Learning management system

⁎ Corresponding author. Tel.: +1 703 934 3633. E-mail address: [email protected] (B. Cronin).

1053-4822/$ - see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.hrmr.2006.05.008

1. Introduction

Over the past 5 years, the use of technology in human resources has increased dramatically and is now a vital aspect of many personnel-related decisions such as collecting job information, recruitment, employee selection, training, and performance management (Chapman & Webster, 2003). Technology is, in many cases, the interface between human resource (HR) personnel and organizational applicants and employees. This is evident when we look at the recent success of HR technology companies that offer software systems that automate HR functions such as job analysis, recruitment, and classification (e.g., SkillsNET, AVUE Digital Systems, and Monster Government Solutions) as well as the widespread use of computerized selection tests. As a result, computer-based applications are rapidly becoming the vehicle through which human resource departments collect and organize personnel-related data.

The application of technology to HR management functions is not a recent event. Early examples of the influence of technology on HR management included the development and use of optical scanning equipment in the 1950s. Optical mark readers facilitated the scoring of large numbers of tests and were partially responsible for the sudden expansion in educational (Scholastic Aptitude Test, American College Test) and military (Armed Services Vocational Aptitude Battery) entrance exams as well as large-scale job analyses such as the Comprehensive Occupational Data Analysis Program (CODAP) (Christal, 1974). The use of optical mark readers in the advancement of large-scale assessment was soon followed by the utilization of computer applications, first mainframe and then PC based, to further extend the influence of technology on personnel assessment (Epstein & Klinkenberg, 2001; Ree & Carretta, 1998).

With the explosion of the Internet over the past 10–15 years, however, the impact of technology and computer applications in HR management functions appears to be even more salient. With respect to recruitment, job boards such as Headhunter.net and Monster.com, and federal and state agency Internet postings of job openings, serve as the nexus between a hiring organization and the job seeker, both passive and active. Similarly, on-line recruiting that is embedded within organizational websites is now commonplace.

Regarding the application of technology to personnel screening and assessment, organizations are currently using the Internet to administer prescreening instruments, background checks, job application blanks, structured interviews, and personnel selection tests (Bartram, 2000). These online assessments typically involve 24/7 Internet availability to a local, national, or even global audience of applicants (Bartram, 2000). However, the challenges involved with such online assessment have been well documented, including maintaining assessment integrity and validity, test security, and legal and fairness issues involving equal access for protected subgroups of applicants (Stanton, 1999).

Crespin and Austin (2002) describe advantages of on-line versus paper-and-pencil assessment that include (1) faster knowledge of assessment results, (2) reduction in printing and shipping costs, (3) ease in scheduling, (4) multimedia assessment stimuli (sound, video), and (5) process tracking (time on item/task). Disadvantages that they describe include (1) frustration of those less facile with computers, (2) increased system requirements to support distance assessment, (3) subgroup differences in access to the on-line assessment, (4) potential privacy concerns (Stone & Stone-Romero, 1998), and (5) lingering test security issues associated with high-stakes assessments administered over the Net.

Crespin and Austin (2002) also highlighted the role technology plays in the utilization of test item banks and collaborative item pools via shared relational databases. In their review of how computer technology applications are influencing the practice of I/O psychology, they describe how a consortium of over 190 cooperating public agencies has leveraged technology in the form of the Western Region Item Bank (WRIB). The WRIB provides the member agencies with services such as draft test questions with complete item histories, 'print ready' exams, and exam scoring and item analyses (Crespin & Austin, 2002).

Technology has also had a sizable influence on the collection and analysis of occupational information. In the 1960s and 1970s, the Air Force was utilizing technology to facilitate the analysis of occupational data. Beginning with the use of computer programs (Archer, 1966) to cluster jobs based on similar profiles on key dimensions, through interrelated programs used in analyzing and recombining data from job task inventories (the Comprehensive Occupational Data Analysis Program system) (Christal, 1974), computer and information technology has played a large role in advancing the efficacy of job analysis in organizations.

Similarly, technology has been instrumental in the collection and representation of job analysis data. Examples include the computerized versions of the PAQ and PMPQ (McCormick, 1979) as well as the Occupational Information Network (O⁎NET). O⁎NET is a comprehensive database of occupational information, used to replace the Dictionary of Occupational Titles, containing worker attributes and job characteristics of more than 1100 occupations found within the US economy (Peterson et al., 2001).

Automated content analysis is another area where technology has demonstrated the potential for significantly altering the traditional methods for scoring open-ended responses to surveys and test questions. Most currently existing content analysis methods analyze text by determining frequency patterns of words and phrases. Latent semantic analysis has been found to be one of the most promising forms of automated content analysis in that it goes beyond the mere counting of frequency patterns of words and phrases (Laham, Bennett, & Landauer, 2000). This approach scores open-ended responses against a template, or "gold standard", that is created by entering into the computer algorithm a number of open-ended responses of a known or determined quality; new open-ended responses are then compared against this synthesized set. The US Office of Personnel Management has also recently been investigating the efficacy of another type of content analysis involving automated analysis of test item banks (Ford, Stetz, & Boot, 2000). This technology involves the equating, or comparison, of alternate forms of tests to ensure that the forms are equivalent in terms of their content.
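To make the gold-standard approach concrete, here is a minimal sketch of latent semantic scoring, assuming scikit-learn is available; the sample responses, their scores, and the similarity-weighted scoring rule are invented for illustration and are not the specific algorithm used in the work cited above.

```python
# Minimal latent semantic analysis (LSA) scoring sketch: a new open-ended
# response is scored by its similarity to "gold standard" responses of known
# quality. Assumes scikit-learn; all example responses are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Open-ended responses of known (rater-determined) quality: 1.0 = excellent.
gold_responses = [
    "Prioritize aircraft by separation risk and resolve conflicts first",
    "Scan the display, identify conflicts, and resolve the closest pair",
    "Watch the radar and talk to pilots when something looks wrong",
]
gold_scores = [1.0, 0.9, 0.4]

# Build a term-document matrix, then reduce it to a small latent space;
# the reduction is what takes LSA beyond raw word-frequency counting.
vectorizer = TfidfVectorizer()
term_doc = vectorizer.fit_transform(gold_responses)
svd = TruncatedSVD(n_components=2)
gold_latent = svd.fit_transform(term_doc)

def score_response(text: str) -> float:
    """Score a new response as a similarity-weighted average of gold scores."""
    latent = svd.transform(vectorizer.transform([text]))
    sims = cosine_similarity(latent, gold_latent)[0].clip(min=0)
    if sims.sum() == 0:
        return 0.0
    return float((sims * gold_scores).sum() / sims.sum())

print(score_response("Identify the riskiest conflict and resolve it first"))
```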

Finally, in the area of training, both automated and computer-assisted training is now commonplace. Fifty years ago researchers employed the latest technology, in the form of analog computers, in the development of flight simulators (Harter & Fitts, 1956). Those early flight simulators have evolved into highly advanced training simulators and systems with physical and psychological fidelity that was thought to be unattainable 50 years ago.

To shed light on several of the trends and related issues identified in the literature, this paper will describe a number of recent, large-scale technology interventions within the public sector. It will begin by describing innovative methods of collecting job-related data, recruiting new applicants, and selecting employees. Within these sections, we will detail how technology can be successfully used to manage human resources as well as the unique obstacles that technology can sometimes present. Next, we will describe how to choose a software platform to administer a computerized assessment. In this section, we provide a typical example of a well-meaning and otherwise conscientious agency that was unsuccessful in matching the appropriate technology to its assessment needs and an example of an agency that was successful in matching its assessment requirements with the appropriate technology. Finally, we will discuss extraneous factors that may affect computerized assessment validity.

2. Automated collection of job data

Collecting accurate and reliable job data is the foundation of almost all human resource (HR) management systems (e.g., Guion, 1998; Morgeson, Delany-Klinger, Mayfield, Ferrara, & Campion, 2004). Recently, automated data collection procedures have become an increasingly popular way to collect job analytic data from job incumbents, analysts, and supervisors. These data are used for a variety of purposes ranging from recruitment to classification. Automated approaches are popular for a variety of reasons, but they are probably most sought-after because they can gather data more efficiently than traditional paper-based methods (Drasgow, 2005). The use of technology, however, introduces new, unique issues beyond traditional paper-based topics that should be carefully considered (Stanton, 1999).

To learn more about the unique concerns related to computer-based approaches of collecting job data, we recently conducted a benchmarking study of public sector agencies that use automated systems to make human resource decisions. The benchmarking study was part of a larger organizational assessment and specifically focused on systems used to make classification decisions for public sector organizations. These tools collect job analytic and administrative data that are ultimately used to make employee classification determinations.

The study included 20 participants from federal agencies that currently use automated classification tools and two system vendors. To gather data, we developed a structured-interview protocol based on the project team's experience with market research and position classification systems as well as information gathered during a related literature review. The final protocol included 20 open-ended questions with specific follow-up items to probe for additional information.

The primary goal of the protocol was to identify and understand the functionality offered in using automated systems, lessons learned, "dos and don'ts", and obstacles to overcome. We collected a detailed description of typical system features, information about the federal agency that is currently using each system, and information about each system's potential usability. The final protocol was designed to collect information around three topics:

▪ Understanding the organization and the automated system
▪ Understanding how the organization chooses, implements, and maintains the system
▪ Understanding the IT components necessary to support the system.

All participants were human resources professionals, and each interview conducted with this protocol lasted approximately one hour.

After all interviews were completed, the interviewers reviewed their notes, then discussed and summarized the responses to each question on the protocol. Interviewers then systematically reviewed each question and provided information on the ideas, thoughts, and attitudes expressed by the participants. This process allowed the interviewers to identify the common themes and ideas that emerged across the interviews. In the next two sections, we describe these themes in terms of the benefits of these types of systems and the lessons learned from the benchmarking study as they relate to this paper.

2.1. Benefits of automated job data collection

Agencies that participated in our benchmarking study reported many benefits of using an automated system. All participants indicated that the consistency and ease-of-use provided by their automated systems resulted in a large reduction in time program managers spent performing HR-related functions, a reduction in time HR staff spent performing administrative tasks, and an increase in satisfaction related to HR processes. Additionally, participants reported that the reduction in time spent performing administrative tasks allowed HR personnel to spend more time providing workforce consultation to program managers.

These findings were substantiated through a separate phase of the same organizational assessment conducted within the contracting organization. In this phase, we conducted an internal assessment of the agency's current automated hiring system. Results of the agency's hiring system assessment suggested that the system saved all users (i.e., HR staff and managers) a large amount of time as compared to their previous manual process since users spent less time completing routine tasks. For example, the hiring system required applicants to complete all required information before submitting their application online, which significantly reduced the amount of time HR staff spent following up with applicants to collect required forms and also reduced the occurrence of having to eliminate qualified applicants on a technicality (e.g., not indicating citizenship).
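The completeness check described above is simple to picture in code. The sketch below is hypothetical (the field names and blocking rule are invented, not the agency's actual system), but it shows how refusing incomplete submissions eliminates downstream follow-up:

```python
# Hypothetical sketch of an online application completeness check: the form
# cannot be submitted until every required field is present, so HR staff
# never chase missing forms or reject qualified applicants on a technicality.
REQUIRED_FIELDS = ["name", "citizenship", "education", "work_history"]

def missing_fields(application: dict) -> list:
    """Return the required fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not application.get(f)]

application = {"name": "J. Smith", "education": "B.S.", "work_history": "..."}
missing = missing_fields(application)
if missing:
    # The applicant is blocked at submission time, not screened out later.
    print("Cannot submit: missing " + ", ".join(missing))
```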

When we review the results of the benchmarking study and the hiring system assessment together, we find that a well-designed automated system can add standardization to HR processes. In manual processes, data collection procedures can be difficult to monitor across various geographical locations. For instance, it is extremely difficult for large government agencies to collect job information using traditional paper-based methods since their personnel are located around the country (or even around the world) in many different environments. This lack of standardization can contribute to perceptions of inconsistent treatment and errant data collection procedures. In severe instances, inconsistent job information can lead to incorrect personnel decisions; at the very least, it increases the amount of time it takes to make decisions. Automated systems can help to alleviate these conditions, as they provide consistent rules and procedures to all system users.

Our benchmarking study revealed that one branch of the military is using an automated job analysis instrument to obtain the types of benefits described above. Their automated software system is collecting data from all of their civilian employees to assess the tasks they perform on the job and the knowledge, skills, and tools required to complete those tasks. This web-based instrument allows for the collection of extensive demographic and job information from users across the world, which otherwise might not be possible. All data collected from the system are electronically transmitted to one central location where they can be processed and analyzed. As a result, there is virtually no time lost in delivering job surveys or receiving data from employees. This tool has provided the organization with the opportunity to be more responsive to employee needs and to implement strategic HR decisions very rapidly, even in a time of war. Yet, this system has also introduced some database management issues that will be discussed in the following section.

2.2. Lessons learned related to automated job data collection

Results of the benchmarking study provided a number of 'lessons learned' related to the use of automated job data collection tools. The results were grouped into five topic areas: (1) developing customized system content, (2) marketing the system to stakeholders, (3) delivering training, (4) developing information resources for users, and (5) database management. Each topic is discussed in more detail below.

2.2.1. Developing customized system content

Without carefully developed system content that meets the specific needs of the host agency, it is unlikely that any automated system can be a success. System content may include information related to the tasks and KSAs performed by job incumbents, administrative questions, as well as navigation options. To help ensure that content is developed in a well thought-out manner, our benchmarking partners made a number of suggestions.

To begin the development process, an agency should invest staff resources to identify and develop good content before implementing the system. It is critical that job incumbents or other types of Subject Matter Experts (SMEs) are involved in the creation of this content. SME involvement will increase the relevance and accuracy of the content and will increase user "buy-in." Content development is an intensive process, but participants emphasized that the benefits of using accurate system information outweigh the costs. During the content development process, it is also important to encourage HR staff and managers to collaborate. This will help minimize future disagreements about the content in the system and contribute to acceptance of the system.


For example, while assessing the automated hiring system described earlier, it was found that poorly developed system content led to early system problems. Originally, the agency relied on generic system data provided with their hiring software to create position descriptions, job announcements, and on-line applicant screening tools. Since the generic information did not meet process needs, system users became frustrated with the application and, as a result, it was difficult for this HR process to function properly. In response, the agency built customized system data over a 24-month period and ultimately was able to use the system to its highest capability. However, their early struggles and staff frustrations may have been avoided if system content had been more carefully developed in advance.

2.2.2. Marketing the system to stakeholders

Our benchmarking partners also emphasized the importance of marketing any new system to stakeholders, including top management, HR staff, and program managers. Marketing the system is another method to create system buy-in and reduce apprehension about using a new system. Successful marketing methods suggested during the study included:

▪ Offering system demonstrations to HR staff and managers
▪ Sending e-mails that describe the functionality of the new system
▪ Posting a link to the new system on the HR website
▪ Disseminating information about the system via word of mouth
▪ Providing just-in-time training to users
▪ Building 'buy-in' among department leaders and having leaders promote the system.

2.2.3. Delivering training

Manager and HR training is another important aspect of system success. Participants suggested that this training should be developed before implementation and delivered "just-in-time" for system implementation. The training should provide an orientation to the system prior to implementation. In addition, specific training on how to use the system should be made available to HR staff and employees when they are about to use the system for the first time. Aside from the clear benefits of teaching staff how to use the system, training is another way to increase buy-in and reduce apprehension related to the new system. Using "just-in-time" training reduces the likelihood that staff will forget training material and, as a result, maximizes the organization's return on investment.

2.2.4. Database management

Efficient and effective collection of personnel and job data is one of the clear benefits of using an automated system. Computer-based systems allow HR professionals to collect large amounts of information since job surveys can be delivered in electronic form rather than in cumbersome paper formats. For example, in the large military job analysis effort described above, data were collected through a web-based system from all of the organization's civilian employees, working in over 20 civilian job communities and numerous job classifications. Even employees within one community worked in different geographical locations, different positions, and under various titles. While this amount of data can be extremely useful in making strategic HR decisions, it can also create complex database management problems if related issues are not considered during system design.

Data structure and database management issues must be considered in advance since most systems automatically deposit all collected information into a central data warehouse so it can then be used to conduct related analyses. If data of this scale are improperly stored, they can be virtually impossible to transpose, merge, and analyze. To alleviate these potential database problems, practitioners should create an analysis plan prior to system design that includes the data needed, the database structure, the intended analyses, and the anticipated format of final reports. These considerations will allow analyses and report generation to be more efficient.
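As a hypothetical illustration of such an analysis plan, the sketch below uses Python's built-in sqlite3 module to stand in for a central data warehouse; the table names, columns, and rating scales are invented, but the point is that the structure anticipates the intended analysis.

```python
# Hypothetical sketch: survey responses from many locations flow into one
# central store whose tables mirror the analyses planned in advance.
import sqlite3

conn = sqlite3.connect("job_analysis.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS employee (
    employee_id   INTEGER PRIMARY KEY,
    job_community TEXT,   -- e.g., one of 20+ civilian job communities
    location      TEXT,   -- geographic duty station
    job_title     TEXT
);
CREATE TABLE IF NOT EXISTS task_rating (
    employee_id INTEGER REFERENCES employee(employee_id),
    task_id     INTEGER, -- item from the task inventory
    frequency   INTEGER, -- rating scales fixed in the analysis plan
    importance  INTEGER
);
""")

# Because the structure was planned up front, the intended analysis is a
# straightforward query rather than a painful merge of ad hoc files.
rows = conn.execute("""
    SELECT e.job_community, t.task_id, AVG(t.importance) AS mean_importance
    FROM task_rating t JOIN employee e USING (employee_id)
    GROUP BY e.job_community, t.task_id
""").fetchall()
```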

3. Online recruitment and testing

Online recruitment and testing has become increasingly popular as both organizations and job seekers find it to be cost-effective, efficient, and convenient (Cronin, 2003; Drasgow, Luecht, & Bennett, 2005). Online testing has become easier as web connections have become faster and people have greater access to the Internet (Bartram, 2000; Lievens & Harris, 2003). However, there are several issues that should be considered when conducting online recruitment and selection. For example, web access is not uniform, which may create an uneven playing field among applicants. This lack of uniform access could preclude some well-qualified individuals from applying for a job. Also, the equipment people use differs considerably, which could have an impact on the delivery of content, especially if test timing impacts scoring (Marrelli et al., 2004). Practitioners interested in conducting online recruitment or assessments should take time to match the technology with the assessment. Important questions include:

▪ What role should the Internet play in the recruiting and assessment process?
▪ Will an Internet-based assessment produce desired results?
▪ Will the Internet-based assessments serve as a minimum qualification screen, a pre-screen on basic skills and interests, or a full-on assessment/selection system?
▪ What are the implications of transferring an existing assessment to an Internet-based assessment?
▪ How will data from older assessments compare to data from Internet-based assessments?

Each of the above questions deserves a bit more discussion.

3.1. What role should the Internet play in the assessment process?

To answer this question, an organization should assess its current needs and capabilities and its goals for personnel selection (Marrelli et al., 2004). Performing this type of gap analysis will allow an organization to better understand what sort of first steps are appropriate and realistic. For instance, if an organization has limited technological resources in terms of equipment and personnel and a relatively low hiring rate, then the best use of the Internet may be as a recruiting tool. Here, information about the organization and the job can be provided to the applicant through the organization's website (Cronin, 2002; Lievens & Harris, 2003). It is also possible to provide more active means of recruiting through a low-fidelity realistic job preview that is delivered through the organization's website by means of an interest inventory, a sample assessment, and/or a work sample or video (Bartram, 2000; Cronin, 2003; Lievens & Harris, 2003). As the capabilities of the organization increase or the hiring needs of the organization grow, the use of more advanced forms of Internet-based recruiting and assessment becomes justified and is more likely to provide a solid return on investment.

3.2. Will the Internet-based assessments produce desired results?

The answer to this question is largely dependent on what the organization's goals are for its selection process (Drasgow et al., 2005). Based on the answers from a gap analysis, the organization will have a better idea of what role the Internet can play in its recruiting and selection efforts. An organization may want several things from an Internet-based selection system: greater recruiting ability, shorter hiring times, less burden on HR and hiring officials, a prescreen, or a more fully developed selection system. The ability of the Internet to help fulfill the organization's goals depends on many factors related to the organization itself and the types of assessments used by the organization (Drasgow, 2005).

3.3. Will the Internet-based assessments serve as a minimum qualification screen, a pre-screen on basic skills and interests, or a full-on assessment/selection system?

Using the Internet to deliver a minimum qualification pre-screen or a training and experience questionnaire that serves as a first step in a selection process is a very reasonable goal for an organization, especially considering the level of Internet technology most people have in their homes today. The goal of having a complete Internet-based selection system, however, may be too ambitious for any organization at this point in time (Drasgow et al., 2005). This is the case because, beyond financial resources, there are many factors related to test security and applicant identification that simply prohibit the use of solely Internet-based selection in all but a very few cases (Stanton, 1999). In most instances, a testing center or controlled environment will have to be used to make sure that test content remains reasonably secure and that the person claiming to take the test is in fact the applicant that the organization ends up hiring.

3.4. What are the implications of transferring an existing assessment to an Internet-based assessment? How will data from older assessments compare to data from Internet-based assessments?

Transferring the content of existing assessments to Internet-based assessments can involve a whole host of issues, from the ability to display test content (images, schematics, etc.) on applicant equipment that is not standardized to the comparability of results between existing test forms and the Internet or computer-based forms. Delivering assessments via the Internet also may alter the assessment's reliability and validity (Crespin & Austin, 2002). For instance, constructs such as computer aptitude may play more of a role than desired and, as a result, adversely impact the reliability and validity of scores from an online assessment (Stanton, 1999). Other issues to consider are the inability to control access to the content of the test and how to control for retesting when an assessment is put online. Without some sort of instantaneous verification system, applicants could apply under a false name simply to preview or practice an assessment before they take it for real. Once used online without a means to control access, the content of an organization's assessments is wide open to all test takers, coaches, and others who could profit by having access to the assessment's content.

Beyond the issues mentioned above, there are many other important technical concerns related to Internet-based testing that should be considered by an organization before "going live" with an Internet-based selection effort (Drasgow et al., 2005; Lievens & Harris, 2003; Marrelli et al., 2004). These issues include:

▪ How will applicant tracking be conducted in an electronic format?
▪ How should the organization prepare its staff for a transition to a more Internet-based selection system?
▪ Should the online system be developed internally or by an outside vendor?
▪ What types of information will the system be able to provide to the applicant (i.e., Pass/Fail scores, scores for sections, overall scores)?
▪ How will retesting be handled?
▪ How will assessment content be updated over time to compensate for allowing public access to the assessments?

3.5. How will applicant tracking be conducted in an electronic format?

Applicant tracking can be a particularly difficult issue to deal with because the applicant can move through an Internet-based assessment in relative autonomy. Consequently, very well thought out and robust systems need to be put into use to keep track of the applicant's movement within the assessment process. These systems will, among other things, need to store an applicant's answers, score them, possibly provide results to the applicant, keep track of the number of attempts an applicant makes at a particular assessment, mark the advancement of the applicant through the assessment process, and place the applicant on rank-ordered registers.
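A minimal sketch of such a tracking record might look like the following; the field names and stages are hypothetical rather than drawn from any system described in this paper.

```python
# Hypothetical applicant-tracking record: stores answers, scores, attempt
# counts, and progress, and supports rank-ordered registers.
from dataclasses import dataclass, field

@dataclass
class ApplicantRecord:
    applicant_id: str
    stage: str = "prescreen"  # current point in the assessment process
    attempts: dict = field(default_factory=dict)  # assessment -> attempt count
    answers: dict = field(default_factory=dict)   # assessment -> raw answers
    scores: dict = field(default_factory=dict)    # assessment -> numeric score

    def record_attempt(self, assessment: str, raw_answers: list, score: float):
        """Store one attempt and advance the attempt counter."""
        self.attempts[assessment] = self.attempts.get(assessment, 0) + 1
        self.answers[assessment] = raw_answers
        self.scores[assessment] = score

def ranked_register(records: list, assessment: str) -> list:
    """Rank-ordered register for one assessment, highest score first."""
    eligible = [r for r in records if assessment in r.scores]
    return sorted(eligible, key=lambda r: r.scores[assessment], reverse=True)
```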

3.6. How should the organization prepare its staff for a transition to a more Internet-based selection system?

An additional and large expense related to creating and using Internet-based assessments involves training staff to administer the new form of testing and preparing other organizational staff members to be consumers of the new assessment system. As discussed earlier, preparing staff through marketing efforts and training sessions is a critical component of system success since these efforts increase user 'buy-in' and system usability.

3.7. Should the online system be developed internally or by an outside vendor?

If internal organizational capabilities are not yet sufficient for developing, housing, and administering an Internet-based assessment system, then the costs associated with using outside assistance to develop and maintain such a system need to be considered.

3.8. What types of information will the system be able to provide to the applicant (i.e., Pass/Fail scores, scores for sections, overall scores)?

For reasons ranging from the practical to the legal, the type of feedback that will be provided to the applicant should also be carefully considered. In some instances, giving applicants more feedback on their performance may benefit the applicant and the organization, as may be the case when an assessment is being used as a self-selection device. For other reasons, such as test security and legal concerns, a more straightforward pass/fail form of feedback may be the way to go if the assessment is to be used as a selection tool. Providing applicants with more specific feedback may allow them to improve their scores artificially by honing their answers in subsequent testing sessions.

3.9. How will retesting be handled?

The fact that an applicant can retake an assessment more than once presents several challenges to the use of Internet-based testing. Most obviously, how does an organization deal with a system that allows an applicant to enter into the testing session without supervision? What controls can an organization use to limit access when it needs to do so? The answers to these questions depend on the scale of the assessment effort the organization is undertaking and the Internet technology resources the organization has at hand in terms of people and systems. Possible solutions, however, might include the use of social security numbers and the use of unique identifiers such as organizationally issued login names and passwords. Another solution would be to establish or use existing satellite testing centers, universities, or community colleges.

3.10. How will assessment content be updated over time to compensate for allowing public access to the assessments?

One last technical concern that is also related to retesting is the issue of replenishing the content of the assessment (Drasgow et al., 2005). An organization considering the use of Internet-based testing should consider to what extent it will need to update the content of its on-line assessments. Assessments such as training and experience questionnaires will need less maintenance than job knowledge or aptitude tests. The type of system used and the amount of control the organization has over the delivery of the assessment also play roles in determining how practical it will be to use online assessments. If an organization uses a program that easily allows for the exchange of content, then replenishing an assessment is not as much of a problem as when a self-contained program or more complex system is used to deliver assessment content. If a program has no way of exchanging content without being "disassembled" and then recompiled, the cost associated with replacing content on a regular basis may be prohibitive.
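One hypothetical way a platform can support replenishment is to assemble each delivered form from a larger item bank, favoring the least-exposed items; the sketch below is illustrative only, with invented items and a simple exposure counter.

```python
# Hypothetical item-bank rotation: each testing session draws a fresh form
# from a pool of exchangeable items, so public exposure of any one form
# does not exhaust the assessment.
import random

item_bank = [
    {"id": i, "text": "question %d" % i, "exposures": 0}
    for i in range(1, 101)  # 100 banked items with known parameters
]

def draw_form(bank: list, length: int = 20) -> list:
    """Assemble a form from the least-exposed items, breaking ties randomly."""
    random.shuffle(bank)  # randomize order among equally exposed items
    bank.sort(key=lambda item: item["exposures"])  # stable sort keeps shuffle
    form = bank[:length]
    for item in form:
        item["exposures"] += 1  # track exposure to guide later retirement
    return form

form = draw_form(item_bank)
```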

For similar reasons, the importance of test security cannot be emphasized enough (Marrelli et al., 2004; Stanton, 1999). An organization cannot fully control access to the questions once a selection test is put on the Internet. This is a serious problem when conducting high-stakes testing, as the practitioner cannot be certain whether an applicant's score is due to actual ability or because the applicant was trained to do well on the test by someone else who had access to the questions. A related issue is that it is nearly impossible to positively identify the person taking an online assessment. We do not know for sure whether the person taking the test online is the actual applicant or someone who was asked to help the applicant pass the test.

Practitioners can take steps to avoid these problems. For example, online assessments can be used as prescreens, rather than as the full assessment. The full assessment can then take place at a later point in time in a secure location, such as a testing center. This approach will help to protect the security of the primary assessment and also provide an opportunity for proctors to verify the identity of the applicants. Certain types of assessments, such as biodata, applications, Training and Experience (T&E) assessments, and self-assessments, are better suited for the "free and open" Internet environment than tests of cognitive ability or job knowledge.

As stated above, there are certain types of assessments that are not suited for Internet delivery. Many jobs require complex cognitive or physical abilities that cannot be adequately assessed online. For example, the air traffic controller occupation requires a number of complex cognitive abilities that cannot be fully assessed with biodata or T&E tests that present one page of information at a time. A sample of these required cognitive abilities is presented in Table 1.

Slower Internet connection speeds and slower computer processing speeds also put the applicant at a disadvantage by potentially lowering their overall score on complex, interactive cognitive tests (Curtin, Phillips, Heil, & Foster, 2005). A potential solution to this problem is to have applicants download the test onto their computer so that Internet speed does not impact test timing. However, downloading the test compromises test security. Practitioners would have difficulty determining if performance on this test reflected true ability or the results of practice and/or coaching. Due in large part to the issues of cheating/applicant identity, test security, and Internet speed and test timing, air traffic control agencies are reluctant to conduct the full applicant assessment via the Internet. Table 2 outlines examples of how some air traffic control organizations are using the Internet for recruitment or assessment of candidates.

Table 1. Selected air traffic controller worker requirements

▪ Prioritization: the ability to identify activities that are most critical and require immediate attention
▪ Dynamic visual-spatial: the ability to deal with dynamic visual movement
▪ Time sharing/multi-tasking: the ability to perform two or more job activities at the same time
▪ Situational awareness: being cognizant of all information within a four-dimensional space (i.e., separation standards plus time)
▪ Intermediate-term memory: the ability to remember pertinent information over a 1–10 min period
▪ Visual-spatial reasoning
▪ Rule inference: the ability to efficiently apply transformational rules inferred from the complete portions of the stimulus array to the incomplete portion of the array
▪ Planning: the ability to determine the appropriate course(s) of action to take in any given situation
▪ Visualization: the ability to translate material into a visual representation of what is currently occurring
▪ Perceptual speed and accuracy: the ability to perceive visual information quickly and accurately and to perform simple processing tasks with it (e.g., comparisons)
▪ Execution: the ability to take timely action in order to avoid problems and to solve existing problems
▪ Visual scanning: the ability to quickly and accurately search for information on a computer screen, radar scope, or computer printout
▪ Decisiveness: the ability to make effective decisions in a timely manner
▪ Thinking ahead: the ability to anticipate or recognize problems before they occur and to develop plans to avoid problems

Source: Morath, Quartetti, Bayless, and Archambault (1999).

Due to the high-stakes nature of air traffic controller selection, as well as the difficulties and disadvantages associated with administering complex cognitive tests over the Internet, online assessment for ATCs is often limited to:

▪ Recruitment
▪ Applications
▪ Self-selection
▪ Screening.

Qualified applicants will often then move on to later phases of the selection process and undergo more extensive testing in a secure location. This is not to say that the Internet does not produce useful results that are a benefit to the organization and the applicant. Recruitment and screening through the Internet can help to create a more qualified applicant pool and increase the selection ratio (Buijck & de Lange, 2003; Lund Hansen, 2003).

The overview of online air traffic controller recruitment and selection presented in this paper helps illustrate the point that online assessment is not appropriate for all worker requirements, particularly complex cognitive abilities that are assessed with timed tests. Until new technologies adequately address concerns about test timing, security, and applicant identity, full assessment of these cognitive skills and abilities should be done in a secure location. Despite these limitations, the Internet still provides a very powerful and useful tool for collecting applicant information, educating applicants about the job, and providing applicants with an opportunity to self-select (Bartram, 2000).

4. Matching technology with assessment need

Practitioners face many challenges when choosing a software platform to administer a computerized assessment (Curtin et al., 2005). There is much more involved in the development of an effective computer assessment than a more traditional assessment (Marrelli et al., 2004). One of the first challenges is determining whether the assessment will be administered online or via an individual workstation or network. Secondly, the software platform chosen to host the assessment has a profound impact on almost every aspect of the assessment being developed. For example, the software platform determines the types of questions that can be asked, the type of information that can be reported (e.g., item statistics and individual test results), and the type of feedback that is provided to the test-taker or test administrator. All too often, carefully developed and otherwise well-constructed assessments are found to be unusable when the software platform does not match the needs and goals of the agency.

Because the software platform plays such a crucial role in the delivery of a computer/Internet-based assessment, the selection or development of the software platform should take place very early in the assessment development process (Curtin et al., 2005). In doing so, the assessment developers can determine if the software will operate as desired.

Table 2. Examples of air traffic controller online recruitment and assessment

Federal Aviation Administration (FAA), United States
▪ FAA website provides basic information about the ATC occupation and openings
▪ No online selection

Luchtverkeersleiding Nederland (LVNL), Netherlands
▪ LVNL uses a website for recruitment and image-building
▪ The website stimulates interaction and makes contact with potential controllers
▪ Website features: sound and movement; a radar screen to generate technical learning; an extensive application form; a self-selection test

Airservices Australia, Australia
▪ Airservices Australia website provides a description of the ATC occupation, requirements for the job, and applicant assessment information
▪ Step 1: Complete detailed online application (education, aeronautical experience, air traffic experience, other work experience, key skills)
▪ Step 2: Selected applicants will be asked to complete one or both of the online cognitive abilities test and self-description questionnaire; this information is used to assess the cognitive, arithmetic, and verbal reasoning capability of individual candidates, as well as their potential suitability and job fit for air traffic control; each test requires up to 30 min of uninterrupted time
▪ Step 3: One full day of testing for candidates who scored the highest on the online cognitive abilities test and self-description questionnaire

Naviair, Denmark
▪ Naviair developed a website to increase awareness of the profession and training of ATCs and to obtain more qualified applicants
▪ The website provides a pre-selection opportunity by asking potential applicants to "Test Yourself"
▪ The system generates instant and automatic feedback; those who score above the norm receive an encouraging e-mail from the head of recruitment
▪ Website contents: information; "Control the Sky"; "The Logical Test"

To increase the likelihood of selecting a software platform that best matches an assessment need, the following questions, at a minimum, should be asked of potential software vendors:

▪ In what ways can the software present assessment content (e.g., frames, video, graphics)?
▪ How can test takers move through the test (e.g., revisiting items)?
▪ What security systems are available to control access?
▪ Can item and scale statistics be generated, and in what format?
▪ What types of reports or feedback can be generated?
▪ How will the system allow for the long-term upkeep of the assessment (e.g., item refinement, modification of content, scoring, and feedback)?

Practitioners must also communicate with the software vendor regarding the marriage of software functionality and assessment requirements by providing details about the following (a simple matching sketch follows this list):

▪ Type of assessment items desired (e.g., multiple-choice, matching, short answer, essay)
▪ Level of interactivity desired (e.g., simulations vs. more traditional objective tests)
▪ Media needs for an assessment (e.g., audio, video, text)
▪ Administration options (e.g., can users complete the assessment in more than one session?)
▪ Ease of use for test takers across a variety of access methods (e.g., both slow and fast Internet connections, through firewalls)
▪ Ability to work with the software and delivery systems to make reasonable accommodations for disabled test takers.
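As a hypothetical illustration of the point-by-point matching urged here (and revisited in Section 4.1), the sketch below encodes the agency's requirements and the vendor's demonstrated capabilities as simple checklists and flags any gap before contracting; the capability names are invented.

```python
# Hypothetical requirements-to-functionality matching: every required
# capability must be demonstrated by the vendor, not merely promised.
agency_needs = {
    "interactive_items": True,   # e.g., drag-and-drop simulations
    "video_stimuli": True,
    "multi_session": False,      # examinees may finish in one sitting
    "item_statistics": True,
    "content_swap_without_rebuild": True,
}

vendor_demonstrated = {          # capabilities the vendor has actually shown
    "interactive_items": False,
    "video_stimuli": True,
    "item_statistics": True,
}

gaps = [need for need, required in agency_needs.items()
        if required and not vendor_demonstrated.get(need, False)]
if gaps:
    print("Ask the vendor for proof of: " + ", ".join(gaps))
```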

While all of the above factors are critical to consider, it is essential to make sure that the software vendor can demonstrate that their systems can in fact deliver and manage your assessment in the way you envision. It is advisable to be wary of software vendors that push a one-size-fits-all application. Too often such systems, by their very nature, cannot meet the particular needs of most assessment applications.

One of the most important standards by which to judge a potential software platform is its ability to allow for the long-term maintenance of the assessment. Most assessments will need periodic maintenance to keep the content of the assessment reliable, valid, and up-to-date. This maintenance can include the removal and introduction of assessment content (e.g., items, video, graphics) and the use of draft items to create item banks that contain items with known parameters. Some software platforms require that the vendor be heavily involved in this maintenance process because the exchange of assessment content requires essentially a dismantling and reconstruction of a new assessment. If the agency that is using the software has the internal capability to develop and manage its own test content, then it might be advisable for such an organization to find a software platform and vendor that allows the organization to maintain its own assessments. Finding a software platform that provides a good match between an agency's abilities and the maintenance needs of an assessment can result in considerable cost savings over the life of an assessment.

There are, of course, many other factors to consider when developing a computer or Internet-based version of an assessment. The exact type and number of factors is determined by the unique aspects of each assessment (Drasgow et al., 2005). The key point to take away from this section is that the abilities and limitations of the software platform(s) that are to be used to deliver an assessment must be taken into consideration right from the start of the assessment development process. This statement is also true when an organization is preparing to convert an existing assessment to an electronic format.

A failure to clearly communicate assessment and functionality needs can result in lost time and money, whereas successful matching can help the agency meet its assessment needs with cutting-edge technology (Curtin et al., 2005; Marrelli et al., 2004). In the following section, we provide an example of an agency that was unsuccessful in matching the assessment to the technology. We then provide an example of an agency that was successful in matching its assessment requirements with the appropriate technology.

4.1. Unsuccessful matching of needs and technology

A public agency wanted to improve employees' technology proficiency, so it developed standards for the application of technology to the job and decided to create a technology assessment to identify employees' developmental needs in this area. This agency was in the process of acquiring a learning management system (LMS), an integrated online learning system that included assessment, individual competency profiles and development plans, course enrollment, ordering and distribution of educational materials, authoring and delivery of online courses, and reporting. The agency decided that it wanted to administer the technology assessment online through the LMS, so it issued an RFP for development of the assessment; the most qualified contractor from a group of soliciting contractors was then selected to perform the work.

The agency requested that the technology assessment be comprised of interactive items that require employees to manipulate information to make decisions. The contractor began work by first requesting LMS documentation to ensure the items that were written could be administered by the LMS. Several weeks passed, during which the contractor repeatedly informed the agency that understanding the LMS functionality was essential to doing the work. The contractor was finally invited to a demonstration of the LMS functionality and received system access and a user manual.

After reviewing the LMS and user manual, the contractor informed the agency that the LMS functionality did not permit the interactive items desired. The agency, however, insisted that the LMS did meet their needs and that the contractor just "needed training." Four months into the project, the contractor finally persuaded the client agency to ask the LMS vendor for a demo of interactivity. Unable to provide a demo, the LMS vendor responded that, for a price, they would customize the LMS to meet agency needs. After much time and money wasted, the agency finally understood that the selected LMS could not support their assessment. This situation could have been avoided had the agency defined their assessment needs in sufficient detail, matched point-by-point their needs with the functionality of the software, and asked the vendor for proof of functionality.

4.2. Successful matching of needs and technology

Throughout most of the 1980s and the early 1990s, the Federal Aviation Administration (FAA) screened Air Traffic Control Specialist (ATCS) candidates using a written Office of Personnel Management (OPM) test and a 9-week Academy Screen. Candidates who were sent to the FAA Academy in Oklahoma City for screening after passing the OPM test were required to leave their existing jobs and homes for 9 weeks with no assurance that they would pass the screen and be hired by the FAA. The Academy Screen was discontinued in 1992, partly due to issues related to cost to the government and fairness to candidates.

After considering and attempting other options for screening candidates, the FAA decided to develop a job-related selection test that candidates could complete (in one day) at a testing center near their home community. Due to the large volume of ATCS candidates expected to take the selection test, the FAA wanted to make sure that this test could be administered to a large number of candidates as efficiently as possible. Therefore, the test needed to be computerized and self-administered so that only one or two test administrators would be needed to monitor a room full of examinees. The role of the test administrator would be to verify the identity of the pre-registered candidates, monitor test-taking activities and breaks, answer questions about the testing procedures, resolve simple hardware or software problems, and transmit test output files to the FAA. The administrator would not be required to have knowledge of the content of the test; therefore, the test needed to be developed so that the instructions were clear, the battery would automatically proceed through all tests, tests would be scored automatically by the computer software, and the output files would be written to a drive easily identifiable and accessible to the administrator. Furthermore, the FAA needed an output file that contained a total score that would be used for selection decisions as well as a comprehensive output file that included item-level data for research and validation. Finally, the battery also needed to be able to assess the complex cognitive skills and abilities required for the job, necessitating the inclusion of dynamic, interactive, scenario-based tests within the battery.

The FAA contracted with a team of contractors to create a valid, legally defensible, job-related computerized test battery to select ATCSs. The FAA and consulting team worked together to identify specific assessment needs as well as functionality needs. The team determined that dynamic and interactive computerized tests would be more appropriate measures of some of the ATCS job requirements (e.g., perceiving, assessing, and responding to electronic/digitalized information) than traditional paper-and-pencil tests. The product of this effort was the Air Traffic-Selection and Training (AT-SAT) battery, which was designed to measure the cognitive, perceptual, and psychomotor abilities critical to successful air traffic controller performance. The battery is comprised of eight tests, which are described in Table 3.

One of these tests, the Experience Questionnaire, measures personal characteristics that are required for the job. The other seven tests measure required cognitive skills and abilities. An empirical, concurrent validation study using a sample of over 1000 air traffic control specialists produced evidence of high criterion-related validity. The battery is now operational and is being used by the FAA to select new ATCSs.
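
To make the validation logic concrete, the following sketch estimates criterion-related validity by correlating test scores with concurrent job performance ratings. The data are invented and far smaller than the actual study's sample; Python 3.10+ is assumed for statistics.correlation.

```python
# A worked sketch of concurrent criterion-related validation: correlate test
# scores with performance ratings collected from current employees. The data
# below are hypothetical, not results from the AT-SAT study.
from statistics import correlation  # available in Python 3.10+

test_scores = [68, 74, 81, 59, 90, 77, 64, 85, 71, 79]            # predictor
performance = [3.1, 3.4, 4.0, 2.8, 4.5, 3.6, 3.0, 4.2, 3.3, 3.8]  # criterion

r = correlation(test_scores, performance)
print(f"Criterion-related validity estimate: r = {r:.2f}")
```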

5. Extraneous factors that may affect assessment validity

Despite the advantages of computer-based test administration, test developers and users must be cautious of extraneous factors that may potentially lower the validity of the test over time (Bradley & Russell, 1997; Dimock & Cormier, 1991; Drasgow et al., 2005; Levine & Donitsa-Schmidt, 1998). For example, research has indicated that the employment selection process is not immune to the effects of practice and coaching (Sackett, Burris, & Ryan, 1989). As examinees become more familiar with a test, particularly one that requires interaction with dynamic scenarios, improved performance over time may be due to practice rather than to actual changes in ability. Additionally, improvements to the test made possible by more advanced computer technologies may raise questions about whether the original validity results transfer to the newer, upgraded test.

Extraneous factors that may affect computer-based test validity include (a monitoring sketch follows the list):

▪ Computer experience
▪ Practice and coaching
▪ Software and hardware changes.
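
One way to watch for these factors operationally, sketched below with hypothetical cohort data, is to track standardized shifts in mean scores across administration years; a sudden jump may signal coaching, item exposure, or a software/hardware change rather than a true change in applicant ability.

```python
# A minimal sketch of score-drift monitoring across administration cohorts.
# All cohort scores and the 0.5 SD flagging threshold are hypothetical.
from statistics import mean, stdev

cohorts = {
    "2003": [71, 68, 75, 70, 73, 69, 72],
    "2004": [72, 70, 74, 71, 73, 70, 75],
    "2005": [78, 80, 77, 82, 79, 81, 80],  # suspicious jump in this cohort
}

baseline = cohorts["2003"]
for year, scores in cohorts.items():
    # Express each cohort mean as a standardized shift from the baseline year.
    d = (mean(scores) - mean(baseline)) / stdev(baseline)
    flag = "  <-- investigate" if abs(d) > 0.5 else ""
    print(f"{year}: mean={mean(scores):.1f}, shift={d:+.2f} SD{flag}")
```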


Table 3
AT-SAT subtest descriptions

1. Letter Factory Test
Description: Simulates a factory assembly line. The test requires that subjects use a mouse to perform multiple and often concurrent tasks. Subjects are also asked to answer situational questions related to each test section.
Example KSAs measured: Time sharing/multi-tasking; Tolerance for high intensity; Situational awareness; Attention to detail; Recall from interruption

2. Dials Test
Description: Designed to test the participant's ability to quickly identify and read dials on an instrument panel.
Example KSAs measured: Scanning; Perceptual speed and accuracy

3. AT Scenarios
Description: Low-fidelity simulation of an air traffic control radar screen. The goal is to maintain separation and control of varying numbers of simulated aircraft (represented as data blocks) within the participant's designated airspace. Separation and control are achieved by communicating and coordinating with each aircraft.
Example KSAs measured: Dynamic visual-spatial; Situational awareness; Execution; Scanning; Perceptual speed and accuracy; Tolerance for high intensity; Movement detection

4. Angles Test
Description: Measures ability to recognize angles.
Example KSAs measured: Angles

5. Applied Math Test
Description: Contains 30 applied multiple-choice questions and allows participants up to 21 min to complete them.
Example KSAs measured: Mathematical reasoning; Numeric ability

6. Analogy Test
Description: Measures the ability to apply rules to solve a given problem. Items provide a pair of either words or figures related in a particular way. Participants choose the response that best completes the second pair.
Example KSAs measured: Rule application; Reasoning; Rule inference; Visual-spatial reasoning; Confirmation

7. Scan Test
Description: Participants monitor a field that contains discrete objects (called data blocks), which are moving in different directions. Score is based on the speed of response to data blocks outside a specified range.
Example KSAs measured: Perceptual speed and accuracy; Scanning; Dynamic visual-spatial

8. Experience Questionnaire
Description: Assesses whether participants possess certain work-related attributes by asking questions about past experiences.
Example KSAs measured: Decisiveness; Working cooperatively; Self-monitoring; Interpersonal tolerance


5.1. Computer experience

A review of the literature reveals two general findings. First, people with more computer experience reported less computer anxiety, while people with higher levels of computer anxiety failed to perform as well on computerized tests (Bradley & Russell, 1997; Dimock & Cormier, 1991; Levine & Donitsa-Schmidt, 1998). Second, differences in performance between paper-and-pencil and computerized tests were negligible (Finegan & Allen, 1994; Mead & Drasgow, 1993).

5.2. Practice and coaching

The distinction between practice and coaching lies in whether or not an outside intervention is involved. If practice and/or coaching increases an applicant's score on a selection test, then the predictive validity of that selection device is undermined (Sackett et al., 1989). The implications for personnel selection are:

▪ Practice and coaching may interfere with the ability of a test or test battery to accurately predict future job performance

▪ Test scores that improve due to practice and coaching effects might influence hiring decisions based on test performance (a worked sketch follows).
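
A brief worked example of the second implication: assuming a normal score distribution, a fixed hiring cutoff, and a hypothetical 0.3 SD coaching gain, the pass rate changes noticeably.

```python
# A worked sketch of how a coaching gain can shift hiring outcomes at a fixed
# cutoff. The effect size, cutoff, and distribution are hypothetical.
from statistics import NormalDist

ability = NormalDist(mu=0.0, sigma=1.0)  # standardized true-score distribution
cutoff = 1.0                             # hire at or above +1 SD
coaching_gain = 0.3                      # assumed coaching effect, in SD units

uncoached_pass = 1 - ability.cdf(cutoff)
coached_pass = 1 - ability.cdf(cutoff - coaching_gain)
print(f"Pass rate without coaching: {uncoached_pass:.1%}")   # ~15.9%
print(f"Pass rate with the coaching gain: {coached_pass:.1%}")  # ~24.2%
```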

5.3. Software and hardware changes

There has been limited published research on this topic, and most of it is 10 to 20 years old. The relevant issues include software upgrades, operating system, screen size, fonts, colors, and changes to "look and feel". The research literature has addressed font size, display size, display color, and changes from paper-and-pencil to computer. The practical issues are readability and whether or not changes make the test easier. According to Wildstrom (1998), differences in monitor size can be offset by changing the resolution and object size. A summary of the literature shows that there are no differences in performance due to display size (Duchnicky & Kolers, 1983), reading ease (display had minimal impact on reading speed; Gould, Alfaro, Finn, & Haupt, 1987), or format (paper-and-pencil vs. computer; Honaker, 1988).
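
To illustrate how such equivalence questions are typically examined, the sketch below computes a standardized mean difference (Cohen's d) between hypothetical paper and computer administrations of the same test; values near zero are consistent with the equivalence findings cited above.

```python
# A minimal mode-equivalence sketch with invented samples: compare scores from
# paper-and-pencil and computer administrations of the same test.
from statistics import mean, stdev
from math import sqrt

paper    = [24, 27, 22, 29, 25, 26, 23, 28, 27, 25]
computer = [25, 26, 23, 28, 24, 27, 24, 29, 26, 25]

# Pooled-SD standardized mean difference (Cohen's d).
pooled_sd = sqrt((stdev(paper) ** 2 + stdev(computer) ** 2) / 2)
d = (mean(computer) - mean(paper)) / pooled_sd
print(f"Cohen's d = {d:+.2f} (values near 0 suggest mode equivalence)")
```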

6. Summary

Technology has become a vital aspect of human resources and personnel-related decisions. The research and projects described in this article provide guidance for how to successfully implement technology-guided solutions. The important point to remember, however, is that technology is not a panacea for all human resource related issues. In fact, technology introduces new concerns that must be addressed to preserve the validity and reliability of HR processes. When properly used, technology can improve the ability of HR professionals to manage personnel-related processes. However, technology does not replace the human judgment necessary to create effective HR systems.

References

Archer, W. B. (1966). Computation of group job descriptions from occupational survey data. USAF PRL Technical Report 66:31.

Bartram, D. (2000). Internet recruitment and selection: Kissing frogs to find princes. International Journal of Selection and Assessment, 8, 261–274.

Bradley, G., & Russell, G. (1997). Computer experience, school support, and computer anxieties. Educational Psychology, 17, 267–284.

Buijck, A., & de Lange, G. (2003). The use of the Internet as the main recruitment and self-selection tool for controller applicants. The Second EUROCONTROL Selection Seminar: FEAST and More-Progress in European Controller Selection Development, Luxembourg.

Chapman, D. S., & Webster, J. (2003). The use of technologies in the recruiting, screening, and selection processes for job candidates. International Journal of Selection and Assessment, 11, 113–120.

Christal, R. E. (1974). The United States Air Force occupational research project. US AFHRL Technical Report No. 0099-3239, 73–75.

Crespin, T. R., & Austin, J. T. (2002). Computer technology applications in industrial and organizational psychology. Cyberpsychology & Behavior, 5, 279–303.

Cronin, B. E. (2002). The impact of realistic job previews and information salience on applicant attraction in web-based recruitment. Master's thesis, The Pennsylvania State University.

Cronin, B. E. (2003). Changing the face of technology in recruitment: The real story. Poster presentation at the 18th Annual Conference of the Society for Industrial and Organizational Psychology, Orlando, FL.

Curtin, P., Phillips, H., Heil, M., & Foster, T. (2005). Using Web-based technology in assessment: Examining benefits and challenges, lessons learned, and the influence of organizational culture. Seminar and webinar presented at the June 2005 meeting of the International Public Management Association for Human Resources (IPMAAC) Conference, Orlando, FL.

Dimock, P. H., & Cormier, P. (1991). The effects of format differences and computer experience on performance and anxiety on a computer-administered test. Measurement and Evaluation in Counseling and Development, 24, 19–126.

Drasgow, F. (2005, October). Technology and testing. Presentation at the Personnel Testing Council of Metropolitan Washington.

Drasgow, F., Luecht, R., & Bennett, R. (2005). Technology and testing. In R. L. Brennan (Ed.), Educational measurement (4th ed.). Washington, DC: American Council on Education.

Duchnicky, R. L., & Kolers, P. A. (1983). Readability of text scrolled on video display terminals as a function of window size. Human Factors, 25, 683–692.

Epstein, J., & Klinkenberg, W. D. (2001). From Eliza to Internet: A brief history of computerized assessment. Computers in Human Behavior, 295–314.

Finegan, J. E., & Allen, N. J. (1994). Computerized and written questionnaires: Are they equivalent? Computers in Human Behavior, 10, 483–496.

Ford, J. M., Stetz, T. A., & Bott, M. M. (2000). Automated content analysis of multiple choice test item banks. Presented at the Annual Conference of the Society for Industrial and Organizational Psychology.

Gould, J., Alfaro, D., Finn, L., & Haupt, R. (1987). Reading from CRT displays can be as fast as reading from paper. Human Factors, 29(5), 497–517.

Guion, R. M. (1998). Assessment, measurement, and prediction for personnel decisions. Mahwah, NJ: Lawrence Erlbaum Associates.

Harter, G. A., & Fitts, P. M. (1956). The functional simulation of complex systems by means of an analog computer, with the F-86D, E-4 system as a specific example. Part I. USAF Personnel Training Research Center Research Report, Vol. 12.

Honaker, L. M. (1988). The equivalency of computerized and conventional MMPI administration: A critical review. Clinical Psychology Review, 8, 561–577.

Laham, D., Bennett, W., & Landauer, T. K. (2000). An LSA-based software tool for matching jobs, people, and instruction. Interactive Learning Environments, 8, 171–185.

Levine, T., & Donitsa-Schmidt, S. (1998). Computer use, confidence, attitudes, and knowledge: A causal analysis. Computers in Human Behavior, 14, 125–146.

Lievens, F., & Harris, M. M. (2003). Research on Internet recruitment and testing: Current status and future directions. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology, Vol. 18. Chichester: John Wiley & Sons, Ltd.

Lund Hansen, J. C. (2003). Web-based pre-selection as part of a new student controller marketing strategy and a new corporate identity. The Second EUROCONTROL Selection Seminar: FEAST and More-Progress in European Controller Selection Development, Luxembourg.

Marrelli, A., Morath, R., Cronin, B., Mulvaney, R., Curtin, P., Parmenter, T., & Petro, J. (2004, June). Matching the test to the technology: A case study. Presentation at the Annual Conference of the International Personnel Management Association Assessment Council, Seattle, WA.

McCormick, E. J. (1979). Job analysis: Methods and applications. New York: American Management Association (AMACOM).

Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin, 114(3), 449–458.

Morath, Quartetti, Bayless, & Archambault (1999). Air traffic controller job analysis. In R. Ramos, M. C. Heil, & C. A. Manning (Eds.) (2000), Documentation of validity for the AT-SAT computerized test battery. Washington, DC: Federal Aviation Administration, Office of Aviation Medicine (DOT/FAA/AM-01-06).

Morgeson, F., Delaney-Klinger, K., Mayfield, M., Ferrara, P., & Campion, M. (2004). Self-presentation processes in job analysis: A field experiment investigating inflation in abilities, tasks, and competencies. Journal of Applied Psychology, 89(4), 674–686.

Peterson, N. G., Mumford, M. D., Borman, W. C., Jeanneret, P. R., Fleishman, E. A., Levin, K. Y., et al. (2001). Understanding work using the Occupational Information Network (O*NET): Implications for practice and research. Personnel Psychology, 54, 451–492.

Ree, M. J., & Carretta, T. R. (1998). Computerized testing in the United States Air Force. International Journal of Selection and Assessment, 6, 82–106.

Sackett, P. R., Burris, L. R., & Ryan, A. M. (1989). Coaching and practice effects in personnel selection. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology. Chichester, NY: Wiley.

Stanton, J. M. (1999). Validity and related issues in web-based hiring. The Industrial Psychologist (TIP), 36, 69–77.

Stone, D. L., & Stone-Romero, E. F. (1998). A multiple stakeholder model of privacy in organizations. In M. Schminke (Ed.), Managerial ethics: Moral management of people and processes (pp. 35–59). Mahwah, NJ: Lawrence Erlbaum Associates.

Wildstrom, S. H. (1998, January 12). Getting the most from a monitor. Business Week, 24.

