
Report No. FAA-RD-79-100

FIELD IMPACT EVALUATION PROCESS ON ELECTRONIC TABULAR DISPLAY SUBSYSTEM

(ETABS)

THE ELECTRONIC TABULAR DISPLAY SUBSYSTEM EVALUATION ASSESSOR SUBGROUP


OCTOBER 1979

Document is available to the U.S. public through the National Technical Information Service, Springfield, Virginia 22161.

Prepared for

U.S. DEPARTMENT OF TRANSPORTATION
FEDERAL AVIATION ADMINISTRATION

Systems Research & Development Service
Washington, D.C. 20590


NOTICE

This document is disseminated under the sponsorship of the Department of Transportation in the interest of information exchange. The United States Government assumes no liability for its contents or use thereof.


Technical Report Documentation Page

1. Report No.: FAA-RD-79-100
4. Title and Subtitle: Field Impact Evaluation Process on Electronic Tabular Display Subsystem (ETABS)
5. Report Date: October 1979
7. Author(s): The Electronic Tabular Display Subsystem Evaluation Assessor Subgroup
9. Performing Organization Name and Address: U.S. Department of Transportation, Systems Research and Development Service and Office of Systems Engineering Management, Federal Aviation Admin., Washington, D.C. 20590
12. Sponsoring Agency Name and Address: U.S. Department of Transportation, Systems Research and Development Service and Office of Systems Engineering Management, Federal Aviation Admin., Washington, D.C. 20590
13. Type of Report and Period Covered: December 1978 - July 1979

16. Abstract: This report describes the process used in conducting a field impact evaluation of the Electronic Tabular Display Subsystem (ETABS). Various group structural and process techniques are described. These include a diagonal slice approach to team formulation and several different methods of team building, process control, and conflict management.

17. Key Words: Electronic Tabular Display Subsystem; Field Impact Evaluation; National Airspace System; Automated Flight Data; Upgraded Stage A; Flight Strip Printer
18. Distribution Statement: Document is available to the U.S. public through the National Technical Information Service, Springfield, Virginia 22161.
19. Security Classif. (of this report): UNCLASSIFIED
20. Security Classif. (of this page): UNCLASSIFIED
21. No. of Pages: 31

Form DOT F 1700.7 (8-72) Reproduction of completed page authorized


TABLE OF CONTENTS

EXECUTIVE SUMMARY

INTRODUCTION

CHAPTER 1. STRUCTURE AND IMPLEMENTATION OF EVALUATION TEAM

1.0 GENERAL
1.1 INITIATING OFFICIAL
1.2 TIMING OF EVALUATION PROCESS
1.3 OUTPUT OF EVALUATION PROCESS
1.4 MODERATOR
1.5 FACILITATOR
1.6 COMPOSITION OF EVALUATION TEAM
1.7 ESTABLISHMENT AND SELECTION OF TEAM MEMBERSHIP
1.8 EVALUATION ASSESSOR SUBGROUP

CHAPTER 2. EVALUATION PROCESS

2.0 GENERAL
2.1 PHASE I - GENERATION OF IMPACT STATEMENTS
2.1.1 DEVELOPMENT OF EVALUATION TEAM
2.1.2 TECHNICAL BRIEFING OF TEAM
2.1.3 GENERATION OF IMPACT WORKBOOK
2.1.4 USE OF IMPACT WORKBOOK
2.2 PHASE II - VERIFICATION AND ANALYSIS
2.2.1 CONSOLIDATION
2.2.2 TEAM MODIFICATION
2.2.3 INTRODUCTION TO QUANTIFICATION
2.3 PHASE III - QUANTIFICATION OF IMPACT STATEMENTS
2.3.1 QUANTIFICATION
2.3.2 VERIFICATION
2.4 PHASE IV - ALTERNATIVES, RECOMMENDATIONS, AND CONCLUSIONS
2.4.1 GENERATION AND CONSOLIDATION OF ALTERNATIVES
2.4.2 FEASIBILITY
2.4.3 SELECTION OF ALTERNATIVES
2.5 PHASE V - DISPOSITION OF RECOMMENDATIONS
2.5.1 MONITOR RESULTS
2.5.2 IMPROVE EVALUATION PROCESS

CHAPTER 3. ASSESSMENT OF FIELD IMPACT EVALUATION PROCESS

3.0 GENERAL
3.1 BACKGROUND
3.2 EVALUATION ASSESSOR SUBGROUP OBJECTIVES
3.3 EVALUATION ASSESSOR SUBGROUP ASSEMBLY
3.4 EVALUATION ASSESSOR SUBGROUP ASSESSMENT FUNCTIONS
3.5 EVALUATION ASSESSOR SUBGROUP PROCEDURES
3.5.1 INITIAL ACTIVITIES
3.5.2 ONGOING EVALUATION
3.5.3 DATA COLLECTION
3.5.4 PRESENTATIONS

CHAPTER 4. ETABS FIELD IMPACT EVALUATION OBSERVATIONS

EXECUTIVE SUMMARY

At the suggestion of the Deputy Administrator, a Diagonal Slice Team of affected field personnel was commissioned by the Director of the Systems Research and Development Service to conduct a field impact evaluation of the Electronic Tabular Display Subsystem (ETABS).

The final result of the Evaluation Team is a Field Impact Evaluation Report showing various impacts, feasible alternatives, and advantages and disadvantages of these alternatives, with recommendations. The purpose of these recommendations is to minimize the impact of ETABS on the workforce. Upon completion of the Field Impact Evaluation Report, the main portion of the Evaluation Team was dissolved.

The evaluation process was broken down into five phases:

Phase I - Generation of Impact Statements

Phase II - Verification and Analysis

Phase III - Quantification of Impact Statements

Phase IV - Alternatives, Recommendations, and Conclusions

Phase V - Disposition of Recommendations

The Evaluation Team met for 1 week of intensive meetings approximately every 6 weeks. The first two meetings represented a learning process on group dynamics, and also about each other's perspectives. The fact that we had done some learning became readily apparent at a subsequent meeting when we were able to break up into subgroups and still be sensitive to all of the perspectives reflected by the entire team.

"Homework" associated with the ETABS assignment consumed about 15 percentof each member's time, when back at his home facility.

The team was led by two separately selected individuals. The "Team Leader," who acted as moderator for the team, also performed administrative functions.

The facilitator taught the team group dynamics and helped to keep the team within the operating guidelines of those dynamics.

It is recommended that a subgroup continue, until the implementation of ETABS, to monitor the recommendations of the Evaluation Team and to determine the success of the process. The Evaluation Assessor Subgroup within the team has generated methods to improve the evaluation process for similar projects in the future.


A field impact evaluation enjoys the advantage of surfacing deficiencies early enough in the program that it becomes an improvement to the engineering model rather than a modification to the production model after it is in the field. The savings in this area alone more than compensate for the expense of a Field Impact Evaluation.

Each participant is richer for his experience as a member of a Diagonal Slice Group. The additional perspectives attained make the individual a more valuable employee to the FAA.


INTRODUCTION

1. PURPOSE. This report describes how to construct an Evaluation Team and perform an assessment of a new technology or system.

2. BACKGROUND. The information in this report was generated by the Electronic Tabular Display Subsystem (ETABS) Evaluation Team and is designed to assist future evaluation teams. The report includes the rationale of the Team's response to the pitfalls they encountered. This report has four chapters. The first chapter describes how to construct an evaluation team, and the second chapter describes how to perform the evaluation. Chapter three describes how to assess the evaluation process, and chapter four records observations from the ETABS field impact evaluation.

3. SELECTION OF THE EVALUATION PROCESS. The evaluation process does not duplicate any existing procedure, policy, or function performed by any segment of the organization. However, it does interact with the organization's responsibility in the procurement of the new system. This new concept of having field personnel participate during the development phase of a new system may be met with resistance by personnel in headquarters. The inter-departmental conflict (headquarters versus field) should not be avoided. This energy must be channeled to produce the best product.

One major advantage of an evaluation team over a committee or staff group is objectivity. Since they are analyzing someone else's project, they do not have any preconceived ideas. By virtue of its makeup, an evaluation team can surface a greater number of viewpoints with more candor. A staff group tends to have a "staff view." Also, a staff group or committee feels pressures of hierarchy that do not exist with a diagonal slice group.

In addition, the short, part-time existence of the group prevents the development of the "staff view" and "empire building."

4. COST OF EVALUATION PROCESS. The cost would vary according to the magnitude of what is to be evaluated. The cost of the ETABS Evaluation Team in employee hours, travel, and per diem is estimated at $120,000 (1979 dollars). This is insignificant compared to the overall cost of the project and the possible rewards, which could outweigh the cost many times over. The cost of future evaluations should be considerably less since the initial evaluation was a learning experience.


5. RECOMMENDED PROCEDURES. This report has various recommendations on how to perform the evaluation of new technology or systems. These procedures were recommended based on the experience of the ETABS Evaluation Team. They are only suggested procedures and any future evaluation team may deviate from these recommendations as much as required to accomplish the assigned task.

6. ATMOSPHERE OF EVALUATION PROCESS. It may be impossible to define the true atmosphere of the evaluation process.

Since ETABS was the first use of the evaluation process, everything was being designed and tried dynamically. The amount of frustration experienced by each member was at times very high. Meetings and activities were conducted with much conflict, tension, and frustration. In addition to the responsibility of evaluating a new system, being placed in a "fish bowl" caused an increase in the amount of pressure experienced.

The whole evaluation process was a difficult learning process for everyone involved. However, despite these difficulties, the overwhelming majority of participants acknowledge the need for the process and their willingness to participate in future evaluations.

This report was generated by the following individuals: Brad Cook, Denver Center; Gerald Mikuenski, Houston Center; and Ron Seppala, Boston Center.

Point of contact for questions or comments regarding this report is Bill Koch, FAA Washington, AEM-301, FTS 426-8794.


CHAPTER 1. STRUCTURE AND IMPLEMENTATION OF EVALUATION TEAM

1.0 GENERAL. This chapter describes how to organize, construct, and implement an evaluation team.

1.1 INITIATING OFFICIAL. The initiating official should be the highest affected manager responsible for the operation impacted by the new technology. (The ETABS Evaluation Team was initiated by the Deputy Administrator. If the evaluation team process is to become the normal or accepted way of doing business, the authority to initiate the process can be delegated down.) The initiating official shall identify the major objectives and any boundaries or limitations. The initiating official shall select the moderator and facilitator.

1.2 TIMING OF EVALUATION PROCESS. The evaluation team should be assembled anytime a change or a new technology comes to the surface which would affect a wide range of the overall organization. In the case of technology, the team should be established as soon as possible after prototype specifications are generated. The team should be dissolved after generation of its recommendations. This should be prior to the generation of the production specifications. A subgroup should continue to exist until the disposition of recommendations is determined. This subgroup would watchdog the recommendations to determine the results of the evaluation team process on the project.

Diagram 1. [Figure: flexibility (the ability to change what is happening) declines while knowledge (about what is happening) grows as the project moves through System Requirements Specifications, Prototype Specifications, Production Specifications, and the Implementation Plan.]

The ETABS Evaluation Team stayed in existence for approximately 8 months. During that period, only 6 weeks of nonconsecutive time was spent on team activities. When back at their facilities, members spent in the neighborhood of 15 percent of their time on team-related work. The rationale for having an approximate 6-week break between meetings is to keep team members in touch with the real world and test their progress after each phase.


1.3 OUTPUT OF EVALUATION PROCESS. The output of the evaluation process is a Field Impact Evaluation Report with recommendations regarding a new technology. This report indicates both the positive and negative impacts of this new technology and makes recommendations with quantized alternatives which are intended to be acceptable to the agency. This report will have a higher degree of credibility than can be obtained via any other method currently in use.

1.4 MODERATOR. The moderator has a multi-function role. This individual should have to some degree both technical knowledge and responsibility for the project. This person shapes the attitude and environment of the team activities. Thus, the moderator is the designer of team functions. The experience gained as a member of the evaluation team can be used as the basis to convert that member to a moderator for a subsequent evaluation team. It would be undesirable to have full-time moderators (i.e., carry over from team to team). The chance of them developing fixed perspectives and curtailing creativity would increase if they were permanent moderators. The following items indicate the major functions performed by the moderator:

.Performs administrative functions for the team and is the focal point between meetings.

.Ensures timely completion of the task within established constraints, without suppressing creativity.

.Helps to keep some of the more heated discussions on track and also frees the facilitator to function unhampered.

.Selects the evaluation team's composition and membership.

.Coordinates establishment of the evaluation team with all interested parties including headquarters, regional, and field personnel.

.Ensures all necessary information, material, and techniques are provided to the team.

.Recommends the elimination of unproductive members who are undermining or interfering with the accomplishment of the objectives. The major responsibility to remove this element from the team rests with the team members.

.Encourages an energetic and productive team.

.Recommends members to form the evaluation assessor subgroup.


1.5 FACILITATOR. The facilitator has a multi-function role. The facilitator teams up with the moderator to participate in the design of the evaluation team functions and activities. In addition to being a designer, the facilitator is an expert in the area of information transfer. This person does not become involved in the content of the team's task, but oversees the process the team is using and helps to keep that process on track. (The ETABS Evaluation Team used a professional facilitator on a contract basis, which has several advantages. One such advantage is that the service is available on an as-needed basis. This type of facilitator would not be contaminated by the way the administration has previously managed new projects.) The following items indicate the major functions performed by the facilitator:

.Exposes the team to various methods in accomplishing specific tasks. If team members have little or no experience in team action dynamics, the facilitator provides the basic information.

.Provides the external force to steer unproductive conflict into constructive conflict and assists the team in developing their approach in resolving conflict.

.Provides a variety of techniques which can be used to resolve problems.

.Forces the evaluation team to identify and examine the methods it uses in decision making along with the quality of these decisions.

.Participates in the establishment and selection of evaluation team composition.

.Participates in the selection of the evaluation assessor subgroup.

1.6 COMPOSITION OF EVALUATION TEAM. The key to the success of the evaluation process is field personnel participation at the appropriate time.

The evaluation team should be composed of people from a diagonal slice of the field organization. A diagonal slice is a selection of personnel representing as many organizational perspectives as possible. This allows for a highly creative and productive atmosphere. The team is limited in size to prevent it from becoming cumbersome. From 5 to 12 people seems to be a manageable size. An essential ingredient of the diagonal slice is equal status within the team for all members. Each member of the team must participate. In this fashion, all viewpoints are surfaced, allowing all members to "see" all facets. This also prevents domination by one or two members of the group. Any attempt to withdraw from the team should be dealt with in a positive manner.
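The diagonal slice concept lends itself to a simple illustration. The sketch below is hypothetical Python, not a procedure from the report: it picks one nominee per organizational level while spreading the picks across specialties, which is the essence of cutting diagonally rather than horizontally across the hierarchy.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    level: str      # organizational level, e.g., journeyman, supervisor
    specialty: str  # e.g., air traffic control, airway facilities

def diagonal_slice(candidates, max_size=12):
    """Pick one nominee per organizational level, preferring the
    specialty least represented so far, so the team stays within
    the 5-to-12-member manageable range with varied perspectives."""
    by_level = {}
    for cand in candidates:
        by_level.setdefault(cand.level, []).append(cand)
    specialty_counts = Counter()
    team = []
    for level in sorted(by_level):
        # Within each level, take the candidate whose specialty
        # has appeared least often on the team so far.
        pick = min(by_level[level], key=lambda c: specialty_counts[c.specialty])
        team.append(pick)
        specialty_counts[pick.specialty] += 1
        if len(team) == max_size:
            break
    return team
```

In practice the moderator and facilitator make this selection by judgment rather than by rule; the fragment only makes visible why a diagonal slice surfaces more viewpoints than a group drawn from a single level.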


During the Verification and Analysis Phase, the concept of the rolling diagonal slice is implemented. Several team members are replaced by individuals who have direct responsibility for the project being evaluated. These new team members should have participated from the beginning of the evaluation process as information sources, but not as active team members. (ETABS experience: The new members had not participated from the beginning. The modification of the team composition caused a great deal of confusion. It required some time for the new members to establish themselves as part of the team.) The rationale for having the new members is to ensure the recommendations would not get discarded because of the lack of ownership or involvement by the personnel responsible for the project. The replaced members become the evaluation assessor subgroup, who are available as information sources. This is in addition to their new role.

1.7 ESTABLISHMENT AND SELECTION OF TEAM MEMBERSHIP. The moderator as the focal point shall request nominations from as wide a spectrum of the affected areas as is practical. The moderator and facilitator shall select the team members from the available candidates. Participation on the evaluation team is an excellent opportunity for personal growth. Exposure to such a wide range of viewpoints and expertise will improve the individual's overall performance. In the selection of team members, certain qualities are desirable. The following items indicate these qualities and some rationale behind them:

.A member must be able to function in a group, be creative, intelligent, and able to handle ambiguity and conflict in a constructive fashion. Each member must be honest, open, candid, dedicated, flexible, and have the ability to participate under pressure.

.Despite the large variation in status between team members within the FAA, each member has equal status within the team. No individual member has the power to control the team's activities as long as the membership enforces this internal equality.

.Team members should be experts in their specific fields. They should be the primary data source, but additional data or feedback can be obtained from their technical peer groups. A minimal amount of their time is used as a data collector.

.Each team member is expected to use his expertise to make recommendations with alternatives. Some of their administrative functions include writing reports, taking minutes, giving verbal presentations, and making arrangements for meetings. Members' knowledge and perspective must be expressed even in the face of opposition.


1.8 EVALUATION ASSESSOR SUBGROUP. As stated in paragraph 1.6, Composition of Evaluation Team, the assessor subgroup is formed from the rolling diagonal slice.

The evaluation assessor subgroup is charged with the responsibility of improving the evaluation process. Currently, experience is the best method to improve and develop the process. There must be an ongoing evaluation within the team and from team to team to make the necessary changes to perfect the process and update this report. The assessors are charged with the following task: refine the evaluation process and provide more objective, immediate feedback to the team.

The major functions of this subgroup are to write reports and give verbal presentations on their observations of the evaluation process.

This subgroup is to generate an interim report after the generation of the recommendations indicating the results of the evaluation process. Their final report is generated after the disposition of recommendations is determined. Much of the data used to generate these reports is obtained by the assessors as observers of the evaluation process. The moderator and facilitator recommend which members form this subgroup, with the team's approval.


CHAPTER 2. EVALUATION PROCESS

2.0 GENERAL. The evaluation process, as described in this report, is the assessment of new technology by field personnel. The impact on the workforce by this new technology is described by both positive and negative quantized impact statements. Recommendations with alternatives are generated which would be accepted by the agency and minimize the impact on the workforce.

The evaluation process has five phases as follows:

Phase I - Generation of Impact Statements

Phase II - Verification and Analysis

Phase III - Quantification of Impact Statements

Phase IV - Alternatives, Recommendations, and Conclusions

Phase V - Disposition of Recommendations
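As an aid to keeping the phases straight, the skeleton below pairs each phase with its principal product, paraphrased from the phase descriptions later in this chapter. It is an illustrative summary only; the report prescribes no software.

```python
# Phase names from the report; the product column paraphrases
# sections 2.1 through 2.5.
PHASES = [
    ("Phase I",   "Generation of Impact Statements",
                  "clear, stand-alone impact statements"),
    ("Phase II",  "Verification and Analysis",
                  "final impact statement document"),
    ("Phase III", "Quantification of Impact Statements",
                  "quantized (sized) impact statements"),
    ("Phase IV",  "Alternatives, Recommendations, and Conclusions",
                  "recommended alternatives with feasibility and cost data"),
    ("Phase V",   "Disposition of Recommendations",
                  "supplement reporting what happened to each recommendation"),
]

for number, name, product in PHASES:
    print(f"{number}: {name} -> {product}")
```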

2.1 PHASE I - GENERATION OF IMPACT STATEMENTS. The Generation of Impact Statements Phase is probably the single most important phase. It comprises four distinct areas as follows:

.Development of Evaluation Team

.Technical Briefing of Team

.Generation of Impact Workbook

.Use of Impact Workbook

This phase does not begin or end with the first meeting. It can extend and overlap subsequent phases. (In the case of ETABS, this phase crystallized at the third meeting during the quantification phase.) The entire objective of this phase is to generate clear, distinct, stand-alone positive and negative impact statements. (In the case of ETABS, the preliminary impact statements were suggestions, recommendations, and alternatives without identifying the exact effect on the workforce.)

2.1.1 DEVELOPMENT OF EVALUATION TEAM. The development of this evaluation team begins with the selection of team members by the moderator and facilitator. After soliciting nominations from the appropriate organizations, the candidates are screened and selected to ensure a balanced diagonal slice composed of the proper qualities.


The moderator schedules the first meeting and selects the location based on the required technical orientation of the team. Included in opening briefings should be one by a high-level manager indicating his interest/support/goals for the group.

The moderator and facilitator generate an information package and distribute it to team members prior to the first meeting. This advance material should contain the prototype specifications and an executive summary of the new technology being evaluated. In addition, it should contain any appropriate information on the evaluation process.

The first few days of the first meeting are used to develop the evaluation team. This is accomplished by informal presentations by the moderator and facilitator on the history and background of the project, along with the purpose and goals of the evaluation team.

The facilitator should present various tools and concepts on group dynamics. Some of the major items presented by the facilitator should be the use of diagonal slice groups, the nature of highly effective groups, giving and receiving feedback, methods of conflict resolution and problem solving, and constructive conflict in discussions. Several team development exercises can be employed to develop the required awareness and skills. (The ETABS Evaluation Team was introduced to the Nominal Group Technique (NGT) with an in-depth exercise in its use and application. The team was also exposed to PIT (Polemics-Inquiry-Theorizing) sessions, which became a continuously used tool.)

The objective of the team orientation process is to integrate a group of individuals into a productive team. Any method which generates a high trust and acceptance level within the team in the shortest possible timeframe can be employed. (In the case of ETABS, during the first meeting, long hours and continuous contact were successful in diverting energy away from keeping up personal appearances to bringing out the true personal characteristics of the individuals. Once it was evident that the trust and acceptance were present, each individual became more open, candid, and productive.)

It is recommended that all meetings be held away from official offices, preferably in hotels or motels. (A portion of the second ETABS meeting was held at a regional office. This period of time was not effective. The rationale to hold the meeting at the regional office was based on the fact that the Regional Director was assigned overseer responsibility of the ETABS project. It is suggested that one of the members of the evaluation team be from that particular regional office. This individual could provide the Director with any required information. In addition, this individual should seek assistance from the Director if serious problems threaten the performance of the evaluation process. The Regional Director's involvement in the evaluation process should be limited to the evaluation and disposition of the recommendations.)

The development and growth of the team is an ongoing process, which ends with the termination of the team.

2.1.2 TECHNICAL BRIEFING OF TEAM. The technical orientation begins with the advance material and a briefing by a knowledgeable individual. The amount and depth of technical information presented about the new technology should be of sufficient proportion to ensure an understanding of its functional capabilities and the basic design philosophy behind it. Each member will need a different level of information based upon his expertise and perspective.

A demonstration of a prototype, model, mock-up, or breadboard version of the new technology is extremely beneficial. A series of presentations or briefings by knowledgeable personnel including question and answer periods is very useful. The team should be given the means to obtain further technical information as the need arises. The members to be added by the rolling diagonal slice concept could provide this additional technical knowledge or at least the means to obtain it.

2.1.3 GENERATION OF IMPACT WORKBOOK. The generation of an impact workbook is the first goal of the evaluation team. The purpose of an impact workbook is to provide the evaluation team a common format to assemble field impact information regarding the new technology. The workbook should contain all possible factors of impact to be investigated by the team. The data collected is assembled and verified in Phase II, Verification and Analysis.
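One way to picture a workbook entry is as a record pairing an impact factor with the data a member gathers on it at his facility. The field names below are invented for illustration; the actual ETABS workbook format is not reproduced in this report.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WorkbookEntry:
    """One preliminary impact factor in the common workbook format
    (hypothetical field names; section 2.1.4 describes how entries
    are validated during Phase I homework)."""
    factor: str                    # area of possible impact
    question: str                  # what the member is asked to verify
    valid: Optional[bool] = None   # None until checked at the facility
    degree: str = ""               # member's judgment of the extent
    peer_comments: List[str] = field(default_factory=list)
```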

2.1.4 USE OF IMPACT WORKBOOK. The individual members of the team are to take this workbook back to their facilities and decide if these preliminary impact factors have any validity and to what degree. (In the case of the ETABS Evaluation Team, the previously generated technology assessment workbook was expanded and adapted as the basic tool to collect data. The method to collect this data was discussed, but it was left up to each member to perform this task as it seemed appropriate.)


It is recommended that the nominal group technique be used to generate the impact factors to be considered. (In the case of the ETABS evaluation, the decision to use the previously generated technology assessment workbook was challenged by several members. Despite the fact that many factors in this workbook were inappropriate, not one single fact or question was removed from it in generating the ETABS workbook. It was repeatedly stated by one of the originators of the technology workbook that nothing in it was sacred, but the ETABS Team still treated it as such. During the third meeting, while performing the quantification function, it became clear to the team that the use of the technology workbook was a mistake.)

Early during each meeting, one member of the team should be assigned the role of decision tracker. This individual shall prepare the minutes for that meeting and record all decisions and unanswered questions generated by the team.

In the generation of the workbook, three items may appear which should be resolved at that time. (In the case of ETABS, the resolution of some of these items was postponed until later meetings.) These items are as follows:

.How decisions are to be made. It is recommended that decisions be made by consensus and that all outputs from the team be acceptable to each member. This does not prevent the generation of any minority report.

.How to handle irresolvable items. (In the case of ETABS, the majority rule technique was established to handle irresolvable items. Once this was established and used on an irresolvable item, the situation never arose again.) A sketch of this consensus-then-majority decision rule appears after this list.

.How to cope with conflict. Many people have a tendency to suppress conflict with people they are not familiar with. This suppression can reduce the quality of the final product. (In the case of ETABS, the PIT sessions, with their role playing, assisted in surfacing conflict and acceptable methods of handling it. These sessions produced a high level of openness, acceptance, and trust. Several members objected to these sessions, and in reality, by the third meeting the PIT sessions had accomplished enough to render their use unnecessary.)
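The consensus-then-majority rule in the first two items above can be made concrete with a small sketch. This is an illustration of the rule as described, not code the team used; here consensus is modeled, simplistically, as a unanimous ballot.

```python
def team_decision(votes):
    """Consensus first; majority vote only for irresolvable items.

    `votes` maps member names to True (accept) or False (reject);
    at least one vote is assumed. Returns the decision and a flag
    showing whether the majority-rule fallback was invoked.
    """
    ballots = list(votes.values())
    if all(ballots) or not any(ballots):
        return ballots[0], False              # consensus reached
    accept = sum(ballots) > len(ballots) / 2  # irresolvable: majority rules
    return accept, True
```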

After the generation of the workbook, the team should clarify exactly what their homework assignment between meetings is.


The assignment is to return to their facility and answer the questions in the workbook. As the experts, they can answer these questions or collect additional data from their peer group. During this task, they can generate additional items of impact they may discover. (In the case of ETABS, much confusion was encountered in performing the homework assignments. This was based on a lack of clear understanding of assignments, despite the fact that during each meeting everyone indicated a complete understanding of their assignments. During the third meeting, an exercise was used to determine how well individuals understood their assignment and to reinforce their understanding. This exercise assumed that they had completed the assignment, and then the data was applied to the task at hand.)

At this point, the team should decide on the goals, location, and dates of the next meeting. The recommended goals are stated in Phase II, Verification and Analysis.

2.2 PHASE II - VERIFICATION AND ANALYSIS. During the Verification and Analysis Phase, all the collected data on the various preliminary impact statements and questions are consolidated, examined, and substantiated. The result of this phase is the final impact statement document. This phase begins by collecting data (remember, a team member can be the sole source) and verifying the preliminary impact statement at the team member's facility. It continues with the consolidation of each member's input into a substantiated impact statement that has been analyzed and accepted by the team. It ends with the generation of the final impact statement report.

A majority of the work in this phase is performed at the second evaluation meeting. This meeting has three major activities.

.Consolidation

.Team Modification

.Introduction to Quantification

The previously indicated activities are performed to accomplish the two goals for the second meeting.

.Generate a draft impact evaluation document which describes the nature and extent of all known impacts.

.Develop an understanding of quantification in terms of total system performance.


2.2.1 CONSOLIDATION. The consolidation activity combines all the various input data and comments presented by team members into a draft impact evaluation report. During this activity, the data is analyzed and verified by team members to ensure its accuracy. (In the case of ETABS, over 600 individual comments were discussed and processed in the preparation of the rough draft report.)

The moderator and facilitator as the team's designers will prepare a schedule of the team's activities. This schedule should have enough flexibility in it to allow any required changes. The flexibility should allow for the possibility of increasing and decreasing the number of days for any given meeting. The moderator and facilitator should present various methods of how to consolidate and evaluate all the available data. (In the case of ETABS, the moderator and facilitator forced an unacceptable method on the team. This method was to break the team into small groups which would consolidate and verify specific factors and areas of impact. This approach caused confusion and questionable results and was abandoned after a short period of time. The method may have been unsuccessful because of the diagonal slice with its numerous different perspectives. A group discussion was held on each and every factor, comment, and question. The information was analyzed, consolidated, and verified by the entire team. This method took a great deal of time, and the first few items received much more attention than the last items. The technique of working in small groups was attempted unsuccessfully several times and was finally used successfully during Phase IV, Recommendations. The reason for its eventual success may have been an increase in the trust level between members and confidence that team members thoroughly understood each other's position.)

The consolidated impact evaluation statements are then organized to form the rough draft impact evaluation report.

2.2.2 TEAM MODIFICATION. The purpose of modifying the team's composition is to accomplish the required objectives using the most efficient method.

As many subgroups as required can be formed. In paragraph 1.6, Composition of Evaluation Team, the rolling diagonal slice method of modifying the evaluation team is described. (In the case of ETABS, this was performed to accomplish the two assigned objectives.) The addition of headquarters personnel should not reduce the validity of the field impact statements since they are added after the generation of the rough draft impact evaluation report.


During this activity, the creation of the evaluation assessor subgroup is accomplished. The detailed data concerning the purpose and responsibility of this subgroup are identified in chapter 3 of this document.

It is envisioned that the need for this particular subgroup would decrease and be eliminated by future teams as the evaluation process is optimized. (In the case of ETABS, an edit subgroup was formed to revise the rough draft impact statements into a near finished product. All members of the team were to provide this subgroup with any additional data. The edit subgroup was to meet just prior to the next meeting. The subgroup was to have complete rewrite license to revise all the statements. The approach is excellent; however, it was unsuccessful because of the lack of direction given and received. The edit subgroup did not take full advantage of their complete rewrite license.)

2.2.3 INTRODUCTION TO QUANTIFICATION. The purpose of the introduction to quantification activity is to prepare the team for Phase III, Quantification of Impact Statements.

This activity is intended to identify all the possible terms which would express the size of an impact statement in relation to the total system performance. The quantifiers should be defined by the team during this activity. In the case of ETABS, this was to quantify the various impacts of the evaluation report in terms of the following items:

.Staffing

.Workload

.Availability

.Reliability

.Labor Management Relations

.Manpower

.Operational Impact

.Dollars

Quantifications were to be considered on a reasonable worst-case basis.
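To make the quantifiers concrete, a quantized impact statement can be imagined as a record carrying a sizing in each of the terms listed above, each estimated on a reasonable worst-case basis. The sketch is purely illustrative; the field names simply echo the list.

```python
from dataclasses import dataclass

@dataclass
class QuantizedImpact:
    """One impact statement sized in the ETABS quantifiers, each
    expressed on a reasonable worst-case basis (illustrative only)."""
    statement: str
    staffing: str = ""
    workload: str = ""
    availability: str = ""
    reliability: str = ""
    labor_management_relations: str = ""
    manpower: str = ""
    operational_impact: str = ""
    dollars: str = ""
```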

2.3 PHASE III - QUANTIFICATION OF IMPACT STATEMENTS. The purpose of the quantification phase is to take each impact statement and determine how significant this impact is in understandable and/or measurable terms.


Phase III begins during the second meeting with the introduction to quantification. The majority of the work is performed prior to the third meeting in the form of homework. Team members calculate the size of the impact at their facilities. It is completed at the third meeting, where all the data is consolidated and verified.

At the completion of this phase, the evaluation team should have generated a document which contains quantized impact statements.

Some forethought should be given to the format of this document. (For example, in the case of ETABS, delaying a decision on an acceptable format until Phase IV, Recommendations, caused serious problems in writing both the original impact statements and the quantized statements.)

Each quantized statement that is generated goes through a verification process.

2.3.1 QUANTIFICATION. Each team member who has expertise, or other sources of expertise available, on specific impact statements generates a sizing (quantification) statement. In most cases, the resources to accurately calculate the sizing of these impacts are limited to the human resources within the team.

These sizing statements are expressed in the terms generated during the introduction to quantification activity in Phase II.

Much of this data is generated between meetings, which permits investigation and analysis of the sizing statements.

2.3.2 VERIFICATION. The purpose of the verification process is to ensure that the sizing statements are as accurate and realistic as possible.

This is performed using the complete resources and expertise of the entire team. Each quantized statement is tested and analyzed in an impartial and honest manner.

2.4 PHASE IV - ALTERNATIVES, RECOMMENDATIONS, AND CONCLUSIONS. The purpose of this phase is to select from feasible alternatives the best recommendations to reduce and/or eliminate unacceptable impacts generated by the new technology.

The phase begins after the completion of the third meeting as an assigned homework task. Each team member is to generate feasible alternatives, with cost estimates where possible, to reduce and/or eliminate undesirable impacts.


The phase is completed with the selection of the most realistic alternatives to make the new technology as acceptable as possible to the workforce.

There are three parts to Phase IV as follows:

.Generation and consolidation of alternatives

.Feasibility

.Selection of alternatives

The generation of the recommendations is second in importance only to the generation of the impact statements. This phase completes a majority of the work of the evaluation team and produces the final product, i.e., how to resolve the problems of impact.

2.4.1 GENERATION AND CONSOLIDATION OF ALTERNATIVES. Prior to the fourth meeting, each member prepares a list of alternatives to resolve the problems identified by the quantized impact statements. Then they generate the best cost/feasibility data available.

During the fourth meeting, the alternatives are consolidated and clarified.

This activity is by far the most creative and innovative in the entire evaluation process. Members use their expertise and understanding of what field personnel really need, and have an opportunity to propose new ways in which to do business.

2.4.2 FEASIBILITY. The purpose of the feasibility activity is to analyze and verify suggested alternative solutions to undesirable impacts. During this activity, team members will start to put forth a determined effort for acceptance of alternatives which will be of greatest advantage to their particular specialty. (In the case of ETABS, Air Traffic Control and Airway Facilities personnel took extremely strong opposite stands on the feasibility of certain alternatives.)

All alternatives which are considered to be unfeasible are discarded. The determination of feasibility is reached by team consensus after completely understanding the alternative.

2.4.3 SELECTION OF ALTERNATIVES. At this point the problems have been clearly identified and all feasible solutions have been verified. The next step is to recommend the best alternatives. During this activity, all the special interests within the team will again attempt to sway the team in making recommendations most advantageous to them. The possibility exists for compromise between various factions within the team in making the final recommendations, or for dual recommendations. (In the case of ETABS, the selection of the final recommendations was made by consensus. Much discussion and testing of the different perspectives occurred prior to final selections.)

The final step is to establish the priority of the various recommendations. From the field's perspective, all the recommendations should be implemented. Some recommendations will have a more substantial positive impact on field personnel than others and should be given more emphasis.

This concludes the evaluation process except for Phase V, Disposition of Recommendations.

2.5 PHASE V - DISPOSITION OF RECOMMENDATIONS. If the evaluation process were to end with Phase IV, two very important questions would remain unanswered.

1. Was the evaluation of the new technology successful?

2. What action has to be taken to improve the process?

The purpose of Phase V (Disposition of Recommendations) is to answer these questions. The following is purely theoretical since the ETABS evaluation process is only in Phase IV, Alternatives, Recommendations, and Conclusions.

To ascertain the value of the evaluation process, the results must be evaluated from an objective viewpoint. It must be determined if the process has any merit or if it is a waste of time and energy. Regardless of the results of any individual evaluation, some action should be taken to improve the process. To accomplish these goals, two activities are required.

.Monitor Results

.Improve Evaluation Process

To perform these activities, a special subgroup is formed whose chief purpose is objectivity. This subgroup should exist until the implementation of the new technology. This subgroup will spend very little time in performing this important task.


2.5.1 MONITOR RESULTS. During this activity, the subgroup shall determine which recommendations were implemented. In addition, they shall determine if the recommendations were worthwhile. The subgroup will investigate and trace exactly what happens to each recommendation. The subgroup will also determine if the predicted quantized impact statements were valid. The results of this activity will indicate if the evaluation process was successful. It may also verify that the normal way of doing business is also valid.

This information should be documented by generating a supplement to the field impact evaluation of the new technology.

2.5.2 IMPROVE EVALUATION PROCESS. The purpose of this activity is to improve the evaluation process by a thorough critique of the process after the fact. This Monday-morning quarterbacking activity should generate a supplement to this report. This supplement should indicate pitfalls to avoid and successful procedures to use. In addition, it should indicate the emotional climate of the evaluation process.


CHAPTER 3. ASSESSMENT OF FIELD IMPACT EVALUATION PROCESS

3.0 GENERAL. The purpose of the Field Impact Evaluation (FIE) Team was to identify the impact that the Electronic Tabular Display Subsystem (ETABS) will have on the workforce. The Evaluation Assessor (EA) Subgroup is charged with the mission of overseeing the functions of the FIE Team and recommending changes to the evaluation process.

This chapter is a guideline for establishing an evaluation assessor subgroup and assessing the field impact evaluation process.

3.1 BACKGROUND. The EA Subgroup associated with ETABS was composed of three members of the Field Impact Evaluation Team. The EA Subgroup is commissioned to recommend procedures and further development of the FIE process. Historically, the FAA has suffered from poor field acceptance when implementing new systems. The Field Impact Evaluation procedure is a realistic method of addressing field considerations in a systematic manner.

The potential for enhancing field acceptance of new technology through its use looks very promising.

3.2 EVALUATION ASSESSOR SUBGROUP OBJECTIVES. Once the Field Impact Evaluation Project Initiating Official has determined that an ongoing assessment within the evaluation team is required, the EA Subgroup objectives should be well defined. EA Subgroup reports and presentations should be clearly specified when orienting the EA Subgroup Members. Timetables and resource limitations should be included.

3.3 EVALUATION ASSESSOR SUBGROUP ASSEMBLY. The size of the EA Subgroup may vary proportionally to the size of the Field Impact Evaluation Team. For example, the ETABS EA Subgroup consisted of 3 of the 13 members. The subgroup was formed using the rolling diagonal slice concept. A variety of procedures may be employed to determine who should serve as subgroup members. It is suggested that a list of qualification requirements, personal attributes, and characteristics be established to aid in identifying potential candidates.

.A high level of commitment to the project

.Writing skills

.Good oral communication skills (public speaking)

.Time availability

.High conceptual ability

.Experience in field operations

.A high tolerance level for frustration and ambiguity


Participation by the Field Impact Evaluation Team is considered essential in selecting the subgroup membership. In the case of ETABS, the moderator and facilitator nominated individuals, but selection was by team consensus. There was no specific leader assigned or established in the subgroup, but general guidance was provided by the team facilitator.

The timeliness of the selection of the subgroup membership and the initiation of their activities is critical to the effective outcome of the assessment project.

The subgroup membership selection must be made after all evaluation team members have gained the fundamental experience of the subject evaluation process. It must not occur so early as to distract interest or weaken their commitment to the project.

3.4 EVALUATION ASSESSOR SUBGROUP ASSESSMENT FUNCTIONS. The following list of functions was developed and employed by the ETABS Assessor Subgroup. It is presented as a guideline. It may not be the best or most complete list, but it is a starting point for future subgroups.

.Participation in the field evaluation process

.Observation of the methods employed by field evaluation team

.Examination and validation testing of the FIE Team report and its supporting data

.Interview FIE Team members

.Generation of questionnaires to collect data from FIE Team members

.Provide constructive feedback to the evaluation team and raise questions pertaining to the validity of the process including its conclusions and recommendations

.Initiate action to ensure a procedure be established to track the recommendations generated by the evaluation team

3.5 EVALUATION ASSESSOR SUBGROUP PROCEDURES. The following list of initial and ongoing procedures served the purposes of the ETABS Team well. These procedures can be tailored to meet the needs of any evaluation assessment.

3.5.1 INITIAL ACTIVITIES. The Evaluation Assessor Subgroup must generate an initial assessment plan which should include the following activities.


.Determine if final assessor reports are required and if so, when and in what form

.Prepare oral presentations

.Determine a method of data verification

.Determine and assign assessor functional workloads

.Develop a procedure to maintain the interface with the evaluation team

.Establish an interim coordination and communication procedure to assist each other with homework assignments

3.5.2 ONGOING EVALUATION. Critique of the ongoing Field Impact Evaluation sessions is an essential activity of the assessor subgroup.

The following items illustrate the ongoing evaluation process:

.Meeting separately from the FIE Team, the subgroup critiques the team's progress, including the value of each session or meeting.

.Feedback is given to the ETABS Evaluation Team after a review and analysis of a completed task.

.Recommendations are made to improve the evaluation process.

3.5.3 DATA COLLECTION. Early implementation of procedures to gather and preserve a wide variety of data is considered a must activity for the assessor subgroup. The suggested methods are listed below:

.Arrange for the subgroup to have copies of all reports and working documents of the FIE Team.

.The subgroup members should make a complete set of notes to ensure group memory. The notes should capture the mood, feelings, commitments, and productivity levels of the FIE Team.

.Use questionnaires designed to capture specific data. This should provide statistical information that may be useful in formulating future decisions.

.Conduct confidential written and oral critiques about the process with individual team members.


3.5.4 PRESENTATIONS. The Evaluation Assessor Subgroup is tasked to provide various presentations. These presentations will be concerned with both the evaluation of new technology and the evaluation process. The following items indicate the required activities to achieve this:

.Correlate all appropriate data in final written report

.Develop the required visual and oral presentation package

.Prepare guidelines and changes for use by future evaluation teams as required


CHAPTER 4. ETABS FIELD IMPACT EVALUATION OBSERVATIONS

4.0 The original concept that selection of ETABS Evaluation Team Members was a critical phase of the process has proven to be well founded. The feelings and emotions of team members ran very high during most meeting sessions. The candor required by members of the ETABS Evaluation Team to arrive at consensus on high-interest issues forced all members near their limits of patience on numerous occasions. Different team members threatened to leave the team at times when progress seemed impossible. On some occasions, members withdrew physically and mentally for short periods of time, only to rejoin the team of their own initiative or to be drawn back by other team members and then to make greater contributions than before toward the resolution of the current issue. Emotional reactions to some issues resulted in agreement to delay some discussions and decisions until a later time when members could be more objective.

The selection criteria served well in this case since the team was never dominated by any single member and peer pressure control was never necessary, but the importance of membership selection is to be highly emphasized.

Likewise, great care must be exercised when selecting team members to serve as evaluation assessors. Additional workload and stress of the dual role can easily exceed the commitment level of some team members.

The assessor subgroup functioned primarily as a leaderless group, which appears to have been the correct procedure for this group. We recommend it be employed by future teams.

The assessment procedure included keeping data records of the evaluation team activities and observing the progress of each team session while making notes of the depth of concern, feelings, logic used, and data presented in support of each impact issue. The assessors attempted to identify the pitfalls and traps that triggered the highly emotional responses of team members. Personal attacks, polarization by technical specialties, lack of expertise, unexpected boundary conditions, and inattentiveness of some members always resulted in a loss of productivity. The assessors observed, in contrast, that when the task at hand was well defined and understood, with adequate data, expertise, and harmony, the quantity and quality of the output was greatly enhanced.

Decision making by consensus was agreed to by all ETABS Field Evaluation Team Members, with majority vote to be employed only when consensus could not be reached. Majority vote was used only twice in 5 weeks of meetings of the ETABS Field Evaluation Team. Any team member could halt the action on any issue presented when strong feelings were expressed and/or convincing supporting data was provided.


All team members agreed that their experience while participating in this evaluation would benefit the FAA by making them more capable and valuable employees; that the field acceptance of ETABS would be greatly enhanced by the improvements to the systems resulting from the Field Impact Evaluation Team recommendations; and that field acceptance would also be good as a result of the credibility gained by involving field employees in addressing field impact problems in the early stages of systems design.

All ETABS Field Impact Evaluation Team Members expressed feelings of confusion and frustration when directions were unclear or boundary conditions were not well defined. It was necessary to struggle with the team task definition at the beginning of each series of meetings following a 5- or 6-week break between meetings. In some cases, new data or points of view gained between meetings required discussion and evaluation before progress could proceed on the subject task at hand. However trying some sessions seemed, all members of the team said they would serve on other evaluation teams if requested.

Outbursts of emotional frustration were not limited to field team members. The team facilitator became involved on two occasions when progress lagged and team members felt they were being manipulated, or when guidance was lacking or misleading. Some lack of trust resulted, and the team facilitator and moderator had to prove themselves and regain the confidence of the team members. As observed during this Field Impact Evaluation Process, the facilitator role was a very demanding one because the team members, at times, were very goal oriented and impatient to proceed with the project at the risk of sacrificing performance quality. However, the team facilitator's keen observations and rational logic persisted and resulted in a strong project commitment by all team members, matching the complete commitment in evidence by the team facilitator and moderator. The evaluation assessor group rated the overall facilitator performance as excellent in the ETABS Field Impact Evaluation Effort.

The moderator role appeared equally as difficult as that of the facilitator during the ETABS Field Impact Evaluation. Observations revealed that great expectations for suggestions and new techniques to make progress were directed toward the moderator by team members when team activities were bogged down, and in most cases, problem summaries, objective redefinition, and critique were just the ticket for renewed emphasis and productivity.

Role playing was employed as a critique technique in a process referred to as PIT sessions following each ETABS session for the first three meetings of the team. Two team members were selected to present and discuss the pros of each session and, likewise, two would present and discuss the cons of each session while playing a role. This technique provided the acceptable cover, enabling open discussion of the sensitive areas limiting progress in the group and complementing the productive contributions of specific team members. The group tired of this process and initiated a modified version of PIT without the role-playing procedure for the last two series of meetings. It was felt that sufficient trust existed among team members to enable open and candid discussions of the most positive and negative aspects of team performance at this point in time. Therefore, it is agreed the modified PIT was a successful form of critique since the confidence level was sufficiently high.

Most ETABS Field Impact Evaluation Team Members expressed concern that removal of the three evaluation assessors from the mainstream of activity during the second meeting to initiate the assessment functions resulted in an adverse effect on the team progress due to this loss of experience and data resource. They suggested that specific assessors be assigned to conduct the assessor functions, if assessment is considered essential, and that all original evaluation team members be left in the team. Likewise, it was recommended that headquarters and technical resource members participate as full team members from the first meeting.

Field testing of some of the ETABS concepts and how ETABS would be accepted in the field was less than encouraging since the system procurement and installation is so far in the future. Most field employees interviewed concerning the field impact of ETABS lost interest in discussing the system's merits and impacts when informed that systems installation would be some time in 1984 or 1985.

The Field Impact Evaluation Team Members endorse the evaluation process as a useful tool with many applications in the FAA.
