
DOT/FAA/AM-94/9
AD-A280 119

Office of Aviation Medicine
Washington, D.C. 20591

Validity of the Air Traffic Control Specialist Nonradar Screen as a Predictor of Performance in Radar-based Air Traffic Control Training

Dana Broach
Carol A. Manning

Civil Aeromedical Institute
Federal Aviation Administration
Oklahoma City, Oklahoma 73125

April 1994

Final Report

This document is available to the public through the National Technical Information Service, Springfield, Virginia 22161.

U.S. Department of Transportation
Federal Aviation Administration


NOTICE

This document is disseminated under the sponsorship of the U.S. Department of Transportation in the interest of information exchange. The United States Government assumes no liability for the contents or use thereof.


1. Report No.: DOT/FAA/AM-94/9
2. Government Accession No.:
3. Recipient's Catalog No.:
4. Title and Subtitle: Validity of the Air Traffic Control Specialist Nonradar Screen as a Predictor of Performance in Radar-Based ATC Training
5. Report Date: April 1994
6. Performing Organization Code:
7. Author(s): Dana Broach and Carol A. Manning
8. Performing Organization Report No.:
9. Performing Organization Name and Address: FAA Civil Aeromedical Institute, P.O. Box 25082, Oklahoma City, OK 73125
10. Work Unit No. (TRAIS):
11. Contract or Grant No.:
12. Sponsoring Agency Name and Address: Office of Aviation Medicine, Federal Aviation Administration, 800 Independence Avenue, S.W., Washington, DC 20591
13. Type of Report and Period Covered: Final Report
14. Sponsoring Agency Code:
15. Supplemental Notes: Work was performed under research task AAM-C-92-HRR-123.
16. Abstract:

Between January 1986 and March 1992, the Federal Aviation Administration's 42-day Nonradar Screen was used to identify Air Traffic Control Specialist (ATCS) candidates with the highest potential to succeed in the rigorous ATCS field training program. The central question addressed in this study was whether the Nonradar Screen was a valid employee selection procedure in view of the prevalence of radar in today's air traffic control system. To answer that question, we investigated the Nonradar Screen's criterion-related validity as a predictor of subsequent performance in radar-based air traffic control training. We hypothesized that the Nonradar Screen would add incremental validity over aptitude test scores in predicting performance in radar-based air traffic control (ATC) training conducted at the FAA Academy 1 to 2 years after entry into the occupation. Student aptitude test scores and Nonradar Screen final composite scores were regressed on final composite scores earned in radar-based ATC training. Results showed that Nonradar Screen composite scores had incremental validity over the written ATCS aptitude test for predicting radar-based training scores in both en route (ΔR² = .08, F(2,438) = 36.52, p ≤ .001) and terminal (ΔR² = .10, F(2,658) = 77.66, p ≤ .001) radar training, without correcting for range restriction due to explicit selection on the Nonradar Screen final composite score. After corrections for restriction in range, Nonradar Screen scores accounted for an additional 20% of variance in en route (ΔR² = .20, F = 146.84, p ≤ .001) and an additional 16% of variance in terminal (ΔR² = .16, F = 178.58, p ≤ .001) radar training scores. These results indicated that the Nonradar Screen was a valid predictor of performance in radar-based ATC training. Similarities in nonradar and radar procedures and techniques are offered as possible explanations for the finding of criterion-related validity for the ATCS Nonradar Screen.

17. Key Words: Air Traffic Controller; Selection; Validity
18. Distribution Statement: Document is available to the public through the National Technical Information Service, Springfield, Virginia 22161.
19. Security Classif. (of this report): Unclassified
20. Security Classif. (of this page): Unclassified
21. No. of Pages: 14
22. Price:

Form DOT F 1700.7 (8-72) Reproduction of completed page authorized


ACKNOWLEDGMENTS

The authors gratefully acknowledge the many years of support for data collection given by Gwen Sawyer and others at the FAA Academy in the course of this research. This work was performed under research task AAM-C-92-HRR-123. The opinions expressed in this paper are those of the authors and do not necessarily reflect the official position of the Federal Aviation Administration. An earlier version of this paper was presented at the 4th Annual Meeting of the American Psychological Society, San Diego, CA.



VALIDITY OF THE AIR TRAFFIC CONTROL SPECIALIST (ATCS) NONRADAR SCREEN AS A PREDICTOR OF PERFORMANCE IN RADAR-BASED AIR TRAFFIC CONTROL TRAINING

The FAA air traffic control specialist (ATCS) selection system, from October 1985 through January 1992, consisted of two stages. The first stage was the written Office of Personnel Management (OPM) air traffic control selection test battery. Less than 10% of over 200,000 applicants that completed the first stage written examination were chosen to progress to the second stage, the 42-day Nonradar Screen. During this period, of the 14,392 persons that entered the Nonradar Screen, 56.6% were successful and assigned to field facilities for up to 5 years of developmental training. Some 80% of these trainee controllers, termed "developmentals," were assigned to training in terminal or en route facilities equipped with radar. Terminal facilities provide air traffic control (ATC) services at and around airports, while en route facilities generally provide ATC services between airports. In view of the high placement rate into radar-equipped facilities, concern was raised about the validity of the Nonradar Screen as a predictor of performance in the radar-based environment of today's air traffic control system (Aerospace Sciences, Inc., 1991).

Della Rocco, Manning, and Wing (1990) had also questioned the validity of the Nonradar Screen as a predictor of success in radar-based air traffic control. They compared the content of the Nonradar Screen to tasks performed by an en route radar controller and concluded that many of the behaviors assessed in the Nonradar Screen were similar to those required in the radar environment (p. 19). No statistical analyses of the relationship of Nonradar Screen score to radar-oriented criteria were presented to buttress the argument for content validity. However, Della Rocco, et al. (1990) did show that the Screen predicted status in field training for persons assigned to en route air traffic facilities (r = -.24, N = 406, p ≤ .01). Field training status in that study was coded: 1 = Full Performance Level; 2 = In Training in Original Option; 3 = Switched Option; and 4 = Failed. The estimated population validity coefficient for the Screen after correcting for restriction in range was -.44.

The purpose of this research was to build on that previous study by empirically assessing the criterion-related validity of the Nonradar Screen for the prediction of performance in radar-based ATC training. Radar training was used in this study as a surrogate criterion for actual radar ATC job performance because adequate on-the-job measures were not available. We hypothesized that the final composite score in the Nonradar Screen would show significant incremental validity over the OPM aptitude test score in the prediction of radar-based ATC training performance.

Method

Sample

The sample used in this study was comprised of 1,639 first-time competitive entrants to the Screen who had also attended the en route or terminal radar training programs. The sample entered the Nonradar Screen during the years 1987 to 1990, and attended the radar course at some time between 1988 and 1991. Table 1 presents overall sample demographic characteristics, as compared with the population of all comparable first-time, competitive Nonradar Screen entrants entering the system since October 1985. Fewer minorities and women were represented in this sample than were in the population of Nonradar Screen entrants. Controllers assigned to terminal facilities were also over-represented in comparison to historical placements due to differences in program enrollment policies.

Measures

Aptitude score. The written aptitude test was administered by the U.S. Office of Personnel Management (OPM) as the first stage in the ATCS selection system. The general development, psychometric characteristics, and validity of this test battery have been extensively described (Della Rocco, Manning, & Wing, 1990; Manning, Della Rocco, & Bryant, 1989; Rock,


Table 1
Sample demographic characteristics

Characteristic    Category             Population (N = 14,392)   Sample (N = 1,639)
Age               Mean                 26.00                     25.42
                  SD                   2.99                      2.76
Sex               Male                 79.6% (11,460)            86.6% (1,420)
                  Female               20.4% (2,932)             13.4% (219)
Race              Native American      0.6% (91)                 0.5% (8)
                  Asian                1.4% (195)                0.4% (7)
                  African American     5.7% (819)                3.2% (52)
                  Hispanic Non-white   3.6% (525)                1.9% (31)
                  White Non-hispanic   85.9% (12,366)            91.3% (1,496)
                  Unknown              2.8% (396)                2.7% (44)
Assigned Option   En Route             33.3% (4,786)             39.7% (651)
                  Terminal             23.3% (3,354)             60.3% (988)
                  Not Applicable¹      43.4% (6,252)

NOTES: ¹Assigned option not applicable for persons who failed or withdrew from the Nonradar Screen.

Dailey, Ozur, Boone, & Pickrel, 1982; Sells, Dailey, & Pickrel, 1984). The written civil service ATCS aptitude battery was composed of: (a) the Multiplex Controller Aptitude Test; (b) a test of Abstract Reasoning; and (c) an Occupational Knowledge Test. Results from the test battery were combined with any veteran's preference points to yield a final civil service rating (RATING) for competitive entrants. This rating was used to rank-order competitive ATCS job applicants within statutory guidelines, such that hiring was done on the basis of merit (Aul, 1991). A candidate with a qualifying aptitude score was also required to undergo medical and security evaluations and complete an interview before being hired. The successful applicant was then hired by the FAA and enrolled in the Nonradar Screen. This overall civil service RATING was used as the measure of aptitude for the ATCS occupation in this study. Mean aptitude scores for the sample of Nonradar Screen students who attended radar training are compared with the population of Nonradar Screen entrants for which scores were available in Table 2. Mean RATING differences between the samples and population were not statistically significant.

ATCS Screen score. Persons competitively hired into the ATCS occupation (GS-2152; U.S. Department of Labor (1977) Dictionary of Occupational Titles job code 193.162-018) by these civil service procedures reported to the FAA Academy and were enrolled in the Nonradar Screen. The Nonradar Screen was originally established in response to recommendations made by the U.S. Congressional House Committee on Government Operations (U.S. Congress, 1976) to reduce field training attrition rates. The Nonradar Screen was based upon a miniaturized training-testing-evaluation personnel selection model (Siegel, 1978, 1983; Siegel & Bergman, 1975) in which individuals with no prior knowledge of the occupation were trained and then assessed for their potential to succeed in a job.
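The successive selections described above (fewer than 10% of applicants pass the OPM battery; 56.6% pass the Screen) are what produce the range restriction addressed later in this report. A minimal simulation sketch, using hypothetical data and an assumed applicant-pool correlation of .50 (none of these numbers are the report's), shows how explicit selection on a predictor shrinks its observed correlation with a criterion:

```python
import random

def pearson_r(xs, ys):
    # plain Pearson correlation, no external libraries
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(1)
# hypothetical applicant pool: aptitude and later training
# performance correlated at rho = .50 in the full pool
pool = []
for _ in range(20000):
    aptitude = random.gauss(0, 1)
    performance = 0.5 * aptitude + (1 - 0.5 ** 2) ** 0.5 * random.gauss(0, 1)
    pool.append((aptitude, performance))

r_pool = pearson_r([a for a, _ in pool], [p for _, p in pool])

# explicit selection: only the top ~10% on aptitude are hired,
# mirroring the < 10% who progressed past the OPM battery
cutoff = sorted(a for a, _ in pool)[int(0.90 * len(pool))]
hired = [(a, p) for a, p in pool if a >= cutoff]
r_hired = pearson_r([a for a, _ in hired], [p for _, p in hired])

print(round(r_pool, 2), round(r_hired, 2))
```

Because only high scorers survive selection, the predictor's variance among hires is a fraction of its pool variance, and the observed validity coefficient is biased toward zero; this is why the report describes its uncorrected correlations as likely underestimates.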


Table 2
Means and standard deviations of RATING, SCREEN, and final radar composite scores (ENRT-RAD, TERM-RAD) for the sample and the population of first-time competitive Nonradar Screen entrants


Overall, 56.6% of all entrants successfully completed the Nonradar Screen over the period of January 1986 to March 1992.

Thirteen assessments of performance, including classroom tests, observations of performance in laboratory simulations of non-radar air traffic control, and a final written examination, were made during the Nonradar Screen (Della Rocco, Manning, & Wing, 1990). The final summed composite score (SCREEN) was weighted 20% for academics, 60% for laboratory simulations, and 20% for the final examination. A minimum SCREEN score of 70 was required to pass the Nonradar Screen. This final composite score was the predictor of interest in this study. SCREEN scores for this sample are compared in Table 2 with the population of first-time competitive Nonradar Screen entrants. Mean SCREEN scores for the terminal and en route samples were higher than those of the population, due to explicit selection of the sample on SCREEN.

Radar score. Those persons who successfully completed the Nonradar Screen entered into long-term occupational technical training as developmentals in either en route or terminal facilities. Both en route and terminal controllers ensure the separation of aircraft by using information about the speed, direction, and altitude of aircraft to (a) formulate clearances and (b) communicate those clearances to pilots. Clearances are sets of instructions designed to ensure the safe, orderly, and expeditious flow of air traffic.

After successfully completing the Screen, ATCSs generally reported to their specific facility assignments, and received training on the ATC procedures specific to their assigned airspace. During the period examined in this study, most controllers assigned to facilities utilizing radar procedures returned to the FAA Academy for initial radar training at the Academy Radar Training Facility (RTF). Radar training courses were conducted separately for each type of facility using the high-fidelity simulation capabilities of the Academy RTF.

The Academy RTF terminal ("Terminal RTF") and en route ("En Route RTF") courses instructed the developmental controller in basic radar techniques in the safety of a simulated airspace before the controller applied those skills in a live traffic environment during subsequent phases of on-the-job training. Instruction covered topics such as principles of radar, radar identification procedures, radar separation procedures, vectoring, speed control, and radar handoffs (Boone, van Buskirk, & Steen, 1980; Federal Aviation Administration, 1991). Topics specific to a facility type were also covered in the respective courses, such as departure and arrival procedures in Terminal RTF. Didactic classroom instruction was provided to students, and written, multiple-choice examinations were administered at the end of academic instruction. Since 1986, those written examinations comprised 30% of the final radar composite score in each option. Students then applied that knowledge in a series of increasingly complex radar control problems. The last five control problems were graded, and comprised 70% of the total composite grade in each radar course. Mean final radar composite scores (ENRT-RAD for en route, TERM-RAD for terminal) for this sample are compared with their respective population means in Table 2.

Procedure

Multiple regression was used to test the hypothesis that performance in the Nonradar Screen (SCREEN) added incremental validity over aptitude (RATING) in the prediction of performance in radar training (ENRT-RAD or TERM-RAD). Separate analyses were conducted for the en route and terminal courses in light of apparent differences between control techniques, procedures, and rules in the two ATC environments. The analyses were conducted using both raw data and a correlation matrix corrected for explicit and incidental restriction in range due to selection on SCREEN, as required by the Uniform Guidelines on Employee Selection Procedures (Equal Employment Opportunity Commission, 1978). Specifically, correlations between RATING and SCREEN for each group were corrected for range restriction due to prior selection on SCREEN. Correlations between RATING and radar composite scores for each group were corrected for incidental restriction in range due to the selection of the sample on SCREEN. Finally, the correlations between SCREEN and ENRT-RAD and TERM-RAD were corrected for explicit restriction in


range due to selection on SCREEN, using formulae presented by Ghiselli, Campbell, and Zedeck (1981, p. 299). Corrected matrices for en route and terminal trainees were submitted separately for regression analysis. In the regression analyses, RATING was first entered into the prediction equation to assess the amount of variance in radar composite scores accounted for by aptitude. SCREEN was then regressed on radar performance by using a stepwise entry to assess the incremental validity of SCREEN for each ATC environment.

Results

En Route

The zero-order correlations between RATING, SCREEN, and ENRT-RAD are presented in Table 3. These correlations are likely to be underestimated, because of the high degree of restriction in range due to explicit, successive selections on both RATING and SCREEN. Performance in the Nonradar Screen was significantly correlated with performance in en route radar training (r = .28, N = 533, p ≤ .001). Correcting for explicit restriction in range increased the RATING-SCREEN and SCREEN-ENRT-RAD correlations to .37 and .50 respectively (as shown in parentheses in Table 3). The correlation between RATING and SCREEN increased from .20 (N = 549, p < .001) to .22 after correction for incidental restriction in range. This was consistent with other studies examining the validity of the written aptitude examination (Manning, Della Rocco, & Bryant, 1989; Manning, Kegg, & Collins, 1989).

The results of the en route regression analysis are presented in Table 4. RATING was forced into the equation in the first step (R = .17, F(1,439) = 13.19, p ≤ .001), and accounted for about 3% of variance in ENRT-RAD. SCREEN was then regressed on ENRT-RAD in the second block, using a stepwise procedure to control variable entry. SCREEN accounted for an additional 8% of variance in ENRT-RAD (ΔR² = .08, F = 36.52, p ≤ .001) without correction for restriction in range. After correcting the SCREEN-ENRT-RAD correlation for restriction in range, SCREEN accounted for an additional 20% of variance in radar training performance (ΔR² = .20, F = 146.84, p ≤ .001). The corrected partial correlation between SCREEN and radar performance for en route developmentals was .48 (t(1,438) = 12.12, p ≤ .001).

Terminal

The zero-order correlations between RATING, SCREEN, and TERM-RAD are presented in Table 5. The uncorrected correlation of .26 (N = 827, p < .001) between RATING and SCREEN was consistent with other studies examining the validity of the written aptitude examination (Manning, Kegg, & Collins, 1989), as was the correlation of .19 (N = 661) between RATING and TERM-RAD. These correlations were

Table 3
En Route descriptive statistics, zero-order correlations, and correlations corrected for range restriction

Measure    Mean    SD     N     RATING         SCREEN
RATING     92.37   4.47   549
SCREEN     79.88   5.91   651   .20*** (.37)
ENRT-RAD   84.14   5.27   533   .17*** (.22)   .28*** (.50)

NOTES: Correlations corrected for range restriction shown in parentheses. ***p ≤ .001
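The parenthesized values in Table 3 come from correcting for explicit selection on SCREEN, using formulae the report attributes to Ghiselli, Campbell, and Zedeck (1981, p. 299). The sketch below uses the standard correction for explicit selection on the predictor; the SD ratio of 1.98 is an illustrative assumption chosen so that the restricted .28 corrects to roughly the printed .50, since the report does not give the unrestricted standard deviations:

```python
def correct_explicit(r, k):
    """Correct a correlation for explicit selection on the predictor.

    r: correlation observed in the restricted (selected) group
    k: ratio of unrestricted to restricted predictor SD (S / s, k >= 1)
    """
    return r * k / (1 - r * r + r * r * k * k) ** 0.5

# no restriction (k = 1) leaves the correlation unchanged
print(round(correct_explicit(0.28, 1.0), 2))   # 0.28

# with an assumed SD ratio of 1.98, the restricted SCREEN-ENRT-RAD
# correlation of .28 corrects to roughly the .50 shown in Table 3
print(round(correct_explicit(0.28, 1.98), 2))  # 0.5
```

The correction grows with the severity of the restriction (larger k), which is consistent with the sizeable jumps from .20 to .37 and from .28 to .50 reported in the table.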


Table 4
Hierarchical regression of en route radar training performance (ENRT-RAD) on RATING (step 1) and SCREEN (step 2)
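The regression summarized in Table 4 can be sketched from a correlation matrix alone; the terminal analysis reported later is the same computation with its own correlations and N. A minimal sketch under that assumption; the example values below are illustrative and will not exactly reproduce the report's printed F statistics, which were computed on the complete-case samples:

```python
def r2_two_predictors(ry1, ry2, r12):
    # squared multiple correlation of y on predictors 1 and 2
    return (ry1 ** 2 + ry2 ** 2 - 2 * ry1 * ry2 * r12) / (1 - r12 ** 2)

def incremental_validity(ry1, ry2, r12, n):
    """Delta R-squared and its F test when predictor 2 (e.g., SCREEN)
    is added after predictor 1 (e.g., RATING); F has (1, n - 3) df."""
    r2_full = r2_two_predictors(ry1, ry2, r12)
    delta_r2 = r2_full - ry1 ** 2
    f = delta_r2 * (n - 3) / (1 - r2_full)
    return delta_r2, f

def partial_r(ryz, ryx, rxz):
    # correlation of y and z with x partialled out
    return (ryz - ryx * rxz) / ((1 - ryx ** 2) * (1 - rxz ** 2)) ** 0.5

# illustrative case: uncorrelated predictors, so the increment for
# predictor 2 is simply its squared zero-order correlation
dr2, f = incremental_validity(0.30, 0.40, 0.0, 100)
print(round(dr2, 2))  # 0.16
```

Applied to the corrected correlations in Table 3, the partial_r form gives values in the neighborhood of the partial correlations the report tests with t statistics, with small differences attributable to rounding in the printed matrix.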


Table 5
Terminal descriptive statistics, zero-order correlations, and correlations corrected for range restriction

Measure    Mean    SD     N     RATING         SCREEN
RATING     92.34   4.65   827
SCREEN     77.78   6.78   988   .26*** (.41)
TERM-RAD   80.32   6.44   988   .19*** (.32)   .31*** (.50)

NOTES: Correlations corrected for range restriction shown in parentheses. ***p ≤ .001

likely to be underestimated, because of the high degree of restriction in range due to explicit, successive selections on both RATING and SCREEN. SCREEN was significantly correlated with performance in terminal radar training (r = .31, N = 805, p ≤ .001). Correcting for explicit restriction in range due to selection on SCREEN increased the RATING-SCREEN and SCREEN-TERM-RAD correlations to .41 and .50 respectively (as shown in parentheses in Table 5). The RATING-TERM-RAD correlation increased to .32 after correction for incidental restriction in range due to selection of the sample on SCREEN.

The results of the terminal regression analysis are presented in Table 6. RATING was forced into the equation in the first step (R = .19, F(1,659) = 23.66, p ≤ .001), and accounted for about 3% of variance in TERM-RAD. SCREEN was then regressed on TERM-RAD in the second block, using a stepwise procedure to control variable entry. SCREEN scores accounted for an additional 10% of variance in terminal radar training performance (ΔR² = .10, F = 77.66, p ≤ .001). After correcting the SCREEN-TERM-RAD correlation, SCREEN scores accounted for an additional 16% of variance in TERM-RAD (ΔR² = .16, F = 178.58, p ≤ .001). The corrected partial correlation between SCREEN and radar performance for developmentals assigned to terminal facilities was .44 (t(1,659) = 13.63, p ≤ .001).

Discussion

The results suggest that, for developmentals assigned to en route facilities, the Nonradar Screen score added significantly to the prediction of radar training performance, over and above the contribution of the aptitude score. The percentage of variance accounted for by both predictors in this study (10% uncorrected) is consistent with that found by Manning (8% uncorrected; 1991) when predicting status in facility-specific en route field training. However, in Manning's (1991) study, the aptitude score contributed about as much to the prediction of field training performance as did the Nonradar Screen score after correcting for range restriction. Manning reported a partial correlation between aptitude and en route field status of .29 and a partial correlation of .39 between Nonradar Screen score and field status. In this study, the Nonradar Screen score had the higher partial correlation with performance in radar training. The results obtained in the present study also suggested that the Nonradar Screen was a reasonably valid predictor of terminal radar training performance. Moreover, the correlation between Nonradar Screen and terminal radar training performance (.31 uncorrected) in this study was consistent with the correlation between the Nonradar Screen score and instructor's assessment of developmental performance in the facility radar qualification


Table 6
Hierarchical regression of terminal radar training performance (TERM-RAD) on RATING (step 1) and SCREEN (step 2)


phase of field training (.30, uncorrected) reported by Manning, Della Rocco, and Bryant (1989). We concluded that performance on nonradar air traffic control tasks was a valid indicator of potential to perform radar-based air traffic control tasks.

However, there may be alternative explanations for the significant relationship between the Nonradar Screen score and radar training performance. First, procedures for grading of classroom examinations and laboratory simulation problems in the Nonradar Screen and the En Route and Terminal Radar courses were very similar. Both courses contained written tests comprised of multiple-choice items. In the laboratory problems administered during each course, instructors observed student performance over a 30-minute time period, recorded specific technical errors made by the student, and provided feedback after the problem was completed. Errors included the failure to maintain sufficient separation between aircraft, failure to use appropriate procedures, failure to coordinate appropriately with other controllers, and failure to use proper phraseology in the Nonradar Screen and both radar training courses. Scores on the laboratory problems in all courses were based upon a weighted composite of the errors committed, where the more serious errors (e.g., separation) received higher weights than did the less serious errors. Similarity in the content of the materials and the grading of the laboratory problems may have resulted in shared method variance that might account for a portion of the correlation in performance between the nonradar and radar courses.

On the other hand, many of the activities involved in controlling air traffic using en route radar, terminal radar, or en route nonradar procedures are similar. Both en route and terminal radar controllers perform the primary activities of monitoring aircraft positions, resolving aircraft conflicts, managing air traffic sequences, planning flights, assessing the impact of weather, and managing position resources (Alexander, Ammerman, Fairhurst, Hostetler, & Jones, 1989, 1990). However, within each global activity, subactivities differ for en route and terminal radar controllers. In spite of some equipment and procedural differences between the two options, performance in each option's training program was predicted about equally well by the combination of nonradar and aptitude scores.

The results of this study provide evidence of the validity of the Nonradar Screen in predicting radar training performance (over and above the prediction of the first-stage OPM selection battery). This provides empirical support for the logical arguments for a relationship between nonradar and radar ATC activities. These results also indicate that nonradar simulations cannot be dismissed as predictors of radar-based ATC on the basis of "face validity." Rather, additional research is required to elucidate the cognitive constructs underlying this empirical relationship between nonradar and radar air traffic control performance as a part of the development of new ATCS selection tests.

References

Aerospace Sciences, Inc. (1991). Air traffic control specialist Pre-Training Screen preliminary validation: Final report. (Final report delivered to FAA under contract DTFA01-90-Y-01034). Washington, DC: Federal Aviation Administration, Office of the

Alexander, J. R., Ammerman, H. L., Fairhurst, W. S., Hostetler, C. M., & Jones, G. W. (1989). FAA air traffic control operations concepts volume VIII: TRACON controllers. (DOT/FAA/AP-87-01). Washington, DC: Federal Aviation Administration, Advanced Automation Program.

Alexander, J. R., Ammerman, H. L., Fairhurst, W. S., Hostetler, C. M., & Jones, G. W. (1990). FAA air traffic control operations concepts volume VI: ARTCC/HOST en route controllers. (DOT/FAA/AP-87-01, Change 2). Washington, DC: Federal Aviation Administration, Advanced Automation Program.

Aul, J. C. (1991). Employing air traffic controllers. In H. Wing & C. A. Manning (Eds.), Selection of air traffic controllers: Complexity, requirements, and public interest. (DOT/FAA/AM-91/9). Washington, DC: Federal Aviation Administration, Office of Aviation Medicine.


Boone, J. O., van Buskirk, L., & Steen, J. A. (1980). The Federal Aviation Administration's radar training facility and employee selection and training. (DOT/FAA/AM-80/15). Washington, DC: Federal Aviation Administration, Office of Aviation Medicine.

Broach, D., Voth, T., & Worner, W. (1989). Job task analysis for ATC Nonradar Screen program. Unpublished manuscript (available from the author, FAA Civil Aeromedical Institute, Oklahoma City, OK).

Della Rocco, P. S., Manning, C. A., & Wing, H. (1990). Selection of air traffic controllers for automated systems: Applications from current research. (DOT/FAA/AM-90/13). Washington, DC: Federal Aviation Administration, Office of Aviation Medicine.

Equal Employment Opportunity Commission. (1978). Uniform guidelines on employee selection procedures. 29 CFR 1607.

Federal Aviation Administration (1991). Radar training facility. (ETG-10A). Oklahoma City, OK: Federal Aviation Administration Academy.

Ghiselli, E. E., Campbell, J. P., & Zedeck, S. (1981). Measurement theory for the behavioral sciences. San Francisco: Freeman.

Manning, C. A. (1991). Procedures for selection of Air Traffic Control Specialists. In H. Wing & C. A. Manning (Eds.), Selection of air traffic controllers: Complexity, requirements, and public interest. (DOT/FAA/AM-91/9). Washington, DC: Federal Aviation Administration, Office of Aviation Medicine.

Manning, C. A., Della Rocco, P. S., & Bryant, K. D. (1989). Prediction of success in FAA air traffic control field training as a function of selection and screening test performance. (DOT/FAA/AM-89/6). Washington, DC: Federal Aviation Administration, Office of Aviation Medicine.

Manning, C. A., Kegg, P. S., & Collins, W. E. (1989). Selection and screening programs for air traffic control. In R. S. Jensen (Ed.), Aviation psychology, pp. 321-341. Brookfield, MA: Gower Technical.

Rock, D. B., Dailey, J. T., Ozur, H., Boone, J. O., & Pickrel, E. W. (1982). Selection of applicants for the air traffic controller occupation. (DOT/FAA/AM-82/11). Washington, DC: Federal Aviation Administration, Office of Aviation Medicine.

Sells, S. B., Dailey, J. T., & Pickrel, E. W. (Eds.) (1984). Selection of air traffic controllers. (DOT/FAA/AM-84/2). Washington, DC: Federal Aviation Administration, Office of Aviation Medicine.

Siegel, A. I. (1978). Miniature job training and evaluation as a selection/classification device. Human Factors, 20, 189-200.

Siegel, A. I. (1983). The miniature job training and evaluation approach: Additional findings. Personnel Psychology, 36, 41-56.

Siegel, A. I., & Bergman, B. A. (1975). A job learning approach to performance prediction. Personnel Psychology, 28, 325-339.

United States Congress. (January 20, 1976). House Committee on Government Operations recommendations on air traffic control training. Washington, DC: Author.

United States Department of Labor. (1977). Dictionary of occupational titles. (3rd ed.). Washington, DC: Author.
