

APPLIED PSYCHOLOGY: AN INTERNATIONAL REVIEW, 2004, 53 (2), 237–259

© International Association for Applied Psychology, 2004. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.


Assessment in Organisations

Dave Bartram*

SHL Group plc, UK


The article considers current practice and emerging trends in assessment in organisations. Particular attention is paid to assessment for recruitment and selection, where the use of meta-analysis techniques has radically changed the way in which psychological tests and other selection techniques are viewed. The impact of the Internet on selection practice is also discussed. For post-hire assessment, the impact of the need for organisations to undergo rapid change is considered in relation to the importance of competency modelling. Some key areas (leadership, 360-degree feedback) of post-hire assessment are reviewed. Issues for future research are outlined. These include the need for better theory and models, together with the need to move ahead of a reliance on old data sets. Finally, it is noted that much of the current literature is based on research in the United States (with some from the United Kingdom and other parts of Europe). Much of the research is also limited in applicability in that it is based on large organisations. The need for more cross-cultural studies and the need to cover the full range of work organisations (large to small; local to global; private to public sector) is emphasised.

* Address for correspondence: Dave Bartram, SHL Group plc, The Pavilion, 1 Atwell Place, Thames Ditton, Surrey KT7 0NE, UK. Email: [email protected]

INTRODUCTION

Assessment is carried out by organisations as a means of measuring the potential and actual performance of their current (post-hire assessment) and potential future employees (pre-hire assessment). Measurement is important because it enables organisations to act both tactically and strategically to increase their effectiveness. Pre- and post-hire assessment practices are typically referred to as assessment for selection and recruitment, and assessment for development and performance management, respectively. While organisations make considerable use of objective assessment techniques for both, by far the most extensive literature is on the validity of assessment methods for recruitment and selection. Most research on post-hire interventions tends to focus on the efficacy of training or development interventions rather than the assessments associated with them. The main area of post-hire assessment where instrumentation has been researched in some detail is that of management and leadership assessment. In most other areas, there has been a tendency for organisations to be far less concerned about the quality and validity of the tools they use for post-hire assessment (e.g. for 360-degree feedback) than they are about those they use for selection.

The present paper will consider both pre- and post-hire assessment. While the main focus will be on the former, aspects of post-hire assessment will also be addressed. In particular the paper addresses the issue of the need for a generalisable conceptual framework for assessing behaviour in the workplace.

ASSESSMENT FOR SELECTION AND RECRUITMENT

The present paper will not attempt to provide a detailed review of selection assessment practices as there have been a number of excellent reviews published recently that cover this area (Murphy & Bartram, 2002; Robertson, Bartram, & Callinan, 2002; Robertson & Smith, 2001; Hough & Oswald, 2000; Salgado, 1999). A common feature of these reviews is their emphasis on the way in which the meta-analytic procedures developed by Hunter and Schmidt (1990) have revolutionised thinking in this area.

Selection tools that have become traditional within our Western culture include application forms (open or structured), tests of knowledge and skill, tests of ability and personality, interviews (more or less structured), and various assessment centre exercises (in-baskets, leaderless group discussions, group problem solving exercises, etc.). Schmidt and Hunter (1998) provide a useful list of validity estimates from meta-analysis studies for a range of different selection assessment tools.


General Ability and Personality

While the importance of general ability as a predictor of job performance has long been accepted, meta-analysis research has resulted in the realisation that other tools are also of importance (notably personality questionnaires). What is more, the research has shown that variations in validity coefficients from one study to another, which used to be attributed to situational specificity, are due largely to various sources of error in the data. When these are taken into account, the research has shown that personality questionnaires, ability tests, structured interviews, and biodata all have good validity and that this validity is generalisable. That is, measures of general ability and measures of some of the "Big Five" (Norman, 1963; Barrick & Mount, 1991; Digman, 1990; Matthews, 1997) personality variables (notably, conscientiousness) appear to be generally valid for all jobs. The main factor moderating the validity of general ability as a predictor of job performance and training performance is the complexity of the job: validity increases as job complexity increases.
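The corrections alluded to above can be sketched numerically. The snippet below is a minimal illustration of the Hunter and Schmidt (1990) style of artifact correction, disattenuating an observed validity coefficient for direct range restriction (Thorndike Case II) and then for criterion unreliability. The reliability and range-restriction values are illustrative assumptions, not figures from this article.

```python
from math import sqrt

def correct_validity(r_obs, r_yy=0.52, u=0.67):
    """Disattenuate an observed validity coefficient.

    r_obs -- validity observed in the (restricted) applicant sample
    r_yy  -- criterion (job-performance rating) reliability, assumed here
    u     -- ratio of restricted to unrestricted predictor SD, assumed here
    """
    # Direct range restriction (Thorndike Case II); U = unrestricted/restricted SD
    U = 1.0 / u
    r_rr = (U * r_obs) / sqrt(1 + r_obs**2 * (U**2 - 1))
    # Criterion unreliability: divide by the square root of criterion reliability
    return r_rr / sqrt(r_yy)

# An observed r of .25 can imply an operational validity of about .50
print(round(correct_validity(0.25), 2))
```

This is why meta-analytic estimates of operational validity are often roughly double the raw coefficients reported in individual studies.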

One of the major changes in the past decade has been the emergence of evidence in the academic literature to support the use of personality assessment for selection. There is now a very extensive literature on this topic, and the validity of personality attributes for predicting job performance is well supported. This is an area where research has followed practice. For a period of time the received wisdom in the academic world was that personality questionnaires had little if any validity, while practitioners continued to use personality measures as part of their selection measurement toolkit. Following the development of meta-analytic techniques and the publication by Barrick and Mount (1991) of their landmark paper, this view changed. Subsequently, a large body of evidence has been published (Barrick, Mount, & Judge, 2001; Borman, Penner, Allen, & Motowidlo, 2001; Hermelin & Robertson, 2001) attesting to the importance of personality attributes in work.

What is more, Robertson and Kinder (1993), Saville, Sik, Nyfield, Hackston, and MacIver (1996), and Robertson and Callinan (1998) have shown how substantial increases in validity can be obtained by the a priori prediction of patterns of relationship between personality variables and criterion behaviours. Recent work by the present author has shown that where there is a careful a priori matching of relevant personality attributes to specific work performance criteria, very high validities can be obtained. Correlations between composites using scores on scales of the OPQ32i (Occupational Personality Questionnaire; SHL, 1999) and ratings of work behaviours on the 16 scales of the Inventory of Management Competencies (SHL, 1993) show an average validity of around .48 (uncorrected) and range from .29 to .69 (zero-order correlations).
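The a priori matching idea can be sketched as follows: form a unit-weighted composite of only those personality scales matched in advance to a given criterion, then take the zero-order correlation of that composite with the criterion ratings. The scale names and data below are hypothetical illustrations, not the OPQ32i scales or scoring procedure.

```python
from statistics import mean
from math import sqrt

def pearson_r(xs, ys):
    """Zero-order Pearson correlation between two score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def composite_validity(scale_scores, relevant, criterion):
    """Validity of a unit-weighted composite of the a-priori-relevant scales.

    scale_scores -- dict mapping scale name to a list of candidate scores
    relevant     -- the scales matched in advance to this criterion
    criterion    -- ratings of the corresponding work behaviour
    """
    n = len(criterion)
    composite = [sum(scale_scores[s][i] for s in relevant) for i in range(n)]
    return pearson_r(composite, criterion)

# Hypothetical scales and ratings for five candidates
scores = {"achieving": [1, 2, 3, 4, 5],
          "detail":    [2, 1, 4, 3, 5],
          "caring":    [5, 4, 3, 2, 1]}
ratings = [1, 2, 3, 5, 4]
print(round(composite_validity(scores, ["achieving", "detail"], ratings), 2))
```

The point of the a priori step is that only theoretically relevant scales enter the composite; pooling all scales indiscriminately dilutes the relationship.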


Murphy and Bartram (2002) conclude that:

There is evidence that a variety of personality characteristics are consistently related to job performance. In particular, measures of agreeableness, conscientiousness, and openness to experience appear to be related to performance in a wide range of jobs. Average validities for measures of these traits are typically not as high as validities demonstrated by cognitive ability tests, but the evidence does suggest that personality inventories can make a worthwhile contribution to predicting who will succeed or fail on the job.

As personality measures tend to be independent of measures of ability, they can add significant increments to the overall validity of any selection battery. The most powerful combination, on the basis of current evidence (Ones, Viswesvaran, & Schmidt, 1993; Ones & Viswesvaran, 1998), is that provided by ability tests and measures of integrity or conscientiousness. Together these provide (adjusted) validities around .65. Ones and Viswesvaran (2001) have shown that, paradoxically, while measures of integrity provide better prediction of job performance than honesty criteria, the reverse is the case for Conscientiousness. A possible explanation for this is that Conscientiousness is actually a narrower measure than that provided by integrity tests. The latter may well provide an assessment of what Digman (1997) has termed the Alpha Factor, which brings together Agreeableness, Conscientiousness, and Emotional Stability. Digman (p. 1249) argued that factor alpha "represents the socialization process itself . . . concerned with the development of impulse restraint and conscience, and the reduction of hostility, aggression, and neurotic defense".
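The figure of about .65 is consistent with the standard two-predictor composite formula. As a sketch, using corrected validities of roughly .51 for general ability and .41 for integrity tests and assuming the two predictors are essentially uncorrelated (the specific values here are illustrative assumptions, not results from this article):

```python
from math import sqrt

def multiple_r(r1, r2, r12):
    """Validity of an optimally weighted composite of two predictors.

    r1, r2 -- each predictor's correlation with the criterion
    r12    -- the intercorrelation between the two predictors
    """
    return sqrt((r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2))

# Ability (.51) plus integrity (.41), assumed uncorrelated, gives about .65
print(round(multiple_r(0.51, 0.41, 0.0), 2))
```

The near-zero intercorrelation is exactly why personality-based measures add so much incremental validity over ability alone: little of their predictive variance is redundant.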

Finally, personality measures have the advantage of reducing adverse impact. A major concern over the use of ability tests is that they tend to show large group differences, with blacks scoring from .5 to 1.0 SD lower on average than whites (Roth, Bevier, Bobko, Switzer, & Tyler, 2001). Such differences can result in adverse impact (i.e. the selection of a disproportionate number from one group relative to another) if applicants are selected on the basis of ability alone. Research has shown, however, that when ability and personality measures are combined, the group differences on composite score measures are greatly reduced, generally to a level where adverse impact is no longer a problem (Baron & Miles, 2002).
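The arithmetic behind this reduction can be sketched as follows: the standardised subgroup difference on a weighted composite of two standardised predictors is diluted by whichever predictor shows the smaller difference. The d values and zero intercorrelation below are illustrative assumptions, not results from Baron and Miles (2002).

```python
from math import sqrt

def composite_d(d1, d2, r12, w1=0.5, w2=0.5):
    """Standardised group difference (d) on a weighted composite.

    d1, d2 -- subgroup differences (in SD units) on each standardised predictor
    r12    -- intercorrelation between the predictors
    w1, w2 -- composite weights (equal by default)
    """
    # Mean difference on the composite over the composite's standard deviation
    return (w1 * d1 + w2 * d2) / sqrt(w1**2 + w2**2 + 2 * w1 * w2 * r12)

# Assumed d = 1.0 on ability, d = 0.0 on a personality scale, uncorrelated:
# the composite difference drops from 1.0 to about .71
print(round(composite_d(1.0, 0.0, 0.0), 2))
```

Equal weighting does not eliminate the difference, but it shrinks it enough to reduce adverse impact at typical selection ratios, which is the pattern the studies above report.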

Though the above discussion has focused on ability and personality testing, meta-analysis has also clarified issues of validity in relation to a range of other selection tools. For example, it is now clear that the interview, once dismissed by academic psychologists as having no validity, can have good validity if it is appropriately structured and criterion referenced. While meta-analysis has revolutionised thinking in the area of personnel selection, it is important to keep in mind a number of caveats.


• First, meta-analysis outcomes are dependent on the data sets used. It is probably true to say that, in the past ten years, we have now exhausted the information that can be drawn from the body of studies that exists in the literature.

• Second, meta-analyses are historical. That is, they draw general conclusions from historical bodies of data. These data sets were collected, by and large, at a time when measures of criterion performance were weak, and when the formal use of instruments like personality questionnaires was not encouraged. Hence there is a danger of using their results to shape current and future practice in a way that holds back progress if one is not careful.

• Third, there is a danger that people will rely on the results of meta-analyses as somehow defining the validity of procedures, and regard it as unnecessary to do further validity studies.

• Finally, the data sets used in these analyses are drawn predominantly from the United States. While the general pattern of results has been supported by some more recent analyses (Salgado, 1998; Salgado, Anderson, Moscoso, Bertua, de Fruyt, & Rolland, 2003) that have looked at European data sets, it is important to consider how selection and recruitment practices differ around the world.

International Variations

With globalisation, international companies have had to decide how to implement assessment practices on a worldwide scale. Local companies are also now competing in their national markets with global multinationals. As such, they are under pressure to adopt the practices of the "big players".

Employee selection is probably the area where both formal and informal assessment methods are most intensively used by organisations. It is also the area that has most exposure, in that organisations are assessing people who are external to the organisation. As a consequence, the assessment practices adopted are very public and will affect the image of the company. They can have either positive or negative public relations value.

It is of interest to study differences in assessment practices between countries for two reasons. First, it provides a potential basis for comparing the effectiveness of alternative approaches to assessment and provides the basis for practitioners in each country to reflect on the value of the practices they use. Second, it is becoming increasingly important because of the globalisation of industry and the world of work. International organisations are now seeking to adopt common selection and recruitment practices across different countries. Organisations who are seeking to grow their business in new markets will be concerned both to recruit the best local managers and workers in that market and to identify people on their current staff who will operate effectively as expatriates to kick-start growth in such markets. Should they adopt local practices in selection assessment or should they impose their head office practices on this new environment?

Various national and international surveys of the use of these methods were published in the 1990s (Shackleton & Newell, 1991, 1994; Rowe, Williams, & Day, 1994; DiMilia, Smith, & Brown, 1994; Gowing & Slivinski, 1994; Funke, 1996; Levy-Leboyer, 1994; Bartram, Lindley, Foster, & Marshall, 1995; Ryan, McFarland, Baron, & Page, 1999). Unfortunately most of these studies tend to cover only a single country or group of countries in the developed world (typically, North America, Europe, and Australia) and most rely on postal surveys with low return rates.

Levy-Leboyer (1994), in her review of surveys of selection practices in Europe, noted that all countries tend to use application forms and interviews. France is unusual in placing a high reliance on graphology as a screening method and in not giving much credence to third-party references. Situational tests and assessment centres are used more in the UK, Germany, and the Netherlands than in France or Belgium, but there are also generally higher levels of test use in France, Belgium, Spain, and the UK than in the other countries.

Schuler et al. (in a 1993 study reported in Funke, 1996) found very high levels of use of structured interviews in the Benelux countries and the UK, with less in Germany. In Germany they found very little use of personality or performance testing compared to Spain, Benelux, and the UK, with France in between.

The use of objective assessment techniques (tests, structured interviewing, assessment centres) raises issues of training. Many authors, for example, have expressed concerns about the indiscriminate use of tests in countries where there is little if any provision for training in these methods (e.g. O'Gorman, 1996, with reference to Australia; Engelhart, 1996, with reference to France; Henley & Bawtree, 1996, with reference to the UK; and Smith & George, 1992, with reference to New Zealand). It is in response to these concerns that the British Psychological Society established the procedures for Certification of Competence in Testing for people working in occupational testing (Bartram, 1995). More recently the International Test Commission has published international guidelines for test use (Bartram, 2001; International Test Commission, 2001).

The various surveys listed above are interesting in that they tell us something about how countries differ. But they are limited in that they almost all concern only large organisations (1,000 or more employees), they rely on postal surveys that often have very low response rates, and they say nothing about why these differences in practice occur.

An exception to the postal review methodology is a study reported by Bartram et al. (1995). They carried out detailed face-to-face interviews with recruiters using a structured sampling technique covering small businesses in the UK. This research focused on the question of how young people are selected for employment by small businesses. It is often forgotten that the majority of people work for small companies. In the UK at the time of this research, over 88 per cent of businesses employed fewer than 25 people and 73 per cent employed fewer than ten. About one-third of all employed people work for small businesses, yet very little attention has been paid to their recruitment and assessment practices. The research confirmed the prediction that small firms would carry out assessment in a very casual and informal fashion. The impression gained from the psychological literature, that assessment in industry is objective and well formalised, is a misleading one. Actual practice differs considerably from what assessment specialists would define as best practice.

It is also interesting to consider what recruiters assess and their perceptions of the worth of various assessment methods. The Bartram et al. (1995) study and subsequent work (Coyne & Bartram, 2000) have shown that the most important personal characteristics for employers are: honesty, integrity, conscientiousness, interest in the job, and the "right general personality". Research by Scholarios and Lockyer (1999) with small consultancy firms in Scotland again found an emphasis on the importance of honesty and conscientiousness, with general ability as the third most important attribute.

These psychological attributes are regarded as more important than qualifications, experience, or training. One of the main reasons given by employers in the Bartram et al. (1995) study for their reliance on the interview was that they thought this was the best way for them to judge an applicant's personal qualities. This emphasis on the importance of personal qualities probably accounts for the growing use of personality tests, measures of "emotional intelligence", and "honesty and integrity" testing by medium-sized and larger organisations, together with the relative lack of emphasis given to formal educational and work-related qualifications (Jenkins, 2001).

Why do recruiters focus on these particular characteristics? Evidence suggests that these are characteristics that they see as relatively difficult to change and as "high risk" areas. A person can be trained to acquire the skills and knowledge necessary to do a job, but their attitude, their way of dealing with other people, and their honesty are seen as characteristics that are immutable. This is highlighted in small firms, where the impact of one person on the performance of the business as a whole can be very considerable.

Campbell, Lockyer, and Scholarios (1997), who studied medium- to small-sized firms (most of their sample employed fewer than 200 people), also found that such organisations tended not to use objective assessment techniques (such as psychological tests) and also tended not to use assessment centres (because of their cost).

While Boyle, Fullerton, and Yapp (1993) show that use of assessment centres is mainly confined to large firms, there is some evidence from an Institute for Recruitment Studies survey (IRS, 1997) that medium-sized firms are making increasing use of these techniques.

In the most extensive of the published surveys of international selection practices, Ryan et al. (1999) attempted both to increase the breadth of coverage of practices around the world and to explore some possible explanations of why they differ. Through a postal survey, they mailed 300 organisations employing more than 1,000 people in each of 22 countries (i.e. n = 6,600 in total), including Singapore, Hong Kong, Malaysia, and South Africa. A total of 959 usable responses were obtained. Cultural "norms" for each country on Uncertainty Avoidance and Power Distance were obtained from Hofstede's (1991) data.

The Ryan et al. study is worth considering in some detail as it directly focuses on the issues of how practices differ between countries and whether such differences can be related to differences in cultural norms. They found that between-nation effects could explain a modest proportion of variance in staffing practices. In particular, they found that national differences accounted for a substantial proportion of the variance in the use of fixed interview questions (43% of the variance), and in using multiple methods of verification, testing, and number of interviews (over 10% of the variance in each case).
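Proportions of variance of this kind are typically eta-squared values from a one-way analysis of practice-usage scores across countries. A minimal sketch of the computation (the scores below are made up for illustration, not Ryan et al.'s data):

```python
def eta_squared(groups):
    """Proportion of variance accounted for by group membership
    (SS_between / SS_total), as in a one-way ANOVA.

    groups -- a list of lists of scores, one inner list per country
    """
    all_scores = [x for g in groups for x in g]
    grand = sum(all_scores) / len(all_scores)
    ss_total = sum((x - grand) ** 2 for x in all_scores)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    return ss_between / ss_total

# Three hypothetical countries' usage scores for one staffing practice
print(round(eta_squared([[1, 2, 1], [4, 5, 4], [2, 3, 2]]), 2))
```

A value of .43 for fixed interview questions therefore means that country membership alone accounts for nearly half the observed spread in that practice.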

In relation to the predictions based on Hofstede, some interesting results were found. Those cultures high in Uncertainty Avoidance tended to use fewer selection methods and do rather less verification and background checking, but to use more types of tests, use them to a greater extent, and conduct more interviews with candidates. They were also more likely to use fixed sets of interview questions and more likely to audit their selection procedures in some manner. Less clear results were found in relation to expected effects of Power Distance differences. While countries high on this scale were less likely to use peers as interviewers, peers were more likely to be involved in hiring decisions in some other way (possibly because of the impact of unionised labour in such countries).

Future research needs to tackle the difficult questions of sampling. Ryan et al. had very different response rates from different countries. There were also geographical "gaps": South America, East and West Africa, the Middle East, and North Africa. Arguably these are the areas where we would expect to find the greatest cultural differences, and they are key emerging markets for multinationals. It is also possible that other cultural frameworks would have provided different views of the data (e.g. Schwartz & Bilsky, 1987; Smith, Dugan, & Trompenaars, 1996). Nevertheless, this study provides a valuable starting point for future research in this area.

What the various research studies on recruitment practice have shown is that there is wide diversity in the use of assessment methods in organisations, both within and between countries. This diversity is very much a function of both organisation size and culture. Small organisations have neither the resources nor the people with the necessary specialist skill and knowledge to devote to testing and assessment. When comparing practices in different countries, it is important that we do not compare the practices of small firms in developing countries with those of large ones in developed countries. Small firms in the UK, for example, rely heavily on personal recommendation and family and friendship network connections for finding new employees. This sort of practice is often, wrongly, seen as occurring only outside the North American and European economic areas.

Ryan et al.'s study is important for showing clearly that national and cultural factors do account for substantial proportions of the variance in selection practices. Such differences raise questions for international organisations seeking to carry out "fair" recruitment campaigns across a number of different countries. Should they impose the same practices on all countries? If they do, then these practices will be more familiar in some countries than in others, and more acceptable (both to applicants and recruiters) in some than in others. The alternative is to set quality standards for the recruitment process that do not prescribe methods as such, and to leave each country to adopt the methods it is most comfortable with, so long as the outcome is one that meets the quality standard.

Despite the variations in assessment practice, there does appear to be good agreement across organisations and cultures on the importance of personal qualities, on the importance of a person's values, and on the degree to which a new recruit will "fit" the culture of the organisation.

ASSESSMENT IN THE WORKPLACE

Competency-Based Assessment

The competency approach to selection and assessment is one based on identifying, defining, and measuring individual differences in terms of specific work-related constructs that are relevant to successful job performance. Over the last 25 years this approach has gained rapidly in popularity, due partly to the way in which the concepts and language used have currency within the world of human resources management.

The profiling of jobs in terms of competency requirements has increasingly supplemented or replaced more traditional task-based job analysis, most noticeably in countries outside the United States. Competency profiling differs from job analysis in that the focus of the former is on the desirable and essential behaviours required to perform a job, while the latter focuses on the tasks, roles, and responsibilities associated with a job. These are complementary ways of looking at the same thing, with the competency analysis providing a person specification and the job analysis a job description. The main advantage of the competency modelling approach has been its success in building the models that lay the foundations for organisation-wide integrated human resources applications.

The problem with competency as a construct is that there is considerable confusion and disagreement about what competencies are and how they should be measured (Shippmann, Ash, Battista, Carr, Eyde, Hesketh, Kehoe, Pearlman, Prien, & Sanchez, 2000). Competency-based assessment has also suffered in the past from being used and developed by a wide range of practitioners, many of whom had not had a psychologist's background of training in scientific method and measurement. However, Shippmann et al. (2000) note that there is evidence of increasing rigour in the competency approach.

Bartram, Robertson, and Callinan (2002) define competencies as "sets of behaviours that are instrumental in the delivery of desired results or outcomes". In terms of this definition, competencies relate to behavioural repertoires: the range and variety of behaviours we can perform, and the outcomes we can achieve. A competency is not the behaviour or performance itself but the repertoire of capabilities, activities, processes, and responses available that enable a range of work demands to be met more effectively by some people than by others. This approach is elaborated in Kurz and Bartram (2002).

Models of Job Competency

Most of the work on defining models of job performance has focused on the managerial area. There are some exceptions, such as Hunt's (1996) work on entry-level jobs in the service industries, and analyses of the competencies required for jobs in the military (e.g. the work of Campbell, McHenry, & Wise, 1990, on Project A). In relation to managerial competencies, Tett, Guterman, Bleier, and Murphy (2000) reference 12 different models from the academic literature dating back to Flanagan (1951). They also note that while there is considerable overlap in terms of content between these various models, there are also marked differences in detail, description, definition, emphasis, and level of aggregation.

The merging of the academic and practice-based approaches to competency models can be found in the recent development of hierarchical approaches to model building. General high-level constructs can provide the basis for accounting for major portions of variance in performance, while more detailed dimensions are required for everyday use by practitioners. Even more finely grained constructs may be required for the detailed competency profiling of jobs.

• Tett et al. (2000) developed a taxonomy of 53 competencies clustered under nine general areas. These 53 competencies were derived from the results of subject matter experts sorting 147 behavioural elements. The nine general areas were: Traditional functions; Task orientation; Person orientation; Dependability; Open-mindedness; Emotional control; Communication; Developing self and others; and Occupational acumen and concerns.

• Borman and Brush (1993) propose a structure of 187 behaviours mapping on to 18 main dimensions, which in turn map to four very broad dimensions: Leadership and supervision; Interpersonal relations and communication; Technical behaviours and mechanics of management; and Useful behaviours and skills (such as job dedication). This structure has been supported by subsequent meta-analysis research (Conway, 1999).

• Kurz and Bartram (2002) describe a job competency framework which also adopts a three-tier structure. The bottom tier of the structure consists of a set of 112 component competencies. These 112 component competencies were derived from extensive content analyses of both academic and practice-based competency models. This analysis covered managerial and non-managerial positions. As a consequence, the content of the components covers a wider domain than that addressed by Tett et al. in their work on managerial competencies. The framework articulates the relationships between these components, their mapping on to a set of 20 competency dimensions (the middle tier), and their loadings on eight broad "competency factors" (the top tier in their Table 1).

These models provide the basis for the development of the measures of behaviour at work that are needed if we are:

• To assess the general value of trait measures in applications like personal development, potential for leadership, assessment for promotion, and so on.

• To get a better-articulated model of criterion measures for the validation of selection and recruitment procedures.

In the past, there has been a tendency to use tools in selection (and for post-hire assessment) that were designed to provide coverage of some particular psychological attribute domain (e.g. personality) rather than having been designed to provide coverage of the relevant criterion domain. A better understanding of the factorial structure of the domain of criterion behaviours will help us to better design predictors, both in terms of coverage and validity.

As noted earlier in this article, two factors came together to create the shift in attitude relating to the validity of personality assessment. One was the availability of meta-analysis techniques. The other was the "Big Five" taxonomy. This allowed various different instruments to be mapped on to a common structure for the purposes of analysis, and issues of generalisability to be explored across different instrument structures. A similar issue arises in relation to assessing behaviours in the workplace. So long as there is nothing comparable to the "Big Five" for this domain, we will continue to face the problem of trying to aggregate the results from studies that use criterion measures that are not comparable with each other.

However, evidence is beginning to emerge that there may be a small number of broad factors that account for most of the variance in criterion workplace behaviours. The research reported in Kurz and Bartram (2002) supports the view that variance in competency measures can be accounted for by eight broad factors. Correlations between competency measures and measures of psychological attributes suggest that these eight factors reflect the psychological constructs that underlie competencies. Specifically, the trait markers for the eight competency factors can be identified as:

• "g" or general reasoning ability
• the "Big Five" personality factors
• two motivation factors: need for achievement and need for power or control.

This "Great Eight" structure has been replicated in a number of different data sets, including analysis of the ratings of 54 competencies in the OPQ32 UK national standardisation sample data (SHL, 1999), analysis of job applicant data collected over the Internet in the USA (n = 26,000), Swedish data, and analyses of data obtained from two generic 360-degree competency inventories.

This model suggests that optimal selection batteries will be those that provide coverage of general ability, the Big Five personality factors, and two broad motivation factors. Meta-analyses of selection data have already supported the importance of the ability and personality domains. That motivation factors have not emerged from meta-analyses reflects the fact that there are very few studies in the literature that contain systematic measures of motivation. We should now be designing new validity studies that start from a consideration of the criterion domain rather than from the availability of classes of predictor instrument.

Assessment in organisations tends to be focused around competency models. The use of personality questionnaires and other trait-based measures is generally carried out within the context of assessing competency potential, rather than for any direct interest in the traits themselves. Trait measures tend to be used in Development Centres as part of the process of exploring people's potential for future development. More generally, assessment of behaviour and of performance is more direct, relying on instruments designed to measure "samples" of behaviour rather than "signs" of underlying traits.


Leadership and Leadership Assessment

Following the work of people like Fiedler (1967) and Vroom (Vroom & Yetton, 1973), who considered that the behaviours which predict effective outcomes depend upon various situational contingencies, attention has shifted back to focus on the individual characteristics of leaders. The increasing levels of uncertainty and change within organisations have made situational models very difficult to uphold. Working in the political leadership context, Burns (1978) developed the concept of transformational leadership to identify those qualities in a leader that inspire others to work beyond their own self-interest for the common good. In contrast, others operate by transactions: offering people something in return for their efforts. This distinction between transactional and transformational leadership attributes has been applied to organisational leadership by Bass (1985), and forms the basis for one of the most widely used leadership questionnaires in this area (the Multifactor Leadership Questionnaire; Bass & Avolio, 1990a, 1990b).

Concerns have been expressed (see Bryman, 1996) over the generalisability of Bass's work, as it has been based on US top managers in the private sector. The extent to which the same notions are relevant for lower-level managers or supervisors, and the extent to which they apply to countries with cultures different from that of the USA, is an open question. Work in the UK with managers from a range of levels in both public (Alimo-Metcalfe & Alban-Metcalfe, 2001) and private sector organisations (Alban-Metcalfe & Alimo-Metcalfe, 2002) has resulted in a rather different set of priorities being assigned to the various transactional and transformational attributes described by Bass.

The Global Leadership and Organizational Behavior Effectiveness (GLOBE) Research Program consists of a network of 170 social scientists and management scholars from 62 cultures across the world (House, Hanges, Ruiz-Quintanilla, Dorfman, Javidan, Dickson, Gupta, & GLOBE, 1999; House, Javidan, & Dorfman, 2001). They are working in a coordinated long-term effort to examine the inter-relationships between societal culture, organisational culture and practices, and organisational leadership, and are seeking an empirically based theory to describe, understand, and predict the impact of cultural variables on leadership and its effectiveness. So far, they have identified six global leadership dimensions, of which four are universally endorsed across cultures (Charismatic/value-based leadership; Team-orientated leadership; Humane leadership; and Participative leadership) and two are not (Self-protective leadership; Autonomous leadership). They also describe 21 specific leader attributes and behaviours that are universally viewed as contributing to leadership effectiveness, and eight that are universally viewed as impediments to it. Interestingly, 35 specific leader attributes were identified as contributors in some cultures and impediments in others. However, the GLOBE project has only asked about the perceived desirability of behaviours. It is an open question whether such behaviours are also effective in facilitating organisationally desired outcomes.

EMERGING TRENDS

From what has been said earlier, it is clear that a number of factors are impacting the role of assessment in and by organisations. These can be divided into some general issues and others which are more specifically associated with pre- or post-hire assessment.

General Issues

Meta-analysis and validity generalisation have had a major impact on thinking and practice in the field. The emphasis on small local validation studies has been replaced by a focus on the need to consider aggregated data sets, where various sources of small-sample error have been taken into account. The strengths and weaknesses of this approach have been discussed (see also Murphy, 2000). A current limitation of this literature is its reliance on old data sets. These do not reflect current thinking on the structure and breadth of the criterion space. It will be some time before there are sufficient well-designed studies completed which provide the range of prediction measures needed to fully evaluate the potential upper limits on validity for assessment in selection.

A second general issue is that associated with the growth in the use of "objective" assessment in developing countries. Multinationals operate worldwide; increasingly they are looking to use the same techniques for selection and development in developing nations as are used in the so-called developed world (tests, Assessment and Development Centres, structured interviews, biodata inventories, and so on). These techniques raise issues of local user competence as well as cultural appropriateness.

One consequence of this trend has been the development of "black box" solutions. Such solutions seek to "de-skill" the user requirements by providing a computer-based expert system that guides users through the process of job analysis and competency profiling, selects and configures a battery of relevant tests, and provides the user with a merit list of applicants, sorted by their performance on the tests in relation to the job requirements. Such systems will also, typically, provide the user with reports that give an interview structure and identify key areas for questioning. While such systems put the knowledge and skill of the expert I/O psychologist into the hands of line managers, concerns have been raised about the impact of this on the future role of I/O psychologists in assessment in organisations.


Pre-Hire Issues

This review has shown not only that employers value honesty and conscientiousness above all other attributes, but also that meta-analyses have shown these attributes to be well measured by integrity tests and personality questionnaires. More and more organisations want to assess employee dependability, especially in blue-collar and lower-level white-collar positions. While research tends to show that both covert and overt measures of "honesty" do have generalisable validity, there is a range of ethical and legal concerns about their use. Outside the United States, professional concerns are expressed about the use of overt honesty measures, while less concern is shown over the use of personality attributes like conscientiousness. Within the United States, the opposite appears to be the situation: overt questions are preferred to what would be regarded as the "covert" use of personality to infer honesty. As noted earlier, it is paradoxical that measures of conscientiousness actually appear to assess honesty better than general job performance, while the reverse is the case for integrity tests. Future research needs to better understand the mechanisms behind these patterns of prediction. The ethical issues would seem to be more about how these instruments are used than whether they should be used, so long as their job relevance can be supported.

Probably the biggest change in recruitment and selection practice in the past few years has been brought about by the use of the Internet. Use of the Internet as the medium for job search and making job applications is rapidly replacing traditional paper-based application procedures. Details of how this technology has impacted on assessment practice are reviewed in Bartram (2000), Lievens and Harris (2003), Robertson, Bartram, and Callinan (2002), and Stanton (1999). From the assessment point of view, one impact has been the pressure on test developers to design fast, open-access instruments that still meet good psychometric measurement criteria. This has resulted in the emergence of four key research areas.

• In order to ensure better test security and provide the ability to refresh test content on a regular basis, there has been a rapid increase in the application of item and test generation methodologies (Irvine & Kyllonen, 2002). A practical example of this is provided by Baron, Miles, and Bartram (2001).

• The increasing use of computerised testing has resulted in research on the equivalence of computer and paper-and-pencil tests. This has produced generally positive results, indicating that tests, especially un-timed self-report measures, need not be affected by this change of medium (Bartram, 1994; Donovan, Drasgow, & Probst, 2000; Neuman & Baydoun, 1998). However, highly speeded ability tests (such as clerical checking tasks) may need re-norming, as the ergonomics of the test can significantly affect the speed with which people can perform the task.

• While organisations are looking for means of increasing test security and protecting tests from compromise through coaching, at the same time they want to make assessment procedures more open, less constrained by the need to provide supervision, and more accessible to line managers or hiring managers with little or no expertise in testing. This has increased the need for research on the impact supervision has on performance in assessment situations, on how one can maintain validity under conditions of reduced control, and on how effectively selection solutions can be "black-boxed" for use by relatively inexperienced line managers.

• Finally, there is increased interest in the search for valid predictors that minimise adverse impact. Organisations are increasingly concerned about potential litigation and the socio-political implications of selection practices. In the past, attention has tended to focus on the problems associated with the use of ability tests, rather than on the overall selection assessment strategy. We are now seeing how, with the judicious use of combinations of instruments, the bottom-line risk of adverse impact can be greatly reduced.

While meta-analyses might imply that the only tools one needs are an ability test and an integrity test, and that these could be used for all jobs without any job analysis being needed, the reality is different. As argued earlier, meta-analysis has provided some important breakthroughs, but it has its limitations. Selection is a process of negotiation between applicants and hirers. Assessment techniques have not only to work, but also to appear relevant and acceptable. In a study comparing paper-and-pencil, computerised, and multi-media versions of a test (Richman-Hirsch, Olson-Buchanan, & Drasgow, 2000), managers rated the multi-media version as more "valid" and had more positive attitudes towards it than did those completing the other versions. Assessment procedures are seen as reflections of an organisation's values and culture. The choice of which tools to use and how to apply them is governed by more than just considerations of psychometric quality.

Post-Hire Assessment Issues

The most significant change within the workplace has been the development of more rigorous approaches to competencies, and the growing emphasis on the importance of competencies rather than specific job skills. Globalisation and the speed of change within organisations are moving the emphasis away from selecting people to do a specific job and towards selecting people who have the qualities necessary to work flexibly and adaptively within an organisation. This in turn has pushed the emphasis away from task-based job analysis as the basis for developing person specifications, towards the use of competency modelling. Competency models provide the means whereby an organisation can integrate the tactical and strategic use of assessment. As discussed earlier, there is a growing consensus on the nature and structure of the competency domain. The "Great Eight" factors provide a useful starting point for relating assessments of traits to measures of workplace behaviours.

This emphasis on the need for a flexible workforce has increased the use of assessment for development purposes within organisations. Approaches such as 360-degree feedback are becoming increasingly common. The logistical difficulties of carrying out a 360-degree assessment have been largely overcome by the development of Internet-based solutions (Bartram, Geake, & Gray, in press). In contrast to pre-hire assessment, post-hire approaches are characterised by the use of multiple sources: most leadership assessment instruments, for example, involve both self-ratings and peer or subordinate ratings.

We are also seeing an increasing use of assessment instruments for organisational development (OD) applications: identifying organisational culture and values, looking at value fit between individuals, groups, and the organisation, and so on. These raise issues of the impact of cultural and national differences between organisations, which are being addressed using models developed for cross-cultural analysis (e.g. Hofstede, 1991; Smith et al., 1996; Schwartz, 1999).

AREAS FOR FUTURE RESEARCH

Much of our research data on assessment in organisations is from large organisations and comes from the United States or, to a lesser degree, Europe. We need to broaden this view. Further cross-cultural studies are needed to consider the potential differential impact of assessment practices both pre- and post-hire.

Evidence is accumulating through validity generalisation meta-analyses and other work that much of the variance in workplace behaviour can be predicted by eight main factors (the "Great Eight"). These have been identified through various research programmes and cover the Big Five personality factors, "g" or general cognitive ability, and two motivational factors: the need for control and the need for achievement. Future research is needed that builds on these models. To date, most of the emphasis in assessment in organisations has been on the measurement of predictors (personality scales, ability, etc.). Very little systematic research has been carried out on the nature and properties of criterion measures. The competency approach provides a way forward in developing a generalisable structure of factors that can be used to measure workplace behaviours, and so provide a common structure for future validation studies. In so doing, it is important to draw a distinction between two categories of criterion: workplace behaviours (as addressed by competency models) and outcomes (performance judged against the achievement of goals or objectives).

Schneider (1996), reviewing a series of papers on personality and work, noted that:

. . . knowing that Conscientiousness correlates with various performance criteria across a wide variety of jobs and in a wide variety of settings is not equivalent with understanding the behaviour that is emitted by those who are conscientious. Campbell (1990) is correct when he notes that we have focused on outcomes of behaviour as correlates of personality (and other predictors) and have relatively little insight into the behaviour that intervenes between the personality and the outcome. In the absence of such information, we have no understanding of the process by which personality becomes reflected in outcomes. This lack of information leaves us relatively impotent so far as interventions in the workplace are concerned. (pp. 291–292)

He goes on to point out that "It is behaviour, not personality, that causes outcomes".

This important distinction has been missed in much research on assessment in organisations over the past decade. Future research needs to be guided by better models and by theory. We should be seeking to understand the dynamics of the processes that relate attributes to behaviours, and how behaviours can be shaped to produce outcomes that meet organisational goals. Without demeaning the very real contribution made by the meta-analytic literature, we need to be less driven by the pure empiricism that can characterise such approaches and the concomitant recycling of data from the past, and focus more on building a sound theoretical base from which to model the processes that lead from individual behaviours to organisational effectiveness.

REFERENCES

Alban-Metcalfe, R.J., & Alimo-Metcalfe, B. (2002). The development of the Transformational Leadership Questionnaire—Private Sector Version. Proceedings of the British Psychological Society. Leicester: BPS.

Alimo-Metcalfe, B., & Alban-Metcalfe, R.J. (2001). The construction of a new Transformational Leadership Questionnaire. Journal of Occupational and Organizational Psychology, 74, 1–27.

Baron, H., & Miles, A. (2002). Personality questionnaires: Ethnic trends and selection. Proceedings of the British Psychological Society Occupational Psychology Conference (pp. 110–116). Leicester: BPS.


Baron, H., Miles, A., & Bartram, D. (2001). Using online testing to reduce time-to-hire. Paper presented at the 16th Annual Conference of the Society for Industrial and Organizational Psychology, San Diego, April.

Barrick, M.R., & Mount, M.K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1–25.

Barrick, M.R., Mount, M.K., & Judge, T.A. (2001). Personality and performance at the beginning of the new millennium: What do we know and where do we go next? International Journal of Selection and Assessment, 9, 9–30.

Bartram, D. (1994). Computer-based assessment. International Review of Industrial and Organizational Psychology, 9, 31–69.

Bartram, D. (1995). The development of standards for the use of psychological tests: The competence approach. The Psychologist, 8, 219–223.

Bartram, D. (2000). Internet recruitment and selection: Kissing frogs to find princes. International Journal of Selection and Assessment, 8, 261–274.

Bartram, D. (2001). The development of international guidelines on test use: The International Test Commission Project. International Journal of Testing, 1, 33–53.

Bartram, D., Geake, A., & Gray, A. (in press). The Internet and 360-degree feedback. In S. Martin (Ed.), 360-grad-beurteilungen, diagnose und entwicklung von fuhrungskompetenzen. Gottingen: Verlag fur Angewandte Psychologie Hogrefe. (Reihe Psychologie fur das Personalmanagement; Herausgeber: Prof. Dr Werner Sarges.)

Bartram, D., Lindley, P.A., Foster, J., & Marshall, L. (1995). The selection of young people by small businesses. British Journal of Occupational and Organizational Psychology, 68, 339–358.

Bartram, D., Robertson, I.T., & Callinan, M. (2002). Introduction: A framework for examining organisational effectiveness. In I.T. Robertson, M. Callinan, & D. Bartram (Eds.), Organisational effectiveness: The role of psychology (pp. 1–12). Chichester, UK: Wiley.

Bass, B.M. (1985). Leadership and performance beyond expectations. New York: The Free Press.

Bass, B.M., & Avolio, B.J. (1990a). Multifactor Leadership Questionnaire. Palo Alto, CA: Consulting Psychologists Press.

Bass, B.M., & Avolio, B.J. (1990b). Transformational leadership development: Manual for the Multifactor Leadership Questionnaire. Palo Alto, CA: Consulting Psychologists Press.

Borman, W.C., & Brush, D.H. (1993). More progress towards a taxonomy of managerial performance requirements. Human Performance, 6, 1–21.

Borman, W.C., Penner, L.A., Allen, T.D., & Motowidlo, S.J. (2001). Personality predictors of citizenship performance. International Journal of Selection and Assessment, 9, 52–69.

Boyle, S., Fullerton, J., & Yapp, M. (1993). The rise of the assessment centre: A survey of assessment centre usage in the UK. Selection and Development Review, 9(3), 1–3.

Bryman, A. (1996). Leadership in organizations. In S.R. Clegg, C. Hardy, & W.R. Nord (Eds.), Handbook of organizational studies (pp. 276–292). London: Sage.

Burns, J.M. (1978). Leadership. New York: Harper & Row.

Campbell, E., Lockyer, C., & Scholarios, D. (1997). Selection practices in Scottish private sector companies: Linkages to organisation size and industrial sector (Occasional Paper 10). Department of Human Resource Management, University of Strathclyde.

Campbell, J.P., McHenry, J.J., & Wise, L.L. (1990). Modeling job performance in a population of jobs. Personnel Psychology, 43, 313–333.

Conway, J.M. (1999). Distinguishing contextual performance from task performance for managerial jobs. Journal of Applied Psychology, 84, 3–13.

Costa, P.T., & McCrae, R.R. (1992). The NEO PI-R Professional Manual. Odessa, FL: Psychological Assessment Resources Inc.

Coyne, I., & Bartram, D. (2000). Personnel managers' perceptions of dishonesty in the workplace. Human Resource Management Journal, 10(3), 38–45.

Digman, J.M. (1990). Personality structure: Emergence of the five-factor model. Annual Review of Psychology, 41, 417–440.

Digman, J.M. (1997). Higher-order factors of the Big Five. Journal of Personality and Social Psychology, 73, 1246–1256.

DiMilia, L., Smith, P.A., & Brown, D.F. (1994). Management selection in Australia: A comparison with British and French findings. International Journal of Selection and Assessment, 2, 80–90.

Donovan, M.A., Drasgow, F., & Probst, T.M. (2000). Does computerizing paper-and-pencil job attitude scales make a difference? New IRT analyses offer insight. Journal of Applied Psychology, 85, 305–313.

Engelhart, D. (1996). The usage of psychometric tests in France. In M. Smith & V. Sutherland (Eds.), International review of professional issues in selection and assessment (Vol. 1, pp. 163–164). New York: Wiley.

Fiedler, F.E. (1967). A theory of leadership effectiveness. New York: McGraw-Hill.

Flanagan, J.C. (1951). Defining the requirements of the executive's job. Personnel, 28, 28–35.

Funke, U. (1996). German studies in selection and assessment. In M. Smith & V. Sutherland (Eds.), International review of professional issues in selection and assessment (Vol. 2, pp. 169–175). New York: Wiley.

Gowing, M.K., & Slivinski, L.W. (1994). A review of North American selection procedures: Canada and the United States of America. International Journal of Selection and Assessment, 2, 103–114.

Henley, S., & Bawtree, S. (1996). Training standards and procedures for training psychologists involved in selection and assessment in the United Kingdom. In M. Smith & V. Sutherland (Eds.), International review of professional issues in selection and assessment (Vol. 1, pp. 51–58). New York: Wiley.

Hermelin, E., & Robertson, I.T. (2001). A critique and standardization of meta-analytic validity coefficients in personnel selection. Journal of Occupational and Organizational Psychology, 74, 253–278.

Hofstede, G. (1991). Cultures and organizations: Software of the mind. London: McGraw-Hill.

Hough, L.M., & Oswald, F.L. (2000). Personnel selection: Looking towards the future—remembering the past. Annual Review of Psychology, 51, 631–664.

House, R.J., Hanges, P.J., Ruiz-Quintanilla, S.A., Dorfman, P.W., Javidan, M., Dickson, M., Gupta, V., & GLOBE (1999). Cultural influences on leadership and organisations: Project GLOBE. Advances in Global Leadership, 1, 171–233.


House, R.J., Javidan, M., & Dorfman, P. (2001). Project GLOBE: An introduction. Applied Psychology: An International Review, 50, 479–488.

Hunt, S.T. (1996). Generic work behaviour: An investigation into the dimensions of entry-level hourly job performance. Personnel Psychology, 49, 51–83.

Hunter, J.E., & Schmidt, F.L. (1990). Methods of meta-analysis. Newbury Park, CA: Sage.

International Test Commission (2001). International guidelines on test use. International Journal of Testing, 2, 93–114.

IRS (1997). The state of selection: An IRS survey. Employee Development Bulletin, 85. London: Industrial Relations Services.

Irvine, S., & Kyllonen, P. (Eds.) (2002). Item generation for test development. Hillsdale, NJ: Lawrence Erlbaum.

Jenkins, A. (2001). Companies' use of psychometric testing and the changing demand for skills: A review of the literature. London: Centre for the Economics of Education, London School of Economics and Political Science.

Kurz, R., & Bartram, D. (2002). Competency and individual performance: Modelling the world of work. In I.T. Robertson, M. Callinan, & D. Bartram (Eds.), Organisational effectiveness: The role of psychology (pp. 227–258). Chichester, UK: Wiley.

Levy-Leboyer, C. (1994). Selection and assessment in Europe. In H.C. Triandis, M.D. Dunnette, & L.M. Hough (Eds.), Handbook of industrial and organizational psychology (Vol. 4, pp. 173–190). Palo Alto, CA: Consulting Psychologists Press.

Lievens, F., & Harris, M.M. (2003). Research on internet recruitment and testing: Current status and future directions. In C.L. Cooper & I.T. Robertson (Eds.), International review of industrial and organizational psychology (Vol. 18, pp. 131–165). Chichester: John Wiley.

Matthews, G. (1997). The Big Five as a framework for personality assessment. In N. Anderson & P. Herriot (Eds.), International handbook of selection and assessment (pp. 475–492). Chichester: Wiley.

Murphy, K. (2000). Impact of assessments of validity generalization and situational specificity on the science and practice of personnel selection. International Journal of Selection and Assessment, 8, 194–206.

Murphy, K., & Bartram, D. (2002). Recruitment, personnel selection and organisational effectiveness. In I.T. Robertson, M. Callinan, & D. Bartram (Eds.), Organisational effectiveness: The role of psychology (pp. 85–114). Chichester, UK: Wiley.

Neuman, G., & Baydoun, R. (1998). Computerization of paper-and-pencil tests: When are they equivalent? Applied Psychological Measurement, 22, 71–83.

Norman, W.T. (1963). Towards an adequate taxonomy of personality attributes: Replicated factor structure in peer nomination personality ratings. Journal of Abnormal and Social Psychology, 66, 574–583.

O’Gorman, J.G. (1996). Selection and assessment in Australia. In M. Smith & V. Sutherland (Eds.), International review of professional issues in selection and assessment (Vol. 1, pp. 25–35). New York: Wiley.

Ones, D.S., & Viswesvaran, C. (1998). Gender, age and race differences in overt integrity tests: Results across four large-scale job applicant data sets. Journal of Applied Psychology, 83, 35–42.

Ones, D.S., & Viswesvaran, C. (2001). Integrity tests and other criterion-focused occupational personality scales (COPS) used in personnel selection. International Journal of Selection and Assessment, 9, 31–39.

Ones, D.S., Viswesvaran, C., & Schmidt, F.L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance. Journal of Applied Psychology, 78, 679–703.

Richman-Hirsch, W.L., Olson-Buchanan, J.B., & Drasgow, F. (2000). Examining the impact of administration medium on examinee perceptions and attitudes. Journal of Applied Psychology, 85, 880–887.

Robertson, I.T., Bartram, D., & Callinan, M. (2002). Personnel selection and assessment. In P. Warr (Ed.), Psychology at work (Chapter 5, pp. 100–152). London: Penguin Books.

Robertson, I.T., & Callinan, M. (1998). Personality and work behaviour. European Journal of Work and Organizational Psychology, 7, 321–340.

Robertson, I.T., & Kinder, A. (1993). Personality and job competencies: The criterion-related validity of some personality variables. Journal of Occupational and Organizational Psychology, 66, 225–244.

Robertson, I.T., & Smith, M. (2001). Personnel selection. Journal of Occupational and Organizational Psychology, 74, 441–472.

Roth, P.L., Bevier, C.A., Bobko, P., Switzer III, F.S., & Tyler, P. (2001). Ethnic group differences in cognitive ability in employment and educational settings: A meta-analysis. Personnel Psychology, 54, 297–330.

Rowe, P.M., Williams, M.C., & Day, A.L. (1994). Selection procedures in North America. International Journal of Selection and Assessment, 2, 74–79.

Ryan, A.M., McFarland, L., Baron, H., & Page, R. (1999). An international look at selection practices: Nation and culture as explanations of variability in practice. Personnel Psychology, 52, 359–392.

Salgado, J.F. (1998). Big Five personality dimensions and job performance in army and civil occupations: A European perspective. Human Performance, 11, 271–288.

Salgado, J.F. (1999). Personnel selection methods. In C.L. Cooper & I.T. Robertson (Eds.), International review of industrial and organizational psychology (Vol. 14, pp. 1–54). Chichester, UK: Wiley.

Salgado, J.F., Anderson, N., Moscoso, S., Bertua, C., de Fruyt, F., & Rolland, J.P. (2003). A meta-analytic study of General Mental Ability validity for different occupations in the European Community. Journal of Applied Psychology, 88(6), 1068–1081.

Saville, P., Sik, G., Nyfield, G., Hackston, J., & MacIver, R. (1996). A demonstration of the validity of the Occupational Personality Questionnaire (OPQ) in the measurement of job competencies across time and in separate organisations. Applied Psychology: An International Review, 45, 243–262.

Schmidt, F.L., & Hunter, J.E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.

Schneider, B. (1996). Whither goest personality at work? Applied Psychology: An International Review, 45, 289–296.

Scholarios, D., & Lockyer, C. (1999). Recruiting and selecting professionals: Context, qualities and methods. International Journal of Selection and Assessment, 7, 142–156.

Schwartz, S.H. (1999). A theory of cultural values and some implications for work. Applied Psychology: An International Review, 48, 23–48.

Schwartz, S.H., & Bilsky, W. (1987). Towards a universal psychological structure of human values. Journal of Personality and Social Psychology, 53, 550–562.

Shackleton, V., & Newell, S. (1991). Management selection: A comparative survey of methods used in top British and French companies. Journal of Occupational Psychology, 64, 23–36.

Shackleton, V., & Newell, S. (1994). European management selection methods: A comparison of five countries. International Journal of Selection and Assessment, 2, 91–102.

Shippmann, J.S., Ash, R.A., Battista, M., Carr, L., Eyde, L.D., Hesketh, B., Kehoe, J., Pearlman, K., Prien, E.P., & Sanchez, J.I. (2000). The practice of competency modeling. Personnel Psychology, 53, 703–740.

SHL (1992). Motivation questionnaire manual & user’s guide. Thames Ditton, UK: SHL Group plc.

SHL (1993). Inventory of management competencies: Manual and user’s guide. Thames Ditton, UK: SHL Group plc.

SHL (1999). OPQ32: Manual and user’s guide. Thames Ditton, UK: SHL Group plc.

Smith, M., & George, D. (1992). Selection methods. In C.L. Cooper & I.T. Robertson (Eds.), International review of industrial and organizational psychology (Vol. 7, pp. 55–97). Chichester, UK: Wiley.

Smith, P.B., Dugan, S., & Trompenaars, F. (1996). National culture and the values of organizational employees: A dimensional analysis across 43 nations. Journal of Cross-Cultural Psychology, 27, 231–264.

Stanton, J.M. (1999). Validity and related issues in web-based hiring. The Industrial Psychologist (TIP), 36(3), 69–77.

Tett, R.P., Guterman, H.A., Bleier, A., & Murphy, P.J. (2000). Development and content validation of a “hyperdimensional” taxonomy of managerial competence. Human Performance, 13, 205–251.

Vroom, V.H., & Yetton, P.N. (1973). Leadership and decision making. Pittsburgh, PA: University of Pittsburgh Press.