World Englishes, English as a Lingua Franca and Language Testing: change on the horizon? Lynda Taylor - Discussant Language Testing Research Colloquium Seoul, Korea – July 2013





Overview

The background context

An historical/evolutionary perspective?

Key themes addressed and insights gained

Where next… some controversial ‘hypotheses’?

Background

Perceived challenges for language testing and language testers:

the issue of whose norms should take primacy in assessment standards (the WE challenge)

the issue of whether current models of language proficiency sufficiently reflect the communicative demands of speech communities where norms are fluid (the ELF challenge)

An evolutionary expectation?

World Englishes (1993) to the present day (2013) – a 20-year period

seemingly long enough to expect some degree of change to take place? (‘change on the horizon’)

growing awareness of linguistic/lingua-cultural variety/diversity and its (sociolinguistic) implications for assessment practices

An evolutionary expectation?

‘despite these conceptual advances, in practical terms only small progress has been made in answering each challenge in language testing design’

why is that?

what do we think could/should have happened?

what might such test design look like?

how do we think WE/ELF challenges might have been addressed in practical terms?

An evolutionary expectation?

might we have expected to see by now:

the inclusion of Hispanic or Indian English speakers in the TOEFL iBT/IELTS listening tests?

the development by the Council of Europe of tests of English as a Lingua Franca (ELF) designed for the European context (e.g. European Parliament)?

would these have moved us forward?

or something else even more innovative?

how far can we expect research to provide us with solutions?

Key themes addressed

1. the complex area of listener/speaker attitudes and behaviours

2. the complex issues surrounding purpose and context of communication

3. the complex endeavour of construct definition for assessment purposes

1. Attitudes and behaviours

reality of the individual listener experience of listening and (not) understanding (clarity?)

reality of individual listener perceptions regarding accentedness (fluency+pronunciation?)

the importance of distinguishing between ‘comprehensibility’ and ‘intelligibility’

1. Attitudes and behaviours

the impact of listener/speaker attitudes on behaviours (though not necessarily in predictable ways?)

the likely interaction between:

individual attitudes and behaviours, and

interactional contexts involving high-pressure/high-stakes communicative demands

2. Communicative contexts

the communicative reality and demands in a multilingual assessment context (generally high-stakes for at least one of the participants)

the communicative reality and demands in a multilingual professional context (high-pressure – due to urgency, safety, distress – and therefore high-stakes for a wide range of stakeholders)

2. Communicative contexts

the complexity of decision-making processes – especially when made under time pressure, heavy cognitive load and/or in an emotionally charged situation, e.g. as a rater, a pilot, an air traffic controller (but also in healthcare, military)

complexities independent of L1/L2 distinctions:

professional competence/knowledge/experience

procedural/convention compliance

linguistic competence, including accommodation and listener/speaker effort

3. Construct definition

what should be included when defining the construct for assessment purposes?

how does linguistic variety interface with construct representativeness for listening test material?

how is accentedness associated with construct underrepresentation and construct-irrelevant variance?

could rater variability regarding perceptions of accentedness somehow be ‘embraced’ as construct-relevant?

3. Construct definition

potential additional content for inclusion in a broader listening/speaking construct:

‘problematising’ accentedness (positively or negatively?)

‘compensating’ in response to accent-related difficulty, e.g. inferencing

embracing a stronger social dimension to interactional competence (tolerance of ‘the other/stranger’, dynamic of co-construction)

but not just ELF-driven construct/competencies?

World Englishes (1993)

title of the special issue = ‘Testing across cultures’

emphasised ‘distinctive’ linguistic differences across WE categories/groupings

similarly, the more recent ELF enterprise stresses the distinctive features of ELF

but dangers of ever greater diversification?

time to move in the opposite direction?

refocusing on the features of linguistic competence shared by listener/speakers regardless of L1 status

What next…

…some controversial ‘hypotheses’?

Hypothesis One

The native speaker / non-native speaker paradigm has largely outlived its usefulness in language testing; for construct definition, test design and rating scale development we should instead move towards a paradigm premised upon notions of expertise (i.e. a novice–expert user continuum), as discussed in the psycholinguistic literature.

Hypothesis Two

Neither the WE construct nor the more recent ELF construct offers language testers much help in the practice of designing and constructing language tests. The first (WE) is too ideologically driven, and its fixed categories have not evolved as the world has changed; the second (ELF) remains too underspecified, e.g. it does not properly account for the role of suprasegmentals in comprehensibility.

Hypothesis Three

The language proficiency construct underpinning English language tests should be reconceptualised to reflect the reality that all listener/speakers of English (regardless of L1) need to be able to cope with the challenges of intelligibility and the demands of co-construction, including accommodation and listener/speaker effort; proficiency tests should be redesigned to reflect this, in accordance with test purpose and context.

Hypothesis Four

Rater variability (bias) should no longer be perceived as a negative dimension when assessing spoken language proficiency; instead it should be reconceptualised as a potentially desirable component when evaluating spoken performance, contributing valuable information which can be incorporated into the measurement outcomes and the interpretation of scores.

Change on the horizon…?

…time for some discussion!