Kathryn Hill, Book Review: Classroom-based Language Assessment. Language Testing, 2014, 31(4), 532. DOI: 10.1177/0265532214529704. Published by SAGE Publications (http://www.sagepublications.com). Version of Record: October 12, 2014.

Book Review: Classroom-based Language Assessment




532 Language Testing 31(4)

reminded me of Weigle’s (1998) article, which made rater training effects accessible to general language testing readers by explaining how they could be investigated using FACETS. Green has done a similar job here, but has applied it to much broader statistical analysis techniques. This book definitely targets a new audience: language test developers and item writers. The choice of the three most commonly used statistical packages in language test analyses and the provision of clear instructions on how to run the analyses and interpret the results have made statistical analysis very accessible to newcomers to language testing. There is coverage of an appropriate variety of test analyses for test development. The provision of an explanation at the end of each statistical output and the signposting will help readers to identify what is important in the data and how to interpret it. I believe this book will have a special place in the library of many practitioners for years to come.

References

Alderson, J. C., Clapham, C., & Wall, D. (1995). Language test construction and evaluation. Cambridge: Cambridge University Press.

Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford: Oxford University Press.

Bachman, L. F. (2004). Statistical analyses for language assessment. Cambridge: Cambridge University Press.

Bachman, L. F., & Kunnan, A. J. (2005). Statistical analyses for language assessment workbook and CD ROM. Cambridge: Cambridge University Press.

Brown, J. D. (2005). Testing in language programs: A comprehensive guide to English language assessment (new edn). New York: McGraw-Hill College.

Brown, J. D. (2007). Book review: Statistical analyses for language assessment. Language Testing, 24(1), 129–135.

Henning, G. (1987). A guide to language testing: Development, evaluation and research. Cambridge, MA: Newbury House.

McNamara, T. (2000). Language testing. Oxford: Oxford University Press.

Weigle, S. C. (1998). Using FACETS to model rater training effects. Language Testing, 15(2), 263–287.

D. Tsagari and I. Csépes (Eds.), Classroom-based Language Assessment. Frankfurt: Peter Lang GmbH, 2011. 203 pp. ISBN 978-3-631-60643-8, US$64.95/£38.50 (hbk)

Reviewed by: Kathryn Hill, La Trobe University, Australia

With assessment for learning principles incorporated into official policy and curriculum documents in an increasing number of countries, Classroom-based Language Assessment provides some timely and instructive insights into how this policy is being enacted in classrooms around the world.

The first six chapters report on qualitative classroom-based studies, with the remainder involving larger-scale, survey-based studies in nine different contexts across




Europe, China, Hong Kong, and Canada. With no explicit organizing principles or cross-referencing between chapters, this is a book to dip into rather than read cover to cover. However, there are a number of recurring themes. For example, while the success of the assessment for learning policy crucially depends on the ability of the teacher, this often requires a significant shift in existing beliefs and practices. Hence, a number of studies investigate teacher perceptions, teacher assessment literacy, and the need for professional development. As Rea-Dickins observes in her introduction, the volume covers classroom-based assessment at both the “macro-” or political level and at the “micro-” level of teachers and learners (Inbar-Lourie, 2008), as well as the tension between the two. In Cheng’s words, “teachers are caught, without sufficient training, between … the continued existence of [high-stakes external assessment] and this recent curriculum reform aiming to promote alternative assessment practices” (p. 191).

Only three of the 11 chapters (Poehner & Ableeva, Lewkowicz & Zawadowska-Kittel, and Tarnanen & Huhta) include the assessment of languages other than English. In the first chapter Poehner and Ableeva focus on three university students as they engage in French listening comprehension activities. They report on part of Ableeva’s doctoral study (2010), which moves away from the preoccupation with the teacher as mediator to focus on “learner reciprocity,” or learner responsiveness to, and agency in, Dynamic Assessment (Poehner, 2008). In this chapter, the authors use a selection of classroom transcripts to discuss how “imitation,” characterized by “learners’ awareness of intentional uses of the form” (p. 26), may provide evidence of progress towards internalization.

Next, Hamp-Lyons and Tavares describe one of four action-research projects designed to provide professional development for EFL teachers in understanding and implementing assessment for learning in junior high school classrooms in Hong Kong. The researchers worked with a group of teachers as they attempted to implement “interactive assessment,” described as a “language oriented, dialogic, and collaborative approach to assessment” (p. 29). They conclude by proposing strategies to promote interactive assessment, including the use of visual cues, dealing with large classes and mixed-ability students, and grouping and questioning techniques.

The next two chapters focus on the use of diagnostic assessment in post-secondary EAP classes in Canada. In the first study by Fox and Hartwick, the results of systematic diagnostic testing were used to design individually tailored portfolio learning activities. These tasks were developed with the help of a group of postgraduate students with training in teaching and assessment. The authors found that the approach – which involved comparison of self-assessment questionnaires with test results, progress on set tasks and completion of reflective tasks – produced a number of benefits for learners, including greater awareness of their learning needs and increased motivation and engagement, in addition to evidence of learning.

In the second study, Doe makes the point that diagnostic assessment is only formative if the information provided is actually used. She used a case-study approach to investigate a single teacher’s “diagnostic competence” (Edelenbos & Kubanek-German, 2004), defined as the “ability to integrate diagnostic feedback into teaching” (p. 73). In the study, diagnostic feedback from an academic English test (CAEL) was used to develop activities tailored to groups of students with similar learning profiles. As with the Fox and Hartwick study, this one depended on the support of a group of suitably qualified teaching




assistants, as well as on the existence of a high level of teacher autonomy in decisions about course content and delivery, which, the author acknowledges, is “not typical of other contexts” (p. 74). Yet, despite these “optimal” conditions, the teacher reported a tension between using diagnostic feedback and getting through the planned syllabus.

Butler and Zeng report on the only study with a specific focus on the assessment of young learners, a group of Grade 6 students in China. Reflecting the growing interest in the use of a paired interview format for assessing speaking (see Taylor & Wigglesworth, 2009), they compared participant roles and interactions in paired and individual speaking assessments. They found that the quality of the interaction in the paired format depended on dyad type (Storch, 2002) and that teachers did little to regulate asymmetric interactions. The individual interview elicited a more limited range of language and strategies and tended to favour weaker or more passive learners who, despite the dominant and authoritative style adopted by the teachers, also preferred this format.

Sahinkarakas and Buyukkarci studied changes in two ELT teachers’ personal theories regarding formative assessment over a 15-week cycle of teaching. The authors used the repertory grid technique, which employs a structured interview format and factor analysis, to generate a set of constructs (e.g., “giving feedback” and “engaging students in self/peer assessment”) for “effective formative assessment practice,” based on participants’ descriptions of colleagues whom they perceived as “effective,” “average” or “ineffective” in terms of their formative assessment practices. As a result of this process, both participants modified and expanded their original set of constructs and developed greater awareness of their own and others’ views on assessment, which the authors hoped might lead to improved formative assessment practices. However, the claim that “teacher constructs directly affect classroom practices” (p. 95) is not supported by any reference to the literature on teacher cognition (e.g., Borg, 2006).

The final set of chapters reports on survey studies, with the first two examining contexts in which a previously high level of teacher autonomy has given way to increased centralization and control. In the first study, Blair, Moe and Barsnes surveyed ESL teachers’ perceptions regarding the effect of a series of such policy and curriculum reforms (motivated by “unfavourable” rankings on PISA and PIRLS) on assessment practices in Norwegian primary schools. While the perception was that standards had improved overall, teachers expressed a concern that Norwegian education had moved away from its egalitarian principles and that the new system may be failing many of the more vulnerable students. In the second study, Lewkowicz and Zawadowska-Kittel report on part of an ongoing study into the impact of the introduction of national standardized tests at the end of primary and secondary school in Poland. In this case, the shift away from the traditional emphasis on classroom-based teacher assessment was motivated by concerns regarding fairness and the reliability of results under the existing system. The study involved a large-scale survey of changes in teachers’ classroom-based assessment understanding and practices, including the types of tests they used and what influenced their choices. They found evidence that the reforms had affected what was taught and the frequency of testing, but had done little to alter pre-reform assessment methods, again highlighting the need for professional development.

Tarnanen and Huhta report on part of a larger study of L1–L2 literacy and pedagogy in Finland following the introduction of a national curriculum based on




socio-constructivist principles. The study, which involved a large-scale questionnaire survey and a small number of teacher interviews (N = 7), compared teacher and student perceptions of classroom-based assessment and feedback practices. They found evidence of a number of positive changes in the assessment culture, including, for example, a high level of learner involvement in (ungraded) assessment. However, the most commonly used type of feedback was not consistent with the teachers’ own views on best practice.

Tsagari reports on a large-scale survey of Greek EFL teachers’ assessment practices and perceived training needs. The survey, which included teachers from all levels of schooling, was designed to inform professional development in assessment literacy for pre- and in-service language teachers (p. 172). The results demonstrated, inter alia, how external reporting requirements inevitably encourage an emphasis on assessment of learning. They also highlighted the influence of extra-national frameworks, with “using the European Language Portfolio” the most frequently requested topic for professional development.

In the final chapter, Cheng investigated the implementation of the Chinese National English Language Curriculum Guidelines for Senior Secondary School, which mandates an assessment for learning orientation. Fifty-seven high school teachers were surveyed regarding their perceptions of the purpose of assessment and the methods and procedures they used. The results indicated that teachers attempted to use assessment for learning principles, but were hampered by the continued existence of high-stakes national external assessment, as well as by the demands of teaching classes of between 50 and 70 students.

To my mind, two key messages emerge from these studies: first, that “the dividing line between practitioners and policy makers remains intact” (Blair et al., p. 112); and second, that teachers often face considerable obstacles to practising assessment for learning in their classrooms. These include large class sizes, inadequate training, limited autonomy in teaching and assessment, and the pressure to prepare students for high-stakes external assessment. For these reasons, as Tsagari suggests, assessment policy and training need to take account of existing practices and constraints, and teachers need to be more involved in assessment development and research.

References

Ableeva, R. (2010). Dynamic assessment of listening comprehension in second language learning (Unpublished doctoral dissertation). Pennsylvania State University, University Park, PA.

Borg, S. (2006). Teacher cognition and language education: Research and practice. London: Continuum.

Edelenbos, P., & Kubanek-German, A. (2004). Teacher assessment: The concept of ‘diagnostic competence.’ Language Testing, 21(4), 259–283.

Inbar-Lourie, O. (2008). Language assessment culture. In E. Shohamy & N. Hornberger (Eds.), Encyclopedia of language testing and education (pp. 285–300). New York: Springer.

Poehner, M. E. (2008). Both sides of the conversation: The interplay between mediation and learner reciprocity in Dynamic Assessment. In J. P. Lantolf & M. E. Poehner (Eds.), Sociocultural theory and the teaching of second languages (pp. 33–56). London: Equinox.

Storch, N. (2002). Patterns of interaction in ESL pair work. Language Learning, 52(1), 119–158.

Taylor, L., & Wigglesworth, G. (Eds.). (2009). Special issue: Pair work in L2 assessment contexts. Language Testing, 26(3).
