Pedagogic Assessment Fulcher & Davidson (2007) A2/B2 By Ali Aaj IAU, Science and Research Branch


Page 1: Pedagogic Assessment

Pedagogic Assessment
Fulcher & Davidson (2007)

A2/B2

By Ali Aaj

IAU, Science and Research Branch

Page 2: Pedagogic Assessment

• Do you normally see assessment in your classroom as a discrete activity separate from teaching and learning?

Page 3: Pedagogic Assessment

• According to Brookhart (2003) and Moss (2003), assessment and learning are integrated within the classroom.

Page 4: Pedagogic Assessment

Validity Theory and Classroom Assessment

• Validity theory has grown out of the practice of large-scale testing and assessment

Page 5: Pedagogic Assessment

• Large-scale testing and assessment are situations in which a test provider develops a test that is used on a national or even international scale.

• The test provider is required to ensure that the use of the test is appropriate for its stated purpose and target population.

• The provider must also show whether score meaning can be generalized beyond the conditions of the test.

Page 6: Pedagogic Assessment

But, can we apply what we have learned from large-scale testing directly to the classroom?

The ANSWER is:

• It is not always easy to take the principles from large-scale assessment and apply them directly to what is done in the classroom.

• Sometimes the difference between the classroom and large-scale testing is NOT taken into account, leading to considerable confusion.

• What is the difference? The CONTEXT

Page 7: Pedagogic Assessment

• The learners are in the classroom as learners, and the teacher is there to engage with them in the learning process.

• The context of the classroom is a social situation, in which our understanding of the learner is partly based on how they interact with their environment and with others.

• Teachers understand a great deal about the knowledge, abilities and skills of the learners in their classroom without the need to resort to formal tests.

• Over time they have the opportunity to observe learners participating in a wide range of activities and tasks as they develop their ability to communicate with others.

Page 8: Pedagogic Assessment

• In the classroom the activities and assessment are almost entirely performance-based, and completely integrated.

• In contrast, in validity theory this context is not available.

• The ‘context’ of a language test is the environment in which the test takes place.

• It is the room where the learners will sit, the proctor who shows them to their seats and the test, the decoration, temperature, and all the other factors that might impact on the test performance of a person taking the test.

Page 9: Pedagogic Assessment

• In the classroom, however, the context is the learning environment. It is, in fact, part of the construct. It is constructed of sets of learning experiences that are designed to lead to the acquisition of language and communication.

• In large-scale language tests the assumption is that a fairly good picture of a learner’s ability can be achieved only if that learner responds to many different items.

• The formal test needs to be as long as possible in order to collect lots of pieces of evidence about a learner in a short period of time, whereas a teacher can take months or years to do this.

Page 10: Pedagogic Assessment

• What other types of evidence do you use to make judgments about the learning success of your students? Moss (2003) in B2 suggests these among others:

• how students engage in tasks
• ongoing conversations
• interactions with others
• knowledge of the resources available to the learners

• In the classroom learning environment it is feedback to the learner, from any source, that helps him or her to identify what needs to be learnt next to become an independent user of language in a new context.

• The feedback must contain diagnostic information, and this is not usually found in formal tests.

Page 11: Pedagogic Assessment

• In a classroom context collaboration is encouraged, particularly in developing writing skills and the presentation of portfolios of work.

• In a language test, on the contrary, the responses of any one individual to a task or an item should be independent of the responses of any other individual. Collaboration in a test is usually described more pejoratively as cheating.

• In B2, Moss (2003) provides a reflective commentary on her own teaching practice. She combines assessment with teaching.

Page 12: Pedagogic Assessment

• Moss argues that the paradigm of large-scale assessment does not provide a useful context for the goals and philosophy of her classroom, and as a result arrives at some radical decisions such as not issuing grades at all.

• What is needed for the classroom is an approach to evaluation that is judged in terms of the consequences of the decisions made.

• These consequences are perceived in terms of improved learning outcomes for the individual students.

• Decisions within the classroom are primarily pedagogic.

Page 13: Pedagogic Assessment

Where do classroom assessment and large-scale assessment meet?

• Some classroom assessment takes place in order to demonstrate that learning is aligned with external standards, or that students are achieving the goals laid out in a curriculum imposed by a national agency.

This is where classroom assessment and large-scale assessment meet.

• Instead of taking a psychometric approach, Moss believes in a sociocultural perspective of teaching and learning.

Page 14: Pedagogic Assessment

“Learning” from a Psychometric perspective:

• It is inferred from observed changes in individuals’ performances over time

• It is viewed only as something that takes place inside the head of the learner, a vertical hierarchy of increasingly generalized and abstract knowledge and skills.

“Learning” from a socio-cultural perspective:

• It is perceived through changing relationships among the learner, the other human participants, and the tools (material and symbolic) available in a given context.

• It involves not only acquiring new knowledge and skill, but taking on a new identity and social position within a particular discourse or community of practice.

Page 15: Pedagogic Assessment

Conception of assessment: ‘Assessment is a discrete activity’

• Assessment is always ongoing; it is not a separate, one-shot activity in the classroom.

When designing your course, think about the kinds of:

• Experiences students are likely to bring with them to the class

• Experiences you want them to have in class to provide resources for their learning

In sum, think about the overall shape of the activities in the course rather than about single assessment instruments.

Page 16: Pedagogic Assessment

Focus of validity: ‘The focus of validity theory is on an assessment-based interpretation and use’

• Conventionally, validity is conceptualized as referring to an inference or interpretation, and a use or action based on a test score.

• Thus the validity argument (or judgment) focuses on an interpretation or action based on an instrument.

• As teachers we have no need to draw and warrant fixed interpretations of students’ capabilities; instead, our job is to make decisions – moment-to-moment, day-to-day, course-to-course – that help students learn, as individuals and as members of learning communities.

Page 17: Pedagogic Assessment

Unit of analysis: ‘The unit of analysis for an assessment is the individual’

• The methods of educational measurement are most typically used to develop interpretations that characterize individuals, or rather, classes of individuals with the same scores.

• Instead, consistent with a sociocultural perspective, the most appropriate unit of analysis is the social situation.

• It entails the recursive relationship between person and context (including the actions of other people, available resources, and larger social structures in which they are encountered) – and claims about individuals must be grounded in interaction.

Page 18: Pedagogic Assessment

Combining evidence: ‘Interpretations are constructed by aggregating judgments from discrete pieces of evidence to form an interpretable overall score’

• Having multiple sources of evidence gathered across time and situation enhances the validity of an interpretation.

• If you do not have enough evidence to address an issue that you believe needs to be addressed, you can seek additional evidence.

Page 19: Pedagogic Assessment

The role of cases (case studies) in validity theory

What role should cases of assessment practice play in the development and/or representation of validity theory and assessment pedagogy? The answer is:

• The principles are necessarily general, and we need cases to illustrate how they can be instantiated in practice.

• Such cases provide us with vicarious experiences of how successful teachers create learning environments and evaluate their students’ work using evidence based in interaction.

Page 20: Pedagogic Assessment

THANK YOU VERY MUCH FOR YOUR ATTENTION