Deepening student learning through formative
assessment strategies in primary school programs
____________________________________________________________________________
Brendan Kean
Bachelor of Education
Victoria University
Master of Education
University of Melbourne
A thesis submitted for the degree of Doctor of Philosophy at Monash University in 2016
Copyright notice © The author (Brendan Kean). Except as provided in the Copyright Act 1968, this thesis may not be reproduced in any form without the written permission of the author.
Abstract
In the field of education, how and why teachers develop formative assessment in their
classrooms, and how this focus can improve student learning, are under-researched. This thesis
aimed to develop an understanding of how a range of formative assessment strategies can be
developed and implemented in the classroom and enhanced through collaboration amongst
teachers within professional learning teams. The study was conducted in three phases
including two cycles of action research within one school and a comparative school case study,
undertaken to deepen understanding of formative assessment and collaboration.
Phase 1 involved action research in an international school in Hong Kong to develop formative
assessment strategies in my own and two colleagues’ classrooms. The Phase 2 action research
cycle investigated how formative assessment implementation could be enhanced and developed
through collaboration within a professional learning team (PLT). Phase 3 was a comparative
case study of formative assessment in a Melbourne school, well known for its innovative
assessment practices, providing a further analytical lens to validate and enrich earlier findings.
In all three phases, the study found evidence that formative assessment must be explicitly
and deeply planned by teachers in order to have an impact on student learning. The findings
showed that when teachers embed key formative assessment strategies, including stating the
learning intention, developing the success criteria, effective teacher questioning,
teacher feedback and the use of self- and peer-assessment, student learning can be enhanced.
The study also found that development of formative assessment through teacher collaboration
within the structure of a PLT involving shared planning and reflection enhances teachers’
capacity to implement these strategies.
Finally, the study concluded that since the use of formative assessment strategies improves
student learning, their implementation should be a priority for schools and educators. School
leaders should ensure that the structures and systems for professional learning teams are in
place so that successful planning and collaboration to develop formative assessment strategies
are at the forefront of school improvement strategies and decision-making for teachers’ work.
Declaration
This thesis contains no material which has been accepted for the award of any other degree or
diploma at any university or equivalent institution and, to the best of my knowledge and
belief, contains no material previously published or written by another person,
except where due reference is made in the text of the thesis.
This research has received approval from the Monash University Human Research Ethics
Committee (MUHREC) for project number CF10/2579 – 2010001436.
Acknowledgements
Many great supporters have helped me on my journey through this PhD and their assistance
has been the reason I have made it to the end. Firstly, I would like to thank my supervisor,
Associate Professor Libby Tudball, whose continued optimism and encouragement has helped
me transform the idea of completing my PhD into a reality. Libby has provided me with the
confidence and the belief that I needed in order to continue working through to the end. Her
guidance, support, mentorship and direction have kept me on the path to where I am now. I am
forever grateful for the work and support Libby has given me.
To my wife, Elizabeth, who has had to endure the ups and downs of this journey, I give my
thanks for her patience, understanding and continued acceptance of the ‘sorry, I have to work
on my thesis’ excuse over the last six years. I am now looking forward to uninterrupted time
with Elizabeth and our two daughters, Matilda and Isla.
Finally, to my mother and late father (1946 – 2001), whose love and guidance from birth cannot
be overstated. To write a PhD, one must begin with a supportive environment that has
instilled the attitude and belief that one can achieve anything, however great the task. I look
forward to sharing this moment with Mum and helping her understand the role she has played
in shaping me and my academic path. Unfortunately, I will not have the opportunity to share
this with my father who passed away before I began the journey. I suspect he may be a little
surprised with what I have achieved, but the role he played long before my PhD journey ever
began will stay with me forever.
Contents

List of tables
List of figures
Chapter 1: Introduction
    Introduction
    Research questions
    Justification for the study
    My interest in formative assessment
    The evolution of the study
    Conceptual understanding and evidence of the importance of formative assessment
    Scope and limitations
    Thesis outline and structure
Chapter 2: Literature review: Assessment
    Introduction
    History of assessment
        History of formative assessment
    Defining the purpose of assessment
        Defining formative assessment
        Assessment for, as and of learning
    Traditional assessment
        Standardised testing
        Grading
        Summative assessment
    Creating the right balance in the use of assessment
    Formative assessment strategies
        Sharing the learning intentions
        Developing the success criteria
        Effective teacher questioning
        Teacher feedback
        Self-assessment
        Peer-assessment
    Learning progressions and formative assessment
    Conclusion
Chapter 3: Literature review: teacher collaboration
    Introduction
    Definition of a ‘professional learning team’
    Collaboration in professional learning teams
    Factors ensuring successful collaboration in the PLT
        1) Ensuring that all students learn
        2) A culture of collaboration
        3) A focus on results
    Other characteristics of an effective professional learning team
        Shared values and vision
        Collective responsibility
        Reflective professional inquiry
        Supportive and shared leadership
    Effective leadership and professional learning teams
    Teachers as researchers
    Conclusion
Chapter 4: Research methodology
    Introduction
        Research questions
        Rationale for selection of qualitative methodology
        The choice of action research
        Phase 1 and 2: Action research process
        The focus on practitioner research
        Phase 1 action research at MIS: Selection of participants
    Phase 1 action research: Implementing formative assessment strategies at MIS
    Phase 1: Discussion of data collection methods
        Interview process
        Participant observation
        Use of a reflective journal
        The use of documentation
        Reflection and action
    Phase 2: Action research on collaboration at Matilda International School
        Phase 2: Selection of participants
    Phase 2: Data collection methods
        Interview and planning process
        Participant observation
        Use of a reflective journal
        The use of documentation
    Data analysis for phase 1 and 2
        Theoretical frameworks and phase 1 and 2 data analysis techniques
        Matrix displays for data analysis
    Methodology and aims: Case study of Western College
        Selection of participants
    Case study data collection methods
        Interviews
        Observations
        Document data collection
        My role as a researcher
    Data analysis
    The ethical issues involved in my research
    Conclusion
Chapter 5: Developing formative assessment at Matilda International School
    Introduction
    The need for formative assessment at Matilda International School
        Profile of teacher participants
        Beginning of the action research process
        The starting points: developing formative assessment through action research
        Lesson observations: Emily
    Reflections on the importance of planning the action research
    Stating the learning intentions and developing the success criteria
        Emily’s use of learning intentions and success criteria
        Next steps in the action research for Emily
        Learning from students about formative assessment
        Harriet’s view on learning intentions and success criteria
        Developing success criteria with my class
    Teacher questioning as formative assessment task
        The role of ‘talk partners’ during teacher questioning
        Student perspective on ‘talk partners’
    Developing effective feedback as formative assessment
        Descriptive feedback for learning instead of managerial feedback
        The importance of relationships between teachers and students
    The importance of self-assessment
    Peer-assessment in Emily’s class
        My class and peer-assessment
        Peer support in Harriet’s class
        Students’ perspective of peer-assessment
        Creating a classroom environment for peer-assessment
        Multiple layers of assessment: Aligning self-assessment, peer-assessment and teacher feedback
    Motivation and formative assessment
    Teacher attitudes and responses to action research
        Findings: A multilayered approach to formative assessment
        Emily and Harriet’s final reflections
        Students’ final reflection on their learning
        Final reflection on my practice
Chapter 6: Developing formative assessment strategies through collaboration
    Introduction
        Profile of teacher participants
    The beginning: Developing relationships within the team
    Understanding of formative assessment
    Establishing the professional learning team
    The need to plan for formative assessment
    The pressure of planning
    A whole school approach
    Shared values and vision
    Collaboration in reflective professional inquiry
    Supportive and shared leadership
        Principal’s influence on collaboration
    Planning formative assessment in the early stage of the action research
        Developing the learning intentions and success criteria
        Developing learning intentions and learning progressions
        Developing teacher questions for self and peer-assessment
    Formative assessment driving the planning
        Creating ideal planning scenarios through collaboration
    Conclusion: Next steps in collaboration and planning
Chapter 7: Case study of Western College
    Introduction
        Profile of the school
        Profile of the participants
    Case study data collection process
        Teacher knowledge of formative assessment
    Formative assessment in the classrooms
        Formative assessment in Justine’s class
        Formative assessment in Wendy’s class
        Formative assessment in Rachel’s class
        Curriculum coordinator’s perspective
    The implementation of formative assessment strategies at Western College
        Learning intentions
        Success criteria
        Effective teacher questioning
        Teacher feedback
        Self-assessment
        Peer-assessment
    Formative assessment and collaboration
        Leadership and collaboration
        Developing teacher beliefs and knowledge about formative assessment
    Conclusion
Chapter 8: Conclusion
    Introduction
    Summary of findings
    Significance of the findings
    Conclusion and recommendations
References
Appendix 1 – PYP planner
List of tables

Table 1: Assessment for, as and of learning
Table 2: Action research process: adapted from Mertler (2006)
Table 3: Phase 1 action research process at Matilda International School
Table 4: Phase 2 action research process at Matilda International School
Table 5: Overview of action research at Matilda International School
Table 6: Phase 3 case study overview
Table 7: Emily's lesson observations
Table 8: Harriet's lesson observations
Table 9: Framework for multilayered approach to formative assessment
Table 10: Formative assessment planning for mathematics
Table 11: Data collection at Western College

List of figures

Figure 1: Traditional assessment pyramid (Earl, 2003, p. 27)
Figure 2: Reconfigured assessment pyramid (Earl, 2003, p. 27)
Figure 3: How learning intentions fit (Glasson, 2009, p. 11)
Figure 4: Teacher peer relations continuum (Riordan & Gaffney, 2001, p. 6)
Figure 5: Relationship between principal behavior and student achievement with the collaborative teams of a professional learning community (DuFour & Marzano, 2011, p. 52)
Figure 6: Process of action research (Mertler, 2006, p. 24)
Figure 7: Participant-observer continuum (Glesne, 1999, p. 44)
Figure 8: Examples of genre checklists
Figure 9: Multilayered approach to formative assessment (Kean, 2016)
Figure 10: Multilayered approach to formative assessment through collaboration (Kean, 2016)
Chapter 1: Introduction
Professors Paul Black and Dylan Wiliam synthesised evidence from over 250
studies linking assessment and learning. The outcome was a clear and
incontrovertible message: that initiatives designed to enhance effectiveness of
the way assessment is used in the classroom to promote learning can raise pupil
achievement.
(Assessment Reform Group, 1999, p. 4)
Introduction
The concept of assessment has been around for centuries (Earl, 2003). However, it is only
relatively recently that the notion of assessment being used beyond ranking or grading of a
student’s work or performance, to improving their learning, has been a strong focus of research
and professional learning. Since Black and Wiliam’s (1998a, 1998b) research on the
academic gains of assessment practices used to improve student learning, there has been a
greater emphasis on the purposes and practical implementation of assessment (Absolum, 2010;
Clarke, 2001, 2005, 2008; Glasson, 2009; Wiliam, 2011). My research has focussed on how
teachers and students can utilise formative assessment to improve student learning on a daily
basis. While there is no one definition for formative assessment, it is widely accepted that it
involves a range of formal and informal assessment procedures during the learning process, in
order to modify teaching and learning activities to improve and monitor student learning and to
provide ongoing feedback (Absolum, 2010; Clarke, 2008; Glasson, 2009).
Formative assessment has now begun to receive the attention it needs from academics and
educators alike and there is deepening interest in how this form of assessment can be
developed in practice. However, a gap remains between the theory of why assessment for
improving learning is important and the practical application of assessment practices in
classrooms for learners of all ages. Too often, teachers are frustrated at being told of the
importance of pedagogy that centres on being clear about learning intentions and on how the
achievement of that learning can be assessed, while at the same time lacking the grounding, in
their teacher education or further professional learning, to develop effective assessment
practices in their classrooms. Glasson (2009) argues that teachers have traditionally viewed
their role in the assessment process as grading students as having passed or failed, recording
that grade in their mark book and summing up the information in the student’s school report. It is the
need to deepen understanding and development of formative assessment approaches that
provided the motivation for me to undertake this study. As a former classroom teacher and
now a vice principal, I have seen the ongoing need for deep, classroom-based practitioner
research to improve and inform teachers’ views and understanding of how to develop pedagogy
that focuses strongly on formative assessment to promote and improve student learning.
In order to close the gap between theory and the practical implementation of effective
assessment strategies, it is clear that involvement is required from teacher educators, teachers
in schools and researchers with a focus in this field. In recent decades, there has been
increased interest in the teacher as researcher (Carr & Kemmis, 1986; Loughran, 2010). This
research movement contends that teachers looking deeply at their own practice can be involved
in a systematic process to improve learning and teaching. Research in schools involving
teachers is a positive reaction to the gap between what academics have been arguing needs to
happen in research and what teachers believe can happen. It is becoming clear that where
teachers investigate and learn continuously from their own and their colleagues’ practices, they
are able to implement new pedagogical ideas and strategies that enrich understanding of their
practices and enhance their teaching (Bartlett & Burton, 2006; DuFour, 2004). It is for these
reasons, and my emerging belief that there was insufficient emphasis on assessment to improve
learning, that I commenced this study with a focus on these questions:
Research questions
The research questions explored through all phases of the study were:
1. What key strategies for formative assessment can improve student
learning?
2. How can formative assessment strategies lead to improvements in
student learning?
3. How can collaboration in teacher professional learning teams influence
the development and implementation of quality formative assessment
strategies?
4. What impact does this collaboration have on classroom pedagogy linked
to formative assessment?
Justification for the study
Since my research questions necessitated the location of the study in classroom practice and
my intention was to improve my own assessment practice and to work with colleagues,
practitioner research with the inclusion of action research was clearly the most suitable
approach for my study. Action research has achieved extensive success for both individual
teachers seeking to extend their practice 'and as a collaborative route to professional and
institutional change' (Herr & Anderson, 2005, p. 17). In recent years, there has been
increasing interest in teachers
investigating their own practice (Bartlett & Burton, 2006). Action research provides
legitimacy for practitioners to trial more systematic qualitative approaches to understand and
improve practice. There is at times still a lack of connection between education research and
classroom teaching (Mertler, 2006). For some teachers, what takes place within school
classrooms does not always reflect research findings on effective teacher practice and student
learning (Johnson, 2005).
This issue of the theory-practice divide continues to challenge both teachers and academics.
My action research project aimed to provide a solution through the development and use of a
‘two-way flow of information’ (Mertler, 2006, p. 14) comprising ideas and planning developed
by me as the researcher in collaboration with colleagues who came on this journey with me; for
the purpose of improving our practice and sharing our learning. This two-way flow involves a
process where ‘teaching decisions are not only shaped by theory and research’ (Parsons &
Brown, 2002, p. 7), but also where reflection on practice helps to give shape and new
directions to educational theory and research.
In my study, the central aims were to engage in action research to develop formative
assessment strategies, and to investigate how collaborative planning through professional
learning teams (PLTs) can be utilised to improve student learning. Through an action research
and case study approach, my study aimed to contribute to the emerging body of literature that
focuses on developing formative assessment to improve student learning. There has been
limited research conducted in the area of teacher collaboration and how this can enhance the
development of formative assessment strategies. A gap in the literature also pointed to a need
for deeper understanding of how collaborative planning within a PLT can assist teachers to
develop formative assessment strategies. As my study is qualitative, the value of its
contribution is the provision of an in-depth understanding of how formative assessment
strategies can be developed through a collaborative approach amongst teachers within a PLT.
The findings from my research contribute new knowledge and deep understanding of the
development of formative assessment practices in primary schools, and add to the literature
on the implementation of effective formative assessment in schools.
My interest in formative assessment
Having completed a Master of Education in the year prior to commencing this study, my
interest in research on formative assessment arose both from my desire to continue my
postgraduate studies and to focus my research on improving learning and teaching in the
international school where I was teaching at the time of the action research component of the
study. The then vice principal of my school had explained that assessment and reporting was an
area of focus for improvement in learning and teaching in the school. Looking back, I realise
that at the commencement of the study, my own knowledge of assessment and reporting was
limited and in some instances, simplistic. Like many of my colleagues, I understood
assessment to be linked mostly with grading or ranking a student in a summative way for
reporting purposes, at the end of a unit of study. I began to read literature on the topic to
understand the theories behind assessment and their application in the classroom. As my
understanding developed, I realised the school could benefit from focused action research on
how to implement assessment to improve student learning.
At the commencement of my study, other teachers in the school mainly saw assessment as
summative, or related to grading or ranking the students. During planning, assessment was
only raised whilst discussing the grading of students during the reporting period. My own
understanding of the deeper role that assessment can have in learning grew through
my literature review. I am a dedicated teacher, eager to learn about and improve my teaching
with my colleagues, but prior to conducting this study, I had in fact never had a conversation
about using assessment to improve student learning. As I had not been exposed to evolving
theoretical understanding of the purpose, role and processes of assessment in my undergraduate
degree or in my ongoing professional development, I realised that it was likely there were other
teachers like me, who could also benefit from deepening their understanding and application of
the concept of formative assessment. As Duckor (2014) argues, all teachers require a clear
understanding of ‘which practices are most effective, when to deploy them, and why a
particular combination actually worked for a particular student in a particular classroom’ (p.
28). My initial literature review demonstrated how important formative assessment is for
learning and teaching, and how important it is for all teachers to understand the different
formative assessment strategies that can be used in the classroom to improve student learning.
The evolution of the study
Phase 1 of this study involved identifying, planning for and developing different formative
assessment strategies in the classroom through an action research project involving two
teachers and me in an international primary school in Hong Kong. After completing the phase
1 action research cycle, the importance of teacher collaboration in developing formative
assessment strategies began to emerge as a theme. After reviewing further literature, it became
clear there was a gap in the research regarding how teacher collaboration can be used to
develop formative assessment strategies. Therefore, I decided to commence phase 2 of the
study to investigate how teacher collaboration could play a role in developing formative
assessment strategies through joint planning, sharing and reflection in action, in a cyclical
manner consistent with action research (Mertler, 2006).
To validate phases 1 and 2 and to provide a further means of reflection on the processes of
embedding formative assessment in classroom learning and teaching, I then conducted a case
study of a primary school in Australia (phase 3). The school was chosen for its strong
reputation for effective learning and teaching, as well as its innovative curriculum and
assessment practices. Through analysis of semi-structured interviews and observations in the
school, I aimed to compare the formative assessment and collaboration strategies within this
school, with the processes that had been developed and achieved through our action research in
the professional learning team in Hong Kong. A further aim was to use the insights gained
through teacher interviews in this case study to deepen the conclusions for the overall study.
Conceptual understanding and evidence of the importance of formative assessment
My initial literature review convinced me of the importance of practitioner research on
formative assessment. My study draws on Black and Wiliam’s (1998b) research and
conceptual understanding of formative assessment strategies outlined by Clarke (2008, 2005,
2001) and Glasson (2009) (see literature review chapter 2). According to many academics,
Black and Wiliam’s (1998a & 1998b) studies have produced the most influential findings in
relation to formative assessment (Clarke, 2008, 2005; Earl, 2003; Glasson, 2009; Miller &
Lavin, 2007; Popham, 2008). Commissioned by the Assessment Reform Group (ARG), Black
and Wiliam (1998b) reviewed 250 studies and found that formative assessment improves
student learning with an effect size between 0.4 and 0.7. They considered this effect size larger
than most educational interventions. To provide context to what this effect size illustrates,
Black and Wiliam (1998b) argue that a 0.4 effect size would move an average student involved
in formative assessment practices to achieving the same as a student in the top 35% as
compared to those not exposed to formative assessment. If an effect size of 0.7 was achieved,
at the time, this would have moved England from the middle of the 41 countries involved in
TIMSS (the Third International Mathematics and Science Study) to the top 5. It is now
accepted that the effective use of formative assessment can lead to academic gains (Miller &
Lavin, 2007). However, Bennett (2011) argues that a 'major concern with the original Black
and Wiliam (1998b) review is the research covered is too disparate to be summarized
meaningfully through meta-analysis' (p. 11). In other words, the collection of evidence is too
diverse in its focus to categorise as a single effect size or a range of effect sizes. I also have
reservations about reducing formative assessment to a single quantitative effect size, because
what matters most is how formative assessment can improve student learning when
implemented effectively.
Black and Wiliam (1998b) also found that formative assessment helped low achievers more
than any other group of students, reducing the range of achievement in the classroom while
raising overall standards. Other findings concluded that formative assessment impacted
positively on students of all ages, regardless of gender, across countries. They also found that
self-assessment has a significant role in formative assessment, giving students more ownership of their
learning. Black and Wiliam’s (1998b) landmark study and research that has taken place since
in relation to formative assessment (see for instance Clarke, 2008, 2005, 2001; Glasson, 2009;
Wiliam, 2011; Wyatt-Smith, Klenowski & Colbert, 2014) reinforces the importance of
improving the quality of teacher practice in the field of formative assessment, in order to
improve student learning.
Scope and limitations
My study was conducted across two schools over a period of six years. This included two
phases of action research within the one school in Hong Kong and a third phase focused on a
case study school in Australia. I triangulated and deepened my analysis through the phase 3
case study. The intention of the study was not to generalise; rather it was to provide deep
qualitative insights through practitioner research (Maykut & Morehouse, 1994) into
formative assessment and collaboration in two contexts; Hong Kong and Australia. I do not
claim that the results should be generalised to schools in all contexts. However, the findings
on the impact on student learning of the multiple formative strategies implemented during all
phases of the study provide evidence of the importance of formative assessment in improving
student learning.
Thesis outline and structure
This thesis is divided into eight chapters. Below is a brief summary of each
chapter:
Chapter one ‘Introduction’ introduces the study, outlining the aims of the research, the research
questions, the conceptual framework and the rationale behind the research. How the study
evolved and my interest in the research are discussed, as well as the scope and limitations.
Chapter two ‘Literature review: Assessment’ comprises the first part of the review that
explores formative assessment in the literature. This includes the history of assessment and
the rationale for a more focused development of formative assessment in the classroom. The
literature review discusses critiques of traditional approaches to assessment and the emerging
view that assessment practices should focus on formative approaches to improve
student learning. In the literature review, I discuss the benefits of formative assessment and
how these findings informed my research.
Chapter three ‘Literature review: Teacher collaboration’ is the second part of the literature
review, focused on professional learning teams (PLTs), particularly in relation to the
effectiveness of collaboration. The chapter explores the key features of PLTs, their
effectiveness in relation to student learning, and how this literature informed phase 2 of my
study.
Chapter four ‘Research methodology’ explains the methodology used for this research and the
rationale behind the multi-method approach. An overview is provided on the selection of
participants for the action research and the case study. The processes used for data collection
and analysis are explained in detail.
Chapter five ‘Developing formative assessment at Matilda International School’ examines
the development of formative assessment strategies through phase 1 of the action research in a
school in Hong Kong involving two participating teachers and my own class.
Chapter six ‘Developing formative assessment through collaboration’ discusses phase 2 of
the action research in this study and explores how the Grade 1 PLT collaborated to develop
formative assessment strategies.
Chapter seven ‘Case study of Western College’ reports on the case study of a school in
Australia renowned for its innovative assessment practices. It provides a comparison with the
findings from phases 1 and 2, together with a discussion of their validation.
Chapter eight ‘Conclusion’ provides the conclusions and recommendations from the research
study. The chapter summarises the key findings from the two phases of action research and the
case study. The significance of the research is discussed and recommendations are made with
regard to how formative assessment can be implemented within schools, including how this
can be achieved through collaboration within a professional learning team.
Chapter 2: Literature review: Assessment
One of the outstanding features of studies of assessment in recent years has
been the shift in the focus of attention, towards greater interest in the
interactions between assessment and classroom learning and away from
concentration on the properties of restricted forms of tests which are only
weakly linked to the learning experiences of students. This shift has been
coupled with many expressions of hope that improvement in classroom
assessment will make a strong contribution to the improvement of learning.
(Black & Wiliam, 1998a, p. 7)
Introduction
This literature review focuses on an exploration of research on assessment in education
pertinent to my study. Specifically, the focus is on the discussion of literature related to
contemporary views on the concepts and nature of formative assessment, as well as the
application and use of this kind of assessment in school classrooms. The aim of this review
was to establish a thorough understanding of formative assessment in order to inform the
action research I planned to implement in my own school and classroom and the ongoing
development of my study. The review discusses the key strategies of formative assessment
which include: stating the learning intention, developing the success criteria, use of effective
teacher questioning, teacher feedback and the use of self and peer-assessment (Clarke, 2008,
2005, 2001; Glasson, 2009). Given the change in mindset in recent years in how assessment is
viewed amongst practitioners and academics, traditional methods of assessment are reviewed
as background to understanding contemporary views focused on the importance of forging
stronger links between assessment and student learning.
History of assessment
The word ‘assessment’ derives from the Latin word assidere, meaning to sit next to or with
someone (Wiggins, 1993). Earl (2003) argues that the idea of assessing students has existed
for centuries. The earliest recordings of assessment come from the days of Aristotle, when he
encouraged his students to make public presentations. Popham (2002) notes that one of the
first written assessments documented was for the Chinese civil service examination to enter
into higher public office. These assessments were in place over 2000 years ago and the system
aimed to use test attainment, rather than parentage or patronage, as a means to success
(Black, 1998). During the industrial revolution when schools were modelled on factory
assembly lines, assessment was used to determine who would pass through to the next grade
and who would leave school early to work in factories, mines and farms. After World War II,
standardised assessment was introduced in the United States (Popham, 2002). Up until this
point in time, assessment had a limited purpose: it was used to grade or rank a student (Earl,
2003). For many years, this was the accepted use for assessment, as education was seen to be
appropriate for the select few in certain sectors of society. However, in the last two decades,
the research on assessment has shifted towards the focus on the impact assessment has on
student learning, and the exploration of formative assessment benefitting ‘teachers’
instructional decisions’ (Popham, 2008, p. 4).
History of formative assessment
It was not until the 1960s and 1970s that academics began using the terminology of formative
assessment and linking it to student learning (Pryor & Crossouard, 2007). Scriven (1967) first
introduced the term formative evaluation in connection with education (Black & Wiliam, 2003;
Clarke, 2008; Popham, 2008; Pryor & Crossouard, 2007; Scriven, 1967). Bloom, Hastings and
Madaus (1971) used the term ‘summative evaluation’ (p. 117) to define the type of tests given
at the end of a unit or course for the purpose of grading or certifying students and to assess the
implementation of a curriculum. In contrast to this evaluation, Bloom et al. also identified
another type of evaluation that teachers and students would find useful in improving student
learning; ‘formative evaluation’ (p. 118). It was identified that formative evaluation was more
effective in helping students to improve their learning. It became clear to Bloom et al.
(1971) that the distinction between formative and summative could be found in the purpose of
the assessment and that the processes were similar (Pryor & Crossouard, 2007). In my study, I
aimed to engage in a deep exploration, with my colleagues, to identify formative assessment
strategies to improve student learning, rather than the traditional notion of merely utilising
summative assessment.
Defining the purpose of assessment
Black and Wiliam (1998a) identified a shift in understanding of what assessment is and how it
is used in schools. They stated that a shift needs to occur from test-oriented assessment to
classroom assessment that improves student learning. This shift is reflected in definitions of
assessment that came after Black and Wiliam’s (1998a) research. There is now a deepening
understanding of assessment and formative assessment, however, no single definition is used
amongst researchers or educators. Therefore, it is important to explore a range of definitions
used by academics.
According to Earl (2003), assessment is diverse, complex and dynamic and should not be
viewed as a single entity distinct from learning and teaching. Wilson and Wingjan (2003)
define assessment as the process of gathering data to establish individual students’ progress,
improve their learning and provide information for the teacher to plan accordingly for the next
stage of learning. The Department of Education and Training, Victoria (2013) identified
assessment as the process of identifying, gathering and understanding information from
students’ learning, and views formative assessment as assessment for learning and assessment
as learning. Education Scotland (2012) considers assessment as important for tracking student
progress, planning for next stages in learning, reporting and involving parents, children and
teachers in the process of learning. Black and Wiliam (2007) argue that there are other
purposes of assessment that go beyond improving learning. They view assessment as the
interpretation of evidence about the knowledge, skills and understanding of learners. They also
state that it has the purpose of reporting on individual achievements and satisfying the demands
of being accountable to the public and to parents.
Defining formative assessment
Current thinking about learning acknowledges that learners must ultimately be
responsible for their learning since no one else can do it for them. Thus
assessment for learning must involve pupils, so as to provide them with
information about how well they are doing and guide their subsequent efforts.
(Assessment Reform Group, 1999, p. 7)
Defining formative assessment as a distinct form of assessment is important for my study.
Traditionally, formative assessment was seen to inform teaching practice through summative
assessments, typically through the use of tests (Glasson, 2009). Teachers would use the
information provided by the results to change the curriculum for the following year, rather than
change the performance of the individual student who provided the information. In recent
years, there has been a shift in how formative assessment is developed and who is involved in
the process. Although formative assessment does not have an official or universally accepted
definition, there is one common feature amongst all definitions: it exists to improve student
learning (Popham, 2008). One definition of formative assessment is ‘to establish what
progress a student is making during learning and to give feedback on it’ (Cotton, 1995, p. 24).
A further definition, developed a few years later by the ARG (2002), describes formative
assessment as the student and their teacher using evidence of student learning to decide where
the learner is, where they need to go and how they will get there. Absolum (2010) argues that:
In order to learn, you need information about what you want to be able to
understand and do, and about what you currently understand and can do, so
that any gap between the two can be made apparent. Learning is about attempts
to reduce the gap. Assessment is the process of gaining information from the
gap. Learning is impossible without the learner engaging in at least tacit
assessment of the nature of that gap. (p. 103)
Clarke (2008) provides a similar definition by stating that it is any practice which helps the
learner understand how to improve. This definition acknowledges that ultimately, the learner
plays a significant role in improving their learning. However, Yorke (2003) argues formative
assessment ‘is more complex than it might appear at first’ (p. 478). The complexity comes
from what Rowntree (1987) describes as a range of approaches, from the ‘very informal,
almost casual, to the highly formal, perhaps even ritualistic’ (pp. 4-5).
The ARG (2002) released 10 principles as a guide for teachers to use when considering
formative assessment for the classroom. They argue that it:
• Is part of effective planning
• Focuses on how pupils learn
• Is central to classroom practice
• Is a key professional skill
• Is sensitive and constructive
• Fosters motivation
• Promotes understanding of goals and criteria
• Helps learners know how to improve
• Develops the capacity for self-assessment and peer-assessment and
• Recognises all educational achievement. (p. 2)
Teachers need to consider these ten principles when implementing formative assessment in the
classroom. The ARG (2002) argue that these principles will ‘safeguard the necessary quality
learning experiences needed to achieve the goals of education’ (p. 3). Clarke (2008, 2005,
2001) and Glasson (2009) translated both the ARG’s (2002) principles and Black and Wiliam’s
(1998b) research into practical strategies for teachers to implement in the classroom. They identified
six key formative assessment strategies that teachers can use to improve student learning.
These strategies include:
• Stating the learning intention
• Developing success criteria
• Effective teacher questioning
• Teacher feedback
• Self-assessment
• Peer-assessment.
Clarke (2005) states that the student should be involved in each strategy as an active learner
with the role of the teacher moving from ‘controller to coordinator’ (p. 6). This is based on the
view that every student can improve and become an effective learner (O’Connor, Evans &
Craig, 2009). This important research into key formative assessment strategies was used as a
framework for the development of the first phase of my study involving action research in my
own classroom and school.
Assessment for, as and of learning
The use of assessment in schools can be defined by its purpose. A common approach to
identifying the purpose of assessment is to classify it into three different areas:
• Assessment for learning
• Assessment of learning and
• Assessment as learning. (Clarke, 2008; Department of Education and Training,
Victoria, 2013; Earl, 2003)
Table 1 (below) provides an outline of the three types of assessment synthesised from the
literature. The term ‘formative assessment’ regularly refers to assessment for learning
when describing teacher-driven assessment to improve student learning, and to assessment as
learning when describing assessment in which students have the primary role to play in
improving their own learning. Summative assessment is also identified as assessment of
learning. In my study, the focus is on assessment that improves student learning. Therefore,
assessment for learning and assessment as learning are further discussed in this literature
review.
Assessment for Learning
Assessment for learning involves the teacher interpreting evidence of student learning to
decide where the learner is, where they need to improve and how best to get there
(Assessment Reform Group, 2002). Teachers are at the centre of this approach through their
knowledge of the student, their understanding of the context of the assessment and the
curriculum targets to establish the relevant learning needs (Earl, 2003).

Assessment as Learning
Assessment as learning is when the student is active, engaged and critically assessing their
own learning (Earl, 2003). The student reflects on and monitors their own learning in order to
inform their future learning (Department of Education and Training, Victoria, 2013). The
ultimate goal is for the student to take ownership of his or her learning (Earl, 2003).

Assessment of Learning
Assessment of learning is another term for summative assessment. The purpose of this form
of assessment is to summarise what the student knows in order to report on their progress and
achievements (Assessment Reform Group, 2006; Earl, 2003). Assessment of learning is
typically completed at the end of a unit, course or key stage (Earl, 2003). The student is given
an opportunity to demonstrate what they have learnt throughout a unit of learning in what is
usually a teacher-initiated assessment.

Table 1: Assessment for, as and of learning
Assessment in International Baccalaureate schools
This section of the literature review explores assessment in International Baccalaureate (IB)
schools, given that my study was undertaken in two IB Primary Years Programme (PYP)
schools. The IB’s stance on assessment is outlined in the ‘Programme standards and practices’
(2005), section C4:
There is an agreed approach to assessment, and to the recording and reporting
of assessment data, which reflects the practices and requirements of the
programme. (p. 13)
The IB defines assessment as requiring both formative and summative approaches
(International Baccalaureate Organization, 2007). The main stated purpose of assessment in
the PYP is to:
…provide feedback on the learning process… Teachers need to select
assessment strategies and design assessment instruments to reflect clearly the
particular learning outcomes on which they intend to report. They need to
employ a range of strategies for assessing student work that take into account
the diverse, complicated and sophisticated ways that individual students use to
understand their experiences. Additionally, the PYP stresses the importance of
both student and teacher self-assessment and reflection.
(International Baccalaureate Organization, 2009, p. 13)
Assessment is central to guiding students through the five essential elements of the PYP: ‘the
acquisition of knowledge, the understanding of concepts, the mastering of skills, the
development of attitudes and the decision to take action’ (International Baccalaureate
Organization, 2009, p. 44). Summative assessment provides an insight into the students’
understanding and formative assessment provides information that is used to plan the next
stage of learning. Giving regular feedback helps to improve students’ knowledge and
understanding and fosters an enthusiasm for learning. The IB states that there is evidence that
formative assessment helps low achievers to make significant gains in their understanding
(International Baccalaureate Organization, 2009). O’Connor, Evans & Craig (2009) comment
that in the PYP, formative assessment is at the centre of assessment.
The IB places formative assessment at the forefront of its assessment policy. It also
accepts that there is a range of assessment strategies that can be used in the classroom, as
stated in sections C4.4 and C4.5 (International Baccalaureate Organization, 2005). These
stipulate that there must be a balance between formative and summative assessment and that
students must be involved in both peer and self-assessment. Section C4.7 states that teachers should
provide students with regular and prompt feedback to improve student learning. This
information from the literature review provided further justification for my study, since there is
clearly a need for studies which provide insights into how these policies on assessment can be
put into practice, given the reality that many teachers in schools still lack a deep understanding
of the purpose, processes and pedagogy of effective formative assessment.
The IB does not differentiate between assessment for learning and assessment as learning,
since it uses the terminology of formative assessment for both. Since the schools
in this research were authorised IB schools, the terminology ‘formative’ and ‘summative’
assessment will be used throughout the study to ensure consistency with IB terminology.
Traditional assessment
Wyatt-Smith, Klenowski and Colbert (2014) state that:
Traditionally, curriculum and teaching have been given privileged attention,
with assessment understood as separate, and in some contexts, the
responsibility of systems or examinations boards (as distinct from teachers).
(p. 14)
Earl (2003) identifies traditional assessment as assessment that predominantly measures
what a student has achieved, with the information used to categorise students and report findings
to key stakeholders including parents, universities and governments. The concerns raised
about traditional assessment in the literature are that it is not focused on improving student
learning. It is often viewed as something in competition with learning and teaching, rather than
an integral part of it (Heritage, 2007). Whilst this type of assessment is important for
gathering information about what students have achieved, and how they compare to other
students, the concerns raised by academics and educators centre around the extent to which
traditional assessment is utilised, leaving no opportunity for other approaches to assessment
(Earl, 2003). Standardised testing, grading and summative assessment are all part of traditional
assessment and do not improve student learning when used as tools for standalone assessment
(Assessment Reform Group, 1999).
Frequent testing where marks are taken and given back to students is considered a form of
summative assessment (Black & Wiliam, 2007). If students are given time to review their
responses and attempt to improve in areas of need, then it becomes assessment for
improving students' learning, otherwise known as formative assessment. Most of the recent
literature focuses on the use of formative assessment to improve student learning (Absolum,
2010; Black & Wiliam, 1998a, 1998b; Clarke, 2008, 2005, 2001; Earl, 2003; Glasson, 2009;
Wiliam, 2011; Wyatt-Smith, Klenowski & Colbert, 2014). However, it remains imperative to
understand the issues related to traditional assessment and why it cannot be relied on as the
only form of assessment in the classroom. The concerns raised have helped to shape the
development of my action research project. The literature demonstrated that this type of
assessment alone does not improve student learning, and teachers therefore need to find
practical strategies to inform their teaching that include various formative assessment
approaches.
Standardised testing
Standardised testing first appeared in the United States during World War I, when the Army
Alpha test was employed to compare potential officer candidates (Popham, 2002). National
standardised achievement tests made their appearance shortly after the same war (Popham,
2008). Since World War II, standardised testing has become the norm across many countries,
with governments investing substantial funds to carry out these tests. Haertel (1999) found three
main purposes for this type of testing:
• Providing information on schools' accountability
• Providing media attention to educational issues
• Changing teaching practice by influencing curriculum and instructional practices.
Guskey (2003) identifies standardised testing as ‘large-scale testing’ (p. 6). He contends it is
used for ranking and ordering schools and students for the purpose of accountability and that
some of these assessments achieve this outcome. However, these types of assessments are not
effective tools for helping teachers improve their instruction or student learning. Guskey
(2003) raises three concerns regarding the ineffectiveness of standardised testing as follows:
• Students normally take the tests at the end of the school year, when teaching is finished
• Teachers do not receive any results until a later date, by which time it is likely students
have a different teacher
• The results do not provide substance or specific information required by students and
teachers for improvement on targeted outcomes.
Standardised testing spreads students out on a continuum, but does not provide teachers with
enough specific information to tell them how to move students forward in their learning (Earl,
2003). Heritage (2007) also states that teachers do not control how or when these types of
tests occur, or what or who is assessed, making it difficult to use the information
provided in their day-to-day instruction. Popham (2003) does concede that some
accountability tests can serve a useful purpose. He states that if the focus is on only a small
number of curricular aims, there is a clear description of what is being tested, and
individual reports are given to students regarding each curricular aim assessed, then an
educational purpose can be served. Earl (2003) also concedes there is an effort to improve test
scores, but argues there is little attention paid to what these scores mean in terms of student
learning. Teachers who have developed effective formative assessment strategies find the
scope for improving their practice is limited by the pressure of standardised assessment
that is also used as high-stakes testing (Black & Wiliam, 2007).
The IB (2007) does not prescribe or encourage the use of standardised testing, however, it
‘recognizes that there may be local, state, national requirement concerning the use of such tests
for many IB World Schools’ (p. 50). The case study school currently uses standardised
assessment in Grade 6 only in the primary section. This assessment is used to track cohort data
for informing long term curriculum decisions. As a result of very little use of external
standardised assessment, teachers are prepared to take more risks in developing assessment
strategies, giving a wider range of students a greater opportunity to succeed. However,
standardised testing is still commonly used for assessment, particularly by governments. This
remains an on-going debate in the United States, where students are undergoing more tests,
resulting in teachers teaching to the test (Jennings & Stark Renter, 2006).
In Australia, the National Assessment Program – Literacy and Numeracy (NAPLAN) has
caused much controversy with respect to whether it is in the ‘best interests’ of students (Wyn,
Turnbull & Grimshaw, 2014, p. 6). There have been arguments that it creates unnecessary
pressure for students as well as lengthy waiting time for schools before feedback is received.
However, some schools do use this data to inform future teaching and track cohort growth
across year levels. Klenowski and Wyatt-Smith (2014) emphasise that teachers still need to
understand curriculum standards and the learning continuum for their students in order to
improve learning for all. Generally, the literature on standardised testing criticises how and
why these assessments are used, as teachers frequently use other forms of assessment that can
more immediately inform the ways they plan to maximise student learning. The use of a
combination of approaches to assessment can be useful to students and teachers provided it has
a positive impact on learning (Harlen, 2012). Klenowski (2010) argues that any ‘new
developments in assessment must keep the support and improvement of student learning as the
major priority’ (p. 12). Standardised assessment was not directly relevant to my study,
although many IB schools are required to undertake some form of standardised assessment.
Klenowski and Wyatt-Smith (2014) contend that:
…the negative impact of high-stakes assessment on the quality of teaching and
learning becomes evident with the shift to a focus only on the results in the
evaluation of school performance and when sanctions are imposed. (p. 25)
This is not to say that standardised assessments ‘do not have a legitimate place in twenty-first
century schooling’ (Wyatt-Smith, Klenowski & Colbert, 2014, p. 3) in mapping cohort and
national trends for example. However, schools need to ensure they focus on using data from
these tests formatively to enable and improve student learning.
Grading
Sadler (1989) argues that an issue inherent in the concept of grading is that when the
teacher gives a grade or a score to a student with no discussion or criteria, attention is taken
away from the criteria for learning that informed the grade, to focus only on what mark the
student received. Therefore, Sadler (1989) concludes that giving grades to students can actually be
counterproductive as a way of improving learning, unless students know how these grades
relate to the learning tasks. According to Rennie (2005), grades also cause other issues in the
classroom. He states that there is no exact way of knowing what is meant when a student
receives an ‘A’ for a test or a subject unless it is very well connected to a marking rubric. It
can be subject to misinterpretation by anyone who reads it unless it is read in conjunction with
an explanation of what the learner was supposed to achieve. An ‘A’ is also the highest mark,
which provides no incentive to work harder, because there is nowhere to go beyond the highest
mark, unless this is discussed with the learner as part of a learning continuum. Rennie's (2005)
second observation is that grading does not provide less able students with credit for the
success they may have in improving their own learning. They may still end up with a ‘C’
regardless of the improvements they have made. One argument put forward for grading is the
perception that if a child receives a 'C', it forces them to consider why they did not get a 'B' or
an ‘A’ (Short, Harste & Burke, 1996). However, this is not always the case. The reality is that
some students see themselves as being ‘picked on’, some give up and others see themselves as
a ‘C student’. Marking and grading should not be used to compare students, but instead only
for highlighting the strengths and areas of development for each student and thus providing
them with feedback that can be used to enhance their learning (Earl, 2003). Through informal
conversations and observations in my own school, I was aware before I commenced my study
that this form of graded assessment without the use of clear criteria was prevalent across all
grades. Teachers relied heavily on assessing students in this manner, and in some classrooms,
this was the only form of assessment teachers implemented. When the school moved towards
implementing the PYP in the year prior to my arrival, their use of assessment was one of the
areas identified for improvement. Due to the heavy reliance on this form of traditional
assessment, my action research project had particular significance for learning and teachers’
pedagogical improvement. Accordingly, I found that senior school leaders and teacher
participants were willing to support the development and implementation of the action research
as they considered the research would benefit the school.
Summative assessment
Summative assessment occurs at the end of a unit of learning and is used to give a clear
understanding of what students have learnt (Tomlinson & McTighe, 2006). Unlike formative
assessment, where the purpose is to give feedback on the learning, summative assessment
provides the evidence once the unit of learning has ended (Ryan, Cooper & Tauer, 2013). The
IB views summative assessment as an integral part of the assessed curriculum, as it provides
clear insight into some aspects of students’ understanding (International Baccalaureate
Organization, 2007). If used effectively, this form of assessment in IB schools can
simultaneously assess several elements; it can improve student learning and the teaching
process, measure students’ understanding of the central idea and move students towards taking
action. In my study, I aimed to question this assumption and develop a deeper understanding
of other ways to assess learning as the learning unfolded. There is common agreement in the
literature that the purpose of summative assessment is to sum up student achievement and for
this then to be used predominantly for reporting to key stakeholders (International
Baccalaureate Organization, 2007; Sadler, 1989). For this reason, Sadler (1989) states that
summative assessment does not normally have an impact on learning, even though it can at
times have educational and personal consequences for students. It is acknowledged that
summative assessment does play a role in the classroom, but it is heavily relied upon as the
dominant assessment approach in some classrooms (Earl, 2003; Crooks, 1988), where tests and
tasks are given at the end of a unit rather than during the learning process. In my view,
summative assessment should play a dual role of fulfilling its purpose of documenting
students’ skills, knowledge and understanding and improving student learning. This is an
argument also put forward by Bennett (2011) who argues a ‘carefully crafted (summative
assessment) should also meet a secondary purpose of support for learning’ (p. 7). Summative
assessment can be used formatively, if students or teachers take the results and use them to
guide their approaches and activities that inform future learning.
Whilst the important role of summative assessment in the classroom is acknowledged, the
difficulty is that the principles underlying summative assessment are not transferable to
formative assessment (Sadler, 1989). It is concerning that teachers are using the tests to assess
the quantity of the students’ work and that teachers’ assessment time is taken up significantly
with grading or marking (Earl, 2003). This form of summative assessment has a strong
emphasis on comparing students with one another, and feedback mainly comes in the form of a
grade with little suggestion on how to improve (Earl, 2003). This could have detrimental
effects on some students’ self-esteem (Taylor, 2006). These types of tests reveal students who
do well on tests and students who don’t (Earl, 2003) but there is very little indication of the
mastery of learning. At the same time, the scoring is often too simplistic for students to
demonstrate the broad range of skills and knowledge they have developed.
The above criticisms of summative assessment are valid; however, there is still a
role for summative assessment in the classroom (Clarke, 2008; International Baccalaureate
Organization, 2007). According to Clarke (2008), the key is to create a balance with formative
assessment. Formative and summative assessments are not entirely incompatible with each
other. The underlying issue is creating the right balance (Black & Wiliam, 2007). The
abovementioned complications with summative assessment are particularly prevalent in PYP
schools, and therefore, need to be a focus within my research. When planning for the unit of
inquiry and the learning sequence in a PYP school, the summative assessment task is placed on
the first page. At the time of commencement of my study, I became aware that many teachers
believe this takes precedence over any other form of assessment. As a result, it appears that
more time is spent focusing on summative assessment task/s rather than developing the use of
formative assessment in the classroom. This reality provided a further justification to me for
the conduct of a study of formative assessment in my school.
Creating the right balance in the use of assessment
This imbalance of assessment in schools is a further justification for developing my action
research study. Many IB schools focus on developing their summative assessment tasks and
spend very little time focusing on the use of self-assessment and teacher feedback to improve
learning. Although the use of formative assessment has improved considerably over the past
few years, establishing the right balance between formative and summative assessment is
challenging for teachers. Earl (2003) argues that too many classrooms have a strong
dominance of summative assessment (assessment of learning) with formative assessment
(assessment for learning) and self-assessment (assessment as learning) receiving little attention
(see Figure 1, p. 36). Figure 2, the reconfigured assessment model (see p. 37), is a redesign
of the pyramid in which Earl (2003) argues the focus of assessment should be reversed.
In this model, formative assessment including assessment as and for learning have the
dominant role in the classroom, whereby assessment of learning is used when decisions need to
be made that require a judgment of a student’s learning. In this reconfiguration, assessment of
learning would have a very small role in the classroom.
If the major focus in the classroom becomes formative assessment, student learning is more
likely to improve and students would move towards thinking about their learning, rather than
marks or grading (Earl, 2003). The reconfiguration allows for all three assessment types to be
used in the class, but the key to effective assessment is to use the different styles purposefully
within a balanced assessment program (Earl, 2003; International Baccalaureate Organization,
2005). A balanced approach to assessment improves learning, increases student achievement
and will benefit the student and society as a whole (Wiliam, 2006). Due to concerns
surrounding the extent to which schools focus on traditional assessment, I planned for my
action research study to identify and develop the formative assessment strategies the literature
views as being able to improve student learning, and to capture how teachers implement these
approaches, as well as the impact on student learning. I aimed to avoid the traditional
approach outlined in Figure 1 (see p. 37) and to focus on what is here described as a
‘Reconfigured assessment pyramid’ (Earl, 2003, p. 27) in Figure 2.
Figure 1: Traditional assessment pyramid (Earl, 2003, p. 27)
Figure 2: Reconfigured assessment pyramid (Earl, 2003, p. 27)
Formative assessment strategies
The following section of the literature review is critical to my study because it has informed
the focus, structure and process of the action research that I will complete in the first phase of
my study. The practical formative assessment strategies I aim to implement and study in my
own and my colleagues’ classrooms include; sharing the learning intentions, developing the
38
success criteria, effective teacher questioning, teacher feedback, self-assessment and peer-
assessment as well as other deeper aspects of formative assessment that are discussed below.
Sharing the learning intentions
Since the literature review revealed that learning intentions should be at the centre of formative
assessment strategies, I decided I would delve further into the literature to establish how
teachers could develop and use the statement of the learning intention with students in
classrooms, and then use the suggested processes in the action research. Glasson (2009) argues
that:
Framing a learning intention and then sharing that learning intention with
students is a very powerful way for teachers to improve learning in their
classrooms. The establishment of a learning intention is the basis of everything
that follows in the lesson or series of lessons. (p. 10)
Sadler (1989) argues that assessment should no longer be a secret to students. Students need to
be participants in the learning curriculum. Glasson (2009) agrees that students need to become
a part of the assessment process, and thereby take responsibility for their own learning.
Establishing the learning intention (including being more precise about the aims, objectives
and goals) at the start of a lesson is the basis for everything that follows in that particular
lesson or following related lessons (Glasson, 2009). The learning intention states clearly what
the student is expected to know, understand, or be able to do as a result of the learning
activities they are involved in (Glasson, 2009). Figure 3 (see p. 39) depicts Glasson's (2009)
view of how learning intentions are vital for implementing other formative
assessment strategies that enable student learning.
Figure 3: How learning intentions fit (Glasson, 2009, p. 11)
Black and Wiliam (2009) argue that if the teacher does not develop learning intentions,
however implicit they might be, a situation of 'anything goes' will be created in the classroom
(p. 24). However, Knight (2008) contends that requiring the teacher to set the learning
objective undermines the creation of student autonomy by giving the creation of learning to the
teacher. I find these competing, or binary, views about learning intentions unhelpful, since
teachers should include student voice in the creation of learning environments and tasks, whilst
simultaneously taking into consideration the needs of the individual student. Teachers must also
be held accountable to ensure the students function in the discipline as effective learners (Black
& Wiliam, 2009).
Glasson (2009) argues that often the teacher understands why the class is engaged in a
particular activity, but students may be unaware of what they are learning and the context of
the activity. Clarke (2005) points out that it is important to separate the learning intention from
its context, to enable students to make connections so their learning can be applied to different
situations. A well thought out learning intention will direct students’ focus to the required
learning, and thereby allow students to see the difference between what they will learn and
what they will do (Glasson, 2009). As a practitioner, I see the value of stating exactly what the
learning intention is for the student, so they can make their own connections from the
beginning of the lesson. After seeing this view reinforced in the literature, my decision to
investigate my practice and to invite my colleagues to join me in action research on formative
assessment was strengthened. It was clear that I needed to include these important dimensions,
but with the caution that, at times, students need opportunities to define their own learning.
Developing the success criteria
Another area of literature pertinent to my study is the use and development of success criteria.
The views demonstrated in the literature on the causal relationship between success criteria and
learning intentions strengthened my intention to use and explore the use of success criteria in
the action research process. The ARG (2002) states that:
Communicating assessment criteria involves discussing them with learners,
using terms that they can understand, providing examples of how the criteria
can be met in practice and engaging learners in peer and self-assessment. (p. 2)
Success criteria are the indicators that demonstrate to students what steps and learning tasks
they need to complete to achieve the learning intention (Clarke, 2005). Elwood and Klenowski (2002)
argue that to promote a 'shared understanding of assessment practice' (p. 254), the criteria
students are being assessed against should be made explicit to them. Success criteria can help
both the student and teacher know whether the learning intention has been met, as the two are
directly linked (Glasson, 2009). Sharing the learning intention with students helps
them focus their thinking on what is required in their learning (Clarke, 2005). Clarke (2005)
adds that determining the success criteria with students ensures they are able to apply
‘appropriate focus, clarify understanding, identify success, determine difficulties, discuss
strategies for improvement (and) reflect on overall progress’ (p. 37). As an assessor, the
teacher possesses an understanding of the success criteria related to the context of the lesson,
and therefore, needs to ensure that knowledge is made clear to the students (Pryor &
Crossouard, 2007). According to Glasson (2009), different success criteria that can be used
include:
Performance criteria provide the opportunity for an evaluation judgment on the quality of
work from the student. For younger students, these are commonly used with smiley faces and
‘I can’ statements (i.e. I can use tools safely, I cooperated well in my group, I gave this project
my best effort).
Rubrics are designed for the purpose of giving clear indicators on what needs to be done to
achieve the learning intention/s. Although frequently used as a summative tool where students
are marked against the rubric at the end of the task, rubrics can be used in a formative way by
evaluating the student during the progress, thereby giving them the opportunity to improve
their performance.
Process criteria are a series of steps that students need to complete in order to meet the
learning intention. Process criteria are not used to examine the quality of the work, and so may
be used when introducing a genre or a set procedure.
Checklists are similar to process criteria. Checklists have a significant self-assessment aspect
to them in that they provide guidelines for students to tick off as they complete a task. They
are also effective for peer-assessment, where peers provide feedback on the completed task.
The Education Services Australia (2014) website states that for success criteria to support
student learning, the following basics need to be considered:
• Be provided in language that students are likely to understand
• Be limited in number so students are not overwhelmed by the scope of the task
• Focus on the learning and not on aspects of behaviour (e.g. paying
attention, contributing, meeting deadlines)
• Be supported, where necessary, by exemplars or work samples which make
their meaning clear (this is probably particularly relevant in the case of
rubrics)
• Be created, ideally, with input from students so that they have greater
understanding and ownership.
Chappuis (2005) argues that when determining the success criteria with students, teachers need
to use exemplars or a work sample to show students what is defined as excellent
work. Students evaluating anonymous work samples for quality could achieve this
result. Students should be asked to defend their judgment, as this will help develop the
necessary self-assessment skills students need to improve their own learning. This will also
help to alleviate any confusion between what the teacher and the student think is acceptable
work (Tomlinson & McTighe, 2006). As well as strong exemplars, teachers should also use
weak examples (Chappuis, 2005). Whilst some teachers may feel that students could risk the
quality of their work by attempting to mirror the example, it can help students identify areas for
improvement in their own learning, in order to gain a clearer understanding of what constitutes
quality learning. In my opinion, teachers also need to provide opportunities for students to be
involved in developing success criteria and give them time to reflect on their own learning
against the criteria, in order to lead to the desired results for students’ learning. In my study, I
will include some use of exemplars, but also plan to use a range of other strategies to
encourage student learning for comparative purposes.
Effective teacher questioning
The literature review also informed my decision to explore how questioning plays a role in
relation to formative assessment as part of this study. Glasson (2009) argues that:
Strategic questioning refers to the careful and deliberate use of questioning in
order to elicit information from students about what it is that they know and can
do, and the formative use of that information to shape future teaching and
learning. (p. 6)
Clarke (2008) agrees that the kinds of questions teachers ask will further deepen students’
understanding. Black and Wiliam (1998b) argue that discussions leading to students talking
about their understanding in their own way increase knowledge and improve understanding.
However, they acknowledge that teachers quite often unintentionally respond to student
thinking in a way that inhibits future learning. Black and Wiliam (1998b) contend when
teachers manipulate student thinking to elicit the response they are looking for:
The teacher seals off any unusual, often thoughtful but unorthodox, attempts by
pupils to work out their own answers. Over time, the pupils get the message:
they are not required to think out their own answers. The object of the exercise
is to work out - or guess - what answer the teacher expects to see or hear.
(p. 143)
Another concern Black and Wiliam (1998b) raise about questioning is the lack of wait
time provided by teachers for students to answer, resulting in teachers answering their own
questions, followed by more questions being asked in rapid succession. The consequence of this
style of questioning is that some students do not see any point in trying, since they know the
next question will be asked within a few seconds. They are also unable to respond as quickly
as some of their peers and are unwilling to make a mistake in front of them.
According to Glasson (2009), research has identified key aspects of questioning that can
contribute to its effectiveness as a teaching tool. These specifically include:
• identify, as part of planning, the key questions that are to be the focus of a
lesson
• use open questions, which demand higher order thinking
• provide for ‘wait time’ or ‘thinking time’ to allow students time to consider
a question before they offer their responses
• use prompts to encourage students to produce a response or to elaborate on
a response
• make use of answers that display faulty thinking
• model positive listening behaviour
• distribute questions around the classroom
• encourage students to ask questions. (p. 40)
Black and Wiliam (1998b) also identify ways to improve teacher questioning: asking
students to discuss their thinking in pairs or groups, giving a choice between possible answers,
and having students write their answers down and select one to share. According to Clarke (2008),
abolishing hands up in the class to answer questions and giving students the opportunity to
have thinking time provides a classroom environment that is more conducive to learning.
Pryor and Crossouard (2007) categorise two styles of formative assessment: convergent and
divergent assessment. During convergent assessment, questioning focuses on ensuring
students are successful at completing a predetermined skill, knowledge or understanding. The
questioning involves giving closed or pseudo-open questions that focus on students giving
what the teacher believes constitutes a correct response. Therefore, the focus of the lesson is
on the successful completion of the task, not the process. During divergent assessment,
questioning is focused on asking students questions they do not already have the answer
for. As a result, students see these questions as ‘helping questions’, rather than ‘testing
questions’ (p. 4) as they focus on reconstructing student’s reasoning (Pryor & Crossouard,
2007). This strategy of formative assessment has an impact on the teacher’s feedback as it
becomes exploratory and provocative, encouraging further discussion instead of correcting
mistakes.
Although teachers often naturally use questioning as a way of checking on students' learning,
there are times when this is unproductive in the classroom (Black & Wiliam, 1998b). Clarke’s
(2005) concern is that where teacher questioning focuses mostly on recall, social or managerial
questions, it has no assessment benefit. This type of questioning is important to ensure
classroom management is effective, but teachers need to be aware of what type of questioning
they are using and its purpose. Whilst Clarke (2005) accepts that this is a difficult area of
formative assessment to improve, it is one area that can result in a positive change in the
classroom. Effective questioning works as formative assessment when teachers ask
worthwhile and probing questions to elicit responses from students and use their professional
judgment to draw conclusions about what the students know and understand (Glasson,
2009). This formative information can be further used to adjust the teaching plan accordingly.
Questioning as formative assessment
Key questioning involves teachers being clear about what learning intention has been set for
the lesson and ensuring the questions directly relate to the relevant learning goal (Glasson,
2009). Key questions are used during the crucial points of a lesson and are based on the
knowledge and understanding that the teacher wants the student to demonstrate. These
questions are used frequently throughout the lesson and are used as a focus for both teachers
and students (Glasson, 2009).
Increasing the wait time after teacher questioning can assist in developing students’ responses
and discussions and increase the length of their answers (Black, Harrison, Lee, Marshall, &
Wiliam, 2004). Clarke (2008) also suggests pairing students to broaden student participation
and help to brainstorm their ideas. To ensure questioning is an effective formative assessment
tool, Black et al. (2004) suggest the following actions need to take place:
• More time needs to be spent framing questions that are worth asking and are therefore
critical to the development of students' understanding.
• Wait time needs to be increased to several seconds to give students time to think, and
everyone should be expected to have an answer and to contribute to the discussion.
Then all answers, whether they are correct or not, can be used to develop
understanding. Therefore, the goal is for improvement, rather than getting the answer
right the first time.
• Follow-up activities have to be meaningful and engaging and create opportunities to
extend students' thinking and understanding.
A number of academics have identified two types of questioning: closed and open questions
(Clarke, 2008; Glasson, 2009; Victorian Curriculum and Assessment Authority, 2007).
Although both types of questions are needed at different stages in the classroom, open
questions are viewed as more aligned with formative assessment questioning. How the
implementation of open questions in the classroom can impact on student learning is clearly
worth investigating in my research.
Closed questions
Closed questions ask for specific information that can often be answered in a short sentence
(Victorian Curriculum and Assessment Authority, 2007). Glasson (2009) argues these
questions typically begin with who, what, where and when. She states that they require
students to recall information they have either read or viewed. Closed questions do not attempt
to uncover why or how a student is thinking in a particular way.
These types of questions have a place in the classroom and can be used for quick revision of
what has been taught in a prior lesson (Clarke, 2008). The information garnered from this
revision can be used for a managerial purpose, in other words, it can make it clear to the
students where their focus needs to be (Glasson, 2009). However, there are times when closed
questions are used for the students to guess what is in the teacher’s head and this offers very
little assessment value for teachers (Clarke, 2008; Glasson, 2009). Clarke (2008) argues that
unless every student is asked the same question, one answer from a student cannot be an
indication of the class’ thinking. Rather, teachers need to implement alternative questioning
strategies such as ‘talk partners’ (Clarke, 2008, p. 35) to gauge the understanding of the whole
class.
Open questions
Open questions require students to think at a more in-depth level (Victorian Curriculum and
Assessment Authority, 2007). To successfully answer these types of questions, wait time
(Rowe, 1972) or think time (Stahl, 1994) is needed during which a teacher gives students some
time to process their response, before they answer. Duckor (2014) recommends sometimes
implementing a think-pair-share or diary entry routine after posing a question in class. This
will help teachers ensure there is sufficient think time given. This form of questioning is more
focused on trying to elicit responses from students to provide an understanding of their
thinking (Glasson, 2009). It can encourage students to articulate how they came to a certain
answer and engage them in ‘higher-order thinking skills such as analysis, synthesis, evaluation
and application’ (Glasson, 2009, p. 41). Open questions will also encourage students to reflect
on their learning and engage in evaluating their own learning (Glasson, 2009). Clarke (2008)
provides recommendations for how teachers may structure open questions including providing
a range of answers, using statements, starting with the end/answer and using opposing
standpoints. These types of questions require students to extend their thinking and learning.
When questioning students, rather than having them guess what is in the teacher’s head,
teachers should be using open-ended questioning strategies to work out what is going on inside
a student’s head (Glasson, 2009). However, teachers need to be aware of what the purpose of
open-ended questioning is, to ensure it has the intention of improving learning. Asking open-
ended questions with no clear understanding of what teachers hope to achieve can lead to
confusion and misunderstanding for students in the classroom.
The literature on questioning provided strong provocations to my thinking about how a focus
on questioning could be part of the action research, since it is an important part of formative
assessment.
Teacher feedback
The literature also reinforced my view that teacher feedback is an essential strategy of
formative assessment. Its purpose is to provide information for the learner on how they can
move forward in their learning. As Black & Wiliam (1998b) note:
When anyone is trying to learn, feedback about the effort has three elements:
recognition of desired goal, evidence about present position, and some
understanding of a way to close the gap between the two. (p. 144)
Rushton (2005) argues that ‘existing evidence supports the identification of feedback as the
central component of formative assessment’ (p. 509). A significant role for teachers in the
learning process is to provide feedback to students that provides direction and encourages
further learning (Black & Wiliam, 1998b; Clarke, 2005; Earl, 2003; Glasson, 2009;
International Baccalaureate Organization, 2007; Sadler, 1989). Because feedback is intended
to improve the learning of students, it cannot be an added extra to learning and teaching, but
must be ‘built into the teaching plans, which thereby will become both more flexible and more
complex’ (Black, 1998, p. 26). Feedback is a key strategy in formative assessment; as
Sadler (1989) states, teachers must provide information on
how successful a student is in relation to the learning intention and what the next steps in
learning are for that student to be successful. For the feedback to be effective, the teacher must
know what knowledge and skill is being learnt and indicate to the student how they can
improve their learning. Black and Wiliam (1998b) reiterate this by arguing that feedback for
any student should be about the quality of their work, with advice on how to improve, and
should avoid comparisons with other students. Scott (2000) agrees that for feedback to be
effective, it should help ‘students see themselves and their performance more clearly’ (p. 36).
Wiggins (1993) defines feedback as the information given to the ‘performer with direct,
useable insights into current performance based on tangible differences between current
performance and hoped for performance’ (p. 182). Similarly, Ramaprasad (1983) states that
feedback is the information given about the gap between actual student level and how to alter
that gap. Sadler (1989) argues that the information provided by the teacher is only feedback
when it actually alters the gap in learning. Tomlinson and McTighe (2006) identify effective
feedback through four qualities; feedback needs to be timely (immediate), specific to what the
student is supposed to be learning, understood by the learner and allows for adjustment.
Feedback can be provided in different ways (Earl, 2003). It can be formal or informal
feedback, given to individual students or to the whole class and it can also be evaluative or
descriptive (Gipps, McCallum & Hargreaves, 2000). The research identifies evaluative
feedback with summative assessment and descriptive feedback with formative
assessment. Reflecting upon my early years as a classroom teacher, I realise that evaluative
feedback was prevalent in my teaching. I made no attempt to differentiate between the types of
feedback given, nor did I have the understanding to do so. Through my informal conversations
with teachers, prior to the commencement of my study, it appeared that this was a common
issue. Consequently, I felt that feedback should become another focus in relation to
assessment for learning for my action research.
Evaluative feedback
Evaluative feedback comes in the form of grades and short comments, often praising or
criticising the student’s work (Earl, 2003). This type of feedback indicates to students whether
they have performed the task correctly or not, but offers little or no direction of the areas in
which to improve their learning. This feedback is seen as summative feedback, with the
information usually used on reports and given to parents (Glasson, 2009). Early in my own
career, I consistently provided feedback that did not give students information that would
help them move forward. In my experience, when teachers are
unaware of different types of feedback, evaluative feedback appears to be more prevalent. In
my action research study, the aim is to assist teachers to move away from this type of feedback
to feedback that is connected to the learning intentions, so it is more likely to lead to
meaningful learning.
Descriptive feedback
Descriptive feedback is connected to the learning intention and provides suggestions on ways
students can improve their learning (Earl, 2003). This type of feedback is aligned with
formative assessment, as it helps the student move forward in their learning (Glasson,
2009). Gipps et al. (2000) state that descriptive feedback tells students whether they are right
or wrong (what they have achieved and have not achieved), explains why an answer might be
correct, and asks students to suggest ways in which they can improve their learning.
Earl (2003) states that teachers regularly provide written feedback in the form of grades or non-
specific short comments. Earl (2003) and Wiggins (1993) argue that this leaves students
feeling unsure where to go next, and does not support them with their learning. Chappuis and
Stiggins (2002) argue verbal comments such as ‘You need to study harder’, ‘Your handwriting
is very nice’, or ‘Good job’ (p. 42) are statements regularly used by teachers as feedback.
Although these comments may be positive, they do not help the student to move forward
(Tomlinson & McTighe, 2006). Black and Wiliam (1998b) state there is a need to replace this
type of judgmental feedback with clear descriptive feedback that is immediate. In my study, I
wanted to explore as a practitioner how teachers’ awareness of the type of feedback given,
and a focus on giving students clear feedback highlighting specific aspects of the
learning, could help students achieve their learning goals. Chappuis and Stiggins (2002)
also argue that evaluative feedback in the form of traditional statements by teachers of either
approval or disapproval of a student’s performance has limited value in improving student
learning and in some cases can have a negative impact. Rather, they argue that teachers who
focus their feedback on student learning increase student motivation and interest in
learning. As a researcher, the literature gave me insights into the importance of exploring how
to give clear feedback to the learner, for student improvement, and to focus more deeply on
this aspect of formative assessment in my study.
The literature review, therefore, reinforced my belief that I should investigate the power of
feedback, and encouraged me to include further emphasis on this in my study in order to
enhance the development of formative assessment in powerful ways. Descriptive feedback
should give students ways to enhance their learning in clear constructive language that focuses
on one area at a time, instead of focusing on a range of errors (Black & Wiliam, 1998b).
Wiggins (1993) argues that effective feedback enhances students’ learning when it is given
throughout the learning process as opposed to evaluating the end product. He also states it
needs to become a part of the student’s mental process whereby they learn how to assess
themselves. Good descriptive feedback gives explicit connection between the student’s
thinking and the possibilities that they should consider (Earl, 2003). Wiggins (1993) concludes
that learners are in need of information that will help them to self-assess and self-correct,
therefore, it needs to be integrated through the learning experience. Consequently, feedback is
most effective when it focuses on the learning expectations. Clarke (2008) advocates that
learning outcomes should be displayed around the classroom, to provide students a clear
understanding of what they are doing. If teachers are distracted by other factors such as neat
handwriting or student behaviour, this can give the student the impression that these other
factors are more important.
Wiggins (1993) believes there is a subtle difference between guidance and feedback. Guidance
gives direction, whereas feedback tells someone whether or not he or she is on course.
Guidance is teacher initiated and prescriptive; feedback engages the learner and actively
involves them. Students and teachers become partners in the process. These insights from the
literature provided vital input into variable and complex aspects of formative assessment to
inform my study.
Transition from feedback to self-monitoring
According to Sadler (1989), the transition from feedback to self-monitoring can only be
achieved when three conditions are met. First, students must possess an understanding of the
learning objective. Second, they must compare their current level of performance with the
standard provided. Third, they must engage in work that will lead to altering that gap. These
three conditions are fundamental and must take place simultaneously, rather than as sequential
steps. Self-monitoring is identified as an important student-directed strategy that encourages
independence, ‘external support is minimized, engagement and motivation are increased, and
learning is maximized’ (Agran et al., 2005, p. 3). As a result, students require less
teacher-directed support and have the potential to monitor off-task behaviour, giving the teacher more
time to focus on teaching (Boyd Bialas & Boon, 2010). Transitioning students from feedback
to self-monitoring is the aim of many instructional systems (Sadler, 1989).
Self-assessment
Yorke (2003) argues that whilst teacher formative assessment in the classroom is beneficial to
student learning, it does not always help students reach their full potential. The risk of the
teacher solely providing the feedback can create what Yorke (2003) describes as ‘learned
dependence’ (p. 489). This occurs when the learner relies on the teacher to provide the next
steps in learning and cannot move beyond what is required of them relating to a set task. It is
for this reason that self-assessment plays a significant role in learning. As one of the more
engaging formative assessment strategies, self-assessment has a key role in any use of
formative assessment in the classroom (Black & Wiliam, 1998b; Earl, 2003; Glasson, 2009).
Traditionally, students waited until post-performance to receive feedback (Scott, 2000).
However, the current literature emphasises encouraging students to review their performance
during their learning. I, therefore, decided to include a focus on
improving student awareness of their role in the assessment process as a significant element in
my action research. I am interested to investigate how this could play a role in the classroom. As
Glasson (2009) points out:
Student self-assessment focuses on encouraging students to take responsibility for
their own learning, to identify strengths and weaknesses, to be aware of how they
learn, to set learning targets, to act on feedback, and to be able to make judgments
about the quality of their work in relation to success criteria. (p. 6)
According to Glasson (2009), ‘student self-evaluation is the broader term that incorporates
meta-cognition of the self as a learner’ (p. 92). She argues that for students to monitor their
own learning successfully, they need to:
• understand both learning intentions and success criteria
• use these criteria to judge what they have learnt and what they still need to learn
• reflect on the learning process to ascertain how they learn best
• act on feedback received from the teacher and peers
• set learning targets based on what they still need to learn
• manage the organisation of their learning. (p. 92)
It is imperative for students to be involved as far as possible in the understanding and
constructive criticism of their own work (Clarke, 2005). Black and Wiliam (1998b) argue that
it is inevitable that formative assessment would link to self-assessment. This is the ultimate
goal, whereby students become their own assessors, and therefore, are in charge of their own
learning (Earl, 2003). Assessment as learning (self-assessment) occurs when students begin to
monitor their learning and use this to inform their future goals (Department of Education and
Training, Victoria, 2013). Earl (2003) argues that with the introduction of self-assessment,
students move forward in constructing meaning, develop self-monitoring skills, are
able to identify when they do not understand something, and have strategies to assist them in
deciding what to do next. Encouraging students to self-assess and reflect upon their own
learning helps develop important meta-cognitive skills to improve their own thinking and
learning (Keely, 2008). Guskey (2003) states that too many teachers mistakenly believe
assessment needs to be kept a secret from students, resulting in students’ belief that they have
to ‘guess’ what the teacher is thinking (p. 6). This should not be the case. It is imperative for
self-assessment to take place in primary schools so that it actually enhances the progression of
student learning (Towler & Broadfoot, 1992). Earl (2003) argues this can be achieved by
empowering the student to be involved in the assessment and learning process and to become a
critical ‘connector’ (p. 25) between the two.
Self-assessment and a student-centred classroom
Towler and Broadfoot (1992) argue that students’ involvement in the assessment process is a
natural addition to the student-centred classroom, since it re-examines the traditional
teacher-student relationship and rebalances the sharing of power (Munby, Phillips & Collinson,
1989). Students can be active, engaged and critical assessors by making sense of information,
relating it to their prior knowledge and mastering the skills involved (Earl, 2003). As Towler
and Broadfoot (1992) point out, sharing the responsibility for learning in the classroom with
the students helps them understand the expectation, improves their motivation and leads to a
sense of pride in positive achievements. Therefore, students need to be given the opportunity
to assess their own work and regulate what they are learning during the work (Sadler,
1989). According to Glasson (2009), for self-assessment to work effectively, students need to
ask themselves three questions:
• Where am I going?
• Where am I now?
• How can I get there? (p. 92)
Sadler (1989) argues that this will only work if students understand the learning outcomes they
are trying to achieve, so they can then compare their own level of performance with what is
required and furthermore take action to alter the gap between the two. As mentioned
previously, the key to achieving this is to remove the mystery of assessment, to enable students
to take a leading role (McTighe, 1996). A simple example of this is to provide students with
the opportunity to score themselves through a set criteria, which will see students shift their
mindset from ‘what did I get?’ to ‘now I know what I need to do to improve’ (McTighe, 1996,
p. 9). But for this to be successful in a student-centred classroom, students need to be
involved in the analysis and constructive criticism of their own learning, so that reflection,
pride in success, modification and improvement are a natural part of the learning process
(Clarke, 2005). However, it is important to acknowledge that unless this is made explicit to
students and they have a clear understanding of their responsibilities, this approach has the
potential to be ineffective.
Peer-assessment
In my study, I plan to develop peer-assessment amongst primary school children as part of the
action research. This section of the literature review provided important understanding of the
complexities involved in building this into practice. Black, Harrison, Lee, Marshall and
Wiliam (2003) argue that one of the key reasons for peer-assessment is that students often
receive and give criticism more freely than in the traditional teacher/student interaction. Another
advantage is that students use more natural language, rather than the formal school language
typically used by teachers.
Glasson (2009) defines peer feedback (although this thesis is using the terminology peer-
assessment) as students providing each other with advice about their work. Similar to teacher
feedback, it gives the student information about:
• what has been done well in relation to the success criteria
• what still needs to be done in order to achieve the success criteria
• advice on how to achieve that improvement. (p. 78)
If the teacher is the only one involved in giving feedback, the balance is ineffective, and
students become ‘powerless, with no stake in their learning’ (Clarke, 2005, p. 88). One of the
main reasons for using peer-assessment is that there are benefits for both the student receiving
the feedback and the student giving the feedback (Airasian, 1996). Glasson (2009) gives the
example of a student giving feedback on another student’s oral presentation; at the same time
they are thinking about how they can vary their own presentation in order to meet the success
criteria. Glasson states that what is important in this example is that the assessing and learning are
happening at the same time. Further, by involving students in peer-assessment, students begin
to see themselves as partners in learning together with the teacher (Heritage, 2007).
Peer-assessment is more effective when the culture in the classroom allows for it (Glasson,
2009). It is important for students to feel comfortable with each other and supported by their
peers. They need to feel they can take risks and make mistakes whilst at the same time, there is
a mutual respect for each other’s opinions. When creating an environment for peer-
assessment, age and gender need to be taken into consideration when deciding the best way to
give feedback (Glasson, 2009). Older students, especially girls, can be more sensitive towards
feedback, whereas younger students need to have it modelled first, before being asked to
provide feedback. Glasson (2009) argues that for peer-assessment to be effective, the success
criteria need to be very clear to all students and they must understand what the criteria
mean. Students also need to be provided with exemplars of work to give relevance to the
success criteria. Identifying quality learning in other students will in turn help students
identify quality in their own learning. Glasson (2009) and Black et al. (2003) contend that
peer-assessment is an important prerequisite to self-assessment, as it prepares students to assess
their own learning by being able to recognise quality in other student’s work.
Learning progressions and formative assessment
Learning progressions are particularly relevant to my study in relation to the planning of
learning intentions within a classroom. I am interested in investigating whether teacher
understanding of learning progressions could support developing a progression of learning
intentions towards a long-term goal. Heritage (2007) argues that:
The purpose of formative assessment is to provide feedback to teachers and
students during the course of learning about the gap between students’ current and
desired performance so that action can be taken to close the gap. To do this
effectively, teachers need to have in mind a continuum of how learning develops in
any particular knowledge domain so that they are able to locate students’ current
learning status and decide on pedagogical action to move students’ learning
forward. (p. 2)
As stated by Heritage (2007), ‘by its very nature, learning involves progression’ (p. 2). It is this
concept of progression that connects formative assessment and learning progressions. Popham (2007) defines
learning progressions as a ‘carefully sequenced set of building blocks that students must master
en route to a more distant curricular aim. The building blocks consist of sub-skills and bodies
of enabling knowledge’ (p. 83). The New Zealand Ministry of Education (2010) describes
learning progressions for literacy as ‘what students need to know and be able to do, at specific
points in their schooling, if they are to engage with the texts and tasks of the curriculum and
make the expected progress’ (p. 3). These definitions underpin the importance of the student
moving through a set of necessary skills that enables them to be successful in a long term goal.
Popham (2008) goes further in another study to describe learning progressions as the setting
whereby teachers and students can decide when to collect the evidence needed in relation to the
student’s capabilities towards the curricular aim. Using learning progressions for specific
outcomes enhances the teacher’s ability to provide relevant feedback to students on the target
they are attempting to achieve and helps teachers distinguish appropriate ‘adjustment decision
points’ (Marzano, 2013, p. 27). Learning progressions become the map that gives direction for
the most effective way to use formative assessment. Ideally, students need to be involved in
developing short-term goals for themselves, derived from a set learning
progression (Heritage, 2007).
Many educational systems are identifying learning progressions as an effective way to assist
teachers in planning and monitoring the curriculum and improving student learning (Popham,
2007). Student learning is differential and therefore, needs differential instruction, hence the
case for teachers understanding learning progressions to assist them with formative assessment
(Heritage, 2008). With specific learning goals determined in a learning progression, teachers
can match formative assessment opportunities and improve student learning in a systematic
way. Importantly, Marzano (2013) acknowledges that learning does not take place in a ‘strict
linear fashion’ (p. 83), in other words, teachers need the flexibility to address different
elements of the learning progression at different times.
Importantly, learning progressions provide opportunities for effective instructional planning.
They allow teachers to focus on the learning goals (learning intention) as opposed to what
activity students are going to do (Heritage, 2008). However, Popham (2007) argues that
learning progressions focusing on all the nuances of a student’s learning toward their long-term
goal can be too complex and therefore, difficult to monitor. Teachers need to identify the key
steps en route to the mastery of the curricular aim.
Conclusion
The aim of this chapter was to explore key literature and contemporary thinking on assessment
and in particular the use of formative assessment to inform my study. Through an analysis of
the literature, I found significant evidence to support the notion that formative assessment
improves student learning. The literature has moved beyond proving that formative
assessment improves student learning and now encourages research on how to implement
strategies in the classroom that will help students move forward
in their learning. The natural links between the strategies of formative assessment
demonstrate their relationship to learning and teaching; these strategies should not be seen as
separate entities.
For many schools, the pressing issue is no longer whether formative assessment can improve
student learning, but how best to implement the strategies of formative assessment that can
lead to the ultimate goal of students assessing their own learning (Earl, 2003).
Chapter 3: Literature review: teacher collaboration
International evidence suggests that educational reform’s progress depends on
teachers’ individual and collective capacity and its link with school-wide
capacity for promoting pupils’ learning. Building capacity is therefore critical.
Capacity is a complex blend of motivation, skill, positive learning,
organisational conditions and culture, and infrastructure of support. Put
together, it gives individuals, groups, whole school communities and school
systems the power to get involved in and sustain learning over time.
Developing professional learning communities (PLCs) appears to hold
considerable promise for capacity building for sustainable improvement.
(Stoll, Bolam, McMahon, Wallace & Thomas, 2006, p. 221)
Introduction
The focus of the first phase of the literature review (Chapter 2) was on developing insights on
the concept, scope and practices of formative assessment, as part of a wider discussion on
assessment. However, as the first phase of the action research showed teacher collaboration to
be important in the development of formative assessment, I decided to undertake a further
literature review to fully explore the theory and practice of collaboration. This led to the
formation of an additional research question for phase 2 of my action research: How can
collaboration in teacher professional learning teams help influence the development and
implementation of quality formative assessment strategies?
In the first phase of the study, as the teachers worked together, collaboration was identified as a
key factor in the planning of the strategies. However, I found I did not have sufficient insights
from the literature to inform the ways we could structure and develop our collaboration. Yet, I
wanted the second phase of the study at Matilda International School (MIS) to evolve with a
specific focus on the investigation of how collaboration within a professional learning team
(PLT) can assist and improve teachers’ capacity to develop formative assessment strategies.
After an initial review of the literature exploring various forms of teacher collaboration, I
decided to use the concept of a PLT instead of professional learning community (PLC) (Hord,
1997). The concept of PLC implies the involvement of a larger group such as a whole school
community, whereas the PLT we formed involved only teachers from the same Grade 1 year
level.
In the literature, both the terms community and team are used to describe a group of educators
coming together for a shared educational purpose and to improve student learning (Hord,
1997). I decided to use the acronym PLT with respect to any literature that discusses teams or
communities collaboratively working together in a systematic approach to improving student
learning. The acronym PLC is used instead of PLT where the literature being discussed uses
this term, and where accuracy of the referencing requires this term.
In this chapter, I explore how PLTs are defined and the principles that inform their success in
improving student learning. This literature provides a valuable framework for capturing and
analysing the functionality of the PLT in my study. The review also provides insights into how
collaboration can be a key element within a PLT.
Definition of a ‘professional learning team’
Defining a PLT within the literature is challenging, since there are no universally accepted
definitions (Stoll et al., 2006). The construct is difficult to define because various
labels for groups of professionals working together exist within the literature including
professional learning communities (DuFour & Marzano, 2011; Hord, 1997; Stoll et al., 2006),
professional learning teams (Griffin et al., 2010) and teacher learning communities
(McLaughlin & Talbert, 2006). While there is no one exhaustive definition, they all have
common features. Hord (1997) states that a critical element of PLCs is the development of an
on-going process that has educators working collaboratively towards a shared goal to improve
their effectiveness as professionals through shared learning for the benefit of student learning.
McLaughlin and Talbert (2006) argue it involves reflecting on ‘practice, to examine evidence
about the relationship between practice and student outcomes, and make changes that improve
teaching and learning’ (p. 4). Stoll and Louis (2007) state there is a general consensus that it is
‘a group of people sharing critically and interrogating their practice in an on-going, reflective,
collaborative, inclusive, learning-orientated, growth-promoting way’ (p. 2). The definition
most closely aligned to my study is that of DuFour, DuFour, Eaker and Many (2010), who
define the working of a professional learning community/team as an 'ongoing process in which
educators work collaboratively in recurring cycles of collective inquiry and action research to
achieve better results for the students they serve’ (p. 14).
To understand what a PLT is, it is imperative to consider each of the elements that make up the
construct. The word professional places the emphasis on work that requires specialised and
technical knowledge, a strong collective identity built through professional commitment, an
orientation towards the client, and professional autonomy (Stoll & Louis, 2007; Stoll et al., 2006).
Stoll and Louis (2007) argue ‘it is not insignificant’ (p. 2) that the word learning appears in the
title as it shifts the emphasis from a process towards the objective of improvement.
McLaughlin and Talbert (2001) state that not all professional teams are open to change or
concerned with improvement. Learning together in professional teams involves working
together towards a common goal through the shared understanding of concepts and/or practices
(Bryk, Camburn & Louis, 1999; Stoll et al., 2006).
The move away from teachers working as individuals gives significance to the word
community in PLC and team in PLT, although the distinction between the two does not change the
purpose. DuFour and Marzano (2011) identify teams as people working interdependently to
achieve a common goal, where everybody is accountable. Teams agree upon clear benchmarks
and measures to monitor student progress. They examine student data together to make
informed decisions about how to improve their practice. Much of what Hord (2009) describes
in relation to PLCs also relates to a successful team. She includes a focus on a shared purpose,
mutual respect and caring for each other, and an insistence on integrity and truthfulness.
Lambert (2003) argues that in ‘elevating our work in schools to the level required by a true
community, we must direct our energies and attention toward something greater than
ourselves’ (p. 4). DuFour’s (2011) focus on interdependence means that teams work
towards goals beyond what any individual could achieve alone. The elements of shared
vision, collaboration and learning together provide the necessary foundations for teachers
collectively to take responsibility for students’ success (McLaughlin & Talbert, 2006).
Collaboration in professional learning teams
Traditionally, the structure of schools created a ‘culture of professional isolation’ (DuFour &
Marzano, 2011, p. 50), leaving teachers to pursue improving student learning on their own,
behind closed doors, in individual classrooms. One of the reasons collaboration has become
the focus of attention is due to ineffective teaching practices related to teacher isolation
(Riordan & Gaffney, 2001) and the lack of shared and effective planning for learning that is
now seen to be an important part of professional work. Goodlad (1984) found that in isolated
classrooms where teachers worked on their own, planning and preparing their own lessons, they
struggled to solve most of their instructional, curricula and management problems. Fulton,
Yoon and Lee (2005) argue that a culture of working as a solo practitioner in teaching creates
isolation. They found this traditional approach to teaching was seen as the most persistent
issue preventing schools from improving. In contrast to a culture of isolation, Riordan and
Gaffney (2001) argue that collaborative practices can prevail over the negative impact of
teacher isolation. Little’s (1982) research on teachers working in isolated departments
reported improved student learning and classroom discipline when they met regularly to
work together on curriculum and specific classroom approaches. She noted that these meetings
amongst teachers became a consistent feature of the work. Professional teams emerged as a
concept that improved the wellbeing of teachers, and that could make a difference to student
achievement in the classroom (Louis, 2006). The shift from teachers working independently to
working interdependently has led to the concept of PLTs being explored further through research.
Rosenholtz (1989) found:
That when collaborative norms undergird achievement-oriented groups, they
bring new ideas, fresh ways of looking at things, and a stock of collective
knowledge that is more fruitful than any one person’s working alone. (p. 41)
Rosenholtz (1989) argues that this notion of teachers working together is at the very core of
what makes PLTs effective. Vescio, Ross and Adams (2006) state that PLTs are founded on
two assumptions:
• Knowledge exists in a teacher’s experiences and is best understood through critical
reflection with peers who share similar experiences (Buysse, Sparkman & Wesley,
2003).
• Having teachers involved in PLTs will increase their professional knowledge and
improve student learning.
Fullan (1990) states that there are numerous factors that impact on the effectiveness of
collaboration. This includes the characteristics of the task, the organisation itself and the
beliefs and perceptions that exist within it as well as the skills of individuals. Each of these
elements can enhance or derail collaboration. Teachers do learn ‘how to translate enhanced
curricula and higher standards into teaching and learning for all of their students’ through PLTs
(McLaughlin & Talbert, 1993a, p. 5). However, it is not merely the existence of a
learning community that drives improvement through reform; it is what the community chooses
as its focus that determines its success (Hord, 1997). In phase 2 of my study, the
Grade 1 team agreed that formative assessment was an area they believed could be improved in
their practice. With all members in agreement, there was a shared commitment to
collaborating on developing formative assessment strategies. The literature views this as a
pivotal requirement (Hord, 1997), and the teachers in my study believed that their collaboration
was worth the time and energy, and would lead to improved learning for the students. During
phase 2, further insights were needed into the key factors involved to ensure success in teams.
Factors ensuring successful collaboration in the PLT
Thompson, Gregg and Niska (2004) argue that the purposes and goals of a PLT should grow
amongst the participants based on their values, beliefs, and individual and shared experience.
DuFour and Marzano (2011) state that there are three clear ‘big ideas’ (p. 22) that drive the
PLC process that are summarised here:
1) Ensuring that all students learn
The PLT model is based on the assumption that the core focus of education is not just to ensure
that students are taught, but to ensure that they are learning as well (DuFour, 2004). When a
school shifts from a focus on teaching to a focus on learning, this can have profound
implications for the school and the learners. Staff begin to consider what they need to do to
create such a culture in the school. Staff who have built shared knowledge and are working
on common ground have a solid foundation upon which to move forward with initiatives. In
doing so, Hord (2009) argues that teachers ‘take responsibility to learn new content, strategies,
or approaches to increase’ (p. 40) their effectiveness in teaching. Teachers begin to ask
questions about the characteristics and practices that help all students learn. DuFour
(2004) argues that there are three essential questions that drive a PLT:
• What do we want each student to learn?
• How will we know when each student has learned it?
• How will we respond when a student experiences difficulty in learning? (p. 7)
DuFour (2004) states that it is the asking of the third question that ‘separates learning
communities from traditional schools’ (p. 7). DuFour and Marzano (2011) have since added a
fourth question: ‘How will we enrich and extend the learning for students who are
proficient?’ (p. 23).
2) A culture of collaboration
Schools that build PLTs understand that they must work together in order to achieve their
collective purpose of learning for all. Structures are put into place to promote a collaborative
culture. However, despite the overwhelming evidence that working together collaboratively
moves schools towards best practice, many teachers continue to work in isolation. The
powerful use of collaboration that underpins a PLT is a systematic process by which teachers
work together to analyse and improve their teaching. These communities work together in an
on-going cycle of questions that promote deep learning within the team (DuFour, 2004). The
collaborative discussions open up conversations that would traditionally have remained closed.
Goals, strategies, materials, pacing, questions, concerns and results are openly discussed in
order to produce the best opportunities for student learning. McLaughlin and Talbert (1993b)
argue that teachers who work in cohesive and highly collegial environments ‘report a high
level of innovativeness, energy, enthusiasm, and support for personal growth and learning’ (p.
244). However, Hargreaves (1994) is concerned about ‘contrived collegiality’ (p. 195) where
mandated collaboration is put in place by adjusting organisational structures and processes. He
argues this can have both a positive and negative effect on learning and teaching. One concern
raised is that when collaboration is focused on the goal of management as opposed to the goals
set by teachers aimed at improving student learning, benefits can be compromised.
3) A focus on results
PLTs should judge the effectiveness of their shared work through evidence of their results in
improving learning. DuFour and Marzano (2011) advocate that teachers must create a ‘results
orientation in order to know if students are learning’ (p. 24). They argue that teachers need to
be looking for evidence of student learning and use that to push for continuous improvement.
The aim of this is to improve individual practice as well as the practice of the collaborative
team. One approach endorsed by DuFour and Marzano (2011) is to use SMART goals that are:
1. Strategically aligned with school priorities
2. Measurable
3. Attainable
4. Results focused
5. Time appropriate (O’Neill & Conzemius, 2005)
In my research, I wanted to build understanding of how collaboration within a team could
improve the development of formative assessment strategies. Riordan and Gaffney (2001)
argue that collaboration can be a means to improve teacher practice and student learning if it is
supported appropriately within the school community. McLaughlin and Talbert (1993a) claim
that when experienced teachers have the opportunity for collaborative inquiry, the result is a
body of knowledge and understanding that can be shared amongst all colleagues. The feeling
of interdependence is vital to collaboration (Stoll et al., 2006). Little (2003) found that key
elements in successful schools include opportunities for teachers to engage in ongoing
collaborative opportunities, talk about their practice, receive constructive feedback, develop
lessons together and teach each other.
According to Riordan and Gaffney (2001) successful collaboration is described in the literature
as involving collegiality, colleagueship, cooperation, helping, peer coaching, peer sharing and
caring and consulting. They view teacher collaboration as a continuum from complete
independence (me or I) to interdependence (us and we) (see Figure 4). They contend that one
of the main benefits of collaborating with peers is the opportunity to learn from the experience
and expertise of peers, as teachers often identify their colleagues as the best source of information and
support.
Figure 4: Teacher peer relations continuum (Riordan & Gaffney, 2001, p. 6)
For collaboration to be effective within a school, Riordan and Gaffney (2001) identify factors
that can hinder or help its impact on student learning. They argue that for collaboration to be
given adequate emphasis in schools, teachers need to be provided with time to become
involved, both formally and informally. Other factors include administrative support, trust,
respect, shared control and responsibility for teachers. If these factors are present, there is the
necessary support for collaboration to be effective. It is these factors that are of particular
interest to me, as in the second phase of the action research, I wanted to investigate how the
teachers within the PLT would be able to develop the level of teacher collaboration discussed
in the literature in order to develop formative assessment strategies.
Louis (1994) argues that a key aspect of collaboration is collective learning, whereby the
community interacts, engages in conversation about information and data, interprets it as a
group and distributes the information amongst the members. Importantly, in group learning,
data is interpreted communally and the information shared with teachers, thus promoting
individual and group learning (Stoll et al., 2006). It is the norm of working together that
promotes group and individual learning amongst educators. Offering teachers an opportunity
to be involved in professional dialogue with other teachers and administrators means that
teachers' ideas of effective teaching practice become more clearly defined (McLaughlin &
Talbert, 1993a). It is this hypothesis that I wanted to explore through collaborative work with
the Grade 1 PLT.
Hord and Hirsh (2009) argue that effective PLTs are democratic, participatory and share
authority and decision-making from the beginning. That is, they involve continuous
collaboration. Importantly, they also prepare others to take the lead (Hord & Hirsh, 2009).
Darling-Hammond (1996) agrees that shared decision making leads to curriculum reform and
the transformation of teaching roles in some schools. In my action research aimed at reforming
assessment practices to improve learning, I wanted to capture the impact of teachers working
together. Boyer (1995) argues that an essential element of a successful school is that of
connection: teachers teach effectively in their own classrooms, but also do important
work together to find solutions with other teachers. In these schools, teachers operate as
members of a team with shared goals and routinely designated time for professional
collaboration. In these situations, teachers are more likely to be consistently well informed,
professionally improved, and motivated to improve student learning, thus creating a sense of
satisfaction amongst teachers.
Other characteristics of an effective professional learning team
Bolam et al. (2005) and Stoll et al. (2006) argue that along with collaboration, there are other
characteristics of PLTs highlighted in the literature. These include shared values and vision,
collective responsibility and reflective professional inquiry. Although not explored by Stoll et
al. (2006), supportive and shared leadership has also been identified as a key characteristic of a
PLT (Hord, 1997). My phase 1 study showed that it would be worthwhile to explore the
characteristics of a PLT and how they would impact on the development of formative assessment
strategies in the collaborative team meetings in our action research. Below is a description of
these characteristics and their effectiveness in a PLT.
Shared values and vision
Schools need to create a shared understanding of the purpose and value of collaboration
(David, 2008). Having this as a shared vision and sense of purpose has been found to be of
vital importance and leads to agreement about conduct in the school (Andrews & Lewis, 2007).
Staff should be encouraged to be involved in the process of developing and implementing a
shared vision, and using that vision to make decisions about learning and teaching in the
school. This is relevant to my study because one of the first aims for the PLT would be to
establish a shared understanding of what we are trying to achieve as a team, to ensure
we have a collective responsibility for the vision.
Collective responsibility
There is an overarching agreement in the literature that effective members of the PLT will take
collective responsibility for student learning (DuFour, 2004; Hord, 1997; Kruse, Louis & Bryk,
1995). DuFour and Marzano (2011) strongly argue that it is the collective capacity created by
the implementation of PLTs within schools that will lead to sustained and on-going
improvement in meeting the needs of students. It is also seen as one of the most powerful
strategies for improving learning and teaching (DuFour & Mattos, 2013). Fullan (2010)
contends that this approach to school reform allows ‘ordinary people to accomplish
extraordinary things’ (p. 72). Shared vision, collaboration and learning together provide the
necessary bases for taking collective responsibility for student learning (McLaughlin &
Talbert, 2006). This collective capacity provides teachers with the structure and culture to
focus on continuous improvement of individual teachers and the collective professional
practice (DuFour & Marzano, 2011). For my study, I was interested in identifying how
collective responsibility for the learning of all Grade 1 students would improve the quality of
formative assessment strategies developed.
Reflective professional inquiry
There is little debate that sharing and reflection have a crucial role in discovering or creating
the practices to help improve student learning (Lieberman & Miller, 1999; McLaughlin &
Talbert, 2006; Schon, 1983). In recent years, reflective teaching has been embraced by the
teaching profession as a pushback against top-down educational reforms that involve teachers
implementing programs that have been formulated and developed elsewhere (Zeichner &
Liston, 1996). If teachers are involved in reflective conversations about learning and
identifying related issues and concerns, they learn to apply new ideas and information to
problem solving (Hord, 1997). This shared practice and collective inquiry promotes sustained
improvement amongst teachers by strengthening their connections, stimulating dialogue about
their practice, and assisting teachers to build on the expertise of their peers (McREL, 2003).
Sergiovanni (1994) argues that inquiry forces teachers to debate what is important and this
promotes a shared understanding that brings them together. Within my action research, it was
important to explore how the Grade 1 team could reflect upon the formative assessment
strategies developed and how this could assist in improving strategies. Ponte (2002) argues
that teachers engaging in reflection-based discussions of data they have systematically
collected is vital for action research to be successful in improving student learning.
Supportive and shared leadership
Hord (1997) argues that the campus administrator (i.e. the principal) has a key impact on
change. She states that a school can only be transformed into a community of learners with the
support of the school’s leadership. It is the principal who has a significant influence on the
shared leadership within the process of PLTs. Caine and Caine (2000) identify this as an
opportunity to develop teacher leaders. Effective principals will enlist the support of leaders to
develop success within the school rather than attempting to do it alone, as no one person is the bearer
of all the knowledge, skills and expertise needed to fulfil all necessary leadership responsibilities
(DuFour & Marzano, 2011). Leithwood, Leonard and Sharratt (1998) reinforce this view,
since they argue that principals who treat teachers with respect and as professionals, working
with them as peers and colleagues, create a strong learning community. This is the case where
leaders in schools commit to sharing decision making with teachers and providing them with
pathways to serve as leaders (Hargreaves & Fink, 2006). Teacher leadership helps to sustain
PLTs, as power and authority are shared with teachers through decision making and shared
leadership (Olivier & Hipp, 2006). Shared authority, whereby teachers have a key role in the
decision making processes relating to student learning, is the one key area identified by Prestine
(1993) that principals need to implement in order for PLTs to be effective.
Effective leadership and professional learning teams
To implement the action research for both phase 1 and 2 of my study, I needed the principal’s
permission to conduct the research at MIS. I also needed the principal’s support and
encouragement to ensure I could effectively complete the action research. It is for this reason
that I wanted to explore literature related to the influence leaders can have on PLTs and,
indirectly, on student learning.
Significant research has been conducted into whether leadership within a school, in particular,
the principal, has a positive connection with student learning (Marzano, Waters & McNulty,
2005). Hord (1997) argues that the literature recognises the role and influence the campus
administrator has on whether or not change will occur within a school. Ultimately, she states,
transforming a school into a learning community can only be achieved with the principal
supporting the whole staff’s growth as a community of learners. Figure 5 (see p. 70) shows the
indirect influence a principal has on student learning through professional learning
communities outlined by DuFour and Marzano (2011). They argue that a principal has a great
influence on student learning when they improve the learning and teaching through PLTs by
gathering momentum and influencing a group of teachers at one time, as opposed to working
individually with each teacher in the school.
Figure 5: Relationship between principal behavior and student achievement with the collaborative teams of a professional learning community (DuFour & Marzano, 2011, p. 52)
DuFour and Marzano (2011) stress that principals must establish effective communication,
demonstrate flexibility in meeting the needs of different teams, create the conditions that
optimise school improvement efforts and provide time, resources and materials to help teachers
succeed in their goals. Kruse, Louis and Bryk (1995) make a clear argument for principals
giving high autonomy to PLTs:
Teachers with more discretion to make decisions regarding their work, feel
more responsible for how well their students learn. The flexibility allows them
to respond to the specific needs they see. Instead of being guided by rules, they
are guided by the norms and beliefs of the professional community. (p. 5)
Kruse, Louis and Bryk (1995) argue that teachers should have the flexibility to deal with the
individual needs of students as this brings student learning to the forefront. It is this culture
within schools that should underpin the beliefs of PLTs. I aimed to investigate this notion in
my research, in order to gain more understanding of the power of teachers in my PLT to
improve learning for the children in our classes. Andrews and Lewis (2007) found that where
teachers have been involved in a PLT, it increases their knowledge base and also has a
significant influence on their classroom work. Teachers within schools with PLTs embedded
into the culture have a sense of community, an increased sense of work efficacy, which leads
towards increased self-motivation and work satisfaction as well as a greater sense of collective
responsibility for student learning (Louis & Kruse, 1995). DuFour (2003) argues the benefit of
this approach is that teachers experience both higher job satisfaction and the sense of success
that comes with having a positive influence on students. Bryk, Easton, Kerbow, Rollow and
Sebring (1994) also argue that schools where democratic practices are in place, with extensive
participation from teachers, are more likely to undertake essential systemic change. I was
interested in investigating both teacher participation and the principal’s approach and beliefs
about PLTs and how teachers would respond to their engagement in the PLT.
Teachers as researchers
Literature related to teachers as researchers through action research is of particular relevance to
me since this is the methodological approach to be used for my study. Sergiovanni (1996)
argues that:
If our aim is to help students become lifelong learners by cultivating a spirit of
inquiry and the capacity for inquiry, then we must provide the same conditions
for teachers. (p. 152)
The argument that teachers should be teaching and researching, and therefore playing a critical
role in curriculum reform and the development of more effective learning and teaching strategies,
has existed for many years (Stenhouse, 1975). However, ‘learning to do action research and
learning to facilitate action research successfully can be a complex process’ (Ponte, 2002, p.
420). Therefore, the structures put in place for my study were very important to ensure the
action research would be successful. Calhoun’s (1994) research provides teachers with the
structured focus they need to create effective PLTs through action research. She states that
action research engages teachers in looking at what is going on within their own school and
classrooms and determining if and how they can make a difference. Calhoun (1994) outlines
the required conditions for action research to be supported and argues that the following
prerequisites are necessary:
• Educators committed to improving the learning for all students
• A clear agreement about how decisions will be made by the staff together
• Facilitators who will support and guide the staff in the action research process
• Groups that commit to meeting regularly
• An understanding of how action research works
• Technical assistance.
Since phase 2 of my study involved a continuing cycle of action research, the conditions
proposed by Calhoun (1994) were of particular interest for examining how they might impact on
the functioning of the PLT.
DuFour, DuFour, Eaker and Many’s (2010) view that PLTs require an ongoing process of
inquiry that focuses on improving learning for all students is consistent with the method for my
study, and identifies the important role of the teacher as a researcher. In this process, teachers
can gather evidence of current student learning, develop and implement strategies to improve
the learning, analyse the impact of the strategies and decide what steps need to be taken
through a systematic inquiry cycle.
Conclusion
The purpose of this chapter was to explore literature related to the phase 2 action research (see
chapter 6). The review provided an understanding of PLTs, in particular how collaboration
could assist with developing the characteristics that would ensure our PLT would succeed. The
chapter began with a review of the literature related to the concept of professional learning
teams (PLT). This included how the development of PLTs creates the de-privatisation of
teachers’ practice in order to create interdependence, where educators work together to meet
their collective goal of improving student learning. The chapter continued with a section
discussing the three ‘big ideas’ outlined by DuFour and Marzano (2011, p. 22) that underpin
the effectiveness of PLTs. These include ensuring that all students learn, developing a culture
of collaboration and a focus on results. This chapter then explored characteristics of a PLT
including shared values and vision, collective responsibility, reflective professional inquiry and
supportive and shared leadership. The final section discussed the importance of leadership,
and in particular the school principal, to the effectiveness of PLTs and provided an overview of
the characteristics of PLTs. The literature review provided an important scaffold for the phase
2 focus of my action research.
Chapter 4: Research methodology
Introduction
The research methods utilised in this thesis and the rationale for the choice of methods are
explained in the three main sections of this chapter. The first section provides an overview of
the methodology and outlines the research questions. The second section describes the action
research undertaken in phases 1 and 2 including the data collection and analysis methods. The
third section explains the case study research including the data collection and analysis
methods.
This thesis investigated how formative assessment can be developed and implemented within a
primary program, through action research in a Hong Kong international primary school and a
case study of an Australian primary school. The aim was to develop new understanding of
how formative assessment can be developed in classrooms and how it can influence and
improve student learning. In phase 1 of the study, I undertook practitioner research through
two phases of action research within my own classroom and with colleagues in the Hong Kong
school. This firstly involved developing explicit use of formative assessment strategies
including stating the learning intention, developing the success criteria with students, effective
teacher questioning, teacher feedback and self and peer-assessment; all well documented
aspects of formative assessment that were discussed in the literature review (see chapter 2).
This included planning, acting, developing and reflecting on the development and use of these
strategies in the classroom through an action research cycle outlined by Mertler (2006). This
action research was undertaken with two participating teachers from Prep and Grade 5 and in
my own classroom. At the time of the research, we all taught at Matilda International School
(MIS), where two phases of action research took place.
Second, building on the learning achieved in the first action research cycle, phase 2
continued at MIS, with the aim of further investigating how to develop formative assessment
strategies, through an explicit focus on investigating the importance of teacher collaboration
within a PLT. The aim of this phase was to develop deeper understanding of how teacher
collaboration can influence the implementation of quality formative assessment strategies, and
the resulting impact on classroom pedagogy.
Phase 3 involved a case study undertaken in a Melbourne primary school known for its
innovative and progressive curriculum, including observations and semi-structured interviews
related to its formative assessment practices. I aimed
to use the data and findings from this case study to make comparisons with the Hong Kong
school, and to provide further understanding of formative assessment strategies and how they
can improve student learning within the classroom. In addition, I aimed to use this phase to
triangulate the findings about the development and implementation of pedagogy for formative
assessment and about teacher collaboration in PLTs.
Research questions
1. What key strategies for formative assessment can improve student learning?
2. How can formative assessment strategies lead to improvements in student learning?
3. How can collaboration in teacher professional learning teams influence the development and implementation of quality formative assessment strategies?
4. What impact does this collaboration have on classroom pedagogy linked to formative assessment?
Rationale for selection of qualitative methodology
Crotty (1998) defines methodology as ‘the strategy, plan of action, process or design lying
behind the choice and use of some particular methods to desired outcomes’ (p. 3). For my
study, the rationale for utilising qualitative methodologies was that I wanted to explore and
understand formative assessment in the natural setting of a classroom through practitioner
research and this requires a qualitative approach. Denzin and Lincoln (2003) see qualitative
researchers as studying:
…things in their natural settings, attempting to make sense of, or interpret,
phenomena in terms of the meanings people bring to them ... [and] deploy a
wide range of interconnected methods, hoping always to find a better fix on the
subject matter at hand. (p. 2)
My decision to use both action research and a qualitative case study for this study was strongly
influenced by the outcomes I aimed to achieve: a deep understanding of the
development and implementation of formative assessment to improve student learning, which
could not have been achieved using quantitative methods. Maykut and Morehouse (1994)
argue that the use of qualitative research has increased the ‘understanding of human
experience’ (p. 150).
Since the aim of this study was to gain a deeper understanding of the development and
effectiveness of formative assessment strategies and their enhancement through collaboration, the
most suitable analytic techniques were interpretive and collaborative social research
approaches.
Miles and Huberman (1994) argue that the strength of qualitative research is the focus on
‘naturally occurring events in natural settings’ allowing researchers to understand what ‘real
life’ is like (p. 10). Similar to this, Maykut and Morehouse (1994) see a major focus of
qualitative research as being researchers attempting to understand the experiences of people in
a natural setting. It is often people’s words and actions that the researcher attempts to capture.
This is optimally achieved by collecting data as close to the specific situation as possible.
Since this study attempted to view students and teachers in their natural setting to best
understand how formative assessment was used and developed, qualitative research was the
most appropriate methodology.
The choice of practitioner research using action research methods and a case study provided
the means to develop an understanding of implementing and developing formative
assessment strategies, the importance of collaboration, the impact of this process on pedagogy,
and a comparative process to deepen the analysis and conclusions. These methods are
explored further below.
The choice of action research
Brydon-Miller, Greenwood and Maguire (2003) argue action research has a ‘complex history’
(p. 11) as it is not a discipline, but an approach to research that has developed over time from a
range of academic areas. Action research is a rigorous inquiry process based upon cycles of
actions and reflections that produce practical knowledge towards set outcomes and create new
forms of understanding (Reason & Bradbury, 2006). O'Toole and Beckett (2010) state that,
‘action research is not about describing or interpreting what happens; it is about change and
using research to solve real problems’ (p. 65). Phases 1 and 2 were action research projects
which both had clear aims of solving real problems. Phase 1 aimed to identify, develop and
implement formative assessment strategies, and phase 2 aimed to develop and implement
formative assessment strategies through collaboration within the structure of a PLT. The
overall aim was to discover how this could improve student learning and to then use the
findings to influence the wider school community to have a stronger focus on formative
assessment.
Action research in academic terms is seen as a cycle of planning, action, developing and
reflecting which can lead to further planning and action (Mertler & Charles, 2005). Mills
(2000) defines action research in an education setting as:
Any systematic inquiry conducted by teacher researchers, principals, school
counselors, or other stakeholders in the teaching/learning environment to
gather information about how their particular schools operate, how they teach,
and how well their students learn. (p. 6)
McMillan (2004) identifies action research as being focused on solving a specific school
problem and improving the immediate practice within the school. Carr and Kemmis (1986)
state that action researchers ‘aim to improve their own educational practices, their
understandings of these practices, and the situations in which they practice’ (p. 180). The
approach is ‘characterized as research that is done by teachers for themselves’ (Mertler, 2006,
p. 2). Classroom action research involves a qualitative approach to inquiry and data collection
by teachers with the aim of improving their practice through their own judgments (Kemmis &
McTaggart, 2005). Each of these definitions of action research convinced me that action
research would be appropriate for my study.
When undertaking action research, it is important to consider it in terms of its social and
collaborative dimensions, where people together can make changes to their practices through a
shared social world (Kemmis & McTaggart, 2005). When I commenced this study, my own
prior experiences of formative assessment provided limited insight into the effectiveness of
assessment in the classroom to improve student learning. The school leadership and teachers
had identified the need to explore ways in which student learning could be enhanced. With this
understanding, I saw this as an opportunity to develop and improve formative assessment
strategies in my own classroom and through collaboration with my colleagues; this provided
the justification for using action research in what evolved as phases 1 and 2. The focus of action
research is working towards practical outcomes (Reason & Bradbury, 2006). As a classroom
teacher at MIS, it was clear to me that there was a need for teacher professional learning in the
area of formative assessment. The school had a traditional approach to assessment with
summative grading and testing being the common approach to how assessment was used.
Formative assessment was not discussed at curriculum meetings, nor was it a major focus
within the learning and teaching of the classroom. Action research provided the methodology
to achieve the aims of phase 1 and 2 by working with colleagues on an in-depth project on
formative assessment.
Another aim of my study was to bridge the gap between theories on assessment and the day-to-
day practicality of what goes on in classrooms. But it was important to remember that ‘action
research is research with, rather than on practitioners’ (Reason & Bradbury, 2006, p. xxv).
This notion of working together with teachers to improve student learning would be the key to
providing new and deeper knowledge about the field of formative assessment as an outcome
from the research. Johnson (2005) also stipulates that action research does not follow a set
process, as researchers may find they need to skip certain steps, rearrange the order, or repeat
some steps more than once. This methodology was an appropriate choice for my study as it
allowed for learning and changes to develop along the way. In addition, the participants began
with varying degrees of understanding of formative assessment and therefore needed to
start at different points.
Phases 1 and 2: Action research process
Mertler’s (2006) process of action research (see Figure 6) provides the action research
cycle that I undertook for phases 1 and 2. The cyclical and spiraling nature of action research
allowed the formative assessment strategies to be developed, enacted and reflected upon
repeatedly, increasing the likelihood of improving student learning. During the data
collection, I moved through stages 1–4 on many occasions with the teacher participants
for phases 1 and 2. At different stages during the process, I found that we were occasionally
repeating some of the steps and sometimes completing them in a different order (Mertler,
2006). Johnson (2005) argues that action research is a fluid process, which allowed for each
teacher to be at different stages of the process. Through observations and discussions, I
monitored the development of each teacher with respect to their use of formative assessment
strategies and collaborated with them to modify and implement the different formative
assessment strategies.
Figure 6: Process of action research (Mertler, 2006, p. 24)
Table 2 shows elements of the different stages of Mertler’s (2006) process of action
research I was involved in with the participating teachers, including the four stages of planning,
acting, developing and reflecting:
Stage 1 – Planning: identifying and limiting the topic; reviewing related literature; developing a research plan
Stage 2 – Acting: collecting data; analysing data
Stage 3 – Developing: developing an action plan
Stage 4 – Reflecting: sharing and communicating results; reflecting on the process

Table 2: Action research process: adapted from Mertler (2006)
The focus on practitioner research
While some discussion of the teacher as researcher was provided in the literature review, it is
important to be clear that the review convinced me to adopt this methodology. Kincheloe
(2003) argues that, ‘teachers must join the culture of researchers if a new level of educational
rigor and quality is ever to be achieved’ (p. 18). He advocates strongly for the inclusion of the
voice and experience of practicing classroom teachers in improving education. Teacher
research, otherwise known as practitioner research (Herr & Anderson, 2005), has become an
effective and regular characteristic in teacher education, professional development and school
reform at school, district and national level (Cochran-Smith & Lytle, 1999). In recent years the
teacher research movement has been taken more seriously by academics, as they are ‘engaging
in deep intellectual discourse about the knowledge question and/or experimenting with
alternative strategies for school change and teacher learning that are inquiry-based’ (Cochran-
Smith & Lytle, 1998, p. 22). Cochran-Smith and Lytle (1998) argue that this methodology has
the ‘potential to alter the knowledge base of teaching’ (p. 22). They state that teachers carrying
out research in their classrooms and schools can provide strategies that ‘professionalise’ (p. 17)
their work (Cochran-Smith & Lytle, 1999). Practitioner research is most often seen as
‘research undertaken by practising teachers who seek to improve practice through purposeful
and critical examination of, and reflection on, their work’ (Goodfellow, 2005, p. 48). Rigor is
provided through its systematic, critically informed and sustainable approach (Macpherson,
Brooker, Aspland & Cuskelly, 2004). Practitioner research focuses on posing critical
questions to improve one’s teaching practice, not just ‘interrogating one’s own and others’
practices and assumptions’ (Cochran-Smith & Lytle, 1999, p. 17). Cochran-Smith and Lytle
(1998) also argue that learning through teacher research is imperative as it includes teachers’:
…inside perspectives as participants and the distinctive lenses they use to make
sense of classroom life over long periods of time that promise to illuminate new
aspects of teaching, learning, and schooling. (p. 26)
Cochran-Smith and Lytle (2009) later stated that:
…what is going on now in the practitioner research movement is far from
monolithic. Because there are so many different initiatives informed by a range
of purposes, contexts, epistemologies, methods, resources and consequences,
various actors in and around the movement necessarily represent it using
different information and different frameworks. These differences are in fact
one of the reasons the movement is dynamic, not dormant, “alive and well” in
spite of – or perhaps in resistance to – the dominant discourses of the day.
(p. 8)
Practitioner research often involves action research in local contexts, but the findings from
these local contexts may have wider application and significance. Lingard and Renshaw
(2009) view ‘teaching as both a research-informed and research informing profession’ (p. 37)
and argue that it is necessary for the improvement of schools to be supported by researchers,
practitioners and policy makers. Although my study aimed specifically to improve MIS
formative assessment practices from research informed practice, it also aimed to contribute
new knowledge to the field about how formative assessment practice can be developed by
teachers collaborating together to improve their practice.
My aim in my study was to improve the use and development of formative assessment
strategies, and as a classroom teacher within the school, practitioner research offered a
legitimate methodology to achieve this inquiry-based goal. Cochran-Smith and Lytle (2009)
state that practitioner research is the ‘umbrella term’ (p. viii) that encompasses multiple forms
of research, including action research. Action research provided a systematic approach for
practitioner research in phase 1 and 2 of my study and a way of creating and extending
professional knowledge about the development of formative assessment that could improve
practice and possibly influence future policies (Macpherson et al., 2004).
My role as a participant observer across the first two phases was in many ways made easier by
having regular contact with all teacher participants. I knew all teachers in the action research
well and had worked with them for a number of years. However, this familiarity also created
its own challenges. Familiarity and comfort within my own school meant it was challenging to
distance myself for reasons of objectivity. However, being consciously aware of this as a
practitioner researcher and participant, I attempted to combine participation and observation
with clarity and detail to ensure that the findings presented are both valid and reliable.
Phase 1 Action research at MIS: Selection of participants
The practical use of formative assessment was a new concept to most teachers at MIS, but
since I felt any teacher’s involvement would add value to the study, I invited all teachers to
participate. At the beginning of the study, I had just begun working at MIS, after teaching at
another international school in Hong Kong. My first year at the new school was my sixth year
of teaching. I had previously taught in a secondary school in England as well as in two primary
schools in Melbourne. The majority of my teaching had been in the lower primary years, and
that continued in Hong Kong during the data collection phase. The fact that I was new to the
school may have been the reason why, after inviting wider participation during a staff meeting,
only two teachers showed interest and were subsequently selected. It is acceptable for
researchers to recruit a small number of participants in order to develop an in-depth investigation
of the research phenomenon (Creswell, 2009), but I would have welcomed the involvement
of more staff.
Being a teacher within the same school and having continuing involvement with the
participants who did volunteer assisted with the task of finding the space and time for on-going
dialogue. I was familiar with the surroundings, and the participants were comfortable with my
movements in and out of their classrooms. The students also felt comfortable, often seeing me
around the school. However, whilst in many respects my role was simplified by being a
teacher in the school, it could equally have been complicated by my role as a colleague of the
participants. There was the chance they might feel uncomfortable with my observations in
their natural environment. However, since they had volunteered and the lessons were planned
together through the action research project, the research proceeded in a collegial manner.
Prior to commencement of the project, I approached the Headmaster of MIS to ensure I had
permission to begin my action research at the school. I presented my project to the staff at a
lower school team meeting, explained the project to prospective participants and provided an
explanatory letter and consent form to be completed by the teachers who volunteered to assist
with the research. In the invitation and explanatory statement, I assured participants of their
confidentiality in any published accounts of the research. Teachers were asked to contact me
via email to express their interest in participating. The two teachers who expressed their
interest in participating were contacted via email to confirm their participation.
An explanatory statement was sent to parents of students in each class asking if they would
consent to their child being observed and questioned at school. In the invitation and
explanatory statement, I assured parents of their child’s confidentiality in any published
accounts of the research. Consent forms for the volunteers were provided and signed by
parents. Since there were more than four responses for both classes, the student names were
balloted and the classroom teachers then picked out four names. With parent permission, a
letter of consent was also given to students explaining their involvement in the research
project. In both classes, a brief explanation was given to the participating students
to ensure they understood what they were signing and to give them the opportunity to ask
any questions. A total of four students was selected by ballot from each class. This
number of students allowed me to have in-depth discussions to develop a better
understanding of the impact the formative assessment strategies had on their learning. A profile of
each participant is provided in chapter five.
Phase 1 action research: Implementing formative assessment strategies at MIS
We began our conversations and planning for the action research around the key formative
assessment strategies of stating the learning intention, developing the success criteria, effective
teacher questioning and feedback, and self- and peer-assessment (Black & Wiliam, 1998a,
1998b; Clarke, 2001, 2005, 2008; Glasson, 2009). Subsequently, each of these formative
assessment strategies was developed and implemented as part of the action research. It is
important to note that in the early stages of phase 1, the focus of the planning meetings with
the teacher participants was on setting goals for classroom action; however, these goals changed as
we moved forward with the action research process.
To commence phase 1, I organised an initial 45-minute semi-structured interview together with
an initial classroom observation with each teacher, Emily and Harriet, to gather an
understanding of their knowledge of formative assessment and action research. Subsequently,
at a mutually convenient time, I met with the participating teachers individually before and/or
after each lesson that I observed. The purpose of the meetings was to reflect on what had been
observed in the lesson and how the learning from that lesson informed the next lesson/series of
lessons. This reflection and collaborative planning was a significant part of the action
research. I worked with Emily and Harriet over a period of three months, remaining in
constant contact with them and organising times to meet by email or through
informal discussions. The planning meetings ranged from 10 to 40 minutes depending
on the strategy being introduced. There were a total of ten before-and-after meetings with Emily
and seven with Harriet. The teachers’ availability was a
significant factor in determining whether we were able to meet before and after an observed
lesson. In some cases the reflecting and planning were combined in the same meeting.
Some lessons were planned with a focus on implementing one specific formative assessment
strategy, while others included numerous strategies. I observed and documented seven lessons
in each of Emily’s and Harriet’s classes. The reflection meetings were typically brief 10-minute
discussions on what learning was evident in the lesson, what areas to improve and what the
next observed lesson might include. Six months after the observed lessons, I conducted a final
follow up interview with Harriet and Emily separately, to capture their reflections and progress
in implementing formative assessment strategies and to discuss how they believed the action
research had impacted on the development of their practice. Table 3 provides a summary of the
action research process.
Phase 1 action research process at Matilda International School

Step 1:
• Initial interview with each teacher to identify their understanding of formative assessment
• Classroom observation, in which the teacher was observed and participating students were asked questions about their learning

Step 2:
• Planned meeting to develop formative assessment strategies through shared professional learning and collaboration, including:
  - identifying which formative assessment strategy to implement
  - developing the best approach for implementing the strategy
  - identifying possible evidence of learning resulting from the strategy

Step 3:
• Lesson taught by the participating teacher:
  - teacher observed
  - students observed and asked questions about the formative assessment strategies and their impact on learning

Step 4:
• Reflection, including:
  - feedback about the use of formative assessment strategies
  - identifying evidence of student learning relating to the strategies implemented
  - next steps for the strategies, developed through shared professional learning and collaboration

Step 5: Steps 2–4 repeated a further six times with each teacher

Step 6: Final reflection from students and teachers on the impact of formative assessment strategies on learning

Step 7: Follow-up interview with teachers six months later

Table 3: Phase 1 action research process at Matilda International School
Phase 1: Discussion of data collection methods
In the following section I provide a summary and discussion of the data collection methods
used for phase 1.
Interview process
Creswell (2005) argues that interviews are a well-accepted form of data collection. Maykut
and Morehouse (1994) identify interviews very simply as a conversation with a purpose. This
is where a researcher can ask semi-structured or open and in-depth questions either one-on-one
or in a small group and records the answers (Creswell, 2005). Open-ended questions help the
participants to express their viewpoints unconstrained by any perspectives the researcher has.
They are best used when the participants are able to articulate their ideas and feel comfortable
sharing them. Interviews can be time consuming, and there may be inaccuracies
due to recall or from interviewees giving the answers they think the interviewer wants
to hear. Nevertheless, I decided that semi-structured interviews would be important in my study to
capture in-depth data from both participants and to gather their answers to common questions.
The reflections on the impact of the action research required on-the-spot discussions after
individual lessons, providing an opportunity to obtain causal inferences and explanations (Yin,
2009).
In phase 1 of the action research, one-on-one in-depth semi-structured interviews were used to
capture useful data as the action research unfolded (Merriam, 1988). The interviews were
guided by open-ended questions determined ahead of time. Emily asked for the questions
ahead of the first interview, whereas Harriet did not feel she needed to see the questions before
her first interview. The semi-structured approach also allowed me to respond to the
information provided and explore any new topics or need for action that arose. The initial
interviews took place with teachers before any observations occurred to gather information on
the participants’ understanding of formative assessment strategies, with a particular
focus on what the teachers perceived to be their strengths and their needs in terms of areas of
improvement relating to the use of formative assessment. This helped to establish which
direction the action research would take. The next meeting established what formative
assessment strategies would be the focus for the action research, including the time and place
for the class observations. The on-going reflective interviews after each observed lesson took
place within and around the school day as opportunities arose. After each interview, the
transcripts were given to the participants to ensure that they had been quoted correctly. In
some cases, this reflective process prompted further discussion, enhancing the data collected.
Participant observation
Participant observation requires the researcher to be present in the natural setting where the
phenomenon takes place (Maykut & Morehouse, 1994). It allows the researcher to see
firsthand how the actions of the participants correspond with their words (Mertler, 2006).
There is a risk of the observations proceeding differently from normal practice because people
are being observed, but observations can capture events in real time and contextualise the study
(Yin, 2009). In action research, participant observations involve observing as a researcher but
also participating within a group as an active member (Mertler, 2006). This generally occurs
throughout the data collection but importantly it happens in the early stages of the data
collection to establish relationships with teachers (Mertler, 2006). For my action research,
participant observations took place throughout the data collection. I found myself moving
through the participant-observer continuum (see Figure 7; Glesne, 1999, p. 44). When
working with the teacher participants, I spent most of my time as a participant as observer in the
classroom, where I would actively participate in the lesson by engaging with students and
asking questions about their learning. During reflection and planning meetings, I moved between
participant as observer and full participant, engaging in conversations to plan and
develop the formative assessment strategies. When implementing strategies within my
own classroom, I was a full participant, first and foremost as the class teacher, while also
acting as a researcher. The observations were significant for directly observing
the process and implementation of the formative assessment strategies with the learners,
requiring me to reflect deeply about the impact on student learning.
Figure 7: Participant-observer continuum, from observer, to observer as participant, to participant as observer, to full participant (Glesne, 1999, p. 44)
I set up regular observations with the teachers at times that were convenient in their class. We
would meet first and discuss what strategies would be implemented, and arrangements were
made for me to be released from my own class to observe. Due to the natural
busyness of a school, there were times when a planned observation lesson could not feasibly take
place. However, we managed to have regular observed lessons, some back-to-back and
others weekly. This ensured that a sufficient amount of data was
collected to capture the development of the formative assessment. Over a period of three
months, I observed each of Emily and Harriet in their classes seven times, with meetings
following each lesson as often as possible, together with some additional planning and
reflection meetings. Typically,
participant observations are supplemented with interviews of individuals or groups as a part of
the data collection (Mertler, 2006).
During the observations, I audio-recorded the lessons to provide an accurate account of each.
I also kept detailed notes during the observations and ensured that I only focused on the
formative assessment strategies rather than allowing behaviour management or the structure of
the classroom to distract me from the data collection (unless this impacted on the strategies
implemented). Since my ethics approval allowed me to discuss the learning of a selected
group of students who were given permission to be involved in the study, I used the ‘activity’
part of lessons to engage with what these selected students were learning and thinking. I asked
range of questions which included:
• What are you learning in this activity?
• What has helped you with your learning?
• How has the teacher helped you with your learning?
• What do you think will make you successful in your learning?
• How do you know you have learnt what you were supposed to learn?
• How did the success criteria help you?
• Did the assessment you talked about with your friends help you?
As the students were familiar with having more than one adult in the classroom (each class had
a classroom teacher and a full time assistant), the students were not concerned by my presence.
The pattern and make-up of the lessons did not change considerably, and lessons were conducted
as usual.
Use of a reflective journal
The notion of reflection ‘can be defined as the act of critically exploring what you are doing’
(Mertler, 2006, p. 10). More explicitly, it is rigorous thinking about the process that is shared
with others (Rodgers, 2002). In academic research, there is a long tradition of identifying the
importance of reflective practice in teacher education (Lane, McMaster, Adnum & Cavanagh,
2014). It is a key element at the end of an action research cycle but, importantly, reflection is
also continuous during the process of teaching (Mertler, 2006). It allows for adjustments to be
made during a teaching lesson, but also throughout the entire action research project and is
considered necessary if teachers are going to learn from the practice of colleagues as well as
their own practice (Lane et al., 2014). Reflection played a significant part in my research, as I
used this time to reflect with the teachers involved in the action research project, as well as on my
own teaching practice, and to identify any emerging themes and trends and the
impact the formative assessment strategies were having on the students. I found the most
effective approach to this was to keep a reflective journal.
A reflective journal can be referred to as a diary, journal, or memos, and contains a personal
record of the researcher’s thoughts and understandings that occur during the research process
(Maykut & Morehouse, 1994). The journal allowed me to capture thoughts about my growing
understanding of formative assessment; what I was observing within MIS about how
knowledge of this area was increasing and more specifically, its application to improve student
learning in my own classroom. My reflections were also an important part of the action
research cycle and helped my colleagues and me to refine and improve the learning that was
occurring. Reflections encompassed all the integral elements of action research outlined by
Mertler (2006): reflecting on the action cycle, developing classroom practice, discussing what
worked and what did not, making revisions for future lessons, and being clear about teacher
expectations and student motivation.
The use of documentation
In exploring the lives and experiences of people, Maykut and Morehouse (1994) argue that
there may be documents helpful to understanding the field of study. They can be ready-made
resources, ‘easily accessible to the imaginative and resourceful investigator’ (Merriam, 1988,
p. 104). It is the responsibility of the researcher to make sense of what documents already exist
(Mertler, 2006). According to Yin (2009), documentation can take many different forms and
can be relevant to almost every case study topic. He argues that the advantages of using
documentation are that it is unobtrusive, is not created as a result of the case study, and can
give details of a particular event. However, as Yin (2009) points out, there can be an unknown
bias from the author in selecting certain documents to be viewed, and sometimes the right
documents can be difficult to retrieve. Merriam (1998) also argues that since the
documentation is not completed for the study, it may be inconclusive or incomplete from a
researcher’s perspective. In my research study, documentation included lesson plans and
student work. The lesson plans were from some of the observed lessons together with the
student learning from these lessons. Both forms of documentation were easily accessible,
because the planners were a part of the team meeting and some of the student work was
produced and discussed during the observations. Although not reliable sources on their own,
combined with other methods of data collection, these documents increased the credibility of
the findings (Maykut & Morehouse, 1994).
Reflection and action
Reflection and action are integral to any action research process (Mertler, 2006). For teachers
to be effective in their roles, they must be active participants in the classroom and effective
observers of the learning process (Parsons & Brown, 2002). Importantly, this same notion
applies to teacher-researchers involved in action research. Mertler (2006) argues that reflection
is ‘the act of critically exploring what you are doing, why you decided to do it and what its
effects have been’ (p. 10). Marshall (2001) describes the movement between reflection and
action as a way of creating momentum for further inquiry.
For phase 1 of the study, the participants and I reflected after each lesson on what occurred in
the lesson, how the use of formative assessment had worked in practice, and how the students’
learning had been achieved. This established what needed to be acted upon in the planning
meetings and subsequent lessons. The reflection became the basis for action that was to
subsequently take place. These conversations with both teacher participants were integral
to the success of the action research. They ensured student learning stayed at the forefront of
decision-making and provided the time to reflect upon what transpired in each lesson. Through
the use of a journal, I also used reflection to record what strategies I was implementing and
how they were impacting on the students in the class.
Phase 2: Action research on collaboration at Matilda International School
Phase 2 action research process at Matilda International School
Step in process: actions, planning and data collection
Step 1: Initial interview with Grade 1 teachers to identify their understanding of formative assessment and how they collaborated to develop assessment strategies.
Step 2: Planned meeting to develop formative assessment strategies through shared professional learning and collaboration. This included:
§ identifying which formative assessment strategy to implement
§ discussing the best approach to collaboration
§ identifying possible evidence of learning as a result of the formative assessment strategy
§ reflecting on the collaboration and what worked and did not work during the planning meeting.
Step 3: Lesson taught with formative assessment strategies by each participating teacher in their class (lessons not observed).
Step 4: Reflection as a group in the planning meeting. This included:
§ teacher feedback on the impact of the formative assessment strategy implemented
§ the impact on student learning
§ next steps for formative assessment strategies developed through collaboration.
Step 5: Steps 2–4 repeated another six times.
Step 6: Final reflection from the team about the changes to collaboration and the impact these had on the development of formative assessment strategies.
Table 4: Phase 2 action research process at Matilda International School
Phase 2: Selection of participants
The teachers I invited to participate in phase 2 were all working in the same year level (Grade
1). For this phase of the study to be successful, I needed opportunities for regular involvement
of the participants so that we could capture our understanding of collaboration in planning
meetings; it was therefore ideal that the teachers were in the same year level. The participants were
approached through the school principal and invited to participate. All four teachers agreed
willingly to participate in the study. A profile of each teacher participant is provided in chapter
six.
Phase 2: Data collection methods
The data for this phase was collected from a PLT formed by the Grade 1 team over a
three-month period involving seven planning meetings with the Grade 1 teachers. The data
collection methods included interviews, observations of planning meetings, a reflective journal
and documentation.
Interview and planning process
In phase 2, the first collaborative meeting and semi-structured interview involved all four
teachers together. As the participants were different from those in phase 1, the aim of this 45-minute
interview was to gather their understanding of formative assessment. At the meeting we
discussed how well they thought they collaborated to develop formative assessment strategies.
This interview was guided by open-ended questions determined ahead of time. The semi-
structured approach also allowed me to respond according to the information provided and to
explore any new issues that arose. This helped establish the direction the action research
would take to improve the development of formative assessment strategies. The following four
interviews were short reflections at the end of the weekly team meetings in which formative
assessment strategies were planned. Subsequent interviews established
the success of the collaboration on implementing formative assessment strategies and what
needed to be improved or adapted for the next meeting. All interviews involved all four
teachers together. The study concluded with a final semi-structured interview with all teachers
reflecting upon the outcomes of the action research for their professional learning and
formative assessment in their practice.
Participant observation
The observations for phase 2 were focused on the team meetings that took place to develop the
formative assessment strategies. The challenge in the observations was that I was directly
involved in the team meetings while at the same time collecting data through the observations.
To ensure I did not miss any important data, I audiotaped four of the meetings with the
consent of the four participants and compiled notes during and after the team meetings.
It was a challenge to combine participation and observation as a research method and to
understand the participants and the setting as an insider in the environment, while attempting to
describe it for outsiders (Patton, 1990). I was conscious of this as I attempted to strike a
balance between the different roles I had: teacher, collaborator, team leader and researcher.
Use of a reflective journal
The reflective journal in phase 2 was used in a similar manner to phase 1, but with a stronger
focus on reflecting on the collaboration between the Grade 1 teachers. The reflections
informed the next planning meetings and played a major role in the action research cycle.
The use of documentation
The use of documentation was very important in the phase 2 data collection. When planning
learning intentions as a team, we used the MIS scope and sequence documents to inform our
planning. These documents were important in assisting the team develop learning intentions
and success criteria in student language. We also had planning documents including the IB
PYP planner (see Appendix 1) and assessment overview. At various times, these documents
were used in the planning meetings that took place during the data collection.
School. Phase 1: Matilda International School. Phase 2: Matilda International School.
Teacher participants. Phase 1: 2 and the researcher (Prep and Grade 5 teachers). Phase 2: 4 and the researcher (all Grade 1 teachers).
Student participants. Phase 1: eight. Phase 2: none.
My participation. Phase 1: actively involved in developing formative assessment strategies through an action research cycle. Phase 2: actively involved in collaboration to develop formative assessment strategies through an action research cycle.
Purpose of research. Phase 1: implement and develop formative assessment strategies to improve student learning. Phase 2: develop formative assessment strategies through collaboration.
Length of research. Phase 1: three months. Phase 2: three months.
Data collection. Phase 1: semi-structured interviews; observations; discussions with students; reflective journal; document analysis. Phase 2: semi-structured interviews; observations; reflective journal; document analysis.
Analysis. Phase 1: interpretive inquiry; collaborative social research; thematic. Phase 2: interpretive inquiry; collaborative social research; thematic.
Table 5: Overview of action research at Matilda International School
Data analysis for phase 1 and 2
Data analysis in action research is a process of ‘systematically organizing and presenting the
findings of the action research in ways that facilitate the understanding of these data’ (Parsons
& Brown, 2002, p. 55). This involves an inductive analysis of the data whereby the researcher
attempts to reduce the volume of information that has been collected by identifying and
organising the data into patterns and themes (Mertler, 2006). The importance of this step is to
avoid misinterpreting any data during the data reduction (Schwalbach, 2003). To ensure this
did not occur during the analysis of the large amount of data collected, I continually analysed
observations and the interviews with teachers and students to look for themes and patterns.
Maykut and Morehouse (1994) state that important themes are identified in the early phases of
data analysis and are then pursued through new questions and new observations. They see this
as broadening or refining what is important to the research topic, which should be anticipated
and planned for in a qualitative design. With the implementation of an action research model,
it was expected that I would need to work and plan with my colleagues to change, develop and
adjust the focus of the lessons in the different classrooms.
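The inductive reduction described above, from raw excerpts to patterns and themes, can be made concrete with a short sketch. No such code was part of the study; the excerpts, theme labels and keywords below are invented placeholders used only to show the logic of grouping data under provisional themes and noting how often each recurs.

```python
# Illustrative sketch only: inductive reduction of qualitative data into
# provisional themes. The excerpts and theme keywords are invented
# placeholders, not data from the study.

excerpts = [
    "Students referred back to the success criteria before finishing.",
    "I changed my questioning after seeing their responses.",
    "The success criteria helped them check their own work.",
]

# Provisional themes and the keywords that signal them; in practice these
# are refined iteratively as new patterns emerge during analysis.
themes = {
    "success criteria": ["success criteria"],
    "teacher questioning": ["questioning"],
}

# Group each excerpt under every theme whose keywords it mentions.
coded = {theme: [] for theme in themes}
for excerpt in excerpts:
    for theme, keywords in themes.items():
        if any(keyword in excerpt.lower() for keyword in keywords):
            coded[theme].append(excerpt)

# Frequencies suggest which patterns to pursue with further observations.
counts = {theme: len(items) for theme, items in coded.items()}
```

In the study itself this grouping was done by hand in matrices rather than in code; the sketch only makes the reduce-and-pattern step of the analysis explicit.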
The initial stage of the data analysis involved listening to and transcribing the audiotapes from
the interviews and the observed lessons whilst organising them into folders under each teacher
participant’s name. The transcripts allowed me to examine the views expressed by the
participants closely. The process gave me access to the data repeatedly to ensure validity of
the data I was commenting on in the analysis. For ease of reference, I also catalogued the field
notes taken into folders arranged under each teacher participant's name.
The nature of the action research cycle ensures that the data collection occurs on an ongoing
basis (Mertler, 2006). The early analysis helped inform the next stages of the action research
project. With regular meetings before and after teaching lessons, I saw this as an opportunity
to engage in ongoing learning with the participants.
Theoretical frameworks and phase 1 and 2 data analysis techniques
Miles and Huberman (1994) argue that there are three main approaches to analysing qualitative
data: interpretive, social anthropological and collaborative social research. Since the aim of
my study was to gain a deeper understanding of the effectiveness of formative assessment
strategies, the theoretical perspective chosen to underpin my methodology was the interpretive
constructivist and collaborative social research approach. As Creswell (1998) argues, the
interpretive constructivist researcher ‘tends to rely upon the participants’ views of the situation
being studied’ (p. 8) and ‘recognises the impact on the research of their own background and
experiences’ (Mackenzie & Knipe, 2006, p. 198). The justification for the interpretive
constructivist approach is its capacity for treating social action as text. This means the
interviews and observational data can be transcribed into written text for analysis, thereby
making it easier to capture the complexities of the relationships between what the teachers
were trying to achieve and what was happening in the classroom. Also, due to qualitative
research being primarily interpretive, it requires a rich description and analysis to be truly
understood (Rossman & Rallis, 1998). Collaborative social research connects well with action
research since, in this approach, participants attempt to accomplish change or action in their
own setting (Miles & Huberman, 1994). The data are collected and given to the ‘activists’ (Miles
& Huberman, 1994, p. 9) in the form of feedback to move to the next stage of the research
cycle. In my study, the reflections after observations and collaborative planning meetings
informed the next steps and directions both phase 1 and 2 took.
The data analysis occurred in a continual and ongoing manner as the study progressed. Reason
and Bradbury (2008) argue that the nature of action research means that action undertaken
without reflection is ‘blind’ (p. 2), and therefore the data collected must be analysed from the
beginning in order to plan the research direction. Merriam (1988) further argues that leaving
analysis until the end of the data collection makes it very difficult, because of the energy
and time the analysis takes. For both these reasons, my data analysis began from the very
beginning. I was eager to create the transcripts needed, to organise my field notes and to begin
looking for emerging patterns. The reflections after each lesson and collaborative planning
meetings also helped with knowing which areas to focus on for the individual teacher and the
PLT to progress their understanding and application of various aspects of developing and
implementing formative assessment.
In the phase 1 data collection, the observations of lessons and planning meetings, interviews,
documentation and my own reflective journal all provided me with a rich source of data for
analysis. The theoretical framework from the formative assessment literature framed the
headings in a matrix. This ensured that my data collection stayed on track and remained
pertinent to my study. The beginning headings used in phase 1 were stating the learning
intention, developing the success criteria, teacher questioning, teacher feedback and the use of
self and peer-assessment. For phase 2, the beginning headings included the PLT framework
of ensuring that all students learn, a culture of collaboration and a focus on results, and the
framework of collaboration strategies, which included collective responsibility, shared values
and vision, reflective professional inquiry and supportive and shared leadership. These
headings were then connected to the development of formative assessment strategies. When
the patterns and themes emerged through the analysis, the data from phase 1 of the action
research were organised into different headings related to the formative assessment strategies
implemented in the two teacher participants’ classrooms and my own homeroom. The aim of
this was to tell a story through a rich narrative of what was observed during the action research
project.
Matrix displays for data analysis
As a part of the analysis, I set up an electronic matrix. This is the ‘crossing of two or more
main dimensions or variables (often sub-variables) to see how they interact’ (Miles &
Huberman, 1994, p. 239). For phase 1, I set up a matrix for each teacher to capture the lessons
that were observed and the strategies implemented in the lesson. This was helpful to
understand the connection, flow and location of the lessons (Miles & Huberman, 1994). This
design allowed me to compare the different matrices used in this study, and helped to develop a
deeper explanation of the findings in the data. The matrices were stored electronically to allow
for changes to be made as the data was collected. Miles and Huberman (1994) state that an
iterative approach to early findings may prompt further changes to the analysis as new patterns
begin to emerge. New and established patterns determine how the matrix is designed. In this
study there were rows added to the matrix with each lesson observed and at times some
headings were added and/or merged as patterns in the data were established.
For phase 2, the matrix was used to list the formative assessment strategies that were the focus
of planning, together with the collaborative strategies (listed above) used in the PLT, to see what
connections existed between collaboration and each formative assessment strategy. For the case
study (methodology explained below), a matrix was used to compare the observations and
semi-structured interviews of the four different participants to identify patterns in relation to the use
and development of formative assessment strategies. I then compared all three matrices to
identify any overarching patterns that might exist between the action research and case study.
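To make the matrix idea concrete, its row-and-column logic can be sketched as follows. The column headings are the phase 1 strategy headings named earlier; the lesson labels and evidence notes are hypothetical placeholders, and no such script was used in the study (the matrices themselves were simply stored electronically).

```python
# Illustrative sketch only: a matrix display for one teacher, with formative
# assessment strategies as columns and observed lessons as rows. The
# strategy headings come from phase 1 of the study; the lesson labels and
# evidence notes are hypothetical placeholders.

strategies = [
    "stating the learning intention",
    "developing the success criteria",
    "teacher questioning",
    "teacher feedback",
    "self and peer-assessment",
]

# Each observed lesson adds a row; cells hold evidence notes, with None
# where a strategy was not observed in that lesson.
matrix = {}

def add_lesson(lesson, evidence):
    matrix[lesson] = {s: evidence.get(s) for s in strategies}

add_lesson("Lesson 1", {"stating the learning intention": "written on board"})
add_lesson("Lesson 2", {"teacher feedback": "verbal, task-focused",
                        "developing the success criteria": "co-constructed"})

# Reading down a column across lessons is how patterns were identified.
column = [row["teacher feedback"] for row in matrix.values()]
```

The dictionary-of-dictionaries shape mirrors the crossing of dimensions Miles and Huberman (1994) describe: rows can be appended lesson by lesson, and columns merged or added as patterns in the data are established.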
Methodology and aims: Case study of Western College
As previously stated, the case study of the Melbourne primary school was undertaken for
comparative purposes to provide another lens on formative assessment practices and the
importance of teacher collaboration. This was in order to deepen my capacity to understand
and critically analyse what occurred at MIS in phases 1 and 2 of the action research process, in a
reflective manner. The aim of a case study is ‘to catch the complexity of a single case’ (Stake,
1995, p. xi), however, the data from that case can also allow a researcher to make comparisons
with other situations. Following the implementation and development of formative assessment
through the action research in my own school, I was eager to see how formative assessment
was developed and implemented in another school setting. The second school is an IB
accredited school and provided a rich comparison with the MIS experience.
Yin (2009) defines a case study as ‘an empirical inquiry that investigates a contemporary
phenomenon in depth and within its real-life context, especially when the boundaries between
the phenomenon and context are not clearly evident’ (p. 18). Baxter and Jack (2008) view the
concept as describing a real-life phenomenon within its own setting through a range of data
sources. Importantly, this study was about describing and exploring the how and why
questions that would help to understand the use and development of formative assessment
(Yin, 2009, p. 4). Maykut and Morehouse (1994) state that a qualitative study is most
effectively presented within a rich narrative, which is often referred to as a case study. Using
a descriptive approach to a case study helps to identify the causal links between the strategies
implemented and the improvement of student learning (Yin, 2009).
In conducting the case study at Western College, I used a range of approaches for data
gathering (Berg, 2004; Merriam, 1988; Yin, 2009): semi-structured interviews, document
analysis and personal reflections from observations gleaned from the school all provided data
on the use and development of formative assessment strategies. Berg (2004) argues that a rich,
detailed and systematic approach to gathering data about the research phenomenon is required
in case studies.
Analysis of the case study of Western College draws on themes relevant to formative
assessment from the key theorists that informed the whole study: Black and Wiliam (1998b),
Clarke (2008, 2005, 2001) and Glasson (2009). The analysis explores the impact of the
formative assessment strategies on student learning in the classroom, together with the evidence
of teacher collaboration in the development of those strategies, and draws on the
same theoretical understanding of teacher collaboration that informed phase 2 of the study.
Selection of participants
The school selected for the case study component was chosen because it had been
recommended to me as having innovative assessment practices. I contacted the principal
who gave me permission to conduct the research at Western College. I was asked to ensure
that all contact would be through the PYP coordinator, Jessica. I wrote a proposal to Jessica to
give to the staff on my behalf asking for volunteers to participate in the research. Teachers
were asked to contact me via email to express their interest in participating. Subsequently I
contacted them via return email to confirm their participation. In total, I had four participants
in the research including Jessica. A profile of each participating teacher is provided in chapter
seven.
Case study data collection methods
Yin (2009) argues that case study data can be collected from many sources and emphasises
three guiding principles:
• Principle 1: Use multiple sources of evidence
• Principle 2: Create a case study database to assist with organising and documenting the
data collected
• Principle 3: Maintain a chain of evidence in order for steps of the process to be tracked
in either direction.
These three principles ensure that the process is as explicit as possible and ‘reflect a concern for
construct validity and for reliability’ (Yin, 2009, p. 125). Below is a summary of the data
collection methods for the case study at Western College.
Interviews
As in phases 1 and 2 of the study, in-depth semi-structured interviews provided useful data from
this case study. The teacher participants were interviewed on day one of the data collection to
gather their understanding of formative assessment and to ascertain if and how teacher
collaboration helped to develop formative assessment strategies. Each interview took between
30 and 40 minutes. After class and team meeting observations and informal discussions, the
teachers were interviewed for a second time on the last day of the two-week data collection. This
was to develop further understanding of what had been observed during the two weeks and to
clarify any information gathered from the informal discussions and the first interview. The
teachers, Justine, Wendy and Rachel, were interested in participating in the interviews to
investigate and reflect upon their own practice and their teaching team’s approach to
collaboration on formative assessment. Jessica was the curriculum coordinator in the school,
and therefore, worked with all teams within the Junior School. Since she was not a classroom
teacher, the purpose of interviewing her was to gather a view of formative assessment purposes
and practices across the Junior School.
Observations
For data collection in the case study of Western College, teachers were observed in two
different contexts. They were observed two or three times teaching in their classroom, and
at one of their curriculum team meetings. The purpose of the class observations
was to see how some of the formative assessment strategies were being implemented. The
purpose of observing the curriculum team meetings was to find out how the year level teams planned for
and developed formative assessment strategies.
While collecting data for this study, I kept in mind Yin's (2009) five researcher skill
prerequisites for those conducting case study research:
• Asking good questions
• Being a good listener
• Being adaptive and flexible
• Having an understanding of the phenomenon being studied
• Remaining unbiased by preconceived ideas. (p. 76)
I found these points were helpful to ensure I was thinking about what was required when
collecting data. In particular, I found the last point crucial to ensure that I did not go into a
classroom with preconceived ideas of what would happen when the formative assessment
strategies were implemented.
Document data collection
The documentation included in the case study comprised the unit of inquiry planners the
participant teachers worked on to outline the formative assessment strategies, and some
anonymous student work that could show how different formative assessment strategies were
implemented.
My role as a researcher
Maykut and Morehouse (1994) argue that the natural setting is where qualitative researchers
will discover or uncover people’s experiences about the phenomenon of interest. They state
that the combination of interviews and observations, together with the analysis of the relevant
documents, increases the chances of the phenomenon being understood from different points of
view. Extended amounts of time spent with people in their natural place is critical to
developing explicit and tacit knowledge (Maykut & Morehouse, 1994). In order to follow up
the interviews conducted with the teachers investigating their development of formative
assessment strategies and to see how they were implemented in their classrooms, it was vital to
observe the teachers interacting with the students and how they planned both formally and
informally with their colleagues.
In contrast to my role in phases 1 and 2, where I was involved as a participant and observer,
during the case study I was an outsider observing the teachers in their natural setting at school.
Therefore, my role as the researcher was very different, and involved watching, listening and
attempting to capture and understand how assessment practices were used in the school and
through observation, to gather data and evidence of the impact of various formative assessment
practices being used.
Data analysis
According to Yin (2009), analysing case study evidence can be challenging due to poorly
defined techniques. He argues that case study analysis should follow a general analytic
strategy that includes ‘defining the priorities of what to analyze and why’ (Yin, 2009, p. 126).
Using the same theoretical headings from phase 1 and 2 analysis, the case study analysis was
set up in a matrix with the headings from formative assessment and collaboration at the top and
the teachers name on the left. Using the data collected, the evidence was compiled under each
category to identify patterns and further themes. The data collection included observations of
planning meetings, interviews and documentation, all of which provided me with a rich source
of data for analysis. To ensure a high quality of analysis, I attended to some key
principles outlined by Yin (2009), which for my study included attending to all evidence,
addressing the most significant aspects of the case study, and demonstrating a strong
understanding of current thinking on my topic whilst using this prior expert knowledge during
the analysis.
Phase 3
Case study school: Western College
Teacher participants: 4 (three class teachers and the curriculum coordinator)
Student participants: 0
My participation: observing teachers in their classrooms and how they planned for formative assessment strategies
Purpose of research: comparative study between Matilda International School and Western College to understand how formative assessment strategies were implemented and developed through collaboration
Length of data collection: two weeks
Data collection: semi-structured interviews; observations; documentation
Analysis: thematic; interpretive inquiry
Table 6: Phase 3 case study overview
The ethical issues involved in my research
As this was a qualitative study, the ethical implications related to my research were carefully considered. Both
the action research and case study involved interviewing and observing teacher participants, so
it was necessary to gain their consent beforehand. In phase 1 of the action research, I also
interviewed and observed participating students, so it was necessary to obtain written
permission from their parents prior to beginning the data collection. I also obtained permission
from the students themselves. The participants were invited to volunteer to be involved in the
research. Pseudonyms were used for all participants and they were informed that they could
withdraw from the research at any point. Teacher participants were invited to read the
completed research.
Conclusion
This chapter has provided an explanation of the qualitative approaches used in this study and
an overview of the research questions. The choice of qualitative methods and the approaches
used, including action research, practitioner research and case study, has been explained. An outline
of how the data was collected and analysed has been provided. Explanations have been given for the
selection of research methodologies and the data collection process, with the literature reinforcing
the justification of those decisions.
Chapter 5: Developing formative assessment at Matilda International School
Introduction
This chapter focuses on discussion of the first phase of the action research at Matilda
International School (MIS). MIS utilised the Canadian Ontario Curriculum before it became
an authorised IB school in early 2009. It has students from Pre Reception (3-year-olds) through
to Grade 12 on one campus, including approximately 1850 students representing 41
nationalities. Considered to be one of the leading schools in South East Asia, MIS is an IB
World School fully authorised to deliver the International Baccalaureate's Diploma, Middle
Years and Primary Years Programmes. The mission statement of the school is ‘to develop
responsible global citizens and leaders through academic excellence’. The primary goal of the
Lower (primary) School is for each student to become an independent learner, excelling both
academically and personally with respect and concern for the school community and the
community in which they live.
The need for formative assessment at Matilda International School
According to the then vice-principal, prior to 2009, there was a belief that the learning and
teaching at the school was ‘very good’. Yet learning and teaching was often discussed without
mention of assessment. The concept of formative and summative assessment was understood by
very few teachers or administrators. Becoming an IB World School gave the school
authorisation to adopt the IB PYP curriculum. In the early stage of the transition,
many teachers realised that the implementation of assessment to improve student learning was
almost non-existent. As one administrator commented to me when I first arrived at the school
in mid-2009; ‘we simply need to improve our assessment practice’. It was recognised that MIS
needed to move away from assessment of learning (summative assessment) as the predominant
form of assessment, to assessment as learning and assessment for learning (formative
assessment) (Earl, 2003). It was a discussion with my then vice-principal and other colleagues
about areas of improvement that were needed in the school that led to my decision to conduct
research on how to improve student learning through formative assessment.
This first phase focused on action research to explore how to develop and implement formative
assessment strategies and to analyse how this approach could improve pedagogy and student
learning. Two teachers offered to participate in the action research as discussed in the
methodology chapter. The data was collected from a Prep and Grade 5 class at MIS over a
three-month period. The participants included two classroom teachers, Emily in Prep and Harriet in Grade 5, and four students from each class.
In this chapter, the interviews with teachers, planning sessions, classroom observations,
reflective discussions, questioning of students and student work are reviewed and analysed
through thematic analysis and interpretative inquiry. I identify, analyse, and discuss patterns
and findings from the data in relation to the research questions. In addition, my journal notes
are used as a further source of data and to inform the overall findings on how formative
assessment strategies were developed and implemented as well as how they influenced
improvements in student learning.
This chapter is divided into four sections:
1. Section one is focused on the initial stages of the action research including a profile of the
teacher participants. This includes the first interview with the two teachers discussing their
current understanding of formative assessment.
2. Section two provides analysis of the data collected from the classes observed. This was
recorded through rich narratives describing what was observed in relation to the various
formative assessment strategies, which included stating the learning intention, developing the
success criteria, effective teacher questioning and feedback, and self and peer-assessment, as
well as the learning and teaching that occurred in the two classrooms.
3. The third section provides an analysis of my journal entries pertinent to the initial stages of
the study. The purpose of this analysis is to increase understanding of how our action research
process unfolded in developing formative assessment strategies, how we planned and
developed the strategies in the classroom and how we bridged the gap between theory and the
practical use of formative assessment.
4. The final section describes the themes that emerged from the analysis of data in relation to
the research questions. The findings and conclusions from this phase of the study are then
presented.
Profile of teacher participants
Emily
Emily is a Canadian national who studied in Canada and the United Kingdom and had been
teaching overseas for the past seven years in Uganda, China and Hong Kong. She was in her
fourth year teaching in Hong Kong and a preparatory (year 1) teacher when the study took
place. She had also been the team leader for Prep for the past three years. Spending most of
her career teaching in the early years, she is a confident and very competent teacher. With a
sound knowledge of inquiry learning, she has effectively adapted her teaching, moving from the English National Curriculum to the PYP. She has an extensive knowledge of
teaching young students and demonstrates genuine care for the children she works
with. While working in a British international school in Shanghai, Emily established a very good
understanding of formative assessment early in her career. Over the years, Emily has had
strong training in formative assessment and for much of her professional development, she has
touched upon the different strategies of formative assessment. Emily was present at the first
presentation I gave to MIS on my research into formative assessment and was able to add to
the collaborative discussions we had, pointing out alternative ways of using formative
assessment in the class, particularly with respect to how it might be developed in the early
years.
Emily was very keen to participate in this research. She saw it as an opportunity to enhance
her own teaching through collaborative planning and a formal process of looking closely at her
teaching practice. Being very open to constructive feedback and change, Emily saw this as an
invaluable experience to improve her practice.
The prep students from Emily’s class who were selected by ballot for this research were Pia,
Luke, Chris and Patrick.
Harriet
Harriet completed her undergraduate degree at Queen’s University in Ontario, Canada. For her
first teaching role, she moved to South America where she taught for three years and was in
her third year of teaching in Hong Kong at the beginning of the research. She has taught
Grades 5 to 9 and at the time of data collection was teaching Grade 5. For two years she was a team leader of her year level. Harriet has a strong interest in technology and is well known
amongst her colleagues as being an innovative teacher. She has presented at workshops around
Asia on using blogging in the classroom and was in charge of the one-to-one computer release
program at the school. She sees her future as a learning and teaching technologies facilitator
which would allow her to use her technology skills with a broader audience.
By her own admission, the idea of formative assessment was new to Harriet. She had only
come across the concept when she began her teaching at MIS and acknowledged that she still
has much to learn. Harriet saw this opportunity to be involved in the research as a chance to
explore her teaching practice. She felt that her assessment practice was an area to improve on
and was very willing to listen and learn. Having Harriet involved in the research was an
exciting prospect, as I identified her participation as an opportunity to explore in depth how
formative assessment might develop in the upper years of primary school.
The Grade 5 students from Harriet’s class who were selected by ballot for this research were
Donald, Daniel, Sophia and Anna.
Beginning of the action research process
At the start of the study, Emily and Harriet both appeared excited and were looking forward to
the challenge of the action research. Before the study began, both teachers said they had a
limited understanding of action research, but they were optimistic about what they could learn
from the process. The study began with a 45 minute interview with Emily and Harriet
separately, to establish their understanding of formative assessment, what areas they believed
they needed to improve and what could be the starting point in implementing formative
assessment strategies. At the beginning of the action research, Emily already had a very good
understanding of formative assessment. She requested a copy of the questions prior to the interview as she ‘just want(ed) to be prepared’. As a result,
Emily articulated her views very well, and provided lengthy answers particularly in relation to
how she used different formative assessment strategies in the classroom. Whilst Harriet
understood the purpose of formative assessment, she struggled to identify clear strategies she
implemented in the classroom.
The starting points: developing formative assessment through action research
The following section explains how the action research process evolved with Emily and Harriet
over a three month period from the initial interview and observation, to the final reflection with
the teachers and students. A final reflection also took place with Emily and Harriet six months
later.
Emily and Harriet were at different levels in relation to their understanding of formative
assessment. Therefore, I needed to start at different points with each teacher to enable them to
build understanding and find ways of improving their own pedagogy and methods of assessing
student learning. What became very quickly apparent was that Harriet was new to formative
assessment and had not considered in detail how or why formative assessment might be
important for her teaching practice and students’ learning. In the first interview I had with
Harriet, she said:
I have only really learnt about using formative assessment since I have been in
Grade 5 and it is quite limited, even though I know that there are lots of things I
could bring in right now.
This meant that I needed to discuss the different formative assessment strategies that we would
plan to use with Harriet in depth. I showed her copies of Clarke’s (2008) and Glasson’s (2009) formative assessment books, which I drew on in analysing the concepts. Harriet showed interest
in the books stating, ‘I might read them later’. Although she never asked for the books, I used
them as a reference point for our meetings to ensure that the strategies being implemented were
focused on the theoretical framework.
Emily, however, had read about formative assessment and was aware of the different strategies
I was researching. In the initial interview, Emily outlined her understanding of formative
assessment with its connection to learning and teaching stating:
Assessment occurs when the teaching and learning has taken place. Formative
assessment is not meant to produce grades. It is meant to improve teaching and
learning, and therefore, student outcomes.
Emily saw the importance of formative assessment and how it helped students connect to their
learning. She said that it ‘helps students have more ownership over what they are doing
because they know what they are doing'.
As the interview moved into discussion about the formative assessment strategies, Emily
revealed that she had implemented some formative assessment strategies and understood their
purpose, though she had a modest opinion of how well she used the strategies in her classroom.
She commented:
I have been thinking about different formative assessment strategies I can use
in my class for a while now. I really began exploring the idea when I was
working in Shanghai a few years ago.
Emily was also eager to hear my views on the different formative assessment strategies.
During the first meeting following the initial observation, I brought Clarke’s (2008) and Glasson’s (2009) books to the meeting. Emily asked for copies of them and then regularly
brought the books to planning meetings, often referring to them during the meeting. This
showed she was willing to engage in new learning and to build new thinking into her learning
and teaching.
After the initial interview, I expected Emily would demonstrate a greater understanding of
using formative assessment, and this was certainly the case. Emily applied strategies outlined
by Clarke (2001, 2005, 2008) and Glasson (2009), including learning intentions, success criteria and descriptive feedback focused on improving student learning. As
Harriet acknowledged, formative assessment was new to her, so this research offered a
professional learning opportunity to develop her understanding of different formative
assessment strategies. For Emily, it was also a professional development opportunity, but it
was about deepening her understanding of formative assessment by using strategies to inform
each stage of learning and teaching.
The action research process was also different for Emily and Harriet as they had different
teaching styles and levels of expertise. Emily planned her strategies meticulously for student learning. Harriet had a more relaxed, laidback approach to teaching and worked with students in a less planned manner. Harriet planned her lessons to some extent with her team as
well as on her own, but she was very comfortable when the lesson needed adapting ‘on the
spot’. At MIS, my own Grade 1 classroom was a short distance from Emily’s class, which
provided many opportunities formally and informally to discuss the action research. Harriet’s
classroom was in a different building, and due to distance and the split lunch and recess breaks,
we rarely saw each other. This was a constraining factor in organising times to meet and
limited the opportunities we had informally to discuss the action research. Even though Harriet
had agreed to participate formally in the action research planning, it was at times challenging,
as she demonstrated passive resistance to suggestions on how she might implement certain
formative assessment strategies. As I was concerned that Harriet felt forced to participate in
the action research, I asked if she still wanted to be involved after the second lesson
observation. Since she did not ask to withdraw and appeared happy to continue, I tried to
encourage her participation as far as possible.
During our initial semi-structured interview, Emily was able to outline her understanding of
formative assessment strategies and how she implements them in the classroom. She presented
succinct answers to explain her definition of formative assessment that provided evidence of
her understanding. She stated:
Assessment occurs when the teaching and learning takes place, while students
are forming their understanding. It’s ongoing and frequent and it should be
directly in line with the direct instruction in the classroom. I guess it can be
planned for and unplanned. It can be informal and frequent when an
opportunity arises anytime during the lesson for you to question and make
observations.
The initial observation showed Emily already embedding formative assessment strategies into
learning and teaching in the classroom. Emily applied Clarke’s (2001) puppets strategy: ‘Walt’ (we are learning to…) to state the learning intention and ‘Wilf’ (what I am looking
for…) to develop the success criteria (explained in further detail in this chapter). Other
strategies Emily implemented included regular oral feedback, which was given when she
moved amongst the students to provide individual feedback as well as relevant feedback to the
whole class. Emily demonstrated understanding of self and peer-assessment in our
discussions. She talked about how peer-assessment helps students to reflect critically on their
own learning and how self-assessment is important for students monitoring their own learning.
Emily did acknowledge she ‘did not use a lot of peer and self-assessment in the classroom and
would like to use it a lot more’. This understanding of assessment meant that we could discuss
and begin to implement other key strategies of formative assessment without it appearing to be
too daunting for Emily. It also provided an opportunity to develop a multilayered approach to
formative assessment whereby multiple strategies were implemented at one time and then built
upon within a lesson or a unit of lessons (explained in further detail in this chapter).
The initial interview and observation of Emily’s class took place before we began planning
formative assessment strategies together. Once I had established Emily’s understanding of
formative assessment, we began planning for the implementation of stating the learning
intention and developing the success criteria for the next lesson observation. Although Emily
was already implementing learning intentions and success criteria with her class, she was still
eager to plan the assessment strategy together. Communication via email or informal
discussions was used to organise times to meet throughout the process.
Initial perceptions of Emily’s class
The following vignette is a snapshot from my notes of Emily’s class from my first
observation, when the action research and data collection first began. This vignette describes
my initial perceptions of her classroom:
Walking into the room, the feeling of warmth and openness in the learning
environment is apparent. Student learning is proudly displayed on the walls.
The ‘library corner’ looks cosy and comforting with the soft pillows and
colourful carpet. There is a ‘home corner’ where the students have a range of
toys to play with. The teacher stands at the front door greeting students with a
natural enthusiasm for her job that cannot be missed. The students are
attracted to this comforting style from their ‘favourite teacher’ and are desperate to please her, almost fighting for her attention. Emily takes it all
in, ensuring that each child has their moment with her without making anyone
feel alienated. It is a positive environment where the children are valued and
feel like they are an important part of the classroom. The classroom has the
look and feel of a supportive early years environment. The excitement of this
prep class reminds me why people choose education as their careers. The
enthusiasm in the room is contagious. For five year olds, this is a special place
to be.
Lesson observations: Emily
Table 7 (below) outlines the lesson observations in Emily’s class. Lesson observation 1
was a single lesson with Emily implementing formative assessment strategies involving stating
the learning intentions and success criteria. We held reflection meetings after the observation
to establish what worked well and what could be added to improve the strategy. We also used
email and informal conversations to discuss other strategies that she had implemented for the
occasions when observations did not take place. For these unobserved lessons, Emily gave me
feedback on how she felt the lessons proceeded. Once it was established that Emily was well
aware of the strategies, lesson observations 2, 3, 4 and 5 focused on two learning intentions (with the second building on the first), during which we planned together all the formative assessment strategies, including the learning intentions,
success criteria, teacher questioning, teacher feedback and self and peer-assessment. This was
to evaluate the effectiveness of each formative assessment strategy individually. These four
lessons took place over a week and a half. There were short reflection meetings including
informal conversations between the lessons to discuss how the strategies were being
implemented and how they could be improved. Lesson observations 6 and 7 involved observing the formative assessment strategies used together, to establish how they built upon each other to strengthen the effectiveness of the lesson. Again, we met formally and informally between observations to discuss how the lessons were being implemented and how they could be improved. Emily more often provided this feedback during our planned meetings and occasionally through informal conversations in the corridor.
In total, I observed Emily seven times including the initial visit to her classroom. The
observations ranged from 20 minutes to one hour in duration. We had 10 pre- and post-planning
meetings (including some extra follow up discussions when time did not permit us to finish our
reflections) to discuss what happened in the lessons and what the next stage would be. This
did not include the informal discussions that took place throughout the data collection. The
planning meetings ranged between 20 and 40 minutes in length. We conducted a final
reflection interview approximately six months after the observations to draw conclusions on
what had changed in the formative assessment strategies she implements in the classroom.
This interview is discussed later in this chapter.
| Lesson observed | Formative assessment strategies planned and implemented | Learning focus (learning intention) |
|---|---|---|
| Initial observation | No planned formative assessment strategies implemented (learning intention, success criteria and teacher feedback were observed) | Sort 2D and 3D shapes to identify their differences |
| Lesson observation 1 | Stating the learning intention and developing the success criteria | How to write a sentence with an adjective |
| Lesson observations 2-3 | All formative assessment strategies implemented (stating the learning intention, developing the success criteria, effective teacher questioning, teacher feedback and self and peer-assessment) | How to create a character using a picture |
| Lesson observations 4-5 | All formative assessment strategies implemented (as for lesson observations 2-3) | How to create a character using words |
| Lesson observations 6-7 | ‘Multilayered approach to formative assessment’: all formative assessment strategies implemented with the aim of building upon each other in a systematic approach | How to write open-ended interview questions |

Table 7: Emily's lesson observations
Initial perceptions of Harriet’s class
The following vignette is a snapshot from my notes of Harriet’s class from my first
observation when the action research and data collection first began. This vignette describes
my initial perceptions of Harriet’s classroom:
The students gathered outside the classroom, chatting comfortably amongst one another while waiting for Harriet to let them in. She arrives moments
later balancing coffee in one hand and her laptop in the other. She greets the
students who are standing near the door with a warm, friendly smile. The
students appear relaxed around her as they head towards their seat in the
classroom. The configuration of the desks set up was in three large groups of
students facing each other, giving the opportunity for possible collaboration
between students. Harriet begins talking with some students who are still not
ready for the instructions to begin. She talks quickly as the last few students
scramble to get ready for the lesson. The perception is that no time is wasted in this classroom. Harriet moves through her instructions for the
lesson focusing on the content of the lesson. Students will need to keep up with
the pace the teacher sets. Despite the configuration of the classroom, the
beginning of the lesson involved no peer discussion. Perhaps, an opportunity to
enhance the learning was missed by not giving students the chance to discuss
their learning with each other. Students begin work in their books on their own.
Some students raise their hand for further clarification as Harriet walks around
the room. Most of the class works diligently and quietly on the set task. The
classroom has the feeling of a warm comfortable environment with a dedicated
teacher and enthusiastic students. Yet something appears to be missing if all students are to be in the best position to learn. This may provide a starting point with Harriet for our action research on using
formative assessment to improve the learning in her classroom.
Harriet believed that most of her learning about formative assessment had taken place since
moving into her role in Grade 5. This belief resulted from frequent conversations and
discussions with teachers and team members and was the likely outcome of the school making
assessment a focus for that year. Harriet understood the key goals outlined by Clarke (2008)
and the ARG (2002) that formative assessment involves identifying the current levels of
learning for students, developing their capacity to understand where they need to go in their
learning development and how this can be achieved. However, Harriet recognised that her
understanding of formative assessment required improvement and she knew of other strategies
she could implement in the classroom. Harriet acknowledged that many of the formative
assessment tasks she and her team used were unstructured and unplanned, as opposed to the
structure they put into the summative assessment tasks. She said that, ‘Most of our formative
assessments are unstructured. For me, my day-to-day formative assessment will just happen
without a structure’. She said that during their team planning meetings, it was only the
summative assessment task that was discussed, as formative assessment was not seen as a
priority. Harriet’s comment that ‘we use explicit learning outcomes or intended outcomes and
success criteria for summative assessment tasks’ reveals some confusion about the nature of
formative assessment for her and the team. During the initial interview when I asked Harriet
about self and peer-assessment, her instinct was to discuss in it relation to the summative
assessment task as she said:
We have at least two or three summative pieces per unit, so at least one of
those, there is some sort of technology project. With those, they are really easy
to go back and make changes. With those, the kids go back and change after
the fact, after we have assessed it in a summative way. But that is not
something that I would give them class time to do. So we would rarely visit a
project after the summative assessment.
The team believed there was value in spending time discussing summative assessment tasks,
but it was evident that there was a need for a commitment to discussing, sharing and collaborating in developing and implementing formative assessment strategies.
It was clear that for Harriet, consciously implementing formative assessment strategies was an
area to develop. Other areas that Harriet did not discuss, but could have further developed, included the types of teacher questioning, how students answer questions, and giving students an opportunity to be involved in developing the success criteria. It appeared that the only formative assessment structure in her teaching was to ensure that she sat down with each child to provide verbal feedback on how they were going and what they needed to do to improve their learning.
Once I had established Harriet’s use and understanding of formative assessment, we began
planning for the implementation of the different strategies. Harriet and I met to discuss what
strategies she felt comfortable implementing. Although Harriet said she was open to trying all the strategies that were a part of the action research, this openness was not always evident throughout the study. It was not the formative assessment itself that Harriet was reluctant to attempt; rather, it was suggestions that could improve her teaching that she did not want to take on board. It was difficult to pinpoint why this was the case, but teachers who are reluctant to accept constructive criticism of their practice can see it as disapproval of their teaching. Teachers who are able to de-privatise their practice by focusing on what improves learning for students are more likely to accept constructive criticism and attend to what the evidence tells them (DuFour & Marzano, 2011; Kruse, Louis & Bryk, 1995).
Table 8 (below) outlines the lesson observations with Harriet. With the exception of lesson observations 6 and 7, the lessons observed were not related to each other, although lesson observations 3, 4, 6 and 7 were related to the unit of inquiry on human systems. In lesson observations 6 and 7, students learnt to study alone and then with a peer, following which the participating students reflected with me on the study method they felt was more effective. Harriet and I used email and informal conversations to discuss other strategies she
had implemented when observations did not take place. For those unobserved lessons, Harriet
gave me feedback on how she felt the lessons progressed. She more often provided this
information during our planned meetings and on occasions, through informal conversations in
the corridor.
In total, I observed Harriet seven times, with observations ranging from 20 minutes to one hour, including the initial observation in her classroom. We had seven pre- and post-planning meetings to discuss the
lessons and the planning for the next stage, not including informal discussions that took place
throughout the data collection. The planning meetings ranged between 20 and 40 minutes in
length. We had a final reflection interview approximately six months after the observations
had been completed to discuss any changes in the formative assessment strategies Harriet uses
in the classroom.
| Lesson observed | Formative assessment strategies planned and implemented | Learning focus (learning intention)* |
|---|---|---|
| Initial observation | No planned formative strategies implemented | Unit of inquiry pre-assessment, spelling and mathematics (computation skills) |
| Lesson observations 1 and 2 | No agreed strategy | Students involved in a mathematics quiz and then correcting the answers themselves; unit of inquiry lesson involving group work |
| Lesson observation 3 | Teacher questioning: use of ‘talk partners’ | Students discuss and look into new unit of inquiry on human body systems |
| Lesson observation 4 | Self-assessment; learning intention | Reflection on choice of studying and mock examination taking |
| Lesson observation 5 | Teacher feedback | Report writing |
| Lesson observation 6 | Teacher feedback; self-assessment | Studying alone for mock examination |
| Lesson observation 7 | Peer-assessment | Studying with a partner for mock examination |

Table 8: Harriet's lesson observations
*During Emily’s observed lessons, a learning intention was clearly defined for each lesson.
For Harriet, this was not always the case; for some lessons, where the pre-lesson discussion was brief, it was difficult to determine what the learning goal was.
Reflections on the importance of planning the action research
The planning that took place between Emily, Harriet and me before each lesson proved to be valuable time spent improving the quality of the strategies to be implemented in the classroom. It gave us the time and focus to develop the formative assessment strategies. I had more time planning with Emily than with Harriet, and the extra time spent planning resulted in better formative assessment strategies in Emily’s class than in Harriet’s. It allowed for deeper discussions about the implementation of the strategies, although other factors, outlined above, also played a role. Emily teaching a grade below me was an advantage, as we had
students of similar ages in our classes and we had also used similar assessment strategies in the
classroom, including stating the learning intention and developing the success criteria. In the
early stages of the action research project, my Grade 1 team did not spend much time planning
formative assessment strategies as the team did not consider it a priority nor did they believe
they had the ‘time’ for it. Therefore, I regularly planned the strategies I used including self and
peer-assessment on my own. Interestingly, many of the strategies were more effective in
Emily’s class than in my own. In particular, developing the success criteria was more student-
orientated in Emily’s class, and the use of self and peer-assessment was more effective when
planned together. Since the students were involved in developing the success criteria, they had
a clear understanding of what they were assessing when the self and peer-assessment strategies
were used. During our planning meetings we had discussed the importance of students developing the success criteria through hands-on activities and a step-by-step introduction of self and peer-assessment. The planning of the activities had a significant impact on the strategies, as we took the time to plan them well and in the context of what would best suit the students in Emily’s class.
Emily regularly came to our planning meetings with established learning goals (learning
intention) for the next observed lesson. The learning intention became a starting point and a
focus for each meeting, enabling the discussion to stay focused on the intended improvements
for student learning. There were times when we would need to rewrite the learning intention to
change it to ‘child speak’, focusing on the student learning. The ‘context’ of the lesson had
been separated from the learning that was taking place (Glasson, 2009). An example of this is
when we changed the learning intention from ‘creating a scary picture’ to ‘how to create a
character using a picture’. The subtle but important difference was that the second learning intention was de-contextualised so it could be applied in other contexts (Clarke, 2008).
In Emily’s class for lesson observation 1 on ‘how to write a sentence with an adjective’, the
initial plan was for students to create a list of adjectives and, after a discussion with the teacher, develop the success criteria. During our planning discussion, we felt a more hands-on approach was required, so we moved to giving each student a word written on a card, enabling them to create their own sentences. Some students in the class had an adjective on their card and were required to determine where that adjective should go. This became the basis for developing the success criteria, and during the observation, the students
were engaged and enjoyed creating the success criteria. When it came to writing their
sentences, the class was confident of achieving this learning intention. For lesson observation 3
which focused on ‘how to create a character using a picture’ with Emily, the students were
initially going to discuss with the teacher what characters are and different types of characters,
as the basis for developing the success criteria. However, during the planning meeting we
discussed how students could be given a range of pictures of characters to sort them into
groups according to what kind of character they were. From there, students would take the
scary character category and begin to make generalisations about what the character might look
like. This strategy of involving students in the planning of their learning through hands-on
approaches proved to be stimulating and interesting for the students. During the observation,
students were eager to share what they thought a scary character should look like during class
discussions. Emily noted after the lesson ‘they were very engaged’ and it was a ‘much better
idea than our original one’, with reference to the hands-on approach to the success criteria that
had been planned. She was pleased with how the collaborative planning had improved the
quality of formative assessment strategies for developing the success criteria and self and peer-
assessment. This prompted her to comment that the assessment strategies:
…really should become a regular part of our curriculum meetings. There is a
real benefit to discussing how we will use formative assessment in the
classroom.
Stating the learning intentions and developing the success criteria
Stating the learning intentions and developing the success criteria with students are two key
strategies outlined by Clarke (2008, 2005) and Glasson (2009). These strategies provide
students with the information to consider what is to be learnt and why, and how they can be
successful in their learning. Below is an overview of how Emily and Harriet implemented
these approaches and what I attempted to implement within my own classroom, together with
my reflections.
Emily’s use of learning intentions and success criteria
Glasson (2009) states that framing the learning intention and sharing it with the students
becomes the basis for everything that follows afterwards in the lesson. The success criteria are
the indicators that show students what steps they need to achieve the learning intention (Clarke,
2005) and can help both the student and teacher know whether the learning intention has been
met (Glasson, 2009). During the lessons observed in Emily’s class, it was evident her students
were well versed in working with learning intentions and success criteria. Emily used the visual tools developed by Clarke (2001) for younger students: ‘Walt’ the puppet (we are learning to…) to introduce the learning intention, and ‘Wilf’ the puppet (what I am looking for…) for the success criteria. These tools were used as a visual reminder for the students to
remember the learning intention and success criteria. This has now become a common
formative assessment tool used in early years and primary classrooms across many different
schools, as I have previously observed in Hong Kong, Australia, China, the Philippines,
Malaysia and England. During Emily’s lessons, the learning intention was introduced at the
beginning of the lesson and the students called out in unison ‘we are learning to…’ These two
puppets were regularly used as a visual reminder for early years students to assist in
remembering what they are learning and what steps are needed to be successful. Emily
discovered this tool when she was teaching in Shanghai with the English National Curriculum.
She commented that:
It helps students reflect and describe their own learning to other people. It also
helps me to be clear on the purpose of the lesson. I have a deliberate set
approach to sharing the learning intention and success criteria through the Wilf
and Walt puppets to make it easier on the students to remember. They really
respond to the puppets.
Clarke (2008) argues the importance of younger students having visual reminders in the
classroom. Emily found it to be a very effective way for students to stay on task during a
lesson and to help them to learn.
Although Emily found Wilf and Walt to be an effective formative assessment tool, Clarke (2001), who developed the characters, now argues against the use of Wilf for the success criteria because:
Wilf was a bit of a disaster - it meant teachers were giving children the success
criteria instead of asking children to generate them. It made children think,
‘This is about doing what the teacher wants us to do’. (Ward, 2008, para. 12)
However, this was not the case in Emily’s class. By investigating the topic, she asked the
students to develop what they thought the criteria would be. During our meeting, we discussed
how the students were going to create their own success criteria. An example of this was a
lesson where the learning intention was ‘how to create a character using a picture’ involving a
range of different character pictures on the table to serve as stimulus. Brookhart (2013) argues that the work provided should be relevant to the expected learning, and this is exactly what Emily did. Emily asked the students to discuss in pairs which characters were
scary and why. This discussion helped students develop the success criteria through the
implementation of talk partners (Clarke, 2008) at their tables. This encouraged the students
who were busily engaged in looking at the images and discussing what they noticed about the
characters. Emily brought the students back to the floor for a whole class discussion to develop the success criteria, where it was decided that a scary character needed dark colours, big body parts and a mean-looking face.
Students sharing the learning intention and determining the success criteria appeared to have an
impact on the rest of the lesson. It helped the students to focus their thinking on what was required of them, clarify their understanding, identify the steps to success, and therefore determine any possible difficulties. It also allowed for discussions between students and the
teacher about strategies for improvement (Clarke, 2005). Having students involved in the
assessment process also appeared to create a higher level of engagement and involved higher
order thinking skills through the use of analysis, evaluation and creation (Brookhart, 2010).
Many of the students had discussions amongst themselves related to the learning intention and
success criteria in some form. An interesting observation was the students’ ability to decide
for themselves whether they had been successful in their learning. During the reflection on the
scary character as a whole class, students were able to decide for themselves whether they had
been successful in their learning, when Emily talked through each criterion. Students gave
thumbs up to say there was evidence of that criterion in their learning. Some of the students
who did not have the evidence were given an opportunity to continue working to add more to
their picture. This is what Clarke (2008) identified as the purpose of establishing the success criteria: it allows students to self-assess their learning and establishes where they are and
where they need to go next in their learning (Black & Wiliam, 1998b; Popham, 2008; Sadler,
1989).
Clarke (2008), Glasson (2009) and Sadler (1998) argue for formative assessment to be at the
forefront of the lesson. This allows students to take ownership of their learning and it provides
clear information on what they are being assessed against. In all the lessons I observed with
Emily, the students were clear about what they were learning. Many of the lessons began with the
learning intention for that lesson or a recall of what they had learnt in previous lessons. When
I spoke to one of Emily’s students, Pia, about what helps with her learning, she said having the
‘things I need to do (are) written on the board’. This is what Emily did for each lesson that had
a learning intention and success criteria. When I spoke with Luke, another student, during a
lesson, I could see that he was thinking about the success criteria when creating his character:
Researcher: What are you making now?
Luke: Scary character.
Researcher: Now, how do you know that your character is going to be scary?
Luke: I need to put dark colours on it.
Researcher: Dark colours. What else do you need?
Luke: Big body parts.
Researcher: And anything else?
Luke: (After some thinking time) Scary bits.
Researcher: Scary bits. Now …
Luke: Oh, angry face.
The picture Luke was drawing was his own original creation of a scary character and he could
justify how it was scary against the criteria the class had agreed upon. The interaction above
shows Luke was recalling the success criteria the class had determined earlier in the lesson.
Importantly, it helped him focus his learning and determine what he needed to be successful,
thereby self-assessing his own learning (Clarke, 2005).
In lesson observation 4, Emily had students investigate the language used to describe a character, which determined the success criteria for the next two lessons. The students were highly
engaged in developing the success criteria through examples of good language from two
books. Students decided the description of the character needed verbs and adjectives. As a
result of this investigation, the students I spoke to were clear about how to describe their
character. For lesson observation 6, Emily used a similar approach to encourage students to
identify effective open-ended questions for their unit of inquiry. When students were asked to
design their own open-ended questions, they could explain what the success criteria was and
how their own questions related to each criterion. This evidence further supports the argument
that through involving students in the development of the success criteria, they are more likely
to understand what they have to do to be successful in their learning (Wiliam, 2011).
This example of using learning intentions and success criteria demonstrates how powerful the
involvement of students in the assessment process can be and how it can enhance their
learning. Glasson (2009) states that setting up the learning intention at the start of a lesson
becomes the basis for everything that follows. Emily also believed in this approach as her
lessons were routinely based on stating a learning intention to the students.
Next steps in the action research for Emily
Clarke (2008) advises teachers to be aware of the success criteria in its basic form, but also to
be open to new ideas from the students. With creative lessons in particular, teachers have to be
careful not to portray one approach to success. In the case of the learning intention of creating
a scary character in Emily’s class, we had already planned the success criteria and during the
teaching, Emily was only looking for criteria that had already been established during our
planning session. One student made the suggestion that a background colour would help to
make the character look scary. As it was not a part of the planned criteria, Emily did not
explore this suggestion further, thus limiting creativity.
During our reflection on this lesson, Emily said that the students could play a bigger part in developing the success criteria. She had her own preconceived ideas of what the criteria should look like; even though the students were involved in a short activity to develop the success criteria in the planning phase, opportunities to incorporate some of the students’ ideas were missed because the criteria had already been established. However, Emily argued that teachers need
to have some idea of what the criteria might look like before engaging in a lesson. Glasson
(2009) states this as well, to assist teachers to guide their students in the right direction.
However, it is important that the children’s input should not be compromised by rigid success
criteria.
Learning from students about formative assessment
Researcher: And how are you going to make your scary character?
Pia: Using dark colours, using a mean face and using a big or fat body parts.
Researcher: Ok, what kind of dark colours are you going to use?
Pia: Black, or something. Like that dragon I drew over there (pointing to a
dragon that had been created before the success criteria was developed).
Researcher: Ok, what kind of big body parts are you going to have?
Pia: But that one is a bit thin (pointing to her dragon), but the thing that shows
he scary is that it has red eyes.
Researcher: So that’s a bit like having the mean face with the red eyes?
Researcher: (A nod of agreement from Pia.) But your character has big body
parts. Does it have a long body?
Pia: Yes it is a dragon.
Researcher: So that is like a big body part, the long body?
Pia: Not really, not very fat like that (pointing to a fat character).
Researcher: It’s not fat, but they are not all fat. Sometimes they have long
bodies. So do you think…
(Pia interrupts)
Pia: I also made it breathe out fire and I also put some sharp teeth on it.
Researcher: Ah, you have got sharp teeth. Thank-you Pia.
From the transcript above, it is clear that Pia knew the success criteria for the lesson, but
appeared to have her own thoughts on what makes a character scary. She talked about her
character having sharp teeth, red eyes and breathing fire. For Pia, this appeared to be part of
her success criteria. Although Emily had created the success criteria with the students, it still did not have to be the criteria for all the students. In this case, Pia’s own success criteria would also have made a scary character in her eyes; however, had she not completed all the set criteria Emily developed with the class, her learning may not have been classified as ‘successful’.
This discussion with Pia shows there could have been room for individual students to add their
own criteria, which would provide an opportunity to expand and extend student thinking
without limiting their creativity to the examples that are being explored. This approach would
not change the learning intention, only the different ways in which students might choose to
achieve success in the lesson. Glasson (2009) discusses the view that teachers need to have an
idea of what the success criteria might look like so that they can guide student thinking by asking
‘appropriate questions’ and make ‘relevant suggestions’ (p. 37). However, as Emily’s case
demonstrates, teachers have to be careful when they preplan the success criteria, that they are
still open to possible new ideas. It is important to have the end point in mind, but be willing to
shift and adjust for the students they are teaching.
Harriet’s view on learning intentions and success criteria
In the following discussion, Harriet commented that she uses:
…explicit learning outcomes and success criteria for summative tasks. Along
the way, we often do a lot of formative assessment in line with those, but there’s
rarely intended outcomes or success criteria that is just set out for formative
tasks.
Interviewer: So do you mean that a long-term goal is always the summative
assessment? What about particular lessons when you want the students to have
a certain understanding? For these particular lessons, do you make it explicit to
the students?
Harriet: Not very often in Grade 5. We would rarely do that. Something for me
to think about. I have never planned a lesson in Grade 5 that is a 40 minute
lesson. We are working on bigger picture projects all of the time. The learning
intentions and the success criteria are given at the beginning of the… I don’t
know exactly what the word is… it’s not unit by unit but it is not lesson by
lesson. It is somewhere in between because of the age of the kids, but we are
normally working in a 3-4 day range for an outcome.
Interviewer: Yes, similar idea. Obviously the timeframe is different, but you do make it explicitly clear what the learning will be, what they will be expected to learn?
Harriet: Yes, but success criteria not always.
Interviewer: Yes, but success criteria can come in the range of checklist and
rubrics. The success criteria are basically the steps the students are going to
take to meet the learning intention.
Harriet: Yes and we do that for knowledge, skills and understanding. Often in
the same assessment tool. Yes, we do that, definitely.
This discussion with Harriet during the initial interview about her understanding and implementation of learning intentions and success criteria revealed her limited understanding of formative assessment. In her practice, there appeared to be little connection between using
formative assessment and student learning. Harriet felt that learning intentions were not
something her team really used or needed because of the students’ age. She believed the
students in Grade 5 were at the top end of the primary years and did not need explicit learning
intentions as part of regular success criteria. Since most of the learning activities in the class led to the summative assessment task, Harriet said that students knew they were working
towards the summative assessment task or the overall objective they were trying to achieve.
When I suggested there might be certain skills that stating a learning intention would be useful
for, Harriet felt it did not fit the learning for the students, as they did not teach stand-alone
skills. In relation to the success criteria, Harriet did not discuss how it is developed with the
students.
It was clear that she had not thought a lot about explicitly stating a learning intention or
developing the success criteria with the students. Harriet did tell the students what they were
learning, but it was in relation to the summative assessment task at the end of the unit. She
clearly considered that the learning in the classroom was for their final assessment task and
final grading, not because the learning itself was important. Clarke (2008) discusses the ‘dramatic
impact’ (p. 86) learning intentions have on learning and teaching if the learning is separated
from the context or the activity. If students are given a contextualised objective, they over
focus on the most concrete element to them (in the case of Harriet’s class, the summative
assessment task itself), and therefore, student thinking and talking is more about what they are
doing as opposed to what they are learning. Students also have trouble transferring a skill they believe is relevant to only one context. The way Harriet’s team plans and then implements their
lessons supports this concern raised by Clarke (2008).
In the verbatim transcript above, Harriet’s statement, ‘we are working on bigger picture projects all of the time’, highlights the importance of making the purpose clear: what students are learning and why, to ensure they make a connection to their learning. This was not
something Emily had really spoken about. However, as Popham (2008) points out, long-term mastery goals need a learning progression outlined to map how students are going to meet the long-term objective. Harriet’s teaching could have been improved by stating learning
intentions on a daily or regular basis for her students to work towards the bigger picture. I
suggested this in a planning meeting but she was not prepared to accept this view.
Before lesson observation 4, I suggested that we implement learning intentions for this lesson.
Harriet felt there was not much point in doing this, but reluctantly complied. As this
lesson was about studying techniques, I suggested to Harriet that the learning intention could
be, ‘I understand what best helps me learn’. Harriet was happy with that outcome and said she
would refer to it regularly throughout the lesson to ensure everyone stayed focused on the
learning.
Harriet began the lesson well by introducing the learning intention and writing it on the board.
As the lesson continued, she forgot about the learning intention and her feedback became
general about studying techniques. This is an issue Wiliam (2011) describes as a ‘wallpaper
objective’ (p. 56), where the learning intention becomes tokenistic to the lesson as it is not
referred to by the class teacher. When I asked Daniel how he knew what he was learning, he
said it was written on the board and ‘Ms Redina (Harriet) mentioned it’. Straight away, this
had an impact on Daniel. With a visual prompt, he found it easy to reference the learning
intention. Observing the lesson, there was evidence of students like Daniel remembering what
they were learning, but Harriet was not convinced, as this was a ‘long term goal I talk about a
lot’. As a result, we only attempted learning intentions for one lesson and it did not have the
impact on student learning I was aiming to achieve. This was evident in the final interview
with the Grade 5 students, where they were asked how their teacher could help them to learn.
None of the students made reference to the teacher explaining what and why they were
learning (the learning intention). Instead, Anna said her teacher ‘can give us more tests’ in
reference to how the teacher could assist her learning. At this stage, Anna had not separated
the context from her learning as the context, in this case, was taking tests. Glasson (2009)
states that if teachers make the learning intention more explicit and separate it from the context
of lesson (in this case the summative assessment task), students begin to think of learning
intentions as assisting with their learning. This can make it clear what they are supposed to be
learning, what they are doing and why.
According to Harriet, her Grade 5 team of five teachers did use success criteria to assess the students on the summative assessment task, but the criteria were developed at the planning meeting and not with the students. This was common practice among the Grade 5 teachers, as this is
how they planned as a team for each unit of inquiry. Black and Wiliam (2003) raise the
concern that associating learning with grades means the student’s focus becomes obtaining the
best mark, rather than aiming to improve their learning. When this discussion was explored
further with Harriet, she did not quite accept the argument put forward by Black and Wiliam
(2003). She believed the summative assessment task was the best way to see if students had
achieved the learning outcomes, and it made most sense to her to discuss their learning in
relation to the summative assessment task. Following Bennett (2011), Harriet could design summative assessment tasks primarily to document student learning, while developing these tasks to further meet the secondary purpose of improving student learning.
During this interview, Harriet seemed to be interested in how stating the learning intentions
and developing the success criteria worked with the students. However, because of the style
that her team had developed for planning these strategies, she was reluctant to try these
approaches with her team or with me. For these formative assessment strategies to be
successful in the classroom, it would take a shift in mindset. Planning and team collaboration
would have a significant role in its success. Fullan (2011a) states that ‘there is no greater
motivator than internal accountability to oneself and one’s peers. It makes for a better
profession and it makes for a better accountability system’ (p. 8). There would need to be
motivation or pressure from the team to make planning formative assessment effective. It was
clear to me that Harriet was resistant to the whole action research process and its focus on formative
assessment, though she did say she might think about it again with her team in the future when
things were not so busy.
Developing success criteria with my class
Developing the success criteria became a regular part of my own class practice during the
study. One area I focused on improving was developing success criteria in relation to learning
different writing genres. As a Grade 1 team, we decided on checklists as an effective tool for
students to assess their own learning. Figure 8 (see p. 135) shows examples of four different genre checklists that were developed by the students during each unit of
inquiry for writing. Each checklist followed a guided process that the Grade 1 teachers
discussed with their students. When each genre was introduced to the class, we spent one
lesson investigating a range of different writing pieces related to the genre being taught. For example,
when we had a unit on writing invitations, a range of invitations were put on the students’
tables. Using sticky notes, students would leave comments on the invitations about what they
noticed. Then students moved back to the floor as a whole class and made a list of what was
found on the invitations. The students chose what they believed were the most important
elements of the genre. Subsequently the elements were broken down into two parts. The
elements in red on the checklist students agreed must be in the genre every time they wrote in
that style and the elements in black may or may not be needed depending on individual
student’s needs. This allowed for the differentiation in the classroom and for the students to
take ownership over the checklist.
Being involved in the process from the start of the unit, students felt confident about assessing
their own learning. Other formative assessment strategies followed on from the success
criteria including the teacher feedback given and the use of self and peer-assessment using the
checklists. Glasson (2009) argues that this approach to formative assessment provides clear
and explicit success criteria that ‘are the linchpin for assessment for learning’ (p. 37).
Reflecting upon the changes I made to my teaching practice, it was evident I had developed a
greater understanding of students being actively involved in their learning. This student learning increased my appreciation of the value of criterion-based learning and of student involvement in the development of criteria. Brookhart (2013) found that students, even in
lower grades, can identify what is the ‘desired performance… and what it looks like’ (p. 12).
Through the development of success criteria with students in my class, I realised that students
could effectively be involved in the assessment process and how it supported their learning.
Figure 8: Examples of genre checklists
Teacher questioning as a formative assessment task
Throughout the literature, effective teacher questioning is considered to involve asking questions related to the learning intention and success criteria, together with the strategies teachers implement during questioning to improve student understanding (Clarke, 2008;
Glasson, 2009). During the initial interview, Emily acknowledged questioning was an area she
needed to develop. Harriet did not discuss it at length. During the study, I found teacher
questioning was given the smallest amount of focus during the observed lessons for both
teachers. Emily acknowledged she had put ‘little thought’ into how to use questioning to
improve student learning. During one of our planning meetings, we discussed at length the
most appropriate style of questioning to use with Prep students. Through an adaptation of Clarke’s (2008) recommendations for questioning, we analysed each type of teacher questioning to determine the most effective style for this age group, which included:
A range of answers – students give more than one answer, including definite ‘yes’ answers and definite ‘no’ answers
A statement – turning questions into statements and having students agree or disagree and justify their response
Right and wrong – two opposites are presented and students must decide which is right and which is wrong, and how they know this to be true
Starting from the answer/end – giving the answer and having students decide the question
Opposing standpoint – introducing a different standpoint in a question instead of the conventional point of view (pp. 55-62)
We also discussed the use of wait time (Rowe, 1972) and using student answers that may
display any ‘faulty thinking’ (Clarke, 2008, p. 40). After this discussion, Emily chose to use
‘agree/disagree’ statements outlined by Clarke (2008), whereby she posed a statement and students had to agree or disagree by raising their thumb up or down. During lesson
observation 3, the students enjoyed participating through the thumbs up and thumbs down
method to give their answer, for example, whether they thought a character was a scary
character based on the success criteria established as a class. Students were engaged during
this questioning and it appeared to assist them in thinking closely about each statement Emily
made.
In my own classroom, I trialled a variety of questioning techniques and found that asking open-
ended questions in different ways created more interest in the question. In particular, questions
that required students to justify a point of view encouraged students to think more deeply about
the question. When I first trialled asking students to justify their point of view with follow-up questions such as ‘what makes you think that…’ or ‘how did you come to that
conclusion’, the students struggled to extend their point of view. However, I went back and
modelled this over a few lessons to the students and demonstrated the type of responses that
they might provide. This explicit teaching of an assessment strategy provided the confidence
for students to explore their thinking further. What was evident during the lessons was that the use of open-ended questions, coupled with talk partners (Clarke, 2008) and wait time, required students to provide a justified point of view.
The role of ‘talk partners’ during teacher questioning
During the planning meetings, I encouraged both Emily and Harriet to introduce what
Alexander (2004) termed dialogic talk and what Clarke (2008) devised as talk partners.
This involves students discussing their answers to an open-ended question with a peer before
students share their answers with the class. This gives all students an opportunity to
participate in answering a question and creates a high level of involvement. On the occasions this strategy was implemented, in both the Prep class (lesson observations 2–7) and the Grade 5 class (lesson observation 3), engagement levels lifted and more students
became involved in discussing and thinking about the possible answers. The behaviour
observed during the dialogic talk demonstrated students busily discussing answers to questions
at a high speed, correcting each other’s misconceptions, listening and building upon each
other’s answers. This last observation, of students building upon each other’s answers,
provided a great insight into how students support each other’s learning. When a student was
unsure of what an answer might be, they would listen to their partner to see what ideas they
had. In some cases, students also enhanced each other’s thoughts and ideas by either coming
up with a new idea or improving on their partner’s idea. An example of this from Emily’s
class was when Chris reflected with the class that this was very helpful for him because,
‘somebody has different ideas from your work and they can explain it to you and then the
work (ideas) becomes even better than before’.
Emily believed the purpose for the dialogic talk was so, ‘everyone gets a chance and everyone
gets a chance to talk. And also, when students are talking directly with somebody else, they
get new ideas themselves’. Emily noted that when she implemented the dialogic talk, she
asked fewer questions, a habit she knew needed improvement. By asking fewer questions,
Emily believed the quality of her questions improved and she became more open-minded
because she ‘wanted students talking with each other and you don’t really get that with one
word answers’.
Before I introduced dialogic talk with Harriet, I had observed two previous lessons and noticed
the emergence of a pattern in her teacher questioning. Students quietly sat at their desks while
Harriet stood at the front of the room delivering her instructions and asking questions. Given
the low noise level, an assumption could have been made that all students were listening and
thinking. Harriet would ask a question (both open and closed questions) and a few hands
would be raised to answer the question. If Harriet felt the answer was insufficient or did not
provide the detail she was looking for, she would move on to the next student who had raised
their hand, doing little to acknowledge what the previous student had said. In some cases, she
was asking student after student to find the answer she was hoping to hear. There was none of
the wait or thinking time that Glasson (2009) and Black and Wiliam (1998b) suggest, no
prompts and no opportunity for students to discuss their learning with a peer or partner, such as
Clarke (2008) recommends with her talk partners. As a result of this, very few students were
involved in class discussions and quite often, it was the same students each time giving the
answers. Wiliam (2013) notes that ‘high engagement classroom environments appear
to have significant impact on student achievement’ (p. 81), and that this initiation-response-
evaluation (I-R-E) (Mehan, 1979) approach to teacher questioning can have a negative impact
on student engagement.
I suggested at our next scheduled planning meeting (before our third observed lesson) that
Harriet try two strategies to improve her questioning of students: first, providing wait and
thinking time; and second, involving the students in the dialogic talk discussed by Alexander
(2004). This would engage students in what Clarke (2008) describes as ‘active
learning’ (p. 2), whereby students discuss their answers to teacher questions before calling
upon another student to give their answer. After a couple of lessons, which included one
observed lesson and Harriet’s anecdotal evidence of other lessons, Harriet was pleasantly
surprised by the student response. She found it challenging to give students more time to
answer. She said it was difficult to resist the urge to move onto another student or to answer
the question herself. When Harriet did wait, she felt the answers from the students were more
‘interesting’ and the reluctant students were more likely to answer her questions instead of
waiting for her to move on to another student. She immediately became aware of a higher
participation rate when students were given the opportunity to discuss their answers with
another person. The evidence from both Emily’s and Harriet’s classes suggests that engaging
students in discussion with their peers improves student understanding and produces a higher
participation rate than would be expected in teacher-to-student discussions.
In the final interview, six months after the action research project had been completed, I asked
Harriet if she was still involving her students in active learning through dialogic talk. Harriet
said:
I have engaged students in a lot of different class discussion dynamics this year
and a lot of brainstorming before we have a discussion. Sometimes having turn
and talk to your partner works, but I find in Grade 5 the boys and girls are so
different and so pairing is hard… so it only works for half the class. Then I have
one person present the group’s ideas to the class and try to change that up a bit.
When I asked Harriet what the benefits were to this change in questioning, she was very clear
with respect to what it had achieved:
The real attentive, talkative, first ones with their hands up, I am making them stop
and not talk... then they listen more and learn from others. I have got some of the
more quiet and reluctant kids to contribute. Often, I use questioning to just start a
class discussion and used to only get 25% of the class participating.
Importantly, Harriet was moving her class towards being active learners and she was seeing
the benefits of using formative assessment strategies, in particular having every student
involved in learning through questioning and not leaving it to just a few students in the class.
She was also seeing her class listening a lot more to each other and moderating their own
discussion without teacher input. Clarke (2008) argues this is the type of class setting that
provides the most opportunity for formative assessment to exist. She also argues that this will
help students build an encouraging environment for peer-assessment where students are open
to constructive feedback from their peers.
Student perspective on ‘talk partners’
Towards the end of phase 1 in my own classroom, I spoke with the students about how talking
with a partner benefits their learning. The students offered interesting perspectives. A short
conversation with Luke and Patrick offered differing thoughts on the benefits:
Patrick: The reason we have talk partners is it helps us is if the answer is not
good, you listen to another answer from someone else and you are going to
have a different idea while you listening to someone.
Interviewer: Because you are going to get new ideas?
Patrick: Yeah, from the other person.
Luke: Sometimes, if you have a bad idea, someone can tell you a good idea but
sometimes, we don’t want to use their idea because it is theirs, so it helps with
us with another idea of our own.
Both students saw the benefit of this approach, but through different lenses: for each, the focus
was building upon the ideas of another student. According to Clarke (2008), involving students in
active learning is imperative to the success of formative assessment through ‘engaging in a
constant process of considered review of success and improvement’ (p. 2).
Pia discussed the importance of body language when listening to someone, saying that ‘you
have to listen, you have to have eyes watching the speaker so you can listen better because if
you face the other way, you cannot really hear facing backwards’. When Emily set up her talk
partners, she very clearly set out the expectations of how students should talk with each other.
This had resonated with Pia. Clarke (2008) argues that setting up clear expectations is
imperative to the success of talk partners.
Developing effective feedback as formative assessment
Identifying the type of effective feedback Harriet and Emily were to use in their
classrooms was the hardest element to plan for during the action research. Given that feedback
is identified as immediate information related to a student’s learning and is specific to an
individual student (Tomlinson & McTighe, 2006), it was very difficult to identify what
feedback would be needed during the lessons. Therefore, the majority of the focus for
planning discussions was not what type of feedback students might receive, but how the
feedback might be given.
Emily understood how effective feedback given to students could assist with their learning.
During the initial interview, it was clear she had already developed a systematic approach to
feedback. Emily would first focus on an area of the student’s learning that had been performed
well to build the confidence and self-esteem of the students and subsequently identify an area
for improvement in relation to the learning intention of the lesson. In the classroom, most of
the feedback Emily gave focused on the learning intention and success criteria. She seemed
conscious of the difference between feedback to improve learning and managerial feedback
used to monitor the class. Clarke (2001) identified concerns with teachers who share the
learning intention with the class and develop the success criteria, but whose feedback relates
only to presentation, surface features of writing, quantity and effort. It is this traditional approach to
feedback that prompted the research into how to give feedback for students to improve their
learning.
Emily had shown a thorough understanding of how to give teacher feedback to the students, so
not a lot of time was spent on planning for feedback. However, as feedback was used by
Emily in every lesson observed, it provided ample opportunity to observe how she used
feedback with her students. During some of Emily’s lessons, the learning activity began by
using Wilf, the success criteria puppet, to give feedback to the whole class on their achievement
against the success criteria. This was achieved by finding a student who was performing
according to the learning intention and success criteria and reporting to the whole class how
and why this student was successful. Emily noted that this was done to ensure the students
were aware of exactly what she was looking for in their learning:
When I gave feedback loud enough for everyone to hear related to the success
criteria, the students knew what I was focusing on. So they would focus their work
on what I was talking about out loud. This made me realise every time I gave
unrelated feedback to one student, all the students were listening and reacting to
that feedback.
Any managerial feedback given to an individual during the lesson was done in a quieter voice
and only for the benefit of that particular student. By using this approach, Emily did not
distract other students from their thinking. She was very aware that students can get caught up
irrelevant feedback if they think it is important. Clarke (2001) identified giving feedback like
this as important so that other students are not distracted by the managerial feedback provided
to a particular student.
After giving general feedback to the class, Emily then focused on giving explicit feedback to
individual students whilst emphasising the strategy discussed during the initial interview.
Firstly, she started with something she liked about the student’s work (related to the success
criteria). Subsequently, she would give additional feedback regarding their learning through
questioning the student on how they could improve, as this example shows:
Emily: Tell me about your picture.
Student: I made a big sausage tongue.
Emily: That is an interesting way to show a big body part. I like the way you
made the tongue as your big body part. What colours have you used in your
picture?
Student: Red and some pink. And some brown.
Emily: Looking at the list on the board, are there any changes to the colours you
think you need to make?
Student: (student thinking) umm, maybe the colours should be darker.
Emily: What kind of colours are you thinking of using?
Student: I might put some black here (pointing to the face) and brown here
(pointing to the body).
In this small extract, Emily used three questions to help the student move forward in their
thinking. The feedback was direct, immediate, and constructive without being negative or
critical of the decisions the student had made with their learning (Glasson, 2009). When
Emily walked away from this student, the student began to focus on the area of discussion and
went to look for a brown and black colour pencil. It was the student who made the suggestion
for improving his work, not Emily. This indicates the beginning of a move from feedback to
self-monitoring.
Although much of the literature focuses on feedback and questioning as different strategies in
formative assessment, Emily identified how powerful questioning as feedback is for students to
extend their own thinking:
…by asking students where they think they need to improve instead of me telling
them, the students are more likely to connect the dots between their own
learning. It forces the student to do the thinking and it is a very simple strategy
for the teacher to just ask instead of telling.
Descriptive feedback for learning instead of managerial feedback
The feedback in Harriet’s class did not progress as far as it did in Emily’s class. Harriet’s
feedback was quite often mixed between managerial and descriptive feedback. At times,
Harriet would be giving feedback to a student, then become distracted by the behaviour of
another student. So the student needing the immediate feedback missed out and the class
became distracted by Harriet’s managerial feedback. During lesson observation 2, the students
were working in groups on a project, with each group progressing at a different stage. The
task was related to the summative assessment and their current unit of inquiry. There was
noise coming from every group, but one particular group was louder than the rest of the class.
Harriet was giving feedback to one group when she stopped and raised her voice at the louder
group for being ‘too loud’. At that moment, the whole class stopped and listened carefully to
Harriet speak sternly to the group. Following this rebuke, the class understood that noise was
not acceptable for this lesson. As a result, the whole room went quiet, with students moving
either to no talking or to a whisper, effectively ceasing to engage in peer learning for fear of
being too loud themselves. This resulted in lost momentum for some of the groups who
were learning very well together. Clarke (2001) suggests that teachers hold on to that
‘secondary feature’ (p. 52) and make sure that the first piece of feedback given is related to the
learning intention. They then quietly tell only the students who need the secondary advice
what they need to know without distracting the rest of the class. After this lesson, I discussed
with Harriet what I saw and she commented, ‘Oh, I just didn’t want that noise level getting to a
stage where nobody could work in it’. Harriet said this is an area that she would think about
next time. Since this was a sensitive topic and Harriet felt I was judging her for the style of
feedback she gave, she did not bring up feedback again in any of our meetings.
When comparing the two teachers, it became evident how the feedback given in the classroom
had the ability to improve student learning, as well as to negate it. When Emily
focused on the success criteria, so did the students. During some of Emily’s lessons she used
Wilf to emphasise the success criteria. This resulted in the students becoming focused on the
criteria because they understood what Emily was looking for in their learning. Clarke (2001)
argues that students respond to teacher feedback and will focus on what teachers believe is
important. If a teacher believes learning is important, that is what students will focus on. If a
teacher is concerned about how students behave, their noise level in class and whether they put
their name on a worksheet, this is what students will focus on. Clarke (2001) suggests teachers
need to be aware of the difference between feedback for learning and what I have identified as
managerial feedback, to ensure students are focused on the learning intention and success
criteria.
In Emily’s and my own class, I observed how influential feedback can be on student learning.
Emily’s approach of constantly thinking of how to give feedback related to the learning
intention and success criteria and thus meant she avoided giving managerial feedback to the
whole class when it only related to only one or two students. She showed how the first part of
feedback to a student was always related to the learning intention of the lesson, delivered in a
loud enough voice for the rest of the class to hear, before moving on to the individual feedback the
student may need.
Before I began the action research, I did not focus on the manner in which I provided feedback
to the students. If there was a behavioural issue, I allowed it to distract from the learning by
telling the whole class when it only involved one or two students. If it was a lesson on
adjectives, I focused on students’ spelling and whether they were writing on the lines. Only
then did I discuss their use of adjectives. Through my learning from the work of Black and
Wiliam (1998b), Clarke (2001, 2005, 2008) and Glasson (2009), I gleaned an understanding of
how feedback influences student learning. During the action research process, I looked closely
at my own formative assessment practice and identified this as a key area in which to improve.
I developed a methodical approach to my feedback to students. When working individually
with a student, I first commented on an area they had achieved against the learning intention or
success criteria. Then, I questioned the student on an area for improvement. I guided the
student towards identifying what step/s they needed to take and importantly how they might
achieve it. Glasson (2009) describes this type of feedback, which is direct, immediate, explicit
and focused on the learning intention and success criteria, as influential on student learning.
Black and Wiliam (1998b) state that teacher feedback must provide advice on what the student
can do to improve. In my own feedback to students, I now use questioning to
guide students to identify their own areas of improvement. It has become evident that the main
focus of feedback is identifying the next step for learning, and subsequently explaining this
clearly to the individual student (Gipps et al., 2000).
The importance of relationships between teachers and students
Emily believes that the relationship between the teacher and the student is vital for feedback to
be effective and that relationships built upon mutual respect and trust will assist with students’
perception of feedback. When there is a learning focused relationship, the sole purpose is to
support the student in their learning (Absolum, 2010). Wiliam (2011) argues that ultimately
the relationship between the student and teacher is the most important factor in feedback. The
teacher needs to know the student, and the student needs to trust the teacher as ‘students are
quick to determine whether or not an environment is built on an equitable foundation’ (Dueck,
2014, p. 161). Emily’s efforts to create mutual respect were evident, as she developed her
relationships at the beginning of the day when greeting students. She would ask a specific
question related to their life outside of school and they were eager to share their thoughts.
Emily knew the students well, so they felt they could seek her support with questions about
their learning. Emily believed that by developing a close personal relationship with each
student, they handled constructive feedback with a more open mind and understood it was not
a criticism of themselves, rather a way for them to move forward in their learning. Luke
commented that he found the feedback given to him by his teacher very helpful: ‘I like it when
I have ideas about how I can improve my work. It means I do better work’.
Harriet also aimed to know her students on a personal level and to develop mutual respect
between teacher and student, and between peers. In the upper primary years, students can be
more sensitive to the manner and type of feedback given to them, particularly from their peers.
Encouraging students to understand the range of learning styles is also important (Glasson,
2009). During a discussion with the Grade 5 students in one of the lessons, I found they
also believed teacher-student relationships in the classroom impacted on what they considered
‘fair’ feedback. The peer-assessment section later in this chapter discusses further the
impact a learning culture can have on the effectiveness of this kind of formative assessment.
The importance of self-assessment
During Emily’s first interview, she demonstrated her understanding of self-assessment, saying
that students:
Need to know what they are good at, what they need to get better at, so they can
reach those goals and expectation. So I think as their self-reflection and self-
assessment skills improve, we move more into self-assessing ourselves as
learners, rather than just reflecting on our choices.
Emily’s thinking is in line with the views of the ARG (1999) and Black and Wiliam (1998b),
who argue that self-assessment is essential in the classroom and ultimately, it is the learner
who is responsible for their learning. For self-assessment to be successful, Emily argues that:
The kids are involved in the assessment process so that they can have an input
as well into what the success criteria is, so that they feel more a part of being
clear on what their goals are and what (the) criteria is and they have more
responsibility over their learning.
Emily makes the connection between self-assessment and the success criteria that Glasson
(2009) articulates. Involving students in the assessment process needs to begin at the start of
the lesson/unit by sharing the learning intention and developing the success criteria as a class
through the analysis of the task. If students are clear about what they are being assessed
against and what that looks like, only then can they assess their own learning effectively.
During the first planning meeting of a four-lesson mini unit focusing on a two-part learning
intention of ‘how to create a character using a picture’ and ‘how to create a character using
words’ (lesson observations 2 – 5), Emily and I discussed the importance of students
developing the success criteria to understand the steps they needed to take to be successful in
their learning. Developing the criteria would make the implementation of self-assessment
more effective as the students knew the criteria to assess their learning against (Clarke, 2005;
Glasson, 2009).
During the observations of these lessons in Emily’s class, the students had already been
involved in a peer-assessment strategy of providing feedback to each other relating to the
success criteria before they began assessing their own learning. We discussed whether self or
peer-assessment should come first in these lessons and decided that providing the opportunity
for peer-assessment first made it easier for students to find improvements in their own
learning through self-assessment: having just analysed another student’s learning, students
were encouraged to think about their own and had the chance to reflect and make changes to
it. Both Glasson (2009) and
Airasian (1996) discuss the benefit of implementing peer and self-assessment as forming a
natural link where the peer becomes the assessor and the learner simultaneously. The self-
assessment strategy implemented by Emily was managed in a similar way to the peer-
assessment. As the students had already been through this process with their peers, they had a
clearer understanding of how to assess their own learning and had a very clear idea of the
meaning of the success criteria. Emily reminded them of the success criteria, following which
she encouraged her students to check their learning to see if they felt they had been successful
in making their character scary. Emily believed the students responded well because
most students made subsequent changes to their work according to the success criteria:
Students responded well to the self-assessment. They were questioned
according to the checklist. ‘Have a look at your work. Did you include…’ Then
(they) had an opportunity to make changes to their work. 100% of students
made changes and said that they liked having the chance to review their work
and make it even better.
During the observation of self-assessment, I noted that many students did make changes to
their scary picture when given the opportunity. Emily acknowledges that this self-assessment
was very scripted and given to the students in a step-by-step process, but she believes self-
assessment should be taught explicitly to young students for them to use it effectively.
Elwood and Klenowski (2002) maintain that teachers need to provide ‘contexts and
opportunities within their assessment practice for students to become aware of their own
learning strategies and to take more responsibility for them’ (p. 246). In Emily’s explicit focus
on self-assessment she endeavoured to achieve this by exposing students to a strategy to
improve their learning. Clarke (2005) and Glasson (2009) both argue that students need to be
given the time to go back and improve their learning. By teaching self-assessment explicitly
from a young age, it becomes a part of the meta-cognitive thinking of a student and thereby
embedded in the student’s thinking during the learning process.
During the plenary discussion, Emily asked the students if they thought it was helpful to go
and look at the checklist to identify whether their learning matched the criteria. Most of the
students felt this was of assistance. When asked why it helped, one student commented that,
‘sometimes you forget what you did and you get to look at it again’. This demonstrates that at
a young age, students, through their own understanding, can see the benefits of formative
assessment and in particular, self-assessment.
In Harriet’s class, self-assessment was discussed and implemented in lesson observations 4 and
6. During planning, we discussed using self-assessment after students sat their mock
examinations; afterwards, the students were given the opportunity to correct their own results.
Students had been involved in correcting their own learning previously, but this time we agreed
the students would reflect on their learning according to how they believed they went in the
examination. When Harriet told the class they would be correcting their own paper, they were
very excited by the idea. Harriet said the students ‘always enjoyed correcting their own work’.
This was an effective use of self-assessment rather than having the teacher correct the learning
and delaying feedback to the students (Clarke, 2003). However, the students did not make a
connection between their choice of a studying technique and how they believed they performed
in the examination. For the second implementation of self-assessment in lesson observation 6, we
discussed how we could improve the use of self-assessment and learn from the students’
performance in the previous lesson. We agreed students needed to reflect again, but on this
occasion, they needed to make a stronger connection between their achievements and the
studying technique utilised prior to the examination. After the examination, students answered the
following questions:
• How do you think you went on the examination?
• How did the studying technique of working alone help/hinder your learning?
• What would you change if you could do this again?
These questions were very powerful in improving the students’ understanding of what assists
and hinders their learning. Students became involved in thinking about their own
learning and identifying what would support their learning. Earl (2003) advocates that students
need to develop the skills of self-assessment and self-adjustment ‘to become self-starting and
self-motivated lifelong learners’ (p. 101).
Peer-assessment in Emily’s class
Before this study, Emily had not focused on peer-assessment with Prep students.
Therefore, developing the peer-assessment strategy for the action research would require
explicit teaching. During lesson observation 3, the first lesson of implementing peer-
assessment with this cohort of students, Emily explicitly explained and modelled the step-by-
step process to the class. This involved students swapping learning and commenting on their
peer’s work by relating it to the success criteria. Emily supported the students by providing
sentence starters (i.e. ‘I really like how you…’ ‘you could add … to make it ...’), reinforcing
the learning intention and success criteria to remind students of the areas in which to focus
their feedback. Although this was a new formative assessment strategy for the students, it still
proved to be effective as many students responded to the improvements that were suggested to
them. As Emily stated:
When asked if they (the students) found it useful or helpful to have their friend
check their work and make suggestions, all students said that they liked it and
that it helped them to improve their work. About 90% of students made changes
to their work after getting feedback from their partner.
As a result of this, Emily found the peer-assessment to be very effective:
The fact that all students said that they liked getting the chance to check their
work or have someone else check their work and the fact that they all made
changes/improvements to their work at some point, demonstrates that the
questioning and feedback (from their peers) helped students to deepen their
understanding and improve their learning.
Many of the students giving the feedback were effective in explaining what could be improved.
When Pia was looking at her partner’s learning, she explained to her partner, ‘I think you need
to add a bit more black or something and make it a bit darker,’ which was related to the success
criteria for the lesson. Afterwards, Emily encouraged the class to reflect on who had improved
their picture of a scary character following a suggestion from their partner. A majority of
students’ hands were raised to indicate they had made a change. Some students had the
opportunity to say how they had improved their picture. A short extract taken from the plenary
discussion demonstrated how Patrick and another student had listened to their partner’s
suggestions to make some changes:
Emily: This is interesting because people have different opinions about what
looks best or what looks scary. And that’s ok. We don’t have to use our friend’s
suggestion, but we might find it helpful. Did anyone have a suggestion that was
really helpful? Patrick?
Patrick: Michael said I should use more dark colours.
Emily: Did Michael suggest any dark colours you could use?
Patrick: No
Emily: Did it make you think of any colours that you might be able to use?
Which ones did you think about?
Patrick: Black, brown, dark green
Emily: Interesting ideas. Did anyone else have a suggestion they wanted to
share?
Student: Yeah.
Emily: What did Matthew tell you?
Student: I could make the ears bigger.
Clarke (2005) argues the benefit of this type of peer-assessment is that it allows students to take
constructive criticism more freely than the traditional teacher-student discussions. Emily
agreed with this as she stated, ‘I think it is valuable because they do learn a lot from each other
and they speak differently to each other and respond differently’.
During the peer-assessment strategy for this lesson, students were given the opportunity to
make changes to their picture. Some students chose to add more than just what their peer
suggested. Luke, who made changes to his picture, said he saw some ideas he liked on another
student’s picture he was looking at and that gave him another idea for his own learning.
Airasian (1996) and Glasson (2009) both argue that by having students involved in peer-
assessment, the assessor is also thinking about how to improve their own learning. Luke’s
learning benefitted from assessing his partner’s learning.
When our reflective discussion moved towards areas of improvement from the lesson, Emily
noted that peer-assessment is one element of formative assessment that was new to her
students. She felt that this lesson was one of the few times she has used it, and acknowledged
that she could see the benefit to student learning if she implemented it more often.
My class and peer-assessment
In my own classroom, I consciously began to implement peer-assessment on a regular basis.
At the beginning of the year, I set up a structure whereby students asked two peers before they brought a question to me if I was working with other students. Having other students vet questions before they came to me was of great value. Questions like ‘which side of the sheet do I start on?’, ‘can we do this with a partner?’ or ‘I don’t have a pencil’ are easily answered by another student, allowing me more time to focus on giving quality feedback to students, with the added benefit of enabling the students to utilise each other as learning resources.
As a part of using peer-assessment in my classroom, I would have a discussion with students at
the start of the lesson about whether the answers would be the same as their peers (closed
questions) or different (open-ended questions). This helped students determine the type of
feedback they would give to their peer during the lesson. For closed questions (e.g. mathematical equations), I regularly asked students to compare their answers with another person’s to see if they matched. Through this process, students would explain to each
other how they came to the answer. This was particularly important for those who had the
incorrect answer as it allowed another student to explain the correct answer. For open
questions during writing lessons, I would ask students to read their piece of writing to a peer
and use ‘two stars and a wish’ to provide feedback. This involved commenting on two areas
the peer did well and one area they could improve and importantly, how they could do this.
This was most successful when the success criteria had been established, as they helped guide the students in giving feedback. Peer-assessment was also successful for student
performances and presentations. Articulating the success criteria clearly allowed students to
understand better the focus of the presentation and whether it was the content or its delivery
that needed improving.
Peer support in Harriet’s class
Creating a culture in the classroom that encourages peer support can be achieved through
measured steps to ensure all students feel comfortable and supported when involved in peer-
assessment (Glasson, 2009). This includes providing activities that develop an atmosphere of
cooperation, support and trust (Sullivan, 2002). Explicit teaching of how to give non-
judgmental and constructive feedback including using language that supports learning as well
as the physical set up of the classroom environment to encourage collaborative learning in
pairs and groups. Harriet had her classroom arranged to enable the students to sit in groups
making group learning easily accessible. During the observations, group work took place on
several occasions with the students working on mini-projects. The students enjoyed learning
with one another and listened very well to their peers’ ideas. However, Harriet did not use peer-assessment at any stage for explicit feedback on student learning, nor did she involve the students in any other form of explicit peer-assessment. This presented an opportunity to offer suggestions and improve her implementation of peer-assessment.
During one of the units of inquiry in Harriet’s class (lesson observation 7), the students were
practising a range of studying techniques for a ‘mock examination’ to see how effective each
study technique was for the individual student. This included learning through games, studying
alone, working in pairs and groups, developing mind maps and ‘cramming the night before’.
When Harriet and I sat down to plan a formative assessment strategy for these activities, we
both agreed to focus on studying in pairs with the aim of using peer-assessment. We planned
for Harriet to ask the students to share what they thought was the best way to learn in pairs and what helps or
hinders the process of learning with another person. The same introductory approach was used
for studying alone. For learning in pairs, the class made a list of what would help and hinder
their learning:
What helps learning in pairs?
- Positive comments
- Listening with your body
- Sharing all your ideas
- Take turns offering ideas (until the other person runs out of ideas)
- Be open to new ideas
- Be respectful to your partner by staying focused

What hinders learning in pairs?
- Negative comments
- Not looking at your partner
- Interrupting
When Harriet took the time to identify the strategies and tools to help students with their studying, the students had a clearer idea of how to study effectively and were more independent in these approaches. The list the students created set expectations and led to higher levels of concentration, with students taking greater responsibility for their learning. This was also
the first time I had seen Harriet engage the students in meta-cognitive thinking about what
would support their learning. Harriet noted that she rushed through the introduction of some of
the other strategies and as a result, she found that many students thought those study techniques
were ineffective. Clarke (2005) argues the need for explicit teaching of self and peer-assessment for it to support student learning effectively. After Harriet introduced strategies for
working in pairs, the students worked busily together in different areas of the classroom. The
noise level was high but mostly focused. The high level of intensity demonstrated the students
were eager to study together. Daniel, a participating student in Harriet’s class commented that
it was a ‘fun way to learn’. The results on the mock examination were not necessarily higher
than other study techniques, which is consistent with Popham’s (2008) contention that
‘formative assessment will not improve students’ scores on most of today’s accountability
tests…’ (p. 121). A test cannot appropriately measure the effectiveness of formative assessment; what was evident amongst the students, however, was their high level of engagement and eagerness to learn. Students were listening to each other, and in many
instances, building upon the ideas of their peers. In some cases, students were telling their
partner in a constructive and positive manner, why their answer might be correct or incorrect.
Following this, students were asked to double up with another pair to make a group of four to
continue studying. Although this formative assessment strategy was not specifically peer-assessment, students were providing feedback on each other’s learning, and this happened naturally through the discussions they had with each other. Importantly, the lesson showed
key elements of Clarke’s (2008) active learning including higher student engagement and
dialogue. Students were engaged in collaborating with each other, formulating decisions about
their own learning and as Clarke (2008) described, Harriet was ‘letting go’ (p. 2), and thereby
providing students with the necessary freedom to make decisions for their own learning.
Harriet acknowledged that she did not plan for peer-assessment or feedback in the classroom,
although she believed the students were capable of being involved in assisting each other’s
learning. During the planned peer-assessment implemented in Harriet’s class, the students
studied in pairs, and then moved to larger groups, quizzed each other and identified areas still
requiring support and revision. Harriet saw the advantage of studying in this manner as the
students could provide and obtain feedback immediately. Although the feedback was only in
relation to closed questions in a mock examination, this was a starting point for Harriet to plan
and implement peer-assessment as a part of the learning and teaching.
Students’ perspective of peer-assessment
For the planned peer-assessment, Harriet wanted to use the strategy to assist students to achieve
a better mark on a mock examination. Because the examination was part of a unit of inquiry for the Grade 5 students, it became difficult to shift Harriet’s focus to implementing peer-assessment through another activity. My concern was that the students would associate effective studying
techniques with the mark they were given. When I interviewed the participating Grade 5
students after their mock examinations, the main focus of the students was on their score and
this determined their view on how effective that particular technique was for them. However,
in the interviews afterwards, students shared some interesting views regarding support in
learning. These are discussed in the following section.
Before studying alone and studying in pairs, I asked Anna which technique she thought would
help her the most. Anna was convinced that studying on her own was best for her, because she
would not be distracted. After the class had practised with the two techniques, Anna was
surprised by the benefit of working with a partner:
We were reading stuff and Rachel, who was my buddy; she would give me ideas
because she knew a bit about the digestive system. She would give me ideas that
weren’t in the book. I think it helped both of us because we had to work
through the book because it had a lot of detail.
Anna’s first reaction might have been different if she had been exposed to peer-assessment in
the past. Donald was also unsure if studying with a peer would assist him with his learning for
the mock examination. After the lesson Donald felt that there were benefits, stating that ‘when
you have a group, you can listen to other people, they can give you interesting ideas and facts
that you may not know’. When prompted further about how this was to be achieved, Donald
responded with ‘asking questions of each other like what is your idea, do you have anything
new to share with us and they tell us what they know’.
Sophia provided a different response to the peer learning. Before the peer learning took place,
Sophia was excited and confident that this would help her learning due to the number of people
able to provide ideas. After the activity, Sophia became uncertain. When Harriet asked the students to double up their groups, going from two to four people and then to eight in each group, Sophia found the groups too big. Unfocused students frustrated Sophia and she preferred to study on her own. When asked about what kind of people she would prefer to
work with, Sophia felt she would choose ‘focused students who you know you work well
with’. Sophia’s response illustrates that having students actively involved in their learning can create a higher level of engagement, but also that teachers cannot take for granted that students know how to work together to assist each other’s learning. It is a skill that must be taught explicitly from an early age to ensure students can use it to their benefit.
Introducing peer learning in Harriet’s class shifted the mindset of some of the students in the
short term. In particular, it helped the students think about the benefits and the challenges of
learning alone or in groups or pairs. In the longer term, Harriet would need to set up peer-
assessment as a regular part of the learning and teaching for it to be most effective. She would
need to develop clear expectations of collaborative learning to prevent situations such as
Sophia’s experience arising.
When Emily reflected with the whole class on the benefits of peer-assessment for their learning, Chris commented that it was helpful to have a partner look at his work with him in case he
missed something. Pia also believed it was of a similar benefit as she said ‘when they (peers)
look at your work, they might see some things that you might do better’.
Interestingly, when I spoke with the participating students after the same lesson in which Emily had reflected with the class on peer-assessment, the strategy was still strong in their minds. I asked a general question about what helps their learning and Patrick said that
‘trying it [the activity] and trying it again’ would help him with his learning. Prompting
Patrick further, I asked him what would help him the second time when trying the activity.
After being given some thinking time, Patrick paused for a moment and said ‘you could ask
someone at your table’. I asked Patrick why he said that would help and he commented that
‘we just look at each other’s work’. This conversation came shortly after Emily had
implemented peer-assessment in her class and it appeared that Patrick reflected upon this
strategy and found it benefitted his own learning. Luke also found ‘talking to somebody about
it’ helped him with his learning and Chris commented that he liked ‘…when Mrs Holmes
partners me up with somebody so I can talk about my ideas’. At such a young age, these
students are already beginning to internalise the benefits of peer-assessment.
Creating a classroom environment for peer-assessment
Glasson (2009) argues that the physical setup of a classroom environment can promote
collaborative learning as an essential element of Clarke’s (2008) active learning. A classroom
configuration that encourages regular collaborative learning can be achieved through
assembling students in pairs or groups and regularly giving the students opportunities to be
involved in ‘dialogue and active reflection’ (p. 28). Although Harriet did not use a great deal
of peer-assessment in her classroom during the research, the organisation of the classroom was
such that every student directly faced a peer. This seating plan promoted the opportunity for
peer-assessment and active dialogue to take place on a regular basis. Harriet also believed it
was important for students to work together in groups. She quite often set group tasks that encouraged students to learn with each other. There were two or three times during
the observations when students worked on projects in groups. During this collaboration, there
were situations where peers engaged in peer-assessment of their own volition and without
Harriet setting it as a deliberate assessment strategy. On most occasions it was done at a
superficial level, not focused on the learning task at hand. This natural reaction from students
to engage in peer-assessment provided Harriet with an opportunity to build upon it and to create deliberate situations for students to discuss their learning with each other. Clarke (2005) and
Glasson (2009) state that setting out the learning intention and success criteria clearly for the
students provides the building blocks needed for the implementation of peer-assessment. An
important power shift for Harriet would be from the teacher being the centre of the assessment process to the students becoming the centre (Clarke, 2008). Since students had demonstrated
they were willing to receive such feedback, it would be more effective for Harriet to promote
peer-assessment related to the learning intention and success criteria. Harriet’s next step in her
learning experience would be to model and provide time for peer-assessment. Dueck (2014)
argues that, ‘assessment methods that foster increased student ownership and engagement can
provide a stage upon which students can shine in front of their peers’ (p. 165). This peer-
assessment strategy is imperative for students to take ownership of their learning (Black &
Wiliam, 1998b).
Emily also had her classroom set up for collaboration by grouping students together at hexagonal tables, allowing them to work closely with the person next to them. Since Emily consistently used student collaboration as a strategy to assist with
learning, introducing peer-assessment became an easier strategy for the students to connect
with and use to improve their learning.
From the beginning of the introduction of peer-assessment, Emily attempted to create a positive culture by modelling peer-assessment to the students. Conscious that students can be sensitive about receiving feedback regarding their learning, particularly in relation to their artwork, Emily wanted to introduce the idea of ‘kind words and helpful words to make our friends’ work a little bit better’. This was instilled before any of the students began to provide
feedback to each other. Glasson (2009) identifies the importance of talking about the type of
feedback given and the way it is received by the student.
Multiple layers of assessment: Aligning self-assessment, peer-assessment and teacher
feedback
Earl (2003) emphasises:
...the role of the student, not only as a contributor to the assessment and
learning process, but also as the critical connector between them… The student
is the link. Students, as active, engaged, and critical assessors, can make sense
of the information, relate it to prior knowledge, and master the skills involved.
(p. 25)
In the statement above, Earl (2003) argues that students need to be a part of the assessment
taking place in the classroom. Assessment is no longer a task educators do to students, but
forms a natural link with student learning. Earl (2003) further contends that assessment as
learning (self-assessment) is the key to students taking ownership of their own learning and
leading to the ultimate goal of education, lifelong learning.
One change implemented in my own classroom to improve the formative assessment strategies,
was to add multiple layers of formative assessment to enhance the quality of assessment. One
approach to achieving this was by aligning teacher feedback with self and peer-assessment
through feedback on the success criteria. This was developed during a unit of learning where
students were performing role-plays. The learning intention was ‘to read with fluency and
intonation’. As a class, we began by analysing what the words fluency and intonation meant, viewing video clips of performers and making a list of what an audience looks for in a
performance. We then analysed the data that had been collected and developed the success
criteria in the form of a checklist. This checklist became the basis for all feedback and was
used multiple times throughout the unit both formally and informally. When students were
performing on stage, I used the checklist and focused on a criterion to provide feedback on
where they needed to improve. Throughout the same performance, peers used ‘two stars and a
wish’ in relation to the success criteria and gave the students feedback after they had
performed. Most importantly, the students were video-taped and had the opportunity to watch
their own performance and assess their learning and where they needed to improve. Students
were given further opportunities to perform two or three times to ensure they had the chance to
act upon the feedback they had received.
During this three to four week unit, there were some interesting observations made regarding
student behaviour. The performing student was quite often more interested in the feedback
from their peers than from me as the class teacher. Importantly, students receive criticisms
from their peers more freely than from teachers (Clarke, 2005). Because no grades (or smiley faces, as are quite often used in the early years of school) were used to assess the students, the
assessor had to use words to describe how the student had performed. This had a resounding
influence on the student receiving the feedback. There were no external rewards involved,
therefore, the only feedback the students received was in regard to their performance.
Consequently, the conversations the students had were based on what they had achieved in
relation to the success criteria and what they needed to improve on, rather than a grade,
certificate or how many ‘smiley faces’ they received. Clarke (2008) strongly advocates the
promotion of intrinsic motivation over extrinsic rewards as it encourages ‘deeper and longer-
lasting learning’ (p. 24).
Students involved in assessing themselves and their peers display a considerable difference in
motivation to perform. Dweck (2006) argues that the motivation to succeed and the love of
learning and challenges are important factors which contribute to the success of a student.
Students were highly engaged both as a performer and assessor during the unit. When students
were assessing their peers, they could be seen making changes to their own performance after
watching and assessing another’s performance. Glasson (2009) states that one of the major
advantages of peer-assessment is that the assessing and learning are happening at the same
time. This occurred regularly throughout the unit. During the self-assessment, students were
often their own harshest critic, finding faults in their performance that other students or the
teacher did not identify.
The approaches to formative assessment used in my classroom were based on the notion that multiple layers of formative assessment strategies impact on student learning for a greater number of students. This also ensures a balanced approach to assessment
between the teacher and the student. Clarke (2005) argues that ‘…if the teacher is the only
person giving feedback, the balance is wrong and the children become powerless, with no stake
in their learning’ (p. 88). The addition of multiple layers provides more opportunities for
improving student learning through assessment. The student performer can compare their own
assessment of their performance against that of the teacher. The performer also receives
simultaneous feedback from their peers as well. Black et al. (2003) state that a notable
advantage to peer-assessment is students often prefer to receive constructive feedback from
their peers. The language used by the students is more easily understood and pertinent to their
peers. Whilst giving feedback, the students at the same time are thinking of their own
performance and where they might need to improve. As Glasson (2009) and Black et al.
(2003) argue, peer-assessment is an important prerequisite to self-assessment, as it prepares students to assess their own learning through recognising the desired qualities in other students’ learning.
Motivation and formative assessment
During phase 1 of the action research, the data collected provided an extensive examination of the planning and implementation of formative assessment strategies in the classroom. As a part of the final
reflection interview with Emily, I asked how she thought the strategies implemented had
impacted on the students. Emily commented on how engaged she thought the students had
been in any lesson where they had focused on formative assessment and that they clearly
enjoyed taking a lead role in the assessment process, or as Emily told them regularly, when
‘you get to be the teacher’. Students were motivated to participate, particularly during the
development of the success criteria and during the self and peer-assessment. Students were
confident completing the task successfully, giving their peers feedback and making changes to
their own learning. Student confidence may not have been related solely to the formative assessment strategies; rather, the importance of formative assessment stems from the creation of a classroom culture focused on active learning (Clarke, 2008). Research has
identified the positive links between the confidence and motivation of students and the
implementation of formative assessment strategies (Clarke, 2001). Phase 1 of our action
research replicated these findings. Black and Wiliam (1998b) argue for careful attention to be
paid to the impact assessment has on motivation and self-esteem. Emily noticed a difference in
the students’ motivation when involved in the assessment process. When students were
involved in active dialogue and peer-assessment in Harriet’s class, the motivation increased as
evidenced by the focused noise level in the class. Harriet also observed that the class ‘really
enjoyed discussing their learning with each other’.
Teacher attitudes and responses to action research
As shown in previous discussion, Harriet had limited understanding about the use of formative
assessment in the classroom and was at times resistant to considering its benefits. For her,
planning together during the action research was challenging, as she preferred to plan on her
own. Rather than forcing Harriet to make changes to how she planned, I decided it was better
to focus on one strategy at a time that she agreed to develop. These included questioning, developed through talk partners for the students (Clarke, 2008) and wait time (Rowe, 1972); peer and self-assessment; and focused teacher feedback. This focus on individual
strategies remained valuable and the time spent planning assisted in improving Harriet’s
classroom practice. Even though at times she was a reluctant participant, Harriet did
acknowledge in her final reflection that it ‘was good to get ideas working together’ with
‘everyone talking’ about formative assessment. By the end of the action research she was
showing evidence of understanding how to implement more effective strategies in the
classroom.
Harriet’s reluctance to embrace the formative assessment strategies we planned seemed to be related to her lack of belief in their effectiveness. Fullan (2011b) argues
that for teachers to make changes to their practice, it is ‘the actual experience of being more
effective that spurs them to repeat and build on the behavior’ (p. 2). Harriet experienced a
positive mind shift after observing the impact of active dialogue and peer-assessment in her
last observed lesson (lesson observation 7).
In contrast to this, Emily’s belief in the effectiveness of the strategies implemented opened up
an opportunity to plan for multiple layers of formative assessment. Table 9 (see pp. 165-166)
outlines the findings from the first phase of the action research capturing what was learnt about
the implementation of formative assessment strategies in Emily’s, Harriet’s and my own class.
It brings together the findings from the formative assessment strategies involved in the action
research and provides a framework for improvements in other schools involved in planning and
implementation of formative assessment.
Findings: A multilayered approach to formative assessment
The findings from the phase 1 action research demonstrated a variety of improvements in
student learning across the Prep (Emily), Grade 5 (Harriet) and my own class, through the
planning, implementation and use of explicit formative assessment strategies in the classroom.
While we mainly developed the strategies separately during the research, evidence emerged demonstrating the benefits of using multiple strategies, often in sequence. Table 9 (see pp. 165-166) provides a framework showing how we implemented the formative assessment strategies and how they build upon each other. For example, Emily and I in particular found that stating
the learning intention provided students with the opportunity to develop the success criteria.
Through our work with students we found that developing the success criteria gives purpose to
self and peer-assessment and a clear outline of what students are assessing. Providing a clear
learning intention and success criteria gives teachers the focus needed for providing effective
feedback. When implemented effectively, the learning, assessing and teaching fit together, providing a natural flow and sequence to the lesson and a clear link between them that is critical to the effective development of formative assessment. The
framework shows the demonstrated benefits of formative assessment that the literature
discusses and that we found evidence of through our action research process.
165
Form
ativ
e as
sess
men
t st
rate
gy
Collaborative planning required What happens in the lesson
Demonstrated benefits for student
learning in our action research
Lea
rnin
g in
tent
ion Establish the learning
intention prior to lesson beginning.
Introduce the learning intention after engaging students with a discussion about the knowledge, skill, or concepts being learnt. Walt puppet could be used for younger students (Clarke, 2001).
Students understand what they are learning (Clarke, 2001; Glasson, 2009).
Succ
ess c
rite
ria
Develop the success criteria in basic form prior to the lesson beginning (Clarke, 2008)
Establish what guidance students may need to develop the success criteria (Glasson, 2009)
Develop success criteria with students through the analysis of the learning intention. This is done through exemplars (both successful and unsuccessful pieces of learning).
Students understand how to be successful in their learning (Clarke, 2001; Glasson, 2009).
Tea
cher
que
stio
ning
Develop the key questions required
Establish a questioning strategy.
Give students opportunities to explore the key questions through wait time (Rowe, 1972), think time (Stahl, 1994), talk partners (Clarke, 2008) and think, pair, share (Glasson, 2009).
Student understanding is furthered through questioning; answers are more in-depth; students build understanding upon other students’ answers or alternative explanations offered (Glasson, 2009).
Tea
cher
feed
back
Plan teacher feedback to give to students. This is done with the flexibility to alter the lesson at anytime, depending on how well students are learning.
The teacher gives feedback to the class related to the success criteria and learning intention. Everyone will know what the expectation is for their learning. Feedback should be specific, timely, descriptive and given time to act upon the feedback (Glasson, 2009). Approach could start with a positive comment followed by questions about areas students think they could improve on.
Students receive the immediate and direct feedback needed to be successful in their learning. Strategic and guided questioning gives students the opportunity to determine improvements their learning needs.
166
Figure 9 (see p. 167) provides a diagrammatic representation showing the connections between
the strategies that we found through our action research lead to improvements. We found that
each of these should be included in order to implement formative assessment effectively in
classrooms. Clarke (2008) raises the important argument that formative assessment does not
need ‘specific techniques’, but teachers should allow for ‘experimentation and development’
(p. 2). The phase 1 action research demonstrated that using multiple forms of formative
assessment can positively impact upon student learning. Our students became actively
involved in the assessment process and in thinking about their own learning. We learnt that
developing formative assessment requires constant flexibility and re-evaluation of student
needs at each step. Figure 9 shows the possible connections in formative assessment for
teachers, but should be used with consideration for what best supports the learning at the time.
What our action research did demonstrate, was that for Emily and I, our openness and
commitment to developing multiple aspects of formative assessment enhanced our students’
learning, and where Harriet became convinced, her students deepened their learning as well.
Peer
-ass
essm
ent
Teachers decide how students are going participate in peer-assessment (i.e. two stars and wish, checklist).
Students discuss their learning with a peer who offers suggestions related to the success criteria and learning intention (Glasson, 2009). Students are given time to make any changes when given the feedback. This approach needs to be taught explicitly to ensure the feedback given is effective.
Students assist their peers to improve their learning, therefore thinking about improving their own learning (Black et al., 2003).
Self-assessment
Teachers plan for how self-assessment will be implemented. It could be planned either formally (e.g. performance descriptors, checklists or rubrics) or informally (stopping the lesson so students can look at their learning in relation to the success criteria) (Glasson, 2009).
Similar to peer-assessment, students look at their own work in relation to the success criteria and learning intention and decide where they think they could improve their learning. Students are given time to make any changes. This approach needs to be taught explicitly to students to ensure their focus on improvement relates to the learning intention and success criteria.
Students given time to check their work against the criteria can see where they can improve their learning. Clear success criteria help students to learn (Clarke, 2001; Glasson, 2009).
Table 9: Framework for multilayered approach to formative assessment
Figure 9: Multilayered approach to formative assessment (Kean, 2016)
Emily and Harriet’s final reflections
I returned to Emily and Harriet for a final interview approximately six months after the action
research had been completed to capture their final reflections. Emily said:
I have always understood that formative assessment helps students ‘form’ their
understanding through ‘assessment for learning’, rather than ‘assessment of
learning’ but I was not aware of how many different strategies I could use in the
classroom. I was not particularly good at purposefully varying the types of
assessment that I used on a weekly basis or throughout a unit.
During the action research, I found that Emily wanted to ensure she had a thorough
understanding of the formative assessment strategies we planned and she was highly motivated
to find the most effective strategies to implement. Fullan (2011b) strongly argues that
motivating change is about providing experiences that teachers find intrinsically fulfilling.
This was true for Emily, since she demonstrated that successful implementation of formative
assessment strategies was influenced by her motivation and the belief that these strategies can
make a difference to student learning. This was evident in Emily taking time to investigate the
different formative assessment strategies developed in this study on her own, her willingness to
trial all the strategies in her classroom, and her in-depth reflections on her practice and its impact on student learning.
Six months after the action research had been completed, Emily showed she could identify the
range of formative assessment strategies for helping students to achieve learning outcomes:
While practising these skills over the past six months, I have seen how formative
assessment strategies work really well for certain students. For example, one
student may respond extremely well to reflection activities, while another
student may not respond and benefit as much to that particular strategy. When
I have been able to identify the strategies that work best for a student I have
seen them excel quite quickly on their journey towards mastering a certain skill
or developing a deep understanding.
She went on to show how her practice had changed because of the action research:
I used formative assessment in my practice, mainly in the form of observation,
questioning and feedback, but I have become more aware of different formative
assessment strategies like feedback, self and peer-assessment and collaboration,
how to plan for them and how to use them in the classroom more productively
and meaningfully. This work has also reinforced the importance of taking time
to use the information gathered from formative assessments to modify my plans
for future instruction.
Working in a PYP school, Emily acknowledged the challenges inherent in using the required
PYP planner (see Appendix 1). She said it placed limitations on how teams could plan for
formative assessment:
Unfortunately, it puts a lot of emphasis on the big summative task at the end of
a unit and I don’t agree that’s always the best approach. I understand and
believe in a backwards-by-design approach but unfortunately some of the
teachers I have worked with focus only on/or mainly on the summative task and
forget to focus on the formative assessment strategies along the way. I would
like it if section 3 of the planner was worded differently so that the formative
assessment strategies and techniques planned for were highlighted more clearly
and had greater emphasis.
When Emily was asked which areas she needed to work on, she felt establishing a system for how she used formative assessment would help her practice: 'I would like to get ideas from
other teachers who have effective systems in place for the use of different formative
assessment strategies and techniques in their classrooms’.
Harriet’s thinking had also changed considerably since the beginning of the action research, in
spite of her evident reluctance during the study. At the commencement of this study, Harriet’s
views showed limited understanding as she defined formative assessment as:
Evaluations and assessment for learning, so it is a collection of tools and
strategies that guides students on a learning curve towards an intended
outcome. So assessments happen along the way, assessments that help direct
students learning.
Harriet’s definition related to improving student learning, although her understanding was
mixed with summative assessment tasks. She confused ‘evaluation’ and ‘assessment’, and did
not see that evaluation involves making a judgment of a student’s learning. Earl (2003) states
that evaluation comes in the form of grades or short comments praising or criticising a
student’s learning. She also talks about assessment as tools and strategies, whereas Cowie and
Bell (1999) see assessment as the ‘process used by teachers and students to recognise and
respond to student learning, in order to enhance that learning, during the learning’ (p. 32).
Six months later, Harriet provided a different understanding of formative assessment:
…a way of giving children feedback about their learning experience to help guide
them towards their goals. Gives them an insight into what they are doing, how
they are doing in relation to the success criteria.
Her earlier definition was more focused on students working towards outcomes directed by the
teacher. Harriet’s thinking had moved towards a more learner orientated definition. She spoke
of students working through ‘their goals’ and shifted from content focus to valuing the
importance of the process students experience when learning. Harriet also used terminology
such as success criteria which she had not spoken about before the action research. The new
terminology had become a part of her assessment language.
Harriet also agreed that her mindset had shifted in recent times:
When everyone was talking about it (formative assessment), this led me to believe
that I had to document it, that it had to be written, something that I had to prove,
that there was evidence of what I was doing. Lately, I have become more
comfortable about conversations on formative assessment. I can give feedback to
a group of kids at one time… it does not have to be written in stone. I used to think
that formative assessment always had to be written and now I know that is not the
case.
Harriet was seeing formative assessment as a way of improving student learning. She was
making the link between assessment, learning and teaching that Earl (2003) identifies as
imperative to the use of formative assessment in the classroom. The improvements Harriet
identified as key for her related to planning and developing formative assessment strategies.
Harriet felt she needed to:
Be more prepared. I am doing it (formative assessment) regularly on the fly.
Making it up as I go. But there are times when I should be planning it more
often.
This acknowledgement by Harriet opened up a new area of consideration for the next phase of
action research in my study. Although we had worked collaboratively over three months, there were tensions in our collaboration that I believed needed further research. I was
beginning to think about how important it is for colleagues to de-privatise their practice and to
work collegially. This approach had not been implemented in Harriet’s Grade 5 team. They
were still working individually on developing their formative assessment strategies and it was
not a focus in curriculum meetings. Harriet said that planning was:
…for our summative assessments primarily. It is still not that common to work
out our formative assessment strategies. Partly because there is a feeling that
we (Grade 5 teachers) all do it very differently. We wouldn’t share formative
assessment strategies very often…. Sharing formative assessment strategies is
still very informal. I am thinking that how this is done probably needs to be
looked at. I think we can make some changes.
Harriet’s development in her thinking meant she was beginning to see the benefit of planning
for formative assessment strategies with a team rather than just making up the strategies ‘on
the fly’ as she suggested. Importantly, Harriet was also recognising that formative assessment
needs to be planned and developed and the quality of the strategies further enhanced through
collaboration.
Interestingly, in Harriet’s final reflection, she raised the idea of information and
communication technologies (ICTs) being used to assist with using formative assessment in
her classroom. Harriet was curious about how ICTs could be better integrated into her
teaching. She found ‘computers to be very helpful in providing feedback to the students if they
are doing it (the task) on their own’. Harriet believed it would be particularly helpful with
mathematics and giving feedback on how students are performing a certain skill and for
tracking their progress. Harriet also discussed other ways in which ICTs could help including
games, interactive websites, peers and the involvement of other teachers in the learning
process. Importantly, from this action research, Harriet found her areas of interest and where
she would like to improve further in her teaching practice.
Harriet has also been an advocate of portfolios as part of formative assessment. In her initial
interview she discussed how portfolios are connected to formative assessment through the
discussions she would have with the students about their learning. She believes portfolios are a
great way to develop the metacognitive thinking skills in students through reflections on their
learning. Harriet also believes that the quality of conversations with students continues to
improve as she develops her questioning skills.
Students’ final reflection on their learning
The participating students from both classes spoke fondly of Emily and Harriet and appeared to enjoy having them as teachers. As a final reflection on the action research, I
asked the students a variety of questions about their learning. Both groups of students from
Prep and Grade 5 provided insights into how the formative assessment strategies implemented
during the action research improved their learning. In Prep, most of the responses from the students centred on the classroom teacher. Luke, from Emily's class, stated that 'Mrs Holmes (Emily) tells us what we can do better if we make a mistake' and Chris said that Mrs Holmes 'helps me by giving me ideas'. Pia believed 'Mrs Holmes helps me find the correct
answer’. All these comments can be associated with the type of feedback Emily provided to
her students. Interestingly, Chris also said, 'Mrs Holmes partners me up with somebody so I can talk about my ideas', referring to the use of talk partners in the classroom. When the other
participating students heard Chris mention this, they all actively agreed with him.
I asked the Grade 5 students which strategies Harriet had implemented that assisted their
learning. Donald and Daniel felt talking about their answers to a question with a peer was
helpful. Donald said, ‘I like working with others. It helps me get better answers’. They all
believed correcting their own learning was beneficial as Anna said, ‘it really helps to see where
I went wrong’. Daniel also felt it made the learning process ‘a bit more interesting as it is like
having the cheat sheet’.
Both in the final reflection and throughout the phase 1 action research, the students' responses demonstrated the positive impact the formative assessment strategies had
on their learning. What was particularly evident was that the students were eager to have an
active role in their learning. In particular, the use of self and peer-assessment created a higher
engagement level, which Wiliam (2006) argues is a key feature of an effective learning
environment. The students appeared to value self and peer-assessment and the responses they
gave focused mainly on those two strategies.
Final reflection on my practice
Being involved in the implementation of formative assessment strategies as both a teacher and a researcher provided me with the opportunity to develop my understanding of assessment extensively. In chapter 1, I discussed how little understanding I had of assessment. My final
journal reflections show that the action research has impacted significantly on my approach to
formative assessment. I experienced numerous moments in my own classroom when I made
changes to the way I taught because of my new understanding of the value in linking the
learning, assessing and teaching. In particular, I saw the importance of collaboratively
planning for formative assessment as a team. During phase 1, I had not made a conscious
decision to plan formative assessment strategies with my team as my focus was largely on
working with Emily and Harriet. It was my realisation that formative assessment could be
more effective if it was developed collaboratively in a team of teachers which led to my
decision to embark on phase 2 of the action research (discussed in the next chapter).
After phase 1 was completed, there was more interest in and discussion about formative
assessment amongst other teachers within MIS. In particular, stating the learning intention and
developing the success criteria with students became a part of many planning meetings and
teaching in the classrooms. Descriptive feedback instead of evaluative feedback, and the use
of other aspects of formative assessment became a topic of discussion amongst teachers.
However, it was clear to me that further practitioner research could provide valuable learning amongst my colleagues and could deepen my study.
Chapter 6: Developing formative assessment strategies through
collaboration
Introduction
DuFour and Marzano (2011) argue that teachers need to be ‘organized into meaningful
collaborative teams in which members work interdependently to achieve common goals for
which members are mutually accountable’ (p. 3). In phase 2 of this study, I aimed to
investigate how working in a professional learning team could lead to the development of
formative assessment strategies to improve teaching, and therefore student learning.
In order to build upon the knowledge developed about implementing formative assessment in
phase 1, I decided to continue the action research cycle at MIS, inviting the teachers in my own Grade 1 team to work with me. The aim was to further improve the development
of formative assessment strategies through a closer study of the impact of teacher collaboration
in this process.
Although MIS had weekly collaborative meetings in year levels focused only on curriculum,
very little time was spent discussing formative assessment strategies. Summative assessment
tasks were discussed frequently as a team since this was expected in the PYP unit of inquiry
planner (see Appendix 1). Because the summative assessment task is on the front of the
planner, many teachers believe it takes priority over formative assessment planning. The
summative assessment task was intended to be designed before the formative assessment
strategies could be discussed. As a result, teams using the planner frequently discussed
formative assessment after the summative assessment had been decided. In my three years of
teaching at MIS, the teams I worked with, as well as the other year level teams I spoke to,
regularly ran out of time to focus on developing formative assessment strategies. That section
of the planner was either left blank or filled in after the unit had been completed. There was no
conscious decision made to address the imbalance between formative and summative assessment or to ensure a stronger focus on formative assessment during planning meetings. The Grade 1 team
at MIS was eager to improve their assessment planning, but they were concerned about adding
another layer to their planning process. However, after my explanations of the possible
benefits of the action research cycle, all four teachers willingly agreed to be involved.
Methodology: Phase 2
As outlined in chapter 4 (research methodology), the phase 2 action research focused on
building collaboration and was structured within the framework for effective development of
professional learning communities (Hord, 1997; Stoll et al., 2006). In our initial meetings, we
openly discussed factors including the need to build trust, open dialogue, de-privatised
practice, reflective practice and other elements of PLTs (Hord, 1997). We also decided what
we aimed to achieve and why, in order to establish shared vision and goals for the process.
DuFour and Marzano’s (2011) three big ideas for driving a PLT were used as the basis for
establishing the team; ensuring all students are learning at high levels, are working
collaboratively to meet the needs of the students and being results orientated in order to
respond appropriately to learners’ needs.
Profile of teacher participants
Brianna
Brianna had worked at MIS for six years. Originating from Canada, Brianna had spent the last
eight years living in Hong Kong and worked at another international school before MIS.
Brianna’s strength in curriculum is science and she has a strong understanding of working with
an inquiry-based approach. Brianna had some understanding of the formative assessment
strategies being implemented in the research, but had not spent time discussing these strategies
with colleagues at a collaborative level, nor actively utilised them in her classroom.
Leanne
Leanne began her teaching career in Canada before moving to Hong Kong, where she has
worked at two international schools. In the classroom, Leanne demonstrates significant
dedication to her students. She is particularly knowledgeable about the use of Information and
Communication Technologies (ICTs). Leanne found inquiry learning more challenging than
other colleagues in the study, but she was willing and eager to learn from her peers. Prior to
the commencement of the action research, Leanne believed she had a limited understanding of
formative assessment strategies.
Stacey
Stacey has over 15 years' teaching experience and is a Canadian national who studied in
Canada. Like Brianna and Leanne, she is very interested in international education, and has
taught in Canada, Thailand, England and Hong Kong. Stacey has a strong understanding of
inquiry learning and is very student-centred in her approaches to teaching. Stacey has
considerable understanding of early years education and is regularly approached by her peers
for support. The initial interviews demonstrated that before the action research began, Stacey
was already very confident in implementing different formative assessment strategies in her
classroom, so it was interesting for me to see how this would translate into leadership within
the PLT.
Hilda
Originally from Australia, Hilda has been teaching in international schools for over 10 years,
having worked in Africa, Fiji and Hong Kong. Hilda has spent almost all of her teaching
career working in Grades 2 and 3, which gives her a strong foundation in the lower primary years. Similar to Brianna and Leanne, MIS was Hilda's first PYP school, thus she
had to learn about the IB focus and emphases. Hilda has a thorough understanding of
formative assessment, but her confidence in implementing different strategies, prior to the
action research, required development. Quite often Hilda would look to Brianna or Stacey for
reassurance when implementing a new part of the curriculum. She was, therefore, willing to
participate in collaborative efforts to develop formative assessment strategies.
The beginning: Developing relationships within the team
The Grade 1 team (Brianna, Hilda, Stacey, Leanne and I) had been working together for two
years when the action research for phase 2 began. There was a sense of comfort amongst the
team since we had known each other on both a professional and personal level. We often
discussed issues and challenges around individual students, cohorts of students and pedagogy,
but the discussion of formative assessment was sporadic and typically informal, and therefore
the team recognised it required further attention in curriculum discussions. The initial
interview I had with the team was to ensure they had a clear understanding of what they were
committing to in the research and to begin collecting data that explored their understanding of
formative assessment and collaboration. The team chose to be interviewed together in a focus
group, as they felt comfortable discussing their understanding and opinions on the topic of
formative assessment with one another. The team expressed the view that they related well to
each other and this created a relaxed approach for working together. Hilda said:
We have known each other for a couple of years now and I really think that
helps make planning so much easier. We have a good understanding amongst
ourselves and it helps for smooth running of planning meetings.
The teachers in the team were from either Australia or Canada and, with no family in Hong Kong, members had spent a significant amount of time together outside of school as well.
When I approached them about creating a more formally developed collaborative professional
learning team (PLT) as a part of this action research, they all willingly agreed to participate.
However, I did sense concern, particularly from Leanne, that this would add another layer to
the planning meeting and the team already felt there were elements of the planning meetings
that did not support student learning. I assured them that formative assessment planning would
complement and enhance the planning already taking place. This eased some of their
concerns, so the team was eager to participate. It was important that the teachers saw a
purpose to the action research. The key to believing in what we were trying to achieve was
through developing a shared vision together. Hord (1997) argues this is vital in the success of
any PLT.
Understanding of formative assessment
As I had been teaching with the Grade 1 team for two years, I had valuable insight into the
team’s knowledge and understanding of formative assessment. They understood the purpose
of formative assessment and were already beginning to use different strategies including
learning intentions, success criteria and self and peer-assessment. Leanne felt this was because
the school had become an authorised PYP school, so formative and summative assessment
were separated on the PYP planner. It also meant that teachers were required to plan together,
and therefore, build upon each other’s understanding of formative assessment. Brianna said
that formative assessment had become a ‘popular topic with good reason’ in recent times, and
therefore, teachers ‘just naturally pick up different understandings through discussions with
other teachers’.
In the initial group interview, Stacey’s ability to define formative assessment as ‘the on-going
assessment that informs the teaching’ exemplified her understanding. Hilda added that it helps
teachers ‘to know where to start and where to go’ with planning and teaching. Brianna and
Leanne agreed with their colleagues’ definition. As the discussion continued, there was a
sense of increased confidence in their understanding of formative assessment, with many
comments on connecting the learning and teaching to assessment. However, they all agreed
that they could try to plan for, use and implement the assessment strategies more effectively to
improve their teaching and the students’ learning. Hilda said:
I know formative assessment is very important and should be informing my
teaching but I probably don’t really plan for it. I think it is just something I
think about on the spot or occasionally organise an activity for it.
The team knew that the best approach to improving formative assessment strategies was
through their team meetings, so the explicit focus on action research commenced. We agreed
that formalising our group as a Professional Learning Team (PLT) and discussing what that
meant would provide a structured approach to improving planning of formative assessment
strategies. This aligned with the MIS priority of creating a collaborative learning culture.
Establishing the professional learning team
During this phase of the action research, MIS had a primary school principal in his second
year, who was aiming to create a community of learners through building PLTs with the staff.
As a Grade 1 team, we saw this as an opportunity to combine our action research and also meet
our school obligations. Having already received permission from the principal to conduct my
study, I discussed with him our focus in the PLT on developing formative assessment
strategies. He was keen to support this approach.
The need to plan for formative assessment
Phase 2 focused on capturing and discussing the process and impact of collaborative planning
and shared professional learning on the development of formative assessment strategies. MIS
held weekly collaborative meetings in year levels with curriculum-only agendas, but very little of that time had previously been spent discussing formative assessment strategies. Hilda believed
that:
Too much time is spent coming up with a summative assessment task. By the
time we have a task and evidence worked out, we have run out of time to plan
anything else.
The team were happy to participate in developing formative assessment strategies, but not if it
meant ‘more work’. The IB states that the ‘articulation between the central idea (the big idea
of the unit) and the summative assessment task(s) needs to be resolved before further planning
takes place’ (International Baccalaureate Organization, 2007, p. 37). The IB does not outline
why this needs to be the case in any of their documentation. Identifying the learning goals first
is imperative in a backwards-by-design approach. This is where teachers decide on the big
ideas for a unit of learning and then plan the relevant assessment and activities to support the
learning (Wiggins & McTighe, 2005). The view from the PLT was that the summative
assessment task showing student knowledge, skills and understanding could be designed later
in the unit, provided the learning goals had first been established. Through this approach,
teachers would be able to include students in the assessment process, by asking them what
activities they could do to develop their knowledge, skills and understanding. Summative
assessment is based on capturing evidence of the student’s understanding of the overall
learning goals. However, in practice, while the PLT teachers knew this, the planning was too
focused on summative tasks. For me, this became a motivating factor to improve the development of formative assessment and to spend time at collaborative planning meetings on assessment strategies that improve student learning. Prior to the action research, the Grade 1
team regularly ran out of time to plan for developing formative assessment strategies. In some
cases, section 3 of the PYP planner was left blank, filled in after the unit had been completed, or contained only a few formative assessment suggestions with very little detail. The concern with this type of planning was that teachers were still implementing the same
model Earl (2003) identified as the ‘traditional assessment pyramid’ (p. 27), where assessment
of learning (summative assessment) is the main focus of assessment, leaving little time for
assessment as learning and assessment for learning (see Figure 1 and Figure 2, pp. 36-37). It
also meant the assessment strategies used to monitor student learning along the way were not as effective as they could have been, and therefore at the end of the unit some
students were not ready for the summative assessment task as the necessary skills had not been
taught. This approach to the summative assessment needed to be redesigned.
As a result of the previously ineffective planning, one of the targets set by the team during our
study, was to focus more on identifying learning processes and tasks to build evidence of
student learning and understanding of the central idea or learning goals. As a PLT, we aimed to outline a summative task that could be expanded in more detail towards the end of the
unit. Immediately, this had a positive impact on the planning as Brianna noted:
Planning only the evidence and not having to develop the whole summative
assessment task at the start of the unit has made planning a lot easier. I now
feel like we have more time for other planning.
This different approach to planning was a change welcomed by the team. Hilda felt that it
made a difference to the ‘flow of the planning meeting’, giving more time to plan for formative
assessment.
This change created optimistic momentum in the way the team planned. I was eager to keep
the motivation going within the team. It was agreed that developing formative assessment
strategies would be added to the agenda for the weekly team meetings as part of the
collaborative action research. Following this, the team would consider whether this way of planning should be continued in the future. The goals and actions the team put into
place are discussed in the shared values and vision section below.
Building collaborative team planning for formative assessment
When I asked the team how well they planned together, they said aspects of their planning
were strong, including developing units of inquiry and stand-alone writing units. This was
because both these areas had been a strategic priority for the school in recent years. They
also felt very confident in their teaching during units of inquiry and writing, which was linked
to the amount of planning time they had for these two areas. However, Stacey pointed out that
planning for reading and mathematics ‘were areas that we still needed to work on’. Once it
was established what planning the teachers were involved in as a team, I asked how they
planned for formative assessment strategies. At first, this question appeared to stump these
highly experienced educators. Brianna stated that section 3 of the PYP planner required them
to discuss what formative assessment strategies were to be used during a unit of inquiry.
However, her concern was ‘that we don’t always get the time to focus on that part of the
planner’. Leanne also felt that ‘we just don’t have enough time to do this properly on our
own’. Having the time to plan for formative assessment strategies became a regular focus
during the beginning of the phase 2 action research, as the team could see this was a neglected
part of their thinking about teaching and learning. The team members realised that if
teaching, assessing and learning were planned together, assessment was not an 'add on' to planning.
The team then agreed to plan for a more explicit focus on learning intentions, developing
success criteria, teacher questions and self and peer-assessment strategies. Hilda felt she
already ‘planned some of those strategies on my own’ and other team members also
implemented these strategies, but they were willing to explore how collaborative discussions
could influence their planning for teaching and learning. Subsequently, I suggested that we
take examples of the formative assessment strategies the teachers had planned alone, and build
upon these as a team. Some team members were still concerned it would be time consuming
and not allow time for other planning. They believed in the importance of using formative
assessment but they still viewed it as something different to learning and teaching as well as
adding extra to their workload. For their formative assessment practices to change, the teachers
needed to view assessment, learning and teaching as one entity (Earl, 2003).
Stacey felt strongly that we should ‘not over plan, but get down a few key elements that will
help take it in a meaningful direction’. She pointed out that the learning intention should be
planned, but the success criteria are a 'guided framework' that can be adapted for each class
according to its needs. She felt that 'there needs to be room for movement, otherwise the
assessment is something we do to the students, and not involve them'. This comment about the
success criteria echoes Earl's (2003) strong advocacy for students being active
participants in the assessment process. Developing success criteria provides the opportunity
for students to be involved in the assessment process, and therefore, enhances students’
chances of success (Clarke, 2008; Glasson, 2009).
The pressure of planning
The Grade 1 PLT met once a week at their scheduled Friday morning meeting throughout the
entire school year. The majority of the data collection during team meetings occurred during
this allocated time. This was the required curriculum planning time set by the school for the
team to meet, allowing for the curriculum coordinator and occasionally the specialist teachers
to attend as well. The school also allocated six half days a year during which students went
home early to provide more planning time for teachers. As a part of the PLT, the team also
met at other times when necessary. The general consensus of the team was this was not
enough time to plan and still meet the school’s expectations for planning. Brianna made the
point that:
What we need to plan and what the school wants us to plan should be closely
aligned. If they are not, then who is making the decision on planning, people in
the classrooms or people not in the classrooms?
During the action research, the team experimented with finding the most effective way to plan.
They agreed that planning for formative assessment was ‘very important’, but as Hilda stated,
‘the reality is, we have other planning we need to get through’. Hilda felt the pressure of the
other documents the school required teachers to plan, including the PYP planner, unit overview,
transdisciplinary document, First Steps writing and assessment overview. The team also felt
uncomfortable about the lack of planning for mathematics and reading, which also had to fit
within the planning time. The team wanted an effective way to achieve their vision. At first, they
tried planning for formative assessment through the assessment overview; a document
designed to identify different assessment tasks, including both formative and summative tasks
used throughout the unit of inquiry. Whilst the document has some purpose to it, the team said
it only focused on the unit of inquiry and not the stand-alone mathematics and literacy being
taught regularly in the program. The overview did not differentiate strategies, only tasks. The
team was looking for something more explicit in promoting formative assessment. Although
this document was a requirement, the team believed it was unnecessary and they were
‘planning for the sake of planning’.
A whole school approach
Improving the quality of collaboration at a team level is vital to the success of improving
student learning (DuFour & Marzano, 2011; Popham, 2008). Stacey agreed with this, but
made it clear this would only have an impact on student learning ‘to a limited extent’, because
she believed a whole school approach would ensure consistency and more success. Stacey
strongly argued the need for scope and sequence documents for all curriculum areas and the
use of ‘only effective documentation’. MIS did not have all scope and sequence documents in
place and teachers were working with planning documents they felt were unhelpful for learning
and teaching. Stacey put forward the idea that where possible, teachers need to have a say in
what sort of documentation they should be required to use. She wanted clearer curriculum
expectations related to each year level so 'we know what it is we are supposed to teach'. Only
then would formative assessment be easier to plan and be more effective in the classroom.
Others in the team echoed this concern and argued that for any real and effective change to
occur in embedding the planning of formative assessment strategies, the use of planning
documents and how teams planned needed to be a whole-school approach. Stacey raised this
concern because teachers were not clear about which scope and sequence documents they
should use, as the school had been transitioning from the Ontario curriculum to the IB PYP
curriculum whilst, at the same time, trying to create its own documents. Stacey had taught in
many PYP schools and been involved in decision making at a whole school level. She was
aware of the challenges schools face when they choose to adopt the IB curriculum, which involves
planning and learning within the provided IB curriculum framework. When gaps begin to
appear in the planning process, as Stacey alluded to at MIS, planning of any kind becomes
more difficult. The concerns raised by Stacey required effective leadership in curriculum and
planning that went beyond what this Grade 1 team was trying to achieve during the action
research. It is the responsibility of a principal to ensure the right leadership and practices are in
place for team planning to be effective, thereby putting student learning at the forefront of
collaboration discussions. This would ensure that developing formative assessment strategies
could be embedded across the whole school, and have a wider impact on student learning. For
Stacey, MIS did not have an effective and systematic collaborative approach to ensure
formative assessment was at the forefront of planning. This was an area she believed the
school needed to focus on in the future.
Shared values and vision
Hord (1997) and Louis and Kruse (1995) argue strongly for having shared values and vision in
a PLT. After setting up our PLT, we created a shared vision of what we wanted to achieve.
Andrews and Lewis (2007) contend this is vitally important in the success of a PLT. The
overarching goal was to improve student learning and we were hoping to achieve this through
the vision of ‘developing formative assessment strategies through collaboration and shared
professional learning to improve student learning’. The key purpose of the PLT was ‘to
improve student learning’, so this was added to our vision. Importantly, with the development
of this vision, all the teachers involved in the PLT were engaged in creating the long-term goal,
thereby giving the necessary ownership to the participating teachers (Hord, 1997).
The team decided on three actions they thought would help reach the PLT goal of developing
formative assessment strategies through collaboration and shared professional learning:
• Less time spent on developing the summative assessment tasks
• Stronger formative assessment strategies and therefore, less emphasis on the summative
assessments
• Embedding formative assessment into the learning and teaching.
The next step was to identify how to achieve this vision. At the beginning of the PLT,
we identified the need to plan the key formative assessment strategies of creating the learning
intentions, developing the success criteria, identifying key teacher questions and promoting the
use of self and peer-assessment. The aim was to incorporate this within the units of inquiry
and stand-alone mathematics, reading and writing units. This gave us the clear view we
needed to ensure we stayed true to the goal of the team’s vision; all working in the same
direction.
Collaboration in reflective professional inquiry
To explore collaboration further, we decided to build a PLT where we would be able to
observe and understand how collaboration helped develop formative assessment strategies. In
the literature, collaboration is seen as a crucial element of the functioning of a PLT because it
involves sharing the responsibility of planning for student learning and working together
towards common goals (Hord, 1997; Stoll et al., 2006). Even before we attempted to improve
our formative assessment strategies, the respect and trust the team had for each other on both a
professional and personal level meant collaboration was more likely to succeed. Hord
(1997) argues both respect and trust are key characteristics people need to have in a PLT. The
team were very willing to act as ‘critical friends’ and de-privatise their practice to help each
other develop effective formative assessment strategies, as outlined in the discussion between
the team below. Fullan (2007) acknowledges that de-privatising teaching involves a changing
of culture and practice. At this stage of the action research, the teachers were prepared to make
changes to their planning practice in order to improve student learning. If teachers are open to
having critical conversations about instruction at the planning level, they are more likely
to open their classroom doors to others observing, and thereby de-privatise, their
practice. This is supported by Bryk et al. (1999, p. 767), who argue:
By far the strongest facilitator of professional community is social trust among
faculty members. When teachers trust and respect each other, a powerful social
resource is available for supporting collaboration, reflective dialogue, and
deprivatization, characteristics of professional community.
During the planning meetings, I noticed that each teacher would listen when someone else
spoke and that different perspectives were accepted; even when members disagreed, the
disagreement was handled respectfully. This open-minded and
flexible approach to planning meant that each person was open to constructive criticism about
their planning ideas and developing the strategies. Hilda said:
I was thinking that we should have a checklist on the board for their writing
(talking about writing a recount). It would help make it clearer about what we
are looking for in the students’ writing.
Stacey: I agree. A checklist is important. But the concern with that would be
that the students won’t really understand and relate to the checklist …teachers
telling the students what they need in their writing means we are doing the
thinking for them.
Hilda: So I guess we should involve the students in the checklist somehow.
Stacey: So perhaps we break this into a couple of lessons. First lesson could
be a whole class discussion about writing and what we look for as an audience.
Then provide some opportunities to look at books that are like a recount.
Brianna: Great idea, but I think we give them the books first with sticky notes
to write their ideas on. Tell them to write down what they notice about the
writing in the books, i.e. full stops, capital letters, adjectives, easy to read,
giving information. Then come back as a class and make a list of what the
students have and then ask them to identify what are the most common writing
elements in the books. Then from that, create the checklist and print off for the
next lesson so that students are involved making the checklist and this should
help them with their writing.
This conversation is an example of teachers engaging in ongoing collaborative opportunities,
where they improve the quality of discussion through constructive feedback to each other
(Little, 2003). Stacey provided the necessary feedback to Hilda to improve the quality of
developing the success criteria with the students and then Brianna built upon that idea. Hilda
felt a level of interdependence within the team as she often made suggestions for teaching ideas
that she knew would be improved upon:
This is a great team to work with. There is great knowledge between each team
member and I know that when I have an idea for a lesson, I can bring it to the
team and they will help improve and make it more relevant for Grade 1.
Stoll et al. (2006) argue that the feeling of interdependence is vital in collaboration, as it allows
the team to focus on their common goals through the support of each other. Hilda’s comment
was consistent with DuFour and Eaker (1998) and Stoll et al.’s (2006) view that teachers
believe they achieve more working together as a team towards a common goal, rather than on
their own. In this case, Hilda trusted that the team would be supportive of her idea and provide
constructive feedback. This meant she was not intimidated or fearful of bringing her idea to
the team. Leanne also believed ‘working together means our teaching is much better’. Hilda
and Leanne’s preference to work as a team was a result of the supportive environment in which
they worked. The mutual trust and respect this team developed over the two years of working
together created an atmosphere of cohesiveness that meant they were working towards the
same goals; an argument that Little (2003, 1989) puts forward as a key element of successful
schools.
The collaborative conversation mentioned above, which occurred towards the end of the phase 2
action research, demonstrated improvement in both the formative assessment strategies the
team had planned and how they planned. The conversation showed the team was at a new
stage of planning formative assessment through the learning and teaching cycle. They planned
to develop the success criteria with the students; a strategy recommended by Clarke (2008,
2005, 2001) and Glasson (2009). The focus was now on planning interdependently, which
DuFour (2011) and Stoll et al. (2006) both argue is vital to the success of collaboration.
Supportive and shared leadership
The findings showed supportive and shared leadership had a positive influence on the
success of the PLT during the action research. As Stoll et al. (2006) argue, shared and
supportive leadership is vital to the success of collaboration. In my study, the team viewed
supportive and shared leadership as everyone having ownership of the planning and each team
member being given the opportunity to share their ideas. The team was comfortable with this
approach to shared leadership. We discussed how we would share leadership at a practical
level in weekly planning meetings. It was decided teams would take the approach of everyone
having a leading role at different stages of the planning. Stacey liked the approach that
‘someone is developing it (formative assessment strategy), someone is sharing it and another
person giving feedback (for improving it)’ and she saw that it ‘… allows for different
perspectives'. The team were empowered by the principal to make decisions about how it
would operate effectively as a team; thus we had a principal ‘who can let go of power’ (Hord,
1997, p. 17) and recognise the importance of teacher leadership. Stacey’s view also shows the
importance of ‘collective capacity’ (DuFour & Marzano, 2011, p. 20). That is, everyone has
responsibility for student learning in a PLT. DuFour and Marzano (2011) also argue teachers
should be working interdependently towards mutually accountable goals.
Principal’s influence on collaboration
The principal and other school leaders’ influence on a PLT is significant and can determine
either its success or downfall (Hord, 1997; Marzano, Waters & McNulty, 2005). The principal at MIS
has a strong belief in supportive and shared leadership. He made it clear he wanted to embed
the development of PLTs in the school and thus he aimed to transform the teachers into a
community of learners. Decision making with respect to the development of our PLT was
left to me to work out with my team. However, in the
spirit of collaborative action research, I also felt it was important to distribute the decision
making within the PLT. The goal was to treat the teachers with respect and as professionals
and ensure we were equals in this process (Leithwood, Leonard & Sharratt, 1998). Examples
of this included shared decision making during planning meetings, deciding what area of the
curriculum would be the focus for planning and taking turns in chairing the meeting. These
approaches to distributive leadership encouraged teacher ‘buy in’ to the team meetings and
assisted the PLT on the path to achieving our goals.
The principal at MIS was very encouraging and accommodating in both phase 1 and phase 2 of
the action research and had a significant influence on the study. Marzano, Waters and
McNulty (2005) argue strongly that ‘leadership has long been perceived to be important to the
effective functioning of organizations in general and, more recently, of schools in particular’
(p. 12). As a teacher-researcher, I approached the principal on two separate occasions for
support in phase 1 and 2 of the action research. The principal offered time release from my
class when I required the time to work with other teachers in their classrooms and to interview
participating teachers. The principal was enthusiastic about the development of the PLT, a
new initiative he brought to MIS for the purpose of creating a collaborative culture and,
ultimately, improving student learning. He was pleased with how the PLT was setting an
example of how teams at MIS could collaborate towards that goal.
DuFour and Marzano (2011) argue that for a principal to be truly effective in positively
influencing student learning through PLTs, they must foster shared leadership. Along with
shared leadership, the principal supported the PLT in other ways as well, providing
us with any resources that were required, including time for the team to work through its
ideas. Fullan (2001) argues that ensuring PLTs have access to the necessary resources is an
important responsibility of the principal. For us, as a PLT, time was the most important
requirement. We used this extra time to trial different approaches to planning the formative
assessment strategies. I also used it to conduct interviews and work on my reflective journal.
Planning formative assessment in the early stage of the action research
Early in the action research, we agreed to spend more time discussing the development of
formative assessment strategies, including learning intentions and success criteria, so that key
teacher questions could be created and self and peer-assessment linked to the learning intentions
and success criteria. This was a significant development that the team had not
been involved in previously. We decided that goals for formative assessment strategies for
English and mathematics lessons needed to be discussed. A couple of issues became apparent.
Planning took longer and there was a concern that the formative assessment strategies did not
always fit naturally with the learning. Brianna said:
It was like we planned the learning but then asked what (formative assessment)
strategy we could use. So there was no authentic link and it meant that the
strategies were not always benefiting student learning, which of course defeats
the purpose of formative assessment.
The team agreed this approach was not the way forward and we should try to embed the
planning of formative assessment within the learning. The team realised they were already
planning formatively, but at times not strategically. The formative assessment planning was
inconsistent: activities sometimes took precedence, or there was no formative assessment
strategy at all. At our next planning meeting, we planned future lessons in the following order:
• Learning intention developed in language the students could understand
• The outline of the success criteria established, to be further developed within individual
classes
• Key teacher questions established, along with how they would be approached
• How self and/or peer-assessment could be implemented
• Possible learning activities and resources needed.
This matched the Framework for a multilayered approach to formative assessment designed in
chapter 5 (see Table 9, pp. 163-164), which provides a guideline for using formative assessment.
The team agreed with this approach, so we tried this at our next meeting. After that, the team
believed we were planning more formatively and assessment was embedded in the learning and
teaching. However, we still felt this approach was too time consuming. Hilda said it was
‘unrealistic to plan like this every week’. We did not have time to plan a progression of
lessons and consequently it was felt the lessons were not building on what was learnt in the
previous lesson. We needed more continuity between lessons, in line with Popham’s (2008)
learning progressions, to ensure formative assessment was built naturally into the
lessons being planned.
After this reflection, the team discovered that dividing the responsibilities between team
members was the most effective way to collaborate in planning. The team agreed
that this would only be effective if there was built in time to discuss what other team members
had planned and have the opportunity to offer constructive criticism to the planning. This
approach to collaboration would create an interdependent relationship of working towards ‘a
common purpose and people rely on each other to reach agreed-upon goals that they would not
be able to achieve independently’ (Huffman & Hipp, 2003, p. x).
Developing the learning intentions and success criteria
In planning mathematics and English, the team used the school’s scope and sequence
document to determine how they would develop conceptual understandings as a long-term
curricular aim. The teachers planned a progression of learning intentions whereby teachers and
students could identify if they were on track to meeting the long-term goal; the group then
developed a success criteria framework. These criteria are flexible and can be adapted within
each classroom to match the individual needs of each class with the students being involved in
the process of developing it. This was evident in planning for the mathematics strand of
number (place value) as outlined in Table 10.
Curricular aim: We are learning to understand the difference between digits and numbers

Success criteria for students: I will know I am successful if I can:
• Tell how many digits are in a number
• Tell what digits are in a number
• Use digits to make 2-digit and 3-digit numbers

Progression of learning

Learning intention: We are learning to show numbers using place value blocks
Success criteria: Show groups of 100s; show groups of 10s; show groups of 1s
Teacher questions: What do the words ‘place value’ mean?
Self & peer-assessment: Peer-assessment: students check each other’s bundles of 1s, 10s and 100s
Learning activity: Bundling game with popsicle sticks

Learning intention: We are learning to use zero as a place holder
Success criteria: Can write a number with zero as a digit; can explain what the zero in a number means
Teacher questions: How much is zero worth?
Self & peer-assessment: Peer-assessment: students partner to give each other numbers; students have a checklist to support the learning
Learning activity: Writing numbers on whiteboards, using the already made bundling sticks

Learning intention: We are learning to compare numbers using our place value knowledge
Success criteria: Can tell which number is bigger or smaller; can explain why a number is bigger or smaller
Teacher questions: How do I know one number is bigger than another number?
Self & peer-assessment: Self-assessment: students dictate their thinking on the iPad and listen back to see if they believe it matches the success criteria
Learning activity: Looking at blocks and bundling sticks to compare numbers

Table 10: Formative assessment planning for mathematics
Developing learning intentions and learning progressions
We continued to plan and experiment with developing learning intentions through the use of
learning progressions in mathematics and writing. We planned a long-term curricular aim first
and followed with the learning intentions to meet that particular long-term goal. The team felt
this was a logical approach to planning. Brianna said, ‘it makes sense that planning builds
upon each lesson towards the big idea you have for the students’. A learning progression, as an
approach to curriculum development, is a ‘sequenced set of sub-skills and bodies of enabling
knowledge that, it is believed, students must master en route to mastering a more remote
curricular aim’ (Popham, 2008, p. 24). Learning progressions that ‘articulate a progression of
learning in a domain can provide the big picture of what is to be learned, support instructional
planning, and act as a touchstone for formative assessment’ (Heritage, 2008, p. 2). Designing
the curriculum through this pedagogical belief is linked to Wiggins and McTighe’s (2005)
backwards by design approach to planning. Starting with the big picture, or the enduring
understanding teachers want their students to learn, provides teachers with the best opportunity
to create meaningful learning intentions. This approach ensured that teachers in the PLT were
creating learning intentions separate from the context in which they were taught, as evidenced in
Table 10.
At the beginning of the school year, prior to the action research, the PLT had begun planning
using a backwards by design approach for both genre writing and mathematical outcomes.
The team would start with the overarching ‘big idea’, that was either a conceptual
understanding or the PYP central idea we wanted the students to learn. This then helped to
determine the sub-skills the students needed to learn to successfully understand
the ‘big idea’. For example, we were planning for students to understand the concept of genre,
and consequently, our planning was focused on a PYP central idea: ‘People write for different
purposes to communicate’.
With this central idea as the focus and using the school’s writing scope and sequence
document, the team planned a list of learning outcomes that would later become the learning
intentions reworded for the students. The learning outcomes were developed under four
headings:
• Use of texts
• Contextual understandings
• Conventions
• Processes and strategies (Department of Education, Western Australia, 2013)
Under each of these headings, the outcomes were developed in a progression, so that each
outcome built upon the previous one. These outcomes became the sub-skills students needed to
know to understand the central idea. What this meant for the PLT was they had a clear list
of outcomes to teach and they could develop those outcomes into learning intentions for the
classroom. At the end of the action research, this backwards by design planning with learning
progressions of sub-skills was still in its early stages and needed to be explored further by the
team. It is important for teachers to understand that the learning outcomes (sub-skills) are planned
and taught in a progression where each skill is built upon another. For differentiation, teachers
would need to consider providing different sub-skills that give students opportunities to reach
the same big idea in different ways. For this approach to be effective long term, Popham
(2008) argues that planning must be consistent across all teams within a school. For MIS,
starting with a single PLT would make it difficult for this approach to collaborative planning to
become a whole-school approach. It would need to begin with the principal and
curriculum coordinators collaborating closely together to develop a whole school strategic plan
to implement learning progressions to ensure that they are consistent from year level to year
level. At the end of the phase 2 research, this had not yet taken place.
Developing teacher questions for self and peer-assessment
Once the learning intentions and the success criteria framework had been developed in line
with what is outlined in Table 10 (see p. 191), the team developed teacher questions to extend
student thinking, and also formulated sample question formats. Clarke (2005, 2001) and
Glasson (2009) refer to both approaches to asking students questions. Our team also looked
at ways to use self and peer-assessment to involve students actively in the learning. An
example of this was for the learning intention, ‘using zero as a place holder’, where students
would be paired. Student ‘A’ would say a number with a zero in it and student ‘B’ would
repeat and write the number showing it with the bundling sticks. Student ‘A’ would have a
checklist in front of them outlining what student ‘B’ needed to achieve, and they would tick
each item the child had been successful with. If they had not achieved the learning, it was Student ‘A’s
responsibility to identify where Student ‘B’ needed support. The team agreed this required
explicit modelling from the teacher.
During the reflection, the teachers shared their belief that this use of peer-assessment kept the
pairs focused on their learning and helped them explain the use of zero. The students were
talking about it together with the hands-on approach of using bundling sticks. Stacey believed
that encouraging the students to verbalise their learning gave them a better understanding than
had they completed a worksheet on their own. This was particularly helpful for lower-attaining
students and for those for whom English is an additional language. The team particularly liked
this strategy because the peer-assessment was the learning activity of the lesson. Brianna
observed that in a closed activity, where there is only a ‘black and white’ answer, it is easier
for the peer to assess their partner’s learning, particularly in the lower years. This
collaborative reflection revealed the deep level of thinking the PLT was involved in and
importantly, it was student learning that was influencing whether a strategy was effective. This
is one of the key characteristics that DuFour and Marzano (2011) argue is vital to a successful
PLT.
When each area of formative assessment for mathematics had been planned, we came back
together to share the planning, and in particular, to use the time for constructive feedback to
ensure the strategies were going to be effective in each teacher’s classroom. The team
found the sharing time helpful and important to them, but again lack of time was a constant
factor. The feedback session quite often happened at the end of the planning meeting.
Leanne commented:
This was a great way to plan but if we did not have enough time to share at the
end, we found sometimes that the planning did not match up and the assessment
strategies were not as good as what they could be.
This was an important point made by Leanne after the fifth planning meeting. Without
discussing the different strategies that had been planned, there was a possible disconnect
between what each teacher had planned. Without sharing time, the team felt this approach to
planning was not going to be as successful as they had hoped. The team was not discussing the
different assessment strategies with each other until the end of the planning, so when
developing the teacher questioning and the peer and self-assessment, the team found that their
strategies did not always match. The major issue appeared to be how the team could
communicate with one another without having to go through every stage of planning formative
assessment strategies as a whole team. Using the planning session time effectively to focus on
student learning was as important to the team as developing effective formative
assessment strategies.
For the sixth planning meeting, the team moved to planning on an online Google Document,
which the school was already considering as an approach to collaboration and shared
professional learning. This allowed everyone to see what each team member was planning and
created more conversations throughout the planning. For example, once the group of teachers
began developing the learning intention and success criteria, the other teachers could see which
teacher questions might be used and how such questions could be delivered. Further, they
were able to view the most effective approach to self and peer-assessment to fit in with the
learning intention and success criteria. With everybody working on the same Google
Document, constructive feedback was facilitated through the planning, rather than being an
addition at the end of the process, when time may become a factor. In addition, this approach
alleviated the concern that sharing time was ‘needed’ at the end, as the constructive feedback
formed part of the process.
It was evident toward the end of the data collection that the teachers felt comfortable with each
other and the notion of de-privatised practice. The trust and relationships that Hord (1997)
discusses as vital to the success of PLTs were evident. The team was happy to put their ideas
forward for formative assessment strategies in the planning meetings. Leanne, who was the
most reluctant to make changes during our planning meetings, was particularly happy with the
new planning format. In a conversation I had with Leanne towards the end of the action
research, she noticed the difference in the quality of her assessment practices:
I have really noticed that not only am I using formative assessment far more
often in the classroom, but also the quality of the strategies has been really
good with the team planning. In particular, I really like the learning intentions
and success criteria we are developing.
This was an interesting comment from Leanne. She had been the quietest during the team
meetings, conversations and interviews, preferring to listen and take note of people’s points of
view rather than giving her own. Leanne had taught in Grade 1 for a few years, but never felt
particularly confident using formative assessment to improve student learning. With the
collaboration and focus on developing assessment strategies, Leanne believed the quality of her
teaching had improved. She benefitted from the support from her colleagues, in addition to the
benefits of making formative assessment explicit in her teaching.
Formative assessment driving the planning
After three months, seven planning meetings, and continuing informal conversations,
formative assessment strategies were a regular part of the PLT planning session. The team had
developed an understanding of the connection between planning for formative assessment and
the planning of learning activities. According to the teachers, the formative assessment
strategies used in the classroom during the research were most effective when they were
connected to the learning and teaching and not ‘forced’ to fit with the lesson. They were the
‘activities’ that drove the lesson and the learning. Hilda said:
Formative assessment really is the lesson. It is the teaching and learning…
This was not something I had put a lot of thought into, but now I guess it makes
sense.
Hilda’s view is consistent with the ARG’s (2002) and Clarke’s (2008) contention that formative
assessment is central in the classroom. After Hilda and the team made the connection between
assessing, learning and teaching, the planning became more in tune with the focus on
student learning. One example of this was when the team planned a unit on writing invitations
and the learning intention was to understand ‘how to write an invitation’. The success criteria
were developed with the students. This involved the students in an activity focused on reading
and analysing different styles of invitations. When peer-assessment was introduced during the
editing process in the writing, students had already completed an invitation and were ready to
receive the peer-feedback. For the students to give effective feedback to their peers, they had
spent lesson time understanding the learning intention together with the success criteria, to
ensure they could give relevant feedback. After the teacher questions had been planned, much
class time was spent discussing what makes an invitation successful. The thinking involved
individual, pair and group work, and engaging in these discussions became part of the
‘activity’ of the lesson. This example shows how the Grade 1 team planned formative
assessment strategies to fit naturally with the purpose of the lesson.
As a result, the team saw value in planning for formative assessment strategies
through shared values and a vision of continuously improving student learning. Hord
(1997) argues that this focus leads ‘to binding norms of behavior that the staff shares’ (p. 19).
The teachers also became involved in reflective professional inquiry, improving their practice
through collaboration based on what will support student learning (Riordan & Gaffney, 2001).
Creating ideal planning scenarios through collaboration
The team managed to create what Brianna referred to as ‘great synergy’ working together,
through respectful communication, open-mindedness to new ideas and the sharing of effective practice.
Hord (1997) describes this mutual respect and understanding between teachers as a
fundamental requirement for a high level of collaboration. Brianna believed that one of the
advantages of having worked together continuously for two years prior to the action research
was that the teachers got to know the ‘team’s weaknesses and strengths’. Towards the end of
the action research, the team felt their planning for mathematics was the most effective that it
had been in two years of working together. They had trialed different ways of planning for
formative assessment in mathematics. One strategy was to plan only the learning intention and
activity. After trying this approach, the team felt that it lacked depth and the learning and
teaching were almost at a ‘superficial’ level. They then tried planning as a whole team, lesson
by lesson, however, this became too prescriptive and quite often they ran out of time to plan for
formative assessment strategies. After this trial and error process, the team reached a stage of
planning with a strong focus on formative assessment strategies for a unit of mathematics using
this approach to planning:
• The team started together to identify the ‘big’ (conceptual) understandings they wanted
the students to develop.
• They identified the evidence needed to demonstrate a grasp of the conceptual
understanding. This was usually written as ‘I can’ statements that they wanted the
students to achieve.
• They divided tasks amongst one or two people to develop:
§ A progression of learning intentions and success criteria frameworks to
achieve the conceptual understanding; and
§ The teacher questions related to the individual learning intentions, and a pre-
assessment to establish students’ prior knowledge.
When the Google Documents were used, the team would discuss the planning during the
meeting, including critiquing and offering ideas to improve the strategies being planned. If
there was time or if it was needed, the team came back together as a group to align their
thinking and to offer any further feedback on the areas planned.
The learning intentions became a sequence of outcomes designed to build upon each
other as a learning progression towards the long-term goal: the conceptual understanding
planned at the beginning of the unit. This reflects Popham’s (2008)
view that developing learning progressions should lead to a bigger curricular aim. Reflection
throughout the unit became paramount to ensure that student learning was determining the next
steps of teaching and we were making the required adjustments to our planning. DuFour and
Marzano (2011) argue that this focus on student learning through their results is one of the big
ideas of a PLT.
The planning scenario for mathematics explained above was used beyond the action research
project by the Grade 1 team. The PYP coordinator, who had sat in on a few of the planning
meetings, also wanted to implement this approach to planning in other year levels. I asked the
PLT whether they would consider taking this planning model to other teams if they changed
schools or year levels. Hilda said:
It works for us, but I am not so sure other teams would necessarily understand
the purpose of this planning. It would really depend on the team I am working
with. But I would continue to push that we plan for formative assessment
strategies.
During the final reflection, I showed the team Table 9, Framework for a multilayered
approach to formative assessment (see pp. 163-164), and asked them to compare it to their
approaches. Interestingly, each teacher found a connection to the outline. Brianna and Hilda
said it matched their ‘regular’ teaching approach. Stacey felt there were ‘elements’ of the
outline that were similar to how she taught, but perhaps in ‘a different order at different times’.
Leanne found that although her lessons ran differently, she could use this outline more by
implementing peer and self-assessment frequently and taking the time to explore the success
criteria with the students. Having planned for the formative assessment strategies through the
PLT, Leanne believed she had a stronger understanding of how to do this. DuFour and
Marzano (2011) argue that teachers learning from each other is a necessity for successful
schools: if the teacher is one of the most significant factors in student learning, then teachers
need to share good teaching strategies, and this needs to be undertaken in
‘…a coordinated, systematic and collective effort, rather than a series of isolated individual
efforts’ (pp. 20-21). Our action research project confirmed this view.
Conclusion: Next steps in collaboration and planning
The PLT members knew they had made significant progress in the development of formative
assessment strategies through collaboration and shared professional learning. The whole team
agreed that by the end of the action research they were more effective in planning for student
learning. They had developed the link between learning, assessing and teaching and saw their
planning time as effective because it was focused on improving student learning. Leanne said,
‘I really think our use of formative assessment strategies has improved a lot because of how
much talking we have been doing about it’. The PLT identified effective collaboration as a key
factor in their success. As Stacey pointed out:
We needed time to plan effectively and because we planned for formative
assessment while planning our teaching, we created a natural connection. I also
think after planning for a couple of years together, we have learnt how to
collaborate with each other.
In the final interview with the PLT, their reflections demonstrated they had learnt a great deal
about planning and implementing formative assessment. Hilda believed that the action
research:
…has helped us improve our planning in general, not just for formative
assessment. We are thinking more about how we are planning and it is helping
improve learning in the classroom.
However, the team acknowledged there were areas where improvement was required. In
particular, this included a stronger focus on planning for formative assessment in the units of inquiry. Hord
(1997) argues that recognition of further areas of improvement as a PLT is important in order
to ‘apply new ideas and information to problem solving’ (p. 19). Although the planning for
mathematics and English had improved, Stacey believed they still could do more:
I really think we need to spend more time looking at student work to inform our
planning. We don’t bring pieces of work from the students to our meetings very
often and it would really help us to plan using formative assessment for
individual students instead of the whole year level.
Stacey identified an important missing link in planning for formative assessment. The team
had focused on planning for formative assessment strategies, but had done so without
closely examining student data and student learning before planning. Discussions took
place between the teachers concerning what they thought their class and individual students
needed to support their learning, however, these decisions were not based on student data or
learning brought to a team meeting on a consistent basis. DuFour and Marzano (2011) argue
that a principal has significant influence on how effective teams are at improving student
learning. For MIS to move forward in planning focused on student learning, having teams
develop their own SMART (specific, manageable, attainable, relevant and time-appropriate)
goals would ensure they have a collaborative focus (O’Neill & Conzemius, 2005). If the
SMART goals were focused on student learning, developing formative assessment would
become a necessary component of planning meetings.
DuFour and Marzano (2011) claim ‘extraordinary gains in achievement’ (p. 80) in schools that
implement a recurring cycle of collective inquiry. This cycle includes clarifying what students
are to learn; planning the teaching to address that learning; implementing the plan as a team;
tracking student progress; using data to identify gaps in student learning and planning to
rectify them; and reflecting on the effectiveness of the planning to determine the next steps.
If teams adhere to a recurring cycle of collective inquiry, the use of formative assessment
strategies in the classroom can become more embedded as the consistent focus is on student
learning. In addition, the implementation of such a protocol could support teachers to examine
student learning at a planning level and to inform the learning and teaching to ensure the focus
of planning is directly related to that cohort of students.
Phase 2 of this research aimed to investigate how collaborative planning could deepen the
effective use of formative assessment in classrooms. This was achieved by:
…rearranging the ways that people in the school relate to one another, by
acquiring new skills in order to change, and learning to be effective problem
solvers for the school. (Calhoun, 1994, p. 49)
The findings from this action research are outlined in Figure 10. They identify the importance
of the development and implementation of formative assessment strategies through a collective
inquiry approach, focused on student learning. Phase 2 of this research provided the
collaborative lens through which the formative assessment strategies were developed.
Formative assessment strategies are enhanced when teachers focus on students and their
learning in a collaborative manner. The key to the sustainability of developing formative
assessment strategies through collaboration is shared values and vision, reflective
professional inquiry, and supportive, shared leadership, all coordinated in a systematic
approach.
Figure 10: Multilayered approach to formative assessment through collaboration (Kean, 2016)
Chapter 7: Case study of Western College
Introduction
The case study of Western College was developed as a comparative study to provide an
additional analytic lens through which to consider how teachers can work collaboratively to
develop formative assessment strategies. I believed that investigating another school’s
emphases, processes, implementation and learning and teaching approaches would strengthen
the conclusions I had drawn with respect to MIS, and sharpen my analysis of effective
approaches to the development and implementation of formative assessment. Developing
this phase of the research as a case study of a school in which I was not teaching allowed me to
step away from the action research process in my own school. I was able to observe and
evaluate the development of formative assessment strategies in a reputable school, well known
for its innovative curriculum practices. The decision to select Western College was based on a
recommendation from a university academic who had worked closely with the school for many
years. In particular, the PYP curriculum coordinator, Jessica, was recommended as an
educator with a thorough understanding of formative assessment in classrooms. The
opportunity to investigate how she worked with colleagues therefore had the potential to
deepen my research. Jessica has worked in IB schools for many years and is a well-respected
educator across the IB Asia Pacific region and within her own school.
In this chapter, I first provide a profile of the school and the participants and then explain the
data collection process for the case study. I explore the practices of the individual teachers I
observed in their classrooms at Western College and how different formative assessment
strategies were implemented, including the use of learning intentions, the development of
success criteria, effective teacher questioning and feedback, and self and peer-assessment.
The final stage of the chapter explores how collaboration plays a role in the development of
formative assessment at Western College and compares this with my experiences at MIS.
Profile of the school
The case study school is a well-known co-educational independent school with three campuses
across Melbourne. The school, given the pseudonym Western College, was established in the
late 1800s and provides education for students from pre-school through to Year 12. The
campus involved in this study is situated 20 kilometres east of the central business district and
has approximately 1200 students including 330 students in Pre-School to Grade 4. It was
opened in 1967 and is divided into Junior School (early childhood – Year 4), Middle School
(Year 5 – 9), and Senior School (Year 10 – 12). It has a diverse student population with a high
percentage of students with Chinese and Indian ethnic backgrounds.
The school was also selected as a suitable case study for comparative purposes. Western
College began its PYP journey in 2004 by becoming a candidate school and became a fully
authorised PYP school in 2007. Because MIS is also a PYP school, the points of comparison
with MIS were considered to be of interest to my study.
Profile of the participants
There were four teachers involved in the research: Justine, a Grade 3 teacher; Wendy, a Grade 5
teacher; Rachel, a Grade 1 teacher; and Jessica, the curriculum coordinator. Each participant
volunteered to participate after an invitation was offered through the school principal.
Justine
Justine has been teaching for over 20 years, predominantly in the independent school system in
Victoria. She has taught from Prep to Grade 6 and was teaching Grade 3 during the data
collection. Outside of the classroom, Justine has had numerous curriculum roles. At the time
of the study, she was the mathematics coordinator and had been acting PYP curriculum
coordinator at Western College for six months prior to the data collection. Justine has also
been an IB PYP workshop leader for two years, involved in running certified IB courses for
PYP teachers in the Asia-Pacific region. Justine had completed a Bachelor of Education,
Master of Education and a Postgraduate Certificate in Educational Studies in Gifted Education.
Justine’s curriculum coordinator spoke very highly of her during an interview and commented
on her extensive knowledge of learning and teaching pedagogy.
Wendy
Wendy has been teaching for over 10 years in Sydney and Melbourne. She was teaching
Grade 4 during this study, in her third year at Western College. Wendy completed a Bachelor
of Education with Honours, writing her dissertation in the area of assessment in reading and
writing. Prior to the data collection, Wendy had completed a certificate course on dyslexia.
Wendy is contemplating beginning a Doctor of Education part-time. She showed a
particular interest in the research methodology for this thesis.
Rachel
Rachel had been teaching in the Victorian state education system for 25 years before moving to
the independent system three years ago. She has held various positions including acting
assistant-principal, leading teacher, middle-years coordinator and literacy coordinator. Rachel
relishes the opportunity to be involved in areas of learning and teaching outside of the
classroom and believes her strength is her ability to find teachers capable of leading different
areas of the curriculum and assisting them. Rachel was teaching Grade 1 during the data
collection.
Jessica
Jessica is the PYP coordinator for the campus as well as Deputy Director of Curriculum for the
school. Her role includes coordinating the day-to-day curriculum and overseeing the PYP
curriculum for the three campuses. She holds three degrees including a Bachelor of Education
and Master of Education. At the time of data collection, Jessica was in her 15th year at the
school and has been in her current role since 2003. Some of Jessica’s duties include attending
team curriculum meetings, team-teaching in classrooms, collaborative decision making on
school policies and acting as a mentor to new teachers. She is of the view that her role, and
the way her duties are divided across all PYP year levels, allows her to oversee the school
from a whole-school strategic perspective.
Case study data collection process
In the case study of formative assessment processes at Western College, I explored the
strategies used by the teachers and how they were developed through collaborative
teams. Observations were conducted of the participants teaching in their classrooms and at
their weekly curriculum team meetings. The first round of interviews was conducted before
any of the observations, the second round was directly after the observations and the final
interviews were at the end of all the observations.
                                 Justine         Rachel          Wendy           Jessica
Initial interview                Yes             Yes             Yes             Yes
Classroom observations           3 observations  2 observations  2 observations  0 (not classroom based)
Observation in planning meeting  1 observation   1 observation   1 observation   3 observations*
Final interview                  Yes             Yes             Yes             Yes
Length of data collection: 2 weeks

*Jessica was involved in the three meetings of the participating teachers.

Table 11: Data collection at Western College
In addition to the data collection explained in Table 11, I collected and analysed documentation
including the school assessment policy, the unit of inquiry assessment planners, student work
and student portfolios. The purpose of considering the documentation was to identify the
formative assessment strategies implemented in class.
Teacher knowledge of formative assessment
I found that all the teachers participating in the research had a strong understanding of
formative assessment. This was evident in the interviews, my observations of the planning
meetings and from the teaching practice in the classrooms. Justine demonstrated a
comprehensive understanding with her definition of formative assessment and her teaching
practice (as outlined below). When their understanding was explored further through
interviews, the teachers demonstrated a shared view that formative assessment can improve
student learning. The definitions given by the teachers and discussed below align with what
Clarke (2008) and the ARG (2002) identify as the key purpose of formative assessment; using
evidence from assessment to help the learner understand how to improve. Jessica summed up
how formative assessment is defined at Western College:
…what we use for our ongoing monitoring of where our kids are at, that allows
us to inform our practice whether it is the next day, the next month, the next unit
and it is the decisions we make next, based on the evidence gathered that allows
us to improve student performance.
Jessica demonstrated an understanding of Clarke’s (2001) view that formative assessment assists
in planning for learning; something that cannot be achieved if only summative approaches are
used. For Wendy, formative assessment helps her to ‘be specific and differentiate for the
students… (to) help them succeed in whatever outcome I have for them’. Wendy’s views on
formative assessment relate to the concept of convergent assessment (Torrance & Pryor, 2001).
This type of assessment involves the teacher setting the goals and focusing on whether a
student knows, understands or can achieve a certain objective. Torrance and Pryor (2001)
contend that this form of assessment has a role within schools, but it is important to balance it
with divergent assessment that focuses on how students think and is not necessarily aligned
with an objective based outcome. Divergent assessment is ‘potentially more powerful in
fostering the social and intellectual conditions in the classroom which would lead to enhanced
learning’ (Torrance & Pryor, 2001, p. 628). At MIS, there was more focus on convergent
assessment than a divergent approach. This typically included having students focus on
producing a product or a body of knowledge, rather than focusing on how a student might
learn.
Justine commented that formative assessment should focus on ‘finding out what students
know and can do and understand so you can provide the next learning activities from what they
don’t know, to what they can know’. Clearly, Justine had a very strong understanding of the
purpose of formative assessment and this informed the quality of her learning and teaching.
Justine also demonstrated her understanding at a deeper level when she talked about moving
from the unknown to the known through a series of learning activities. This definition is
supported by Popham’s (2008) view of learning progressions where the teacher identifies the
steps a student needs to take in order to reach a wider learning objective. In a similar situation
at MIS during the planning in phase 2 of the study, learning progressions in mathematics
became a strong focus in collaborative meetings and teachers found it to be an effective way to
plan. However, there was very little systematic discussion of learning progressions and very
little evidence of them in planning outside of the action research; they were more likely to be
taking place with some individual teachers across the school.
Formative assessment in the classrooms
The research at Western College included classroom observations of the three participating
teachers and of student learning. These observations were completed at the convenience of the
class teacher. Below is a summary and analysis of what was learnt from the classroom
observations.
Formative assessment in Justine’s class
Justine’s Grade 3 class was full of energy and the students showed an eagerness to learn. She
displayed a natural enthusiasm in her role as a class teacher and the students appeared to be
very motivated. There was evidence of previous student learning displayed on the walls,
which showed their use of the inquiry cycle as a way of making learning visible. There was
also evidence that peer-assessment had been used to support the learning with the learning
intentions being displayed. There was a close relationship between students and teacher and an
open environment that encouraged students to take risks in their learning. The first observed
lesson took place after lunch and although the students were tired, they showed a high level of
engagement during the lesson.
Focus on student learning
In Justine’s class, the students were attempting to understand how the skeletal system works.
From the very start of the lesson, it was evident that formative assessment strategies were
naturally implemented into the lesson. First, Justine asked, ‘how does the skeletal system
work?’ To help prompt student discussion, she had a list of 16 facts related to the skeletal
system on the interactive whiteboard. Justine had the students turn to their partner to discuss
the question, instead of asking for hands up. This resulted in a high-energy discussion and,
importantly, had all students engaged in the question. Justine was very clear with her purpose
for this deliberate approach to questioning. She acknowledged that the traditional method of
asking for hands up would have only some students participating in thinking of an answer to
the question. The rest typically leave it to the same few classmates who make the effort
to answer, and consequently lose interest in the lesson. Clarke (2008) states that all
students need to be thinking during teacher questioning. Justine said encouraging students to
turn to a partner:
…ensures everybody is engaged and has to do some thinking here. Learning is
never passive, it’s never just listening and I have got to be switched on as soon
as I have got into that classroom.
After teacher questioning, Justine explained to the students what they were learning to remind
those who had not yet made the connection from the beginning of the lesson. Justine
attempted to have the students make their own connection first, before telling them what they
were learning; an example of very deep engagement with formative assessment.
Peer learning and strategic questioning
In Justine’s class, I saw a deliberate and consistent approach to peer learning in place.
Students were given plenty of opportunities to be involved in talk partners (Clarke, 2008).
During this lesson, students were enthusiastically thinking through problems with
pre-established problem-solving partners. These partnerships had been set up at the start of
the year by matching two students together, a pairing both Justine and the students believed
would support learning. Clarke (2008) states that dialogic talk is a key element in recognising
a formative assessment culture, whereby students are actively involved in thinking. The class
practices had clearly moved away from what Alexander (2004) identifies as a teacher-
controlled environment. Instead, students were involved in collective, reciprocal, supportive
and cumulative discussions that helped them move forward in their learning, in this case, their
knowledge of the skeletal system. Justine sets up this style of dialogic talking because she
believes students can then:
…work well together and can have conversations… those partnerships work
well because they were set up for problem solving where they have to talk about
their learning and justify why they’re right and they know that if this person
solves the problem, the other person has to explain it.
Justine also noted that this was a form of peer-assessment:
…because they challenge each other and have to justify their opinion, you have
to know your stuff. You have to be able to verbalise it as it all helps with the
learning process.
When students moved towards their desks for the ‘activity’ part of the lesson, Justine had
students work with their problem-solving partner. Students found a spot in the room to discuss
their opinions and thoughts. This resulted in excited discussions, debating and justifying
responses. As this happened, Justine moved from pair to pair, prompting further discussion by
giving feedback through questioning. This was achieved systematically, acknowledging what
the students had achieved as a pair and then moving their focus to where they needed to improve
in order to meet the learning objective for the lesson. Emily used a similar approach for
feedback in phase 1 of the study at MIS. It was evident from both approaches that students
responded positively to acknowledgment from the teacher of what they had achieved and then
finding out what they needed to do next in order to improve. Glasson (2009) discusses the idea
that the type of feedback I observed being given by Justine and Emily needs to be direct,
immediate and constructive, without being negative or critical of the decision students have
made with respect to their learning. Both Justine and Emily demonstrated experience in this
approach to feedback. These findings reinforce the importance of very explicit attention being
given to formative assessment as a critical part of learning and teaching. Similar findings were
made in phase 1 of this study at MIS when we had developed this approach through our action
research.
Once students had made their decisions, Justine brought the class back as a whole group to
decide the most important fact related to the skeletal system. This gave the students the
opportunity to have their opinions heard by the class, and they concluded that ‘bones help
produce blood cells’ was the most important of a number of facts. This small snapshot from
one episode in the class demonstrated to me once again that
teaching is complex work, and for deep learning to occur, understanding of the importance of
various formative assessment strategies enriches the quality of teaching.
This lesson developed by Justine focused on achieving a learning objective set by the class
teacher. Torrance and Pryor (1998) describe this form of learning, in which students work
towards mastering a predetermined goal, as requiring convergent formative assessment. This
meant that Justine’s formative assessment strategies were built around students gaining the
knowledge that was predetermined.
Wiliam (2006) states that the key features of effective teaching include creating student
engagement in learning and guiding that learning to the appropriate goal through active
dialogue and reflection. When I was in Justine’s classroom, I saw student engagement being
developed through group and peer interaction and discussions, and when guided learning was
the focus, there was a distinct emphasis put on dialogue and active reflection. All students
were given an opportunity to succeed and the conversations were focused around the students,
not the teacher. In our action research at MIS, Emily and Harriet attempted to create the same
level of engagement through the use of talk partners and self and peer-assessment. In all
phases of the research, the use of these strategies empowered students to play an active role in
the assessment process.
Formative assessment in Wendy’s class
When I observed Wendy’s Grade 5 class, I noticed there was also a high level of active learner
engagement. Wendy was very conscious of using formative assessment strategies throughout
her lessons. She made a point of discussing with me after a lesson how she used the ‘think,
pair, share’ strategy, which provides the opportunity for all students to be involved in
thinking and sharing their learning with peers when answering questions from the teacher. Wendy
said she used ‘think, pair, share’ to make students think first, then
share their understanding in order to build upon their own knowledge. She believes that
discussions between students are a vital part of learning.
This strategy relates to Clarke’s (2008) discussion of the importance of dialogic talk. Clarke
sees this as a key indicator of a formative assessment culture. She argues that:
The dominance of constructive pupil ‘dialogic’ talk in a classroom is a key
identifier of a ‘formative assessment’ culture, in which pupils are actively
involved in thinking: effective pupil talk playing a central role in the philosophy
of citizenship, personalization and lifelong learning. (p. 35)
Wendy also discussed the different tools she uses to enable students to reflect on their learning
each day. During my observation, I saw her use a rating scale out of 5 to gauge how students felt
about the lesson. Wendy was very clear about having the students reflect on their learning,
commenting that:
If you don't reflect upon what you have learnt, then it is rote learning and I
don't believe that is the way we want students to grow up in the world.
When giving feedback to students on their learning, Wendy attempted to use the learning
intention and success criteria as the basis for her comments. Wendy was heard giving
feedback to the students such as: ‘have you looked at the Wilf on the board?’, ‘what does the
third Wilf ask you to do?’ and ‘I like how you have included this in your learning’ (referring to
a criterion listed on the whiteboard). Wendy’s comments related to the success criteria showed
her making deliberate decisions about how she gives feedback to the students. Wendy used her
questioning strategy to encourage students to assess their own learning against the criteria
provided. The ARG (2002) stresses the importance of communicating the assessment criteria
to students and having them engage in assessing their learning against the criteria. Using
questioning rather than simply telling encourages students to do the thinking. The use of
success criteria in Wendy’s class reinforced to me how successful the action research was at
MIS, particularly in Emily’s and my own class. Students’ positive responses to the feedback
given at both MIS and Western College reiterated the importance of giving explicit
feedback against clear criteria that the students know and understand.
Clarke (2005) and Glasson (2009) argue that students need to understand clearly what the
criteria mean in order to use them to support their learning; something which was also a strong
focus of the action research at MIS. Pryor & Crossouard (2007) argue that the teacher has the
understanding as an assessor and needs to pass that information on to the students. Developing
the criteria with the students through the analysis of exemplars provides students with a clear
understanding of the learning intention. By her own acknowledgment, this is an area in which
Wendy could still improve her practice. Before I arrived in the classroom for the observation, the success
criteria were already written on the whiteboard. However, watching the students as they were
working, I could see that not all students understood what each criterion meant. There was
confusion for some students about how they could be successful in the lesson. The students
had not been involved in developing the criteria, nor had they seen examples of what the
finished product could look like. After reflection on the lesson, Wendy noted that:
You have got your curriculum content that you need to cover and you have these
other skills that you want them to have as well. I know they are both equally
important, but in the end, I choose the curriculum content first.
Wendy appeared to have justified her choice to focus on covering content over learning
quality, but this does raise a concern for students who do not understand exactly what it is they
are supposed to be learning and why. Clarke (2008) argues that formative assessment
strategies are there to ensure that every student can be successful in their learning. The risk of
teaching in the style Wendy uses is that students will not truly understand successful learning.
Chappuis (2005) also argues that students need to see exemplars and work samples to show
them what successful learning might look like. At MIS, this was particularly successful in my
classroom when learning about genres. Students were given a range of examples from a single
genre and students explored and analysed what a successful genre might look like. Having
students evaluate work samples and defend their judgments helped students assess their own
learning against the developed success criteria. Effective formative assessment needs to be
interwoven through all of the learning and teaching (Earl, 2003). Although this took time in
my own class during the first action research phase, the outcome was valuable: students
possessed a clear understanding of what they were learning and were therefore able to self and
peer-assess. Both the action research and case study gave me a clearer understanding of
the importance of formative assessment and its power to deepen learning for students.
Formative assessment in Rachel’s class
Rachel raised a concern that assessment practices documented in the PYP and in particular at
Western College were not thorough enough to guide teachers effectively to focus on formative
assessment and learning. At the time, Rachel had almost 25 years’ experience teaching in the
Victorian education state system and believed the PYP does not give enough explicit direction
on how to assess. Although Rachel acknowledged that the school had recently done some
work to improve assessment practices, there had been no other professional development in
assessment in the three years she had been at the school. Rachel’s concern that the school
needed more assessment focused professional development is aligned with one of the ten
principles outlined by ARG (2002) for schools to adopt for formative assessment practices to
be effective:
Assessment for learning should be regarded as a key professional skill for
teachers.
Teachers require the professional knowledge and skills to: plan for assessment;
observe learning; analyse and interpret evidence of learning; give feedback to
learners and support learners in self-assessment. Teachers should be supported
in developing these skills through initial and continuing professional
development. (p. 2)
Along with continuous professional development, Rachel believes more time is needed for
planning, in particular, time for moderation between teachers.
When Rachel discussed her understanding of formative assessment, she focused on informal
formative assessment as ‘something that you do as you go along’. Although Rachel
was not explicit in what that ‘something’ was, she later mentioned that assessment as learning
is the assessment implemented as the unit develops.
The first lesson I observed in Rachel’s class did not match the strategies mentioned by Rachel
during the first interview. She was not well prepared for the lesson and it lacked the key
elements of a balanced formative assessment lesson. The students were writing about their
weekend. As there was no learning intention set during the lesson, there were also no success
criteria. The feedback given to the students ranged from telling them they were writing in
the wrong place to correcting spelling mistakes, full stops and capital letters. All of these are
important elements of effective writing, but it is not developmentally
appropriate for Grade 1 students to focus on all of them in the same
lesson. However, Rachel did show an understanding of formative assessment in her interview
through her summary of pre-assessments, her use of rubrics for formative and summative
purposes, and her emphasis on collaborative planning at the year level. But the practice I
observed did not match what was discussed in her interviews.
Curriculum coordinator’s perspective
Jessica provided thorough insight into the different assessment practices implemented at
Western College. With 15 years of experience of teaching at the school, Jessica saw this
research as an opportunity to reflect on the changes in practice she had seen and how the
school as a whole implemented formative assessment strategies. She has seen many teachers at
Western College progress from not understanding the difference between formative and
summative assessment to making assessment for improving student learning a part of their
everyday teaching:
I have watched assessment morph over the years from teachers not knowing
what summative or formative assessments are, to now a richness of that
language being a part and parcel of everyday practice.
Jessica believes the PYP planner assists teachers to identify formative and summative
assessment practices. However, with recent staff changes, she thinks the school needs to
revisit their assessment practices. It had been four years since their last whole-school professional
development on assessment. Both Jessica and Rachel considered that teachers require
professional learning on assessment practices and ongoing support; one of the guiding
principles outlined by the ARG (2002).
When asked what areas of strength were evident and what areas needed improvement, Jessica
said teachers use formative assessment well for mathematics, reading and writing. She said the
use of diagnostic assessment in English and mathematics is very effective in identifying
students’ current learning levels and for building on their learning. This is achieved in reading
through assessments such as the PROBE and PM Benchmarking; in writing through year level
moderation; and in mathematics through teachers implementing the Numeracy Interview Kit and
school-designed pre-assessments. By comparison, MIS had only PM Benchmarking
consistently used across the lower years of the school. Many teachers from MIS argued that
there was not enough consistency across year levels in assessments in English and
mathematics. Western College has more consistent assessment in place focused on improving
student learning. Jessica believes most teachers know how to gather data on students’ learning
in English and mathematics and how to use that data to move their learning forward. She now
thinks the school is ready to develop the use of formative assessment ‘check points’ during
units of inquiry to improve student learning of the PYP five essential elements (knowledge,
concepts, attitudes, skills and action):
We are still on a learning curve on using formative assessment well in our units
of inquiry. I think we are focused on our summative task and we know what we
want to do there, but I think through our inquiry lens, we can now focus on
more than the stages of the inquiry cycle, and look at what this learning
engagement tells us about students understanding of the five essential elements.
For formative assessment to be effective, teachers must use it to inform their teaching to
improve student learning (Absolum, 2010; Black & Wiliam, 1998b; Clarke, 2001; Glasson,
2009; Tomlinson, 2014). During a unit of inquiry where the outcomes can be broader in their
design and students have more influence on how the unit of inquiry develops than in a
traditional approach to teaching, teachers need to look closely at the learning students are
involved in to determine the next phase of learning. This involves regularly assessing student
work as an individual teacher and as a grade level. Jessica acknowledges that this is a
generalisation that changes with different teachers, units and discussions. Her concern centres
on the use of formative assessment: teachers should not only work through
the inquiry cycle but also think more deeply about planning for formative
assessment. Wiggins and McTighe (2005) argue that teachers should view themselves as
assessors rather than as activity planners. This is an important change of mindset for teachers
to ensure they take a backward design approach to teaching, so that formative assessment
informs the decisions they make (Wiggins & McTighe, 2005).
Jessica believes the planning of learning intentions has ensured teachers are thinking more
about why and what they are teaching. She commented that:
…there is evidence of learning intentions and a much more focused look at why
we are doing what we are doing which has made our teachers much more
aware of what they are doing.
Through initiatives implemented by Jessica, Western College has also tried to improve the use
of diagnostic assessments within the school. This has been achieved by streamlining and
structuring diagnostic assessment tools across the academic year, to ensure the needs of each
cohort of students are met. Jessica commented that:
… (for the) last few years, we have tried to up the ante at the start of every year
of our assessments to look what does it mean for this cohort of kids? If I am
grouping them, this way and that way, how do I work this group, how do I work
with that group? How do I know that my teaching is hitting the mark? Our
teachers need to be better at that. As an example, when you have had a group
of teachers who have done a SWISS spelling assessment and all they have done
is mark it, despite the fact they have been told and shown the analysis tool that
will show them where the kids are at and what that means. When you see
teachers not doing this, it makes you think as a coordinator that I need to help
support these teachers in using a diagnostic tool.
Popham (2003) argues that these sorts of assessments serve a useful purpose if administered
with the right intentions: they should target small curricular aims, give clear descriptions of
what is being assessed, and have the information passed back to inform the student’s learning. Western
College applied diagnostic assessments covering a range of curricular aims and Jessica
believes all teachers understand what it is they are trying to assess. It is left up to the
individual teachers how they pass that information back to the students. Although Jessica feels
that individual teachers still need support with certain assessment tools to ensure they are used
formatively, she believes it is a significant improvement from when she first started at Western
College. She recalled that:
When I was first here 15 years ago, we were not assessing anything formatively.
I think what we do now has a purpose – at the start of the year with preps and
ones using the observation survey, phonemic awareness, concepts of print, early
numeracy survey - a one-on-one conversation with students. These are just
some examples of diagnostic assessment tools we now use.
Under Jessica’s guidance, Western College appeared to be strategically choosing what
diagnostic assessments they implemented and how they might impact on student learning.
Black and Wiliam (1998b) put the onus on schools to think about the use of assessment and
what impact it has on student learning. Tomlinson (2014) agrees that teachers need to be
thinking about formative assessments long before they are given to students. At MIS, students
were given pre-assessments before the start of the unit of inquiry, but these were rarely used to
inform subsequent stages of learning. Often, this was due to the content of the assessment
being too abstract for teachers to use to inform next planning steps. At Western College,
Jessica was carefully considering what diagnostic assessments they should use. However, as
Jessica pointed out, unless ‘teachers have a belief that the assessment they are using’ will
inform their teaching, the assessment has little value to the student learning.
The implementation of formative assessment strategies at Western College
The formative assessment strategies of stating learning intentions, developing success criteria,
effective teacher questioning and feedback and self and peer-assessment were a key part of the
action research in phase 1 of my study at MIS. In order to deepen my reflective understanding
of how teachers implement and collaborate to embed formative assessment at MIS, I focused
on data collection and analysis of the same strategies in the case study school, as well as any
additional assessment strategies that were implemented at Western College.
Learning intentions
Three years prior to the data collection at Western College, Justine was acting PYP coordinator
and used the opportunity to implement the learning intentions and success criteria according to
Clarke’s (2001) development of Walt (We are learning to…) and Wilf (What I am looking
for…). This was similar to the approach that Emily had developed at MIS in her class in phase
1 of the action research. At MIS, Emily used Wilf and Walt puppets as visual reminders of
what younger students were learning. Learning intentions at Western College have become a
regular strategy discussed at planning meetings and are evident in every classroom. Glasson
(2009) supports planning of learning intentions as it ‘is the basis of everything that follows in
the lesson or series of lessons’ (p. 10). Clarke (2001) argues learning intentions should be
agreed upon in the early stages of planning to make ‘them as unambiguous and clear as
possible’ (p. 10) for students. Jessica noted the impact that introducing learning intentions had
at Western College:
When learning intentions were introduced, it was a very powerful ‘ah ha’
moment for many teachers. It was embraced and I think once they saw that it
has a powerful spin off in terms of the students being able tune in and know
what the lesson was about instead of keeping the students in the dark. Teachers
could really see the value of it.
In most of the lessons I observed at Western College, the learning intentions and success
criteria were evident. When learning was displayed on the wall, there was typically a learning
intention on show as well. During one of Wendy’s lessons, she was using Wilf and Walt for
the learning intention and success criteria. Wendy wrote the success criteria on the whiteboard
while verbalising it to the class. For students who have ‘spatial trouble’ identifying the
different criteria on the board, she wrote each criterion in a different colour. Throughout the
lesson, Wendy attempted to keep the feedback related to Wilf and Walt. My observations from
the other classes I observed supported Jessica’s comment that:
Learning intentions are in evidence in every classroom. We don’t prescribe
that they are used for every lesson or all the time. But at the teachers discretion
they are there to support and enhance learning and this has had a very positive
benefit.
However, Jessica stresses the importance of differentiating between the learning activity and the
overall learning intention. Clarke (2008) identifies this as a challenge for many teachers. She
argues that it is important for teachers to identify the learning intention so students are able to
‘transfer skills within and across subjects’ (p. 87). Clarke (2008) also points out that using
language about the learning objective rather than the context increases the pace of the lesson,
reduces the revisiting of topics, and leads to students identifying where skills can be
transferred, thereby setting them up to be successful lifelong learners.
Evidence of Justine’s deep thinking about formative assessment can be seen in her belief that
stating the learning intention can at times limit some students’ thinking and not allow them to
develop their own understanding of the content of a lesson. She commented that:
…if you don’t allow the children to connect the dots and in a sense come up
with their own learning intention, you do not give enough of an opportunity for
higher order thinking.
Encouraging students to create the connection between the activity and what it is they are
learning encourages them to understand the purpose of their learning and also allows for a
differentiated curriculum, as students might make different learning connections in the lesson.
This would be particularly effective in a concept-driven curriculum like the PYP. At the end of
a lesson, or after an introduction to the lesson, Justine would bring the students back together
as a whole group to find out how much they understood. This was evident in the
lessons observed in Justine’s class and she also spoke about this approach during her interview.
Justine builds in time for students to develop their understanding of what they are learning and
to make connections to the world around them, by asking students what they think is the bigger
picture and what they thought they were learning. She said this:
Helps them develop their own learning intention that is more personal to them
and their stage of learning. This can be particularly important for learners of
mixed ability. Gifted and talented learners, for example, can become frustrated
and disengaged when they are held at one level in classrooms.
Justine wants to avoid single-outcome-based lessons. Outcome-based assessment is aligned
with what Torrance and Pryor (1998) describe as convergent assessment, in which the focus of
the lesson is on students achieving a single pre-determined skill, knowledge or understanding.
Justine’s thinking appears to connect more with the notion of divergent assessment where the
focus is on how the student thinks and how they form new ideas. Similar to Western College,
the use of learning intentions improved significantly across MIS when it became a strategic
focus for the school. This was evident in phase 1 of the action research both in the classrooms
of the participating teachers and other teachers across MIS. As a result, according to the PYP
coordinator and vice principal, there was more talk and discussion at MIS about learning than
there had been previously.
Success criteria
Developing success criteria with the students was an area of formative assessment that
appeared to be implemented in different areas of Western College. Jessica believes the
development of success criteria is a strong area at Western College:
The rubric is very well developed here and student input in that development is
very strong and more so when teachers realise the importance of student
involvement and the power of involving the students. When you create it
together, you have the motivation.
Jessica’s perception of student motivation is strongly supported by research; for instance, in the
ARG’s (2002) Research-based principles of assessment for learning to guide classroom
practice, fostering motivation is a key factor. Towler and Broadfoot (1992) state that sharing
the responsibility of learning with the students helps them understand what is expected of
them, improves their motivation and leads to a sense of pride in positive achievements. Dweck
(2006) argues that motivation is one of the most important factors in determining the long-term
success of a student. Jessica believes ‘some of our more outstanding teachers who have the
students’ interest at the centre, are very much looking at what this will mean for the child’. It
is these teachers who understand the connection between student motivation and formative
assessment and thus attempt to embed this within their teaching.
One challenge that makes developing the success criteria with the students difficult is finding
the time for this to take effect. Wendy (Western College) and Emily (MIS) both felt that the
tension between covering the curriculum content and taking time to develop the success criteria
was a concern they regularly grappled with at school. However, as Wendy had previously said, she believed it
was more important to get through the content of lessons than take the time to involve students
in the assessment process. This approach displayed by Wendy raises concerns about what she
values for learning. Wendy showed strong dedication to her students but is still developing an
understanding of the importance of the approaches to learning students need to develop.
Clarke (2008) and Glasson (2009) argue strongly that students need to be involved in the
development of success criteria, and that the time spent on this is repaid by the benefits of
students being involved in the assessment process. Whilst it does take time for effective
success criteria to be developed, Clarke (2008) states there is less time wasted afterwards
because students are clear about what is required of them. The teacher’s time is freed up, giving
them more time to listen to children and provide timely feedback.
Justine took a balanced approach to developing success criteria with the students, as she
discussed the importance of ‘students have a clear understanding of what they are learning’.
For a lesson with closed success criteria that may only take one or two lessons to successfully
complete, Justine would sometimes provide the criteria for the students. For more open-ended
learning intentions that may be revisited, Justine would involve the students in the process.
She would explain to the students ‘this is what we are learning, how might it look?’ It is
through the second part of the question that Justine helps students develop the rubric. The
process of constructing the success criteria is one that Justine values highly, and this is
evident in the way that she allows time for it to be developed.
At Western College, Rachel saw modelling as an important part of developing success criteria.
She believes that:
It is always good to model these things to the kids, as they become a part of
formative assessment so that you are building up to what are the expectations
for the kids.
Chappuis (2005) agrees that work samples show students what is to be expected of them,
particularly when students are given an opportunity to analyse what makes them successful or
unsuccessful. Rachel acknowledges that building up expectations for the students by setting
up success criteria takes time, but student learning becomes far more effective with it in place.
Rachel links together the learning intention (expectations) and the success criteria (modelling)
identified by Glasson (2009) and Clarke (2005) as imperative to the success of student
learning. This emphasises the need for teachers to see the role of assessment as a part of the
teaching and the learning for the student. Klenowski (2010) argues that the ‘role of assessment
is an integrated one, involving both teachers and students in a mutual responsible, symbiotic
and potentially productive relationship’ (p. 5). At MIS, the action research process in phase 1
sharpened the teacher participants’ views on the important role of formative assessment.
Wendy made reference to teachers needing to understand the curriculum outcomes that are
taught and to be explicit in the teaching of certain outcomes; a connection to the learning
intention and success criteria outlined by Clarke (2008) and Glasson (2009). Clarke (2008)
argues that stating the learning objective and the steps needed to be successful are the tools that
facilitate students to have power over their learning. Once a student knows where they need
help and what they can do to learn, they can take control of their learning. The power of
control in the classroom shifts from the teacher to the student. This was also evident in the
process at MIS. My experience at Western College reinforced the importance of stating the
learning intention and developing the success criteria with the students to ensure they play a
critical role in the learning process.
Effective teacher questioning
During the interviews with the class teachers, questioning was the least talked about area of
formative assessment at Western College. Similarly, during the action research at MIS, it
received a limited focus for development in comparison with the other formative assessment
strategies. Jessica said that teacher questioning had been a focal point for professional
development a ‘few years ago’. This was to move teachers away from closed questions
focused only on gathering factual information from the students and toward conceptual, broad
and therefore more open-ended teacher questioning. Although not explicitly stated by Jessica,
this aligns with Glasson’s (2009) argument that teacher questioning should aim to engage
students in higher order thinking skills such as analysis, synthesis, evaluation and
application. Jessica noted that when teachers use open-ended questioning effectively, teachers
‘discover that it opens up a depth of inquiry and takes them beyond the closed questions’.
However, Jessica acknowledged this is an area that could be revisited as a professional
development focus.
Wendy sees teacher questioning not just as eliciting factual information from students, but as
stimulating their creative thinking in relation to a unit of inquiry. Wendy acknowledged that it
was not common practice for teachers to plan teacher questions collaboratively; she planned
her own questions in isolation. Justine sees effective questioning
as a ‘series of questions’ to elicit student responses and build upon their understanding.
Justine’s use of questioning is supported by Glasson (2009), who argues that when students
respond with an answer that ‘seems to be “on the right track”, but is not as fully developed (the
teacher might ask) …prompting questions to encourage them to provide more explanation’ (p.
47). By comparison, Rachel discussed the questions students were asking themselves,
therefore not making the connection to questioning as a formative assessment strategy. She
used the example of a ‘question chair’ students sit on when wanting to ask a presenter a
question. When I asked Rachel to elaborate, she became distracted and discussed it in response
to questions related to research reports. This limited response reinforced Jessica’s view that
further professional development is needed. Rachel is a new member of staff who needs to
develop her understanding of how teacher questioning can extend student learning.
Other teachers showed they understood the purpose of teacher questioning as identified by
Glasson (2009) and Clarke (2005) to extend student thinking. Although I only observed one
planning lesson per team, Jessica’s and Wendy’s view that it is not a strong point for the
school was reinforced, as teacher questions were either not spoken about or only briefly
touched upon in planning. Black and Wiliam (1998b) argue that teacher questioning is not
considered enough in the planning of formative assessment strategies. One of the challenges
both Western College and MIS face as IB schools is that the PYP planner (see Appendix 1)
only has teachers collaboratively planning open-ended teacher questions at the beginning of a
unit of inquiry, not as student learning changes throughout the unit. Therefore, during
collaborative planning meetings teachers need to have their own system in place to ensure they
continue to plan appropriate teacher questions, thus matching students’ learning needs and
aiming to extend their thinking.
Teacher feedback
Jessica believes teacher feedback is becoming more effective as:
…our teachers are beginning to realise that there is no point sitting at home
writing comments on work and expecting there to be relevance when it is
handed back to the student. Feedback is in the here and the now, needs to be
individualised targets and focused for that child.
Jessica said that Western College teachers are well aware of the impact that immediate
feedback has on students, and how it needs to be targeted and focused on individual students.
Jessica also believes this is an area where the school has had great success in improving
student learning.
During the interviews, teachers gave more examples of how and where they had used feedback
to improve student learning than for any other strategy. For Justine, teacher
feedback is about taking the time to sit and have conversations with the students about what
they know and understand in order to identify what the student needs to do next to improve
their learning. Clarke (2003) argues that oral feedback is the most natural and frequent
feedback a student experiences and is most effective when ‘it is tailor-made and powerful in
meeting the needs of the child’ (p. 55). In any classroom, a teacher’s feedback has the ability
to create a culture that embraces speaking freely about learning, thereby opening students
up to accepting feedback from teachers and peers (Clarke, 2003).
Justine’s use of feedback in the classroom and what she discussed during her interviews is akin
to what Clarke (2003) calls ‘celebrating challenge’ (p. 55). Justine’s feedback reveals that she
embraces students being challenged but avoids the comparison of students. In the classroom,
Justine attempts to use her feedback as guidance. Her feedback to students is likened to what
Pryor and Crossouard (2008) call ‘exploratory, provisional or provocative, prompting further
engagement rather than correcting mistakes’ (p. 4). During the observations, Justine’s
feedback was not explicit in telling students what the next step was; rather, she chose to use
guiding questions for students to determine where they needed to go next in their learning and
how to get there. Questions like ‘what do you think comes next, why have you
put … (that) there, what do you think of …’ encouraged her students to determine the next
steps in learning. Much of the research on feedback considers the importance of ensuring
students know where they are and where they need to go, but very little research discusses how
teachers need to get students to the next level of their learning. Both Emily and Justine chose
to use guided questioning to extend students. Justine believed this develops higher order
thinking skills, as students are encouraged to think for themselves. Justine put emphasis on
how that feedback was given. Wiggins (1993) argues that teachers need to think about their
method of giving feedback to students. Justine’s approach to feedback offers an interesting
consideration for future research in this area.
Rachel said that she focused mainly on oral and informal feedback with respect to what the
expectations are and how the student’s learning compares with that expectation. There is a
clear connection to the learning intention and success criteria through what Rachel described as
the ‘expectation’. With this approach, Rachel can provide clear and concise feedback and
avoid being distracted by other factors like handwriting or student behaviour (Glasson, 2009).
However, this description given during the interview did not always match what I observed in
her classroom.
Although Jessica believes teachers are correcting and providing written feedback on
student learning less often away from the class, Wendy still sees value in doing so. Wendy
prefers to give immediate feedback to the students orally so that it is ‘instant’, but like many of
Wendy’s approaches to teaching, she takes a pragmatic view. In her class of 26 students,
Wendy accepts that she cannot always give the required, immediate feedback that research
recommends (Black & Wiliam, 1998b; Clarke 2005; Glasson, 2009). While she acknowledges
that this is not ideal, she attempts to make effective use of the feedback she gives at home.
She makes a comment about one area that was successful, one area that needs to improve, and
a comment of encouragement. She attempts to provide the feedback back to the students as
soon as possible and allow time for students to read her comments. Earl (2003) and Wiggins
(1993) raise the concern that written feedback can come in the form of non-specific short
comments that leave students uncertain how to move forward in their learning. However, the
manner in which Wendy approaches written feedback is aligned with Clarke’s (2003, p. 95)
recommendation of highlighting a success of the learning and at the same time offering an
improvement, either through elaborating and extending (tell the reader more), adding a word
or sentence (add one word), changing the text (find a better word) or justifying (why).
Importantly for Wendy or any teacher choosing to conduct assessments away from the student,
there needs to be a purpose to the marking. Clarke (2003) argues that purpose should be
focused around the learning intention and success criteria. With a supportive and encouraging
environment, Wendy could consider the paired marking that Clarke (2003) suggests, which
involves a peer identifying the positive aspects first, then success against the learning
intention, and finally where improvement against the learning intention could take place. The
key to this approach to feedback is that the steps are done one at a time to consolidate
students’ analytical and evaluative skills.
Self-assessment
Jessica believes that self-assessment has been incorporated effectively into the learning,
assessing and teaching for ‘most classes’. There is built-in reflection time for students to
encourage them to think about what they have learnt and how they could improve their
learning. Scott (2000) argues that students need to learn explicitly how to think about their
thinking: a critical part of self-assessment. He states this should go beyond cognitive
development to include reflecting on attitudes and affective behaviour. During the
observations, I noticed that Justine and Wendy used a range of reflective strategies as self-
assessment tools in their classrooms. Justine asked her students to reflect upon how well they
worked with their partner and Wendy encouraged her students to rate their performance and
level of learning for a particular lesson. Self-reflection plays an important role in helping
students move from relying on teacher feedback to self-monitoring (Sadler, 1998). Providing
opportunities for students to be involved in the assessment process will support them in
understanding their own learning strategies and to take further responsibility for their own
learning (Elwood & Klenowski, 2002).
Justine provided practical examples of where she believed she had used self-assessment
effectively in her class. When she was a prep teacher, Justine felt her students had an attitude
of doing everything right and ‘really well’. Justine aimed to shift students to think about what
they were learning at school, and therefore to become a ‘learner’. This meant that students at
this age complete a task while thinking about what they might be learning and how well they
have met the learning task. For example, during a mathematics unit on money, Justine showed
the students a list of coins and they had to identify which coins they knew. When students
realised there were coins they did not know, instead of telling the students what they were
going to do next, Justine asked the class, ‘what learning needs to happen from here?’ Students
could identify exactly what they needed to learn next and this shaped the future learning for the
unit. At the end of the unit, the same activity was completed again and they could see what
they had learnt. Justine found this to be a ‘powerful’ moment for the students. For many
students, it was the first time they had evidence and recognised how their learning had
progressed. As a result of this, Justine confidently said:
It was one of the best sequences of lessons and the best way to get them (the
students) to critically look at themselves in a realistic fashion, without
destroying their love for learning.
Justine has found that in teaching Grade 3, her aim shifted from focusing on what students are
learning to how they are learning. This meant that the focus of self-assessment moved from
the content of the lesson to students thinking about their thinking: meta-cognition. Justine
attempts to do this through the IB’s learner profile attributes and attitudes (International
Baccalaureate Organization, 2007), linking back to what Scott (2000) discussed regarding
self-reflection needing to go beyond
just cognitive domains. Ultimately, self-assessment is about students managing their own
learning (Glasson, 2009). Earl (2003) argues that teachers have a responsibility for creating
environments that allow students to become ‘confident, competent self-assessors who monitor
their own learning’ (p. 103). Justine attempts to instill this into her students from a young age
in the hope that it becomes what Costa (1989) argues is imperative to student success:
We must constantly remind ourselves that the ultimate purpose of evaluation is
to enable students to evaluate themselves. Educators may have been practicing
this skill to the exclusion of learners. We need to shift part of this responsibility
to students. Fostering students’ ability to direct and redirect themselves must
be a major goal – or what is education for? (p. 2)
Justine believes strongly in the use of self-assessment but still sees areas of improvement,
particularly when she has a ‘behaviourally challenging’ class. Creating the habits of mind or
having students thinking about their learning involves giving students space and choice in how
they want to approach their learning. Justine accepts that this is more challenging when
student behaviour becomes a factor, but it is a challenge worth the teacher’s time and energy.
Wendy also identifies self-assessment as a skill to help students become life-long learners. She
believes:
If you don't reflect upon what you have learnt, then it is rote learning and I
don't believe that’s the way we want students to grow up in the world.
Wendy discussed a range of tools that she uses with the students to encourage the meta-
cognition of reflecting on their learning and how they are learning. These included an informal
check at the start and end of lessons using a number system (1 means ‘I completely
understand’, 5 means ‘I don’t understand at all’); think, pair and share; 3 facts, 2
understandings and 1 connection; and rating their learning at the end of a unit or lesson. Earl
(2003) argues students need to be engaged in meaningful self-assessment for them to become
‘self-starting and self-motivated lifelong learners’ (p. 101).
Wendy found that most self-assessment was informal assessment at the end of lessons. She did
not give the students much time during the lesson to engage in assessing their own learning to
make adjustments. However, the research on self-assessment shows students need to move
beyond only identifying what they have learnt. Students need to reflect on their learning and
determine where they are going in their learning and how they might get there (Clarke, 2005;
Earl, 2003; Glasson, 2009). It concerned Wendy that she did not utilise formative assessment
enough with her class. She saw further opportunities to incorporate self-assessment with the
student portfolios and she believed that informal self-assessment is important and could be
used more often with her class. Rachel also discussed the use of portfolios to assist with self-
assessment where students reflect upon what they have and have not yet achieved and set goals
for their next targets. Rachel emphasised the fact that not only does the goal need to be set, but
students also need to establish how they are going to meet their goal. Rachel assisted this through
asking questions of the students such as ‘how are you going to meet this goal, what are you
going to need support with, what will my role (as the teacher) be to help you meet the goal?’
At MIS, the teachers had a similar belief in the importance of self-assessment after the action
research. The implementation of self-assessment had a significant role in activating students as
owners of their learning. Between the two schools, there were a range of self-assessment
strategies implemented in the classroom to support learning after the action research in phase 1
and 2. At Western College, self-assessment was incorporated into the learning and teaching at
a young age, an approach advocated by Clarke (2005), who argues that young children are
more motivated to improve their learning and are prouder of their efforts after making an
improvement.
Peer-assessment
The physical environments in the classrooms are all configured in a manner conducive to
collaborative learning in groups, which Glasson (2009) argues is needed to create a culture
that encourages peer-assessment. On entering each classroom, I noticed an atmosphere of
cooperation, support and trust that Sullivan (2002) attributes to a positive
culture of peer-assessment. This was evidenced by the use of physical space and the
interactions between students and their teachers. This was similar to Emily and Harriet’s class
at MIS where they both established a positive culture in their classroom. It is, however,
important to be clear that a positive culture alone will not lead to effective use of peer-
assessment. Overall, during the observations and interviews it was difficult to determine how
peer-assessment was used in the classroom to support student learning. When I discussed this
with Jessica, she said she believed the ‘teachers have a range of strategies to assess including
teacher, self and peer-assessment’. Examples of peer-assessment given by the teachers included
Wendy’s discussion of how she liked to partner a student who is confident in a learning task
with another student needing support. The confident student explained the tasks in ‘student
language’, freeing up Wendy’s time to focus on feedback for other students. Wendy believes
this is effective, and the configuration of desks in her classroom allowed it to occur
effortlessly. Rachel briefly
discussed the use of peer-assessment with a student established rubric to inform student
learning. Justine involves the students in regular peer learning where ‘everyone needs to be
doing the thinking; learning is never passive’. She believes having students learning together
means students have to justify their responses, be knowledgeable about what they are
discussing, and they can support each other in alleviating misconceptions early in the learning
process.
However, it is apparent that Western College and MIS could further develop their practice in
establishing how effective peer-assessment can be implemented in classes. The planning
Emily put into the use of peer-assessment in her lessons during the action research would be a
very good model for teachers at both MIS and Western College, as she strategically planned
when peer-assessment would take place and importantly, modelled the behaviour needed for
students to make use of this strategy. Glasson (2009) argues for modelling of the required
behaviours of peer-assessment, particularly drawing ‘attention to the language that they
(students) use’ (p. 80). As both schools had class sizes of over 20, effective peer-assessment
could help students receive more feedback throughout the day. This would put the students at
the forefront of the assessment process and give them more responsibility for their learning.
Formative assessment and collaboration
Since the aim of my case study of Western College was to provide comparative data to enable
me to better reflect on what happened at MIS, a particular focus of my observations was
evidence of the structures of collaboration through a PLT. It was mainly in year level planning
meetings that I observed how teams collaborated to plan for formative assessment in their
classrooms.
Western College had a systematic approach to planning. Teachers met once a week for
curriculum meetings and the curriculum coordinator regularly attended to support teachers.
Curriculum meetings and administration meetings were separated and held on different days of
the week. All teams spoke only of curriculum and learning and teaching related topics during
the meeting. The concise agenda was constructed deliberately to ensure teachers had time to
discuss learning, assessing and teaching in an in-depth manner. The action research at MIS
began to put student learning as the focus of discussions for planning meetings. The
implementation of PLTs was providing the systematic collaborative approach MIS needed to
ensure student learning stayed as the focus of planning.
As would be the case in most PYP schools, the meetings I observed focused on planning the
units of inquiry. It was at the beginning of a new term and the three meetings were all
beginning new units of inquiry. This meant there was a focus on the planning of assessment
and learning tasks (see Appendix 1). The PYP (International Baccalaureate Organization,
2007) approach involves teachers planning the possible ways of assessing student learning
throughout the unit. This ensures formative assessment is at the forefront of planning and
assists teachers to think about what will be assessed throughout the unit and the learning
experiences students may be engaged in. However, it can be confusing and frustrating for
teachers to think about formative assessment strategies individually. Teachers who naturally
plan their learning activities as formative assessment strategies to inform the next stage of
learning argue that learning activities and formative assessment cannot be separated. Earl
(2003) agrees that assessment should not be treated as a separate entity from the learning and
teaching. This is also emphasised by Wiggins and McTighe (2005) who advocate for teachers
to think as assessors when planning for student learning.
Justine believes planning formative assessment strategies is important to establish effective
learning habits in the classroom. Justine argues she needs to make:
…the time to have the conversation that you need to. It is not so much the
formative assessment task, so much as what you do with it afterward. It is not
going to change anything in the classroom unless you look at what it is telling
you. It is from there and from that discussion that is important for
collaboration and getting that understanding and then knowing what to do next
and how to do it.
Justine shows a strong connection to formative assessment and the importance of
collaboration. Justine’s planning meeting that I observed supported this belief. Her year level
was beginning a new unit of inquiry on body systems and the first point of discussion was the
pre-assessment the students had undertaken a couple of days prior to the meeting. The analysis
included discussions regarding student knowledge of body systems, misconceptions they had,
and questions students were interested in exploring. From this information, the team planned
for their next stages of learning. Justine and her team utilised their pre-assessment data to
inform their next phase of learning and teaching, a very important step in using formative
assessment to improve student learning.
The stance Justine takes on collaborative conversations about student learning means teams
need to be meeting regularly to establish what has been learnt from the learning activities and
how to move forward to help student learning. Western College’s weekly curriculum meetings
would give teachers time to engage in the necessary discussions outlined by Justine. However,
for this to be effective, teachers need to have student learning as the starting point for regular
discussions, identify the learning goals students need to meet, and outline how students
might get there. DuFour and Marzano (2011) argue that for teams to be effective, they need to
be focused on all students’ learning through a collaborative approach centred on student
results.
Leadership and collaboration
Fullan (2011a) argues that the research ‘has been clear and consistent for the past 30 years’ (p.
2) that a collaborative culture focusing on improving teacher instruction, that learns from one
another, and is well supported by school principals, results in better performing students. The
case study showed these factors contributing to success at Western College. Jessica took on
distributed leadership with the support from her principal. She saw it as her responsibility to
help teachers understand and believe in the importance of formative assessment:
Unless they (teachers) understand it themselves, unless they see the value in it,
it’s not going to change. So (in my role) it is about tapping into teachers’ need
to feel they own it.
Jessica realises that teachers will be at different stages with their understanding of curriculum
and pedagogy; an important acknowledgment from someone in leadership. Teachers do not
apply a ‘one-size-fits-all’ approach to students and this needs to be considered when leaders of
a school challenge teachers to improve their own practice. She sees the need to challenge
teachers to help them grow professionally and the benefit this has on student learning.
All of the Western College participating teachers believe the school’s leadership team supports
them. Justine feels teams ‘generally get the support needed’. When asked to elaborate, she
spoke of support in team meetings, access to resources and being trusted to make decisions as
educators. The key themes that came from teachers included respect as a professional (Hord,
1997), trust, involvement in curriculum and the provision of resources (Marzano, Waters &
McNulty, 2005). There were also examples of teacher leadership that enhanced collaboration.
For example, teachers with expertise in literacy or mathematics support teams by analysing
diagnostic assessments and using that information to improve student learning. Justine, as
mathematics coordinator, includes in her responsibilities supporting teams in analysing
mathematics pre-assessments and building teacher capacity by helping teachers understand
how to improve student learning with the pre-assessment information. Jessica believes, ‘we
are getting better at using our individual needs staff to help support other teachers’. The
responses from both the teachers and Jessica regarding leadership and collaboration showed
that Western College leaders understand that teachers need to have the capacity to respond to
student learning (Snyder, Acker-Hocevar & Snyder, 1996) and enlist the support of other
leaders within the school to work with teachers to build capacity (DuFour & Marzano, 2011).
Developing teacher beliefs and knowledge about formative assessment
The interviews and discussions at Western College identified two key areas for teachers to
develop formative assessment strategies further through collaboration: teacher beliefs and
teacher knowledge. Jessica is focused on changing teacher beliefs. She sees intrinsic
motivation and believing in the value of formative assessment as a key factor in teachers
improving their assessment practices. Fullan (2011b) says that motivating change is about
providing experiences that teachers find intrinsically fulfilling. He argues that it is not
irrefutable evidence or inspiring visions that motivate teachers; rather, it is becoming more
effective and believing in what they are doing that will cause change. Jessica’s aim
is to provide the environment the teachers need to believe that formative assessment can
improve learning.
All four participants acknowledged that educators need to have a certain level of knowledge of
formative assessment before they can begin to plan formatively. Although each teacher had an
individual path to how they acquired their knowledge, they agreed that personal and intrinsic
motivation to learn and understand drives them to improve their practice. They did this by
seeking their own professional development, reading, pedagogical discussions and undergoing
further study. Their conclusions stressed the importance of teachers working with colleagues
in a collaborative way, to try out different practices and adapt to the needs of the students in a
supportive collaborative environment. Justine sees the complexities in building formative
assessment as including building teachers’:
…thorough knowledge of learning outcomes. You need to know exactly what
you are aiming for by the end of the year… in order to know where you are
headed and where the students are up to in their learning… You also need your
knowledge of child development and ages and stages and what’s appropriate
for the teaching age. All that has to be in place before teachers can
comfortably figure out formative assessment.
Conclusion
This case study provided deep insights into how teachers at Western College planned,
developed and implemented formative assessment in their classrooms. For me, it provided the
anticipated insights that enhanced my reflections on similar processes at MIS and my capacity
to draw further conclusions about what was achieved in the action research process. Although
the two schools were in very different national contexts, there were many similarities in their
approaches to developing formative assessment strategies through planning, and what was
implemented in the classroom. The findings from the study at Western College demonstrate
that formative assessment is being implemented successfully because of shared expertise and
a thorough, explicit focus on many different strategies planned through collaboration, which
enhances the quality of the strategies implemented. The whole
school approach to developing formative assessment strategies at Western College builds a
strong basis for formative assessment to play an integral part in the teaching and learning in the
classroom. Clarke (2008) argues strongly that effective formative assessment ‘…involves
teachers learning not only about formative assessment, but also the role of pupils as learners
and establishing a climate for learning’ (p. 168).
At MIS during the action research and at Western College during the time I collected data,
there was evidence that the use of learning intentions, developing success criteria, teacher
feedback and self and peer-assessment all led to deeper learning and understanding from the
students. Allowing students greater ownership over their learning was an important part in the
assessment process for both schools. This created a higher level of engagement in learning
from the students and gave them a clearer understanding of what and why they were learning.
Western College has a leadership approach focused on establishing consistency across the
school for implementation and planning. Although MIS has leadership supportive of the
implementation of PLTs, it did not provide the instructional leadership needed for consistency
across the whole school during the period of my study. At MIS, the small group of participants
in the action research had deep conversations about their teaching and learning approaches, but
this opportunity was not yet being shared across the school in the way Western College’s
collaborative effort allowed. Western College has a systematic approach to organising
teachers into what DuFour and Marzano (2011) describe as ‘meaningful collaborative teams in
which members work interdependently to achieve common goals for which they are mutually
accountable’ (p. 24). This was evident through the trust, open dialogue, de-privatised practice,
and reflective practice approach each team built (Hord, 1997; Stoll et al., 2006).
With respect to both schools, there are further developments that could be embedded to
continue developing the collaborative planning of formative assessment. This could include a
stronger use of student data to determine next steps in learning and ensuring that all decision
making during planning meetings is based on the needs of all students. However, what is
evident in the work of both schools is the key ARG (2002) principle that ‘assessment for
learning (formative assessment) should be part of effective planning of teaching and learning’
(p. 2) as an active focus of teachers’ work.
Chapter 8: Conclusion
Assessment for Learning [AFL] is part of everyday practice by students,
teachers and peers that seeks, reflects upon and responds to information from
dialogue, demonstration and observation in ways that enhance ongoing
learning …by students, teachers and peers – students are deliberately listed first
because only learners can learn. Assessment for Learning should be student
centred. All AFL practices carried out by teachers (such as giving feedback,
clarifying criteria, rich questioning) can eventually be ‘given away’ to students
so that they take on these practices to help themselves, and one another, become
autonomous learners. This should be a prime objective.
(Klenowski, 2009, p. 264)
Introduction
While Klenowski’s (2009) discussion of Assessment for Learning, quoted above, was part of a
larger international conversation taking place at that time, when I commenced my study of
formative assessment practices in Hong Kong in 2009 there was still substantial evidence, in
the schools where I taught, that practitioner understanding of the power of formative
assessment strategies to deepen student learning in multiple ways remained poorly developed in
many education contexts. Although Klenowski (2009) shows that research with this focus was
emerging, she also states that how assessment for learning was being:
…interpreted and made manifest in educational policy and practice often reveal
misunderstanding of the principles, and distortion of the practices, that the
original ideals sought to promote. Some of these misunderstandings and
challenges derive from residual ambiguity in the definitions. Others have
stemmed from a desire to be seen to be embracing the concept – but in reality
implementing a set of practices that are mechanical or superficial without the
teacher’s, and, most importantly, the students’, active engagement with learning
as the focal point. (p. 263)
Klenowski’s (2009) views provide a justification for my study, since I aimed to explore how a
range of explicit formative assessment strategies could be developed through action research
amongst teachers, in order to improve and deepen student learning through a focus on their
own classroom practice, and through team planning and collaboration. In the two phases of
action research at MIS, the findings showed that focused planning, shared professional learning
that is developed through classroom practice, and the use and embedding of multiple aspects of
formative assessment, can lead to powerful improvement in teaching, and more importantly
active engagement of the students in their learning. The follow-up phase 3, where I conducted
a case study of teacher planning and practices in formative assessment in a Melbourne school
provided an important comparison to deepen my analysis and to validate my findings about the
importance of implementing formative assessment strategies and their development through
collaboration. This chapter provides an overall reflection on the findings and final conclusions.
Summary of findings
This study found that planned and effective formative assessment strategies that do improve
student learning require deep understanding by the teachers of the purpose, processes and
possibilities for formative assessment in their day-to-day teaching strategies. During my study
I collaborated with colleagues through the action research first to develop our understanding of
formative assessment, and then to develop a sequenced, planned and very specific focus on a
range of strategies that included:
• Stating the learning intention
• Developing the success criteria
• Effective teacher questioning
• Teacher feedback
• Self-assessment
• Peer-assessment
Throughout the phase 1 and 2 action research and the phase 3 case study, it was evident that
developing teachers’ knowledge of the complexities of formative assessment and willingness
to plan and involve learners in this, are of fundamental importance. The findings in each phase
demonstrated that the effectiveness of each strategy is enhanced through its connection to other
assessment strategies and the link to teaching and learning. The strategies are entwined with
one another and ensure students play an active role in the assessment process. Below are some
concluding comments regarding the findings from the research with respect to the importance
of each formative assessment strategy and the impact of collaboration on those assessment
strategies.
Stating the learning intention and developing the success criteria
It became clear to the teachers involved in the action research that identifying the learning
intention for and with the students, developing the success criteria with the class and referring
back to this, helps students understand what they are learning and how they can progress. The
success criteria also play a pivotal role in providing feedback for students and clear guidelines
for what students need to focus on when they are involved in self- and peer-assessment. These
can be provided using a range of tools, including checklists, performance criteria and rubrics.
For younger students in particular, visual cues (e.g. puppets and drawings) help students to
remember what they are learning. As found throughout this study, stating the learning
intention and developing the success criteria with students is essential for the learner to play an
active role in their learning. As clearly stated by Absolum (2010):
Unless both the teacher and student are clear about what is to be learned, why
it is to be learned, and how it is to be learned, then teaching and learning will
collapse. (p. 29)
Effective teacher questioning
Glasson (2009) argues that when teacher questioning is a recall tool or used as a managerial
tool in the classroom, it has very little impact on student learning. Asking questions that
students already know the answer to will not further their understanding. During phase 1 of
this study, teachers who used a range of open-ended questioning techniques found that the
level of engagement was higher and that students showed more interest in responding.
Questions that focus on guiding student thinking through open-ended discussions assisted
students to further their understanding. Importantly, through observations and discussion with
teachers I found that questioning techniques were more effective when used alongside talk
partners and wait time. Giving students the opportunity to think about their answer and
discouraging students from putting their hand up straight away meant more thinking time was
provided. Further, giving students the opportunity to discuss their answers through talk
partners meant that they were able to build on each other’s answers and thereby engage
in more in-depth thinking. Importantly, the students themselves noted the value of talk
partners. Chris, in Emily’s class, stated clearly in phase 1: ‘somebody has different ideas from
your work and they can explain it to you and then the work (ideas) becomes even better than
before’. Clarke (2008) argues that ‘the dominance of constructive, pupil “dialogic” talk in a
classroom is a key identifier of a “formative assessment” culture’ (p. 35); a theory that was
evident in practice in the case study school and at MIS as our confidence and use of effective
questioning and engagement of learners grew.
Teacher feedback
Teacher feedback proved to be an important strategy to move student learning forward during
all phases of this study. The teachers at MIS and Western College believed it was important
that feedback was related to the learning intentions and success criteria, as well as all aspects of
student learning. In the study, the teachers who consciously used feedback found that students
responded with a better understanding of their learning. This was also true for me when I
specifically focused on feedback in my own classroom that related to the learning intention and
success criteria. This meant that teachers had identified the difference between managerial
feedback and feedback for improving learning (Clarke, 2001). When the feedback was clear,
descriptive and guided students on how to improve, students were more successful in
meeting the expectations. Using guiding questions to assist students to identify their own
improvements meant they were still thinking for themselves. Developing a relationship of
trust and mutual respect before providing feedback to students was beneficial, as it meant
students felt comfortable with the feedback. At MIS and in my observations at Western
College, it was clear that when there is a culture of learning created in a classroom, students
understand the importance of learning and thereby embrace and engage in challenging
learning.
Peer and self-assessment
The impact that peer and self-assessment has on learning was a strong finding in the research.
The findings showed what Earl (2003) sees as a significant opportunity for students to play an
active role in the assessment process and move towards lifelong learning when they are able to
self- and peer-assess. As Wendy stated at Western College:
If you don't reflect upon what you have learnt, then it is rote learning and I
don't believe that’s the way we want students to grow up in the world.
What was evident during the study was how much the students were engaged when given the
opportunity to be involved in their learning and their peers’ learning. The level of engagement
increased significantly when students became active participants in assessing themselves and
their peers. The evidence showed that peer-assessment provided the student receiving
feedback with support to move forward in their learning and, importantly, the student giving
the feedback also benefitted from viewing their peers’ learning. When students were involved
in assessing a peer’s learning, they were more likely to make changes to improve their own
learning. This strategy fosters self-assessment and self-monitoring of learning. My findings
confirmed Black and Wiliam’s (1998b) view that self-assessment is an essential component of
the classroom.
Linking learning, assessing and teaching
Other important findings emerged from the study of my own practice, from what I observed in
classrooms and through collaborating with my peers and interviews in the action research and
case study. A key conclusion is that teachers must link learning, assessing and teaching
together. When the teacher ensures that assessment is part of the teaching and learning and not
seen only as summative or information to inform a reporting process, there is a significant
impact on student learning. Overall, the study found that the regular inclusion of, and a
multilayered approach to implementing, the following formative assessment strategies
provide a systematic and flexible way for students to be successful in their learning:
• State the learning intention in student language
• Develop the success criteria for the learning intention through the investigation of
exemplars
• Use effective teacher questioning to deepen student understanding
• Use teacher feedback related to the learning intention and success criteria, keeping
feedback clear, descriptive and, in most cases, focused on only one area
• Provide opportunities for peer-assessment using the success criteria as the basis for
feedback
• Encourage ongoing self-assessment for students to identify where they are, where they
need to go and how they are progressing.
This systematic approach to the implementation of formative assessment is vital for setting a
student up to be successful in school and as a lifelong learner. As Klenowski (2009) argues:
What is distinctive about assessment for learning is not the form of the
information or the circumstances in which it is generated, but the positive effect it
has for the learner. Properly embedded into teaching-learning contexts,
assessment for learning sets learners up for wide, lifelong learning. (p. 264)
Collaborative planning
Ingvarson, Meiers and Beavis (2005) suggest that forming learning teams of teachers to plan
and discuss their practice leads to deep discussions of pedagogy that can in turn enhance
student learning. They argue that:
Collaborative analyses of student work opens up many avenues for teachers to
de-privatize their practice and learn from each other. It also leads to deeper
understanding of student learning outcomes and greater discrimination about
what counts as meeting those objectives. (p. 4)
In my study at MIS, the effectiveness of collaboration was based on close professional
relationships. As Brianna identified in phase 2, her team worked effectively together because
of their respectful communication, their capacity to listen to one another’s ideas and sharing of
effective practice. Most teachers I observed and interviewed at MIS and at Western College
showed how trust and open dialogue can lead to meaningful reflective practice; factors seen to
be essential by Hord (1997) and Stoll et al. (2006). Even Harriet, who was reluctant at
first to embrace formative assessment during the action research at MIS, was able, after
reflection, to see how these strategies can deepen student learning in multiple ways. Both the
action research and the case study provided evidence that where teachers are involved in
collaborative efforts and systematic approaches to implementing formative assessment
strategies, deepening of student learning is achieved.
I also found, as advocated by Hord (1997), that developing a shared vision, having shared and
supportive leadership and taking a collective responsibility for students ensures that PLTs stay
focused on student learning. This is achieved through explicitly planning the formative
assessment strategies outlined in Figure 10, Multilayered approach to formative assessment
through collaboration (see p. 202), and linking them to student learning. To ensure all
students benefit from formative assessment strategies planned collaboratively, Hardy (2012)
argues:
There needs to be a shift away from teachers believing that teaching is a
personal activity. Teaching is no longer a private endeavour. It is fast becoming
an open profession that encourages teachers to engage in collaborative
learning communities that focus on capacity building through effective
communication. (p. 34)
The teachers in my study willingly de-privatised their practice through my observations
of their teaching, and by sharing their practice with colleagues in open and
respectful communication. The PLT at MIS provided the framework and structure for the
action research and the systematic structure of collaborative planning at Western College
ensured everyone was working towards the same shared vision.
Significance of the findings
This study involved deep practitioner research and the commitment of its volunteer participants.
Phase 1 and 2 teachers were willing to commit time and energy to engage in ongoing action
research that required questioning of their practice and an openness to engaging deeply in new
assessment strategies. In the case study, the teacher volunteers agreed to de-privatise their
practice, to allow observations of their planning, meetings and collaboration, and willingly
engaged in interviews and discussion of their focus on formative assessment. Every participant
in both the action research and the case study agreed that learning, assessing and teaching is
complex work, so using multiple
forms of assessment effectively in the classroom, to improve student learning, should be a
priority for all teachers. They formed the view that teachers have an obligation to incorporate
assessment into their practice that ensures students play an active role in the process. Aligning
assessment strategies so that they build upon one another and are embedded in learning and
teaching is an approach that ensures formative assessment is at the forefront of classroom
practice. This multilayered approach connects learning, assessing and teaching together,
leading to a stronger focus on the development and implementation of formative assessment
strategies. This study provided evidence that working collaboratively within a team focussed
on improving student learning can lead to successful implementation of formative assessment.
For me, as a classroom teacher at the beginning of this study and now in a leadership role as an
assistant principal in another international school in Hong Kong, the implications for my
professional learning that evolved throughout this study have been significant. I continue to
see the powerful impact of teacher collaboration on improving practice, and of formative
assessment on student learning. In addition, through the planning,
implementation and reflection cycles, I saw demonstrated evidence of changes in assessment
practices at MIS. Although at the end of the action research, formative assessment was not
fully embedded across MIS, nor in all teachers’ practice, there was increased understanding of
the imperative for whole school approaches to improving student learning through further
professional learning and embedding of formative assessment practices. This reflexive
interaction between reflection and action is paramount in teachers becoming lifelong learners
themselves. Teachers need to continue to improve practice to enhance student learning as the
focal point of professional learning. My study found that developing a stronger link between
teaching, learning and assessment and developing stronger interdependent relationships
between educators plays a crucial role in this professional learning. In phase 3, the case study
school provided evidence of what can be achieved when the whole school is involved in a
focused emphasis on improving student learning and explicit use of formative assessment.
Where PLTs are developed effectively, with strong teacher commitment, they can make a
difference to the improvement of student learning. For PLTs to have a sustainable role within
schools, principals and other school leaders need to create the right conditions for successful
learning, which includes investing in the human and structural resources necessary and the
time required to create a shared vision, shared and distributed leadership, and the development
of a collective responsibility for continuous student learning (Hord, 1997; Stoll, et al., 2006).
Only when this is systematically developed will change be sustainable.
Conclusion and recommendations
Practitioner and action research on formative assessment that is well grounded in literature and
theories in this field provides an engaging opportunity for teachers to build their professional
learning and to collaborate in the improvement of student learning. Each action research phase
and the case study showed how powerful this learning can be when teachers are involved in
deep discussions about their planning and learning intentions, and the ways they can engage
their students in assessment as and for learning. My review of the literature for this study
found very little discussion and evidence linking collaborative planning approaches in
professional learning teams to improving formative assessment strategies. This is in spite of the
fact that extensive literature documents the value of communities of teachers working together,
de-privatising their practice, reflecting on their teaching and developing reflexive approaches
to implement new and better practices. My study adds new knowledge to how this important
work can be developed in schools. The findings show that when teachers commit to working
in teams with a focus on their teaching, and are provided with the necessary structural and
professional support by their leaders, they can develop deep and varied formative assessment
practices. Further practitioner research is required in this area with a focus on improving
student learning in any school where this vital connection between teaching, learning and
formative assessment is not developed. My experiences and collaboration with colleagues in
this study lead me to recommend action research as an approach to improving student
learning in schools.
Teaching is multifaceted, complex work. For deep student learning to occur, understanding of
the critical importance of implementing various formative assessment strategies, which
enhance the quality of teaching and learning, is imperative in any classroom. But most
importantly, my study found that formative assessment strategies can empower learners from a
very young age. Formative assessment encourages them to play an active role in their learning,
to think critically about what they are learning and why, to ask good questions, to talk about
their learning, to know what is successful learning, and to collaborate with each other;
powerful life skills for any learner in 21st century schools.
References
Absolum, M. (2010). Clarity in the classroom: Using formative assessment for building
learning-focused relationships. Auckland, New Zealand: Hodder Education.
Agran, M., Sinclair, T., Alper, S., Cavin, M., Wehmeyer, M., & Hughes, C. (2005). Using self-
monitoring to increase following direction skills of students with moderate to severe
disabilities in general education. Education and Training in Developmental
Disabilities, 40(1), 3-13.
Airasian, P. (1996). Assessment in the classroom. New York, NY: McGraw-Hill.
Alexander, R. J. (2004). Towards dialogic teaching: Rethinking classroom talk. York, United
Kingdom: Dialogos.
Andrews, D., & Lewis, M. (2007). Transforming practice within: The power of professional
learning community. In L. Stoll & K. Louis (Eds.), Professional learning communities:
Divergence, depth and dilemma. Maidenhead, Berkshire: Open University Press.
Assessment Reform Group. (2006). The role of teachers in the assessment of learning. United
Kingdom: Newcastle Document Services. Retrieved 10th June 2010 from
http://www.nuffieldfoundation.org/sites/default/files/files/The-role-of-teachers-in-the-
assessment-of-learning.pdf.
Assessment Reform Group. (2002). Assessment for learning: 10 principles, research-based
principles guide to classroom practice. United Kingdom: Assessment Reform Group.
Retrieved 10th June 2010 from
http://www.hkeaa.edu.hk/DocLibrary/SBA/HKDSE/Eng_DVD/doc/Afl_principles.pdf.
Assessment Reform Group. (1999). Assessment for learning: Beyond the black box.
Cambridge, United Kingdom: University of Cambridge School of Education.
Bartlett, S., & Burton. D. (2006). Practitioner research or descriptions of classroom practice? A
discussion of teachers investigating their classrooms. Educational Action Research,
14(3), 395-405.
Baxter, P., & Jack, S. (2008). Qualitative case study methodology: Study design and
implementation for novice researchers. The Qualitative Report, 13(4), 544-559.
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education:
Principles, Policy and Practice, 18(1), 5-25.
Berg, B. L. (2004). Qualitative research methods for the social sciences. Boston, USA: Allyn
& Bacon.
Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational
Assessment, Evaluation and Accountability, 21(1), 5-31.
Black, P., & Wiliam, D. (2007). Large scale assessments: Designs and principles drawn from
international comparisons. Measurement: Interdisciplinary Research and Perspectives,
5(1), 1-53.
Black, P., & Wiliam, D. (2004). The formative purpose: Assessment must first promote
learning. Yearbook of the National Society for the Study of Education, 103(2), 20-50.
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the black
box: Assessment for learning in the classroom. Phi Delta Kappan, 86(1), 8-21.
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for
Learning: Putting it into practice. Buckingham, United Kingdom: Open University
Press.
Black, P., & Wiliam, D. (2003). ‘In praise of educational research’: Formative assessment.
British Educational Research Journal, 29(5), 623-637.
Black, P. (1998). Testing: Friend or foe? The theory and practice of assessment and testing.
London, United Kingdom: Falmer Press.
Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in
Education: Principles, Policy and Practice, 5(1), 7-74.
Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards through classroom
assessment. Phi Delta Kappan, 80(2), 139-148.
Bloom, B. S., Hastings, J. T., & Madaus, G.F. (1971). Handbook on the formative and
summative evaluation of student learning. New York, NY: McGraw-Hill.
Bolam, R., McMahon, A., Stoll, L., Thomas, S., Wallace, M., Greenwood, A., Hawkey, K.,
Ingram, M., Atkinson, A., & Smith, M. (2005). Creating and sustaining effective
professional learning communities. England: University of Bristol.
Boyd Bialas, J., & Boon, R. (2010). Effects of self-monitoring on the classroom preparedness
skills of kindergarten students at-risk for developmental disabilities. Australasian Journal
of Early Childhood, 35(4), 40-52. Retrieved 17th January 2014 from
http://www.earlychildhoodaustralia.org.au/australian_journal_of_early_childhood/ajec_
index_abstracts/effects_of_self_monitoring_on_the_classroom_preparedness_skills_of
_kindergarten_students_at_risk_for_developmental_disabilities.html.
Boyer, E. L. (1995). The basic school: A community for learning. Princeton, NJ: Carnegie
Foundation for the Advancement of Teaching.
Brookhart, S. (2013). How to create and use rubrics for formative assessment and grading.
Alexandria, VA: Association for Supervision and Curriculum Development.
Brookhart, S. (2010). How to assess higher-order thinking skills in your classroom.
Alexandria, VA: Association for Supervision and Curriculum Development.
Brydon-Miller, M., Greenwood, D., & Maguire, P. (2003). Why action research? Action
research, 1(1), 9-28.
Bryk, A., Camburn, E., & Louis, K. (1999). Professional community in Chicago elementary
schools: Facilitating factors and organizational consequences. Educational
Administration Quarterly, 35(5), 751-781.
Bryk, A. S., Easton, J. Q., Kerbow, D., Rollow, S. G., & Sebring, P. A. (1994). The state of
Chicago school reform. Phi Delta Kappan, 76(1), 74-78.
Buysse, V., Sparkman, K. L., & Wesley, P. W. (2003). Communities of practice: Connecting
what we know with what we do. Exceptional Children, 69(3), 263-277.
Caine, G., & Caine, R. N. (2000). The learning community as a foundation for developing
teacher leaders. National Association of Secondary School Principals (NASSP)
Bulletin, 84(616), 7-14.
Calhoun, E. (1994). How to use action research in the self-renewing school. Alexandria, VA:
Association for Supervision and Curriculum Development.
Carr, W., & Kemmis, S. (1986). Becoming critical: Education, knowledge and action
research. Deakin: Deakin University Press.
Chappuis, J. (2005). Helping students understand. Educational Leadership, 63(3), 39-
43.
Chappuis, S., & Stiggins, R. (2002). Classroom assessment for learning. Educational
Leadership, 60(1), 40-43.
Clarke, S. (2008). Active learning through formative assessment. London, Great Britain:
Hodder and Stoughton.
Clarke, S. (2005). Formative assessment in action: Weaving the elements together. London,
Great Britain: Hodder and Stoughton.
Clarke, S. (2003). Enriching feedback in the primary classroom: Oral and written feedback
from teachers and children. London, United Kingdom: Hodder and Stoughton.
Clarke, S. (2001). Unlocking formative assessment: Practical strategies for enhancing pupils’
learning in the primary classroom. London, Great Britain: Hodder and Stoughton.
Cochran-Smith, M., & Lytle, S. (2009). Inquiry as a stance: Practitioner research for the next
generation. Columbia University, New York: Teachers College Press.
Cochran-Smith, M., & Lytle, S. (1999). The teacher research movement: A decade later.
Educational Researcher, 28(7), 15-25.
Cochran-Smith, M., & Lytle, S. (1998). Teacher research: The question that persists.
International Journal of Leadership in Education: Theory and Practice, 1(1), 19-36.
Costa, A. (1989). Reassessing assessment. Educational Leadership, 46(7), 2-3.
Cotton, J. (1995). The theory of assessment: An introduction. London, United Kingdom:
Kogan Page.
Cowie, B., & Bell, B. (1999). A model for formative assessment in science education.
Assessment in Education: Principles, Policy and Practice, 6(1), 32-42.
Creswell, J. (2009). Research design: Qualitative, quantitative, and mixed methods
approaches. Thousand Oaks, California: Sage Publications.
Creswell, J. (2005). Educational research: Planning, conducting, and evaluating
quantitative and qualitative research. Upper Saddle River, New Jersey: Merrill.
Creswell, J. (1998). Qualitative inquiry and research design: Choosing among five traditions.
University of Nebraska, Lincoln: Sage Publications.
Crooks, T. J. (1988). The impact of classroom evaluation practices on students. Review of
Educational Research, 58(4), 438-481.
Crotty, M. (1998). The foundations of social research: Meaning and perspective in the
research process. Thousand Oaks, California: Sage Publications.
Darling-Hammond, L. (1996). The quiet revolution: Rethinking teacher development.
Educational Leadership, 53(6), 4-10.
David, J. (2008). What research says about collaborative inquiry. Educational Leadership,
66(4), 87-88.
Department of Education and Training, Victoria (2013). Assessment Advice. Retrieved 17th
May 2014 from
http://www.education.vic.gov.au/school/teachers/support/pages/advice.aspx#definition.
Denzin, N., & Lincoln, Y. (2003). Collecting and interpreting qualitative materials (3rd Ed.).
Thousand Oaks, California: Sage Publications.
Department of Education Western Australia (2013). First steps: Writing map of development.
Western Australia: Department of Education WA.
Duckor, B. (2014). Formative assessment in seven good moves. Educational Leadership,
71(6), 28-32.
Dueck, M. (2014). Grading smarter not harder: Assessment strategies that motivate kids and
help them learn. Alexandria, VA: Association for Supervision and Curriculum
Development.
DuFour, R., DuFour, R., Eaker, R., & Many, T. (2010). Learning by doing: A handbook for
professional learning communities that work. Bloomington, IN: Solution Tree.
DuFour, R., & Mattos, M. (2013). How do principals really improve schools? Educational
Leadership, 70(7), 34-40.
DuFour, R. (2011). Working together: But only if you want. Phi Delta Kappan, 92(5), 57-61.
DuFour, R., & Marzano, R. (2011). Leaders of learning: How district, school, and classroom
leaders improve student achievement. Bloomington, IN: Solution Tree.
DuFour, R. (2004). What is a professional learning community? Educational Leadership,
61(8), 6-11.
DuFour, R. (2003). Building a professional learning community. The School Administrator,
60(5), 13-18. Retrieved 17th February 2014 from
http://www.aasa.org/SchoolAdministratorArticle.aspx?id=9190.
DuFour, R., & Eaker, R. (1998). Professional learning communities at work: Best practice for
enhancing student achievement. Bloomington, IN: National Education Service.
Dweck, C. (2006). Mindset: The new psychology of success. New York, USA: Random House.
Earl, L. (2003). Assessment as learning: Using classroom assessment to maximize student
learning. California, USA: Corwin Press.
Education Scotland. (2012). What is assessment, and when and how does it take place?
Livingston, Scotland: Education Scotland. Retrieved 23rd May 2014 from
http://www.educationscotland.gov.uk/parentzone/learninginscotland/assessment/whatw
henandwhy/whatisassessment.asp.
Education Services Australia. (2014). Assessment for learning. Curriculum Corporation
Retrieved 26th July 2015 from
http://www.assessmentforlearning.edu.au/professional_learning/modules/success_criter
ia_and_rubrics/success_research_background.html.
Elwood, J., & Klenowski, V. (2002). Creating communities of shared practice: The challenges
of assessment use in learning and teaching. Assessment & Evaluation in
Higher Education, 27(3), 243-256.
Fullan, M. (2011a). Learning is working. Canada: Education in motion. Retrieved
22nd December 2012 from http://www.michaelfullan.ca/media/13396087260.pdf.
Fullan, M. (2011b). Motivating the masses: Experiencing is believing. Canada: Education in
motion. Retrieved 22nd December 2012 from
http://www.michaelfullan.ca/media/13396086820.pdf.
Fullan, M. (2010). All systems go. Thousand Oaks, California: Corwin Press.
Fullan, M. (2007). Change the terms for teacher learning. Journal of Staff Development,
28(3), 35-36.
Fullan, M. (2001). Leading in a culture of change. San Francisco: Jossey-Bass.
Fullan, M. (1990). Staff development, innovation, and institutional development. In B. Joyce
(Eds.), Changing school culture through staff development: The 1990 ASCD yearbook,
(pp. 3-25). Alexandria, VA: Association for Supervision and Curriculum Development.
Fulton, K., Yoon, I., & Lee, C. (2005). Induction into learning communities. Washington, DC:
National Commission on Teaching and America’s Future.
Gipps, C., McCallum, B., & Hargreaves, E. (2000). What makes a good primary school
teacher? Expert classroom strategies. London: Routledge Falmer.
Glasson, T. (2009). Improving student achievement: A practical guide to assessment for
learning. Carlton, Victoria: Curriculum Corporation.
Glesne, C. (1999). Becoming qualitative researchers: An introduction (2nd Ed.). New
York, USA: Longman.
Goodfellow, J. (2005). Researching with/for whom? Stepping in and out of practitioner
research. Australian Journal of Early Childhood, 30(4), 48-57. Retrieved 11th January
2014 from
http://www.earlychildhoodaustralia.org.au/wp-content/uploads/2014/06/AJEC0504.pdf.
Goodlad, J. (1984). A place called school: Prospects for the future. New York: McGraw-Hill.
Griffin, P., Murray, L., Care, E., Thomas, A., & Perri P. (2010). Developmental assessment:
Lifting literacy through professional learning teams. Assessment in Education:
Principles, Policy & Practice, 17(4), 383-397.
Guskey, T. R. (2003). How assessments improve learning. Educational Leadership, 60(5), 6-
11.
Haertel, E. (1999). Validity arguments for high stakes testing: In search of the evidence.
Educational Measurement: Issues and Practice, 18(4), 5-9.
Hardy, D. (2012). Professional learning culture and its effect on teacher efficacy. The
Australian Educational Leader, 34(3), 33-36.
Hargreaves, A. (1994). Changing teachers, changing times: Teachers’ work and culture in the
postmodern age. London: Cassell.
Hargreaves, A., & Fink, D. (2006). Sustainable leadership. San Francisco: Jossey-Bass.
Harlen, W. (2012). On the relationship between assessment for formative and summative
purposes. In J. Gardner (Ed.), Assessment and learning (2nd Ed.) (pp. 87-102). Sage
Publications.
Heritage, M. (2008). Learning progressions: Instruction and formative assessment.
Washington, DC: Council of Chief State School Officers.
Heritage, M. (2007). Formative assessment: What do teachers need to know and do? Phi Delta
Kappan, 89(2), 140-145.
Herr, K., & Anderson, G. (2005). The action research dissertation: A guide for faculty and
students. California, USA: Sage Publications.
Hord, S. M. (2009). Professional learning communities: Educators working together toward a
shared purpose. Journal of Staff Development, 30(1), 40-43.
Hord, S. M., & Hirsh, S. A. (2009). The principal’s role in supporting learning communities.
Educational Leadership, 66(5), 22-23.
Hord, S. M. (1997). Professional learning communities: Communities of continuous inquiry
and improvement. Southwest Educational Development Laboratory (SEDL). Retrieved
8th October 2011 from http://www.sedl.org/pubs/change34/2.html.
Hord, S. M. (2003). Foreword. In J. Huffman & K. Hipp, Reculturing schools as professional
learning communities (pp. vii-xi). Maryland, USA: Scarecrow Education.
International Baccalaureate Organization. (2009). The Primary Years Programme: A basis for
practice. Cardiff, Wales: International Baccalaureate Organization.
International Baccalaureate Organization. (2007). Making the PYP happen: A curriculum
framework for international primary education. Cardiff, Wales: International
Baccalaureate Organization.
International Baccalaureate Organization. (2005). Programme standards and practices.
Cardiff, Wales: International Baccalaureate Organization.
Ingvarson, L., Meiers, M., & Beavis, A. (2005). Factors affecting the impact of professional
development programs on teachers' knowledge, practice, student outcomes and
efficacy. Education Policy Analysis Archives, 13(10), 1-28.
Jennings, J., & Stark Rentner, D. (2006). Ten big effects of the No Child Left Behind Act on
public schools. Phi Delta Kappan, 88(2), 110-113.
Johnson, A. P. (2005). A short guide to action research (2nd Ed.). Boston, USA: Allyn &
Bacon.
Keeley, P. (2008). Science formative assessment: 75 practical strategies for linking assessment,
instruction, and learning. Thousand Oaks, California: Corwin Press.
Kemmis, S., & McTaggart, R. (2005). Participatory action research: Communicative action
and the public sphere. In N.K. Denzin and Y.S. Lincoln, The Sage handbook of
qualitative research (3rd Ed) (pp. 559-603). Thousand Oaks, CA: Sage Publications.
Kincheloe, J. L. (2003). Teachers as researchers: Qualitative inquiry as a path to
empowerment (2nd Ed.). New York, USA: Routledge Falmer.
Klenowski, V., & Wyatt-Smith, C. (2014). Assessment for education: Standards, judgment and
moderation. Thousand Oaks, California: Sage Publications.
Klenowski, V. (2010). Are Australian assessment reforms fit for purpose? Lessons from home
and abroad. Queensland Teachers Union (QTU) Professional Magazine, 28, 10-15.
Klenowski, V. (2009). Assessment for learning revisited: An Asia-Pacific perspective.
Assessment in Education: Principles, Policy and Practice, 16(3), 263-268.
Knight, O. (2008). Create something interesting to show that you have learned something:
Assessing learner autonomy within the Key Stage 3 history classroom. Teaching
History, (131), 17-24.
Kruse, S., Louis, K., & Bryk, A. (1995). Building professional community in school. Issues in
Restructuring Schools, 6, 3-6. Retrieved 12th August 2014 from
http://www.wcer.wisc.edu/archive/cors/Issues_in_Restructuring_Schools/ISSUES_NO
_6_SPRING_1994.pdf.
Lambert, L. (2003). Leadership capacity for lasting school improvement. Alexandria, VA:
Association for Supervision and Curriculum Development.
Lane, R., McMaster, H., Adnum, J., & Cavanagh, M. (2014). Quality reflective practice in
teacher education: A journey towards shared understanding. Reflective Practice:
International and Multidisciplinary Perspectives, 15(4), 481-494.
Lieberman, A., & Miller, L. (1999). Teachers: Transforming their world and their work. New
York: Teachers College Press.
Lingard, B., & Renshaw, P. (2010). Teaching as a research-informed and research-informing
profession. In A. Campbell and S. Groundwater-Smith (Eds.), Connecting inquiry and
professional learning (pp. 26-39). London: Routledge.
Leithwood, K., Leonard, L., & Sharratt, L. (1998). Conditions fostering organizational learning
in schools. Educational Administration Quarterly, 34(2), 243-276.
Little, J. W. (2003). Inside teacher community: Representations of classroom practice.
Teachers College Record, 105(6), 913-945.
Little, J. W. (1989). District policy choices and teachers’ professional development
opportunities. Educational Evaluation and Policy Analysis, 11(2), 165-179.
Loughran, J. J. (2010). What expert teachers do: Enhancing professional knowledge for
classroom practice. NSW, Australia: Allen & Unwin.
Louis, K. (2006). Changing the culture of schools: Professional community, organizational
learning, and trust. Journal of School Leadership, 16(5), 477-489.
Louis, K., & Kruse S. (1995). Professionalism and community: Perspectives on reforming
urban schools. Thousand Oaks, California: Corwin Press.
Louis, K. (1994). Beyond “managed change”: Rethinking how schools improve. School
Effectiveness and School Improvement, 9(1), 1-27.
Mackenzie, N., & Knipe, S. (2006). Research dilemmas: Paradigms, methods and
methodology. Issues in Educational Research, 16(2), 193-205.
Macpherson, I., Brooker, R., Aspland, T., & Cuskelly, E. (2004). Constructing a territory for
professional practice research: Some introductory considerations. Action Research,
2(1), 89-106.
Marshall, J. (2001). Self-reflective inquiry practices. In P. Reason & H. Bradbury (Eds.),
Handbook of action research: Participative Inquiry and Practice (pp. 433-439).
London: Sage Publications.
Marzano, R. (2013). Art and science of teaching / Targets, objectives and standards: How do
they fit? Educational Leadership, 70(8), 82-83.
Marzano, R., Waters, T., & McNulty, B. (2005). School leadership that works: From
research to results. Alexandria, VA: Association for Supervision and
Curriculum Development.
Maykut, P., & Morehouse, R. (1994). Beginning qualitative research: A philosophic and
practical guide. London, Great Britain: Falmer Press.
McLaughlin, M., & Talbert, J. (2006). Building school-based teacher learning communities:
Professional strategies to improve student achievement. New York, USA: Teachers
College Press.
McLaughlin, M., & Talbert, J. (2001). Professional learning communities and the work of high
school teaching. Chicago: The University of Chicago Press.
McLaughlin, M., & Talbert, J. (1993a). Contexts that matter for teaching and learning.
Stanford, USA: Center for Research on the Context of Secondary School Teaching,
Stanford University.
McLaughlin, M., & Talbert, J. (1993b). How the world of students and teachers challenges
policy coherence. In S. Fuhrman (Ed.), Designing coherent education policy (pp. 220-249).
San Francisco: Jossey-Bass Publishers.
McMillan, J. H. (2004). Educational research: Fundamentals for the consumer (4th Ed.).
Boston, MA: Pearson Education.
McTighe, J. (1996). What happens between assessments? Educational Leadership, 54(4), 6-12.
Mehan, H. (1979). Learning lessons: Social organization in the classroom. Cambridge, MA:
Harvard University Press.
Merriam, S. (1988). Case study research in education: A qualitative approach. San
Francisco, California: Jossey-Bass.
Mertler, C. (2006). Action research: Teachers as researchers in the classroom. Thousand
Oaks, California: Sage Publications.
Mertler, C., & Charles, C. (2005). Introduction to educational research (5th Ed.). Boston,
USA: Allyn & Bacon.
Mid-continent Research for Education and Learning (McREL). (2003). Sustaining school
improvement: Professional learning community. Denver, Colorado: McREL. Retrieved
15th April 2014 from
http://www.plcwashington.org/cms/lib3/WA07001774/Centricity/Domain/19/MCREL-
rubric.pdf.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook
(2nd Ed.). California, USA: Sage Publications.
Miller, D., & Lavin, F. (2007). “But now I want to give it a try”: Formative assessment, self-
esteem and a sense of competence. The Curriculum Journal, 18(1), 3-25.
Mills, G. (2000). Action research: A guide for the teacher researcher (3rd Ed.). Upper Saddle
River, New Jersey: Merrill/Prentice Hall.
Ministry of Education. (2010). The literacy learning progressions: Meeting the reading and
writing demands of the curriculum. New Zealand: Learning Media Limited.
Munby, S., Phillips, P., & Collinson, R. (1989). Assessing and recording achievement. Oxford,
United Kingdom: Blackwell Education.
O’Connor, K., Evans, R., & Craig, S. (2009). 21st century assessment: What does ‘best
practice’ look like in the PYP? In S. Davidson & S. Carber (Eds.), Taking the PYP
forward: The future of the IB Primary Years Programme (pp. 51-65). Melton,
Woodbridge, United Kingdom: John Catt Educational Ltd.
Olivier, D. F., & Hipp, K. (2006). Leadership capacity and collective efficacy: Interacting to
sustain student learning in a professional learning community. Journal of School
Leadership, 16(5), 505-519.
O’Neill, J., & Conzemius, A. (2005). The power of SMART goals: Using goals to improve
student learning. Bloomington, IN: Solution Tree Press.
O'Toole, J., & Beckett, D. (2010). Educational research: Creative thinking and doing. South
Melbourne, Victoria: Oxford University Press.
Parsons, R. D., & Brown, K. S. (2002). Teacher as reflective practitioner and action
researcher. Belmont, CA: Wadsworth/Thomson Learning.
Patton, M. (1990). Qualitative evaluation and research methods. Thousand Oaks, California:
Sage Publications.
Ponte, P. (2002). How teachers become action researchers and how teacher educators become
facilitators. Educational Action Research, 10(3), 399-422.
Popham, W. J. (2008). Transformative assessment. Alexandria, USA: Association for
Supervision and Curriculum Development.
Popham, W. J. (2007). All about accountability/The lowdown on learning progressions.
Educational Leadership, 64(7), 83-84.
Popham, W. J. (2003). Test better, teach better: The instructional role of assessment.
Alexandria, USA: Association for Supervision and Curriculum Development.
Popham, W. J. (2002). Right task, wrong tool. American School Board Journal, 189(2), 18-22.
Prestine, N. A. (1993). Extending the essential schools metaphor: Principal as enabler. Journal
of School Leadership, 3(4), 356-379.
Pryor, J., & Crossouard, B. (2007). A socio-cultural theorisation of formative assessment.
Oxford Review of Education, 34(1), 1-20.
Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28(1), 4-13.
Reason, P., & Bradbury, H. (2006). Handbook of action research: The concise paperback
edition. London, Great Britain: Sage Publications.
Rennie, R. (2005). Brief history of assessment and reporting. The Journal of the Science
Teachers’ Association of Western Australia, 41(3), 25-26.
Riordan, G., & Gaffney, M. (2001). Teachers working together: Meanings, factors
and issues in teacher collaboration. Practising Administrator, 23(1), 6-9. Retrieved 24th
March, 2012 from
http://search.informit.com.au.ezproxy.lib.monash.edu.au/fullText;dn=113727;res=AEIP
T.
Rodgers, C. (2002). Defining reflection: Another look at John Dewey and reflective thinking.
Teachers College Record, 104(4), 842-866.
Rosenholtz, S. J. (1989). Teachers’ workplace: The social organization of schools. New
York: Longman.
Rossman, G. B., & Rallis, S. F. (1998). Learning in the field: An introduction to qualitative
research. Thousand Oaks, CA: Sage.
Rowe, M. B. (1972). Wait-time and rewards as instructional variables: Their influence on
language, logic and fate control. Paper presented at the annual meeting of the National
Association for Research on Science Teaching, Chicago.
Rowntree, D. (1987). Assessing students: How shall we know them? (Revised Ed.).
London: Kogan Page.
Rushton, A. (2005). Formative assessment: A key to deep learning. Medical Teacher, 27(6),
509-513. Retrieved on 10th April 2014 from
http://www.aacp.org/resources/education/cape/Documents/Resources%20for%20the%2
0Glossary/2005%20formative%20assessment%20and%20deep%20learning.pdf.
Ryan, K., Cooper, J., & Tauer, S. (2013). Teaching for learning: Becoming a master teacher
(2nd Ed.). Belmont, USA: Wadsworth, Cengage Learning.
Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education:
Principles, Policy and Practice, 5(1), 77–85.
Sadler, D. R. (1989). Formative assessment and the design of instructional systems.
Instructional Science, 18(2), 119-144.
Scott, J. (2000). Authentic assessment tools. In R. Custer, J. Schell, B. McAlister, J. Scott, &
M. Hoepfl, Using authentic assessment in vocational education (Information Series
No. 381, pp. 33-48). Ohio, USA: Office of Educational Research and Improvement,
US Department of Education.
Schön, D. (1983). The reflective practitioner: How professionals think in action. Cambridge
Circus, London: Maurice Temple Smith Ltd.
Schwalbach, E. M. (2003). Value and validity in action research: A guidebook for reflective
practitioners. Lanham, MD: Scarecrow Press.
Scriven, M. (1967). The methodology of evaluation. In Perspectives of curriculum evaluation
(AERA Monograph Series on Curriculum Evaluation, No. 1). Chicago: Rand McNally.
Sergiovanni, T. J. (1996). Leadership for the schoolhouse. San Francisco: Jossey-Bass.
Sergiovanni, T. J. (1994). Building community in schools. San Francisco: Jossey-Bass.
Shepard, L. (2000). The role of classroom assessment in teaching and learning. Los Angeles,
CA: The Regents of the University of California.
Short, K., Harste, J., & Burke, C. (1996). Creating a classroom for authors and inquirers.
Portsmouth, USA: Heinemann.
Stahl, R. (1994). Using “think-time” and “wait-time” skillfully in the classroom. Bloomington,
IN: ERIC Clearinghouse for Social Studies/Social Science Education.
Stake, R. (1995). The art of case study research. Thousand Oaks, California: Sage
Publications.
State of Victoria (Department of Education and Training). (2013). Assessment advice.
Retrieved 23rd May 2013 from
http://www.education.vic.gov.au/school/teachers/support/pages/advice.aspx.
Stenhouse, L. (1975). An introduction to curriculum research and development. London,
United Kingdom: Heinemann Education.
Stoll, L., Bolam, R., McMahon, A., Wallace, M., & Thomas, S. (2006). Professional learning
communities: A review of the literature. Journal of Educational Change, 7(4), 221-258.
Stoll, L., & Louis, K. (2007). Professional learning communities: Divergence, depth and
dilemmas. Berkshire: Open University Press.
Sullivan, A. (2002). Enhancing peer culture in a primary classroom. Paper presented at the
Australian Association for Research in Education International Education Research
Conference, Brisbane. Retrieved 22nd July 2014 from
http://www.aare.edu.au/data/publications/2002/sul02200.pdf.
Snyder, K. J., Acker-Hocevar, M., & Snyder, K. M. (1996). Principals speak out on
changing school work cultures. Journal of Staff Development, 17(1), 14-19.
Taylor, A. (2006). Evaluation, assessment and reporting at Moonee Ponds West Primary
School. Professional Voice, 4(2), 45-47. Retrieved 14th July 2014 from
http://www.aeuvic.asn.au/pv_vol4_iss2.pdf.
Thompson, S., Gregg, L., & Niska, J. (2004). Professional learning communities,
leadership, and student learning. Research in Middle Level Education Online, 28(1),
35-54.
Tomlinson, C.A. (2014). The differentiated classroom: Responding to the needs of all
learners (2nd Ed.). Alexandria, USA: Association for Supervision and Curriculum
Development.
Tomlinson, C. A., & McTighe, J. (2006). Integrating differentiated instruction and
understanding by design. Alexandria, USA: Association for Supervision and
Curriculum Development.
Torrance, H., & Pryor, J. (2001). Developing formative assessment in the classroom: Using
action research to explore and modify theory. British Educational Research Journal,
27(5), 615-631.
Torrance, H., & Pryor, J. (1998). Investigating formative assessment: Teaching, learning and
assessment in the classroom. Buckingham, United Kingdom: Open University Press.
Towler, L., & Broadfoot, P. (1992). Self-assessment in the primary school. Educational
Review, 44(2), 137-151.
Vescio, V., Ross, D., & Adams, A. (2008). A review of research on the impact of professional
learning communities on teaching practice and student learning. Teaching and Teacher
Education, 24, 80-91. Retrieved 15th February 2015 from
http://schoolcontributions.cmswiki.wikispaces.net/file/view/Research+Review+PLCs.p
df
Victorian Curriculum and Assessment Authority. (2007). Victorian Essential Learning
Standards. Victoria, Australia: Victorian State Government. Retrieved 9th July 2010
from http://vels.vcaa.vic.edu.au/support/tla/question.html#closed.
Ward, H. (2008, 17th October). Assessment for learning has fallen prey to gimmicks,
says critic. TES Editorial. Retrieved on 12th April 2013 from
http://www.tes.co.uk/article.aspx?storycode=6003863.
Wiggins, G., & McTighe, J. (2005). Understanding by Design (Expanded 2nd Ed.).
Alexandria, USA: Association for Supervision and Curriculum Development.
Wiggins, G. (1993). Assessing student performance. San Francisco: Jossey-Bass.
Wilson, J., & Wing Jan, L. (2003). Focus on inquiry: A practical approach to curriculum
planning. Carlton, Victoria: Curriculum Corporation.
Wiliam, D. (2011). Embedded formative assessment. Bloomington, IN: Solution Tree Press.
Wiliam, D. (2006). Assessment for learning: Why, what and how. Edited transcript of a talk
given at the Cambridge Assessment Network Conference on 15 September 2006 at the
Faculty of Education, University of Cambridge.
Wyatt-Smith, C., Klenowski, V., & Colbert, P. (Eds.). (2014). Designing assessment for
quality learning (The Enabling Power of Assessment, Vol. 1). Dordrecht, The
Netherlands: Springer International.
Wyn, J., Turnbull, M., & Grimshaw, L. (2014). The experience of education: The impact of
high stakes testing on school students and their families. Sydney: Whitlam Institute
within the University of Western Sydney. Retrieved on 31st August, 2014 from
http://whitlam.org/__data/assets/pdf_file/0011/694199/The_experience_of_education_-
_Qualitative_Study.pdf.
Yin, R. K. (2009). Case study research: Design and methods (4th Ed., Applied Social
Research Methods Series, Vol. 5). Thousand Oaks, California: Sage Publications.
Yorke, M. (2003). Formative assessment in higher education: Moves towards theory
and the enhancement of pedagogic practice. Higher Education, 45(4), 477-501.
Appendix 1 – PYP planner