Predictability in the Irish Leaving Certificate Examination
Working Paper 3: Student Questionnaire
Daniel Caro and Therese Hopfenbeck
This research was sponsored by the State Examinations Commission (SEC) of Ireland. Ruairí
Quinn, Minister for Education and Skills in Ireland, announced this project and his commitment
to tackle any problematic predictability in the Leaving Certificate examinations.1
1 Department of Education and Skills (2013) Supporting a better transition from second level to Higher Education: Key directions and next steps. 27 March. (http://www.education.ie/en/The-Department/Re-use-of-Public-Sector-Information/Library/Announcements/Supporting-a-Better-Transition-from-Second-Level-to-Higher-Education.html)

Contents

Introduction
Survey and data
    Survey development
        Learning strategy items
        Views on predictability items
        Piloting the instrument and quality checking
    Survey versions
        Paper-and-pencil survey
        Online survey
    Sample
    Data management
Scale development
    Methodology
        Statistical models
        Number of factors in EFA
    Results
        Learning strategies
        Views on predictability
        Learning support
        Family SES
Analysis of research questions
    Research question 3 – how predictable are examination questions in the Leaving Certificate in Ireland?
    Research question 4 – which aspects of this predictability are helpful and which engender unwanted approaches to learning?
    Research question 7 – what kinds of examination preparation strategies do students use?
        Learning strategies
        Learning support
Regression analysis
    Examination scores model
    Predictability model
References
Appendix A: Questionnaire
Appendix B: Summary Tables
    Views on predictability
        Overall results
        Views on the exam by gender
        Views on the exam and family SES
        Views on the exam and exam results
    Learning strategies
        Overall results
        Learning strategies by gender
        Learning strategies and family SES
        Learning strategies and exam results
    Learning support
        Overall results
        Learning support by gender
        Learning support and exam results
Introduction
This working paper is part of a broader investigation on the predictability of the Irish Leaving
Certificate (LC) examination. This research was sponsored by the State Examinations Commission
(SEC) of Ireland, as part of the Department of Education and Skills (DES) (2013) policy, Supporting a
better transition from second level to higher education: key directions and next steps. Overall, the
research involved:
1. A review of the international research literature
2. Analysis of the media coverage of the Leaving Certificate examinations in 2012 and 2013
3. Empirical work on the examinations materials from 2003 to 2012
4. A survey of 1,002 students’ views
5. Interviews with 70 teachers and 13 group interviews with students
This working paper is concerned with item 4. It provides a technical guide for understanding the
student survey and the derived dataset, and reports results of the analysis of those research
questions posed by the project that can be addressed with the questionnaire dataset. The technical guide
explains the methodological procedures involved in the development and administration of the
student questionnaire, the creation of the dataset, the analytic sample, and the development of
scales reflecting predictability views, learning strategies, learning support, and family socio-
economic status of students. The analysis of the research questions draws on the questionnaire
dataset, derived scales, and the examination scores2 of students. The following research questions
are analysed:
• Research question 3 – how predictable are examination questions in the Leaving Certificate
in Ireland?
• Research question 4 – which aspects of this predictability are helpful and which engender
unwanted approaches to learning?
• Research question 7 – what kinds of examination preparation strategies do students use?
The paper is organised as follows. The first section explains the survey development, the survey
versions, the sample, and the data preparation. The second section describes the techniques and
results of the scale development. The third section reports the main results of the analysis of the
research questions. Finally, the fourth section presents additional results of regression models of
the examination scores and the predictability scales. Appendix A presents the student
questionnaire. Appendix B reports more detailed results related to the research questions (eg
results by gender).
Survey and data
The questionnaire was developed based upon previous research instruments and a literature
review on predictability. All the items have been adapted for the Irish Leaving Certificate and the
Irish context.
2 The SEC provided data on grade levels, which was transformed into points. These are referred to as examination scores in this
working paper.
Survey development
The survey on the Leaving Certificate consisted of six sections. Section A asked for background
information such as gender, plans for the future and language spoken at home, while section B asked
for background information about parents' education, work and home possessions, in order to
measure family cultural and socio-economic status (SES). Section D asked students to report their
use of subject-specific learning strategies when they were preparing for the Leaving Certificate.
Sections A, B and D used adapted items from the Programme for International Student Assessment
(PISA), while sections C, E and F included newly developed items for this study. Section C asked
students to indicate which subjects they were sitting for the Leaving Certificate, and at which level
(higher or ordinary). Section E asked students to report their experience and views of the exam.
Finally, section F asked students to answer questions about learning support for the exam, such as
use of grinds schools and family support.
Learning strategy items
Most items and scales for the learning strategies have been taken from already well-researched
instruments such as the student approaches questionnaire used in PISA (Marsh, Hau, Artelt &
Baumert, 2006). The PISA instrument measures two separate categories, cognitive strategies and
metacognitive strategies, and the items have been based upon Weinstein and Meyer's taxonomy
(1986) and the Learning and Study Strategies Inventory – High School Version (LASSI-HS) (Weinstein,
Zimmerman & Palmer, 1988; Weinstein & Palmer, 1990), which is one of the most widely used
learning strategy questionnaires in the world, as well as the Motivated Strategies for Learning
Questionnaire (MSLQ) (Pintrich, Smith, Garcia & McKeachie, 1991). The limited validity of this
questionnaire for measuring learning strategies at a global level is well known (Allan, 1997), and the
PISA instrument has been criticised for generalising students' strategy use across a number of
subjects and contexts (Samuelstuen & Braten, 2007). We therefore asked students to rate their use
of learning strategies specifically in relation to three subjects: biology, English and geography, using
a four-point Likert scale from (1) almost never, (2) now and then, (3) often, to (4) always (see
question 11, Appendix A). We included three categories of learning strategies. The first one,
memorisation strategies, such as 'I tried to learn my notes by heart', is particularly useful for simple
tasks, and involves repeating, reciting and copying the material (Pintrich, Smith, Garcia &
McKeachie, 1991). The second category, elaboration strategies, such as the item 'I tried to relate
new information to knowledge from other subjects', involves making meaningful connections to
the learner's prior knowledge, while the last category, control strategies, such as the item 'I checked
if I had understood what I had read', involves being able to monitor one's own learning and adapt
and adjust strategies if needed (Weinstein et al, 2000; Weinstein & Meyer, 1991).
Views on predictability items
A number of items were developed to reflect the experience of students in taking the exam,
including their views on the predictability of the exam. Students were asked a total of ten
questions relating to the English, biology and geography exams (see question 12, Appendix A). They
had to rate their level of agreement (ie strongly disagree (1), disagree (2), agree (3) or strongly
agree (4)) with different statements relating to each subject. Items measuring predictability
included statements such as 'I predicted the exam questions well' and 'I was surprised by the
questions on the exam this year'. The survey also included questions asking students about their
views about learning, with items such as ‘The exam tests the right kind of learning’ and ‘To do well
in this exam, remembering is more important than understanding’. The idea of including these
items was to further explore whether predictability is linked to students’ views about learning, and
whether they felt that remembering was more important than understanding for some of the
subjects. In addition, students were asked to indicate what kinds of support for their learning they
had in English, biology and geography, with items such as ‘Which topics were likely to come up was
explained to me’, ‘Model answers were given to me’ and ‘My parents helped me with my studies’.
Questions around grind schools and use of revision apps to support students’ learning were also
included.
Piloting the instrument and quality checking
One version of the instrument was piloted with two Irish students who had previously taken the
Leaving Certificate. First, they answered the whole survey. Second, the researcher carried out
cognitive interviews to obtain feedback on each item. The cognitive interviews involved asking the
participant to (a) read the question, (b) explain what it means, (c) read the answer options and
choose an answer, and (d) explain the reason for the answer (Karabenick et al, 2007). Based upon
these interviews, several of the items were revised to make them more suitable for an Irish context.
For example, instead of using the term 'police officer' when asking about parental occupational
status, we were advised to use the Irish word Garda. In items asking about classical literature, we
included Yeats instead of Shakespeare.3 A final version of the survey was given to the research team
and four DPhil students for feedback on wording and layout. Minor revisions were made before
sending the survey to SEC for additional feedback. Following these review processes, the final
version of the Leaving Certificate survey was a ten-page questionnaire in six sections.
Survey versions
A paper-and-pencil version and an online version of the survey were prepared in English and in
Irish.
Paper-and-pencil survey
For maximum participation, it was decided that the paper-and-pencil survey should not be more
than ten pages long. The first page included information about the purpose of the study and
general information on confidentiality. Students were asked to write their exam number and to
give the researchers permission to link their exam scores to the survey results. Students were also
informed of a prize draw for five Apple iPads if they completed the survey (see Appendix A).
Online survey
An online version of the survey was posted on the Oxford University Centre for Educational
Assessment's homepage from 4 June until 1 August 2013 (http://oucea.education.ox.ac.uk/about-
us/oucea-commissioned-to-conduct-independent-external-evaluation-of-predictability-in-irish-
leaving-certificate-examinations). In addition, posters with information about the online version
were displayed in all 100 schools that participated in the study, so students could choose between
completing a paper or an online survey after they had finished their exams.
3 This item is taken from the PISA test which uses the question ‘Which of the following are in your home?’, with the option
‘Classical literature (for example Shakespeare)’.
Sample
After excluding 31 schools listed in the DES data as having no students in LC year 2, a list of 690
schools in Ireland was sent to the research team by SEC. These 690 schools included 79 community
schools, 14 comprehensive schools, 375 secondary schools and 223 vocational schools. Further, 108
of them were boys' schools, 140 were girls' schools, and 442 were mixed schools. From the list of
690 schools, 24 schools were selected for the interview fieldwork; these schools were not included
in the survey, to avoid overburdening them. From the remaining 666 schools, 100 schools were
selected to participate in the survey, which is more than 10% of the schools in Ireland offering the
LC. The paper-and-pencil version was printed and distributed by SEC to these 100 schools, so that
students could participate in the survey after they had taken the Leaving Certificate. Prepaid
envelopes were provided to facilitate the return of surveys. Posters giving information about the
survey were displayed at the back of the exam room, encouraging students to complete the survey
online if they preferred.
The combined sample of students who responded to the paper-and-pencil and online surveys
comprised 1,018 students. We removed 11 of the surveys, since a quality check showed that these
students had completed both the paper-and-pencil survey and the online survey. Additionally,
five students were removed for having examination numbers with five digits, which means they
were not part of the target sample of LC candidates. A total of 1,002 students were left in the
sample: 147 surveys came from the online version and 855 from the paper-and-pencil version.
Analyses of participants’ examination scores in English, biology and geography indicated that the
sample had a wide spread of abilities, but higher performing students were represented more
frequently than in the general population of Leaving Certificate students, and results must be
interpreted in that context (see Table 1).
Table 1. Cumulative percentage at each grade: questionnaire sample compared with population (Pop)

             English             Biology             Geography
Grade    Sample     Pop      Sample     Pop      Sample     Pop
A          14.4     9.7        22.3    14.4        15.4     8.7
B          43.6    36.4        55.5    41.7        51.6    38.1
C          81.9    76.1        77.8    69.6        85.6    75.3
D          99.1    98.3        94.3    91.7        99.1    97.2
E          99.9    99.9        99.2    98.2       100      99.8
F          99.9   100         100      99.7       100     100
NG        100     100         100     100         100     100
No.         624  33,279         449  23,436         312  19,762
Of the sample of 1,002 students, the analysis is concerned only with those who took the higher
level LC exam. The numbers vary by subject: the final sample includes 772 students for the English
analysis, 557 for biology and 404 for geography.
Data management
The research team developed a codebook and adjusted some of the codes after the first 100
surveys had been entered.
Two research assistants entered data using the statistical package SPSS 20. In addition, three
researchers each entered ten surveys to check how the responses matched the codebook and
discussed the coding with the research assistants. One of the few problems detected was that
respondents sometimes ticked more than one box for parents' education level. Initially some of the
research assistants coded this as 'invalid'; this was revised so that the highest level of education was
recorded. Another challenge was deciphering handwriting for the open questions, as it was very
common for respondents not to write legibly. In cases of doubt, research assistants discussed the
interpretation of the handwriting.

After the data had been entered into SPSS, a quality check was conducted. One in every 20 surveys
from the paper-and-pencil version was double-checked to see whether the data had been entered
correctly. Only two minor errors were found in the 39 surveys checked, which indicates an overall
good quality of data entry. The online version of the survey had automatic data entry.
Scale development
Methodology
Statistical models
Exploratory factor analysis (EFA), the Rasch model, and the partial credit model were employed for
scale development (Masters & Wright, 1997; Rasch, 1960). EFA was applied to the Likert-type items
surveying learning strategies (see question 11, Appendix A) and experiences in taking the exam
(see question 12, Appendix A). The Rasch model was applied to binary data of learning support (see
question 13, Appendix A), and the partial credit model was applied to the binary and ordinal data
on family SES. The Rasch model and the partial credit model assumed that the item data could be
represented by a single dimension. For EFA, different tests were employed to determine the
number of factors to be retained. Missing data in the number-of-factors tests and in scale
development were handled using listwise deletion.
Number of factors in EFA
Determining the number of factors to retain is critical for scale development in EFA. If the number
of factors is underestimated or overestimated, the solution and interpretation of EFA results could
be significantly altered (Velicer, Eaton & Fava, 2000). For example, theoretically relevant scales may
be excluded if the number of factors is underestimated. Conversely, if the number of factors is
overestimated artificial scales may be produced.
Typically, analysts and statistical computer software employ Kaiser's (1960) rule of eigenvalues
greater than one, or scree visual tests proposed by Cattell (1966) for selecting the number of
factors to retain. Kaiser's rule in particular, due to its simplicity, is probably the most utilised
criterion for factor selection. This rule, however, has several problems. It has been argued that it
tends to overestimate the number of factors, that it was developed for principal component
analysis and its use for EFA is unclear, and that it can produce trivial solutions in which a factor with
an eigenvalue of 1.01 is retained and one with an eigenvalue of 0.99 is not (Courtney, 2013;
Fabrigar et al, 1999). Scree visual tests, on the other hand, depend on the ability of the rater and
suffer from inherent subjectivity and low inter-rater reliability. Researchers have proposed three
alternative statistical criteria that overcome these limitations (Courtney, 2013; Raiche, Riopel &
Blais, 2006).
The first is the optimal coordinate (OC) test, which determines the location of the scree by
measuring the gradients associated with eigenvalues and their preceding coordinates. Eigenvalues
are projected based on preceding eigenvalues using regression models. The number of principal
components to retain corresponds to the last observed eigenvalue that is superior or equal to the
estimated predicted eigenvalue. The second is the acceleration factor (AF) test, which puts
emphasis on the coordinate where the slope of the eigenvalue curve changes abruptly. The test is
based on the second derivative of the eigenvalue curve. The third is Horn's (1965) parallel analysis
(PA), which, unlike Kaiser's rule (based on population statistics), takes into account the proportion
of variance resulting from sampling error. The PA method generates a large number of data
matrices from random data in parallel with the real data. That is, the matrices have the same
number of cases and variables as the real data. Factors are retained in the real data as long as
eigenvalues are greater than the mean eigenvalue generated from the random data matrices.
These methods outperform Kaiser's rule of retaining factors with eigenvalues greater than one
in simulation studies (Ruscio & Roche, 2012). In particular, PA is probably the most strongly
recommended technique, but its application is not simple (Courtney, 2013). Recently, however,
these three tests have been implemented in the R package nFactors (Raiche, 2010). These tests,
together with Kaiser's rule, are compared graphically in this paper to determine the number
of factors to retain.
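As an illustration, this comparison can be reproduced with the nFactors package mentioned above. The sketch below assumes the Likert responses for one subject are held in a data frame called items (a hypothetical name); the function calls follow the package's documented usage.

```r
# Minimal sketch, assuming 'items' is a data frame of Likert responses
# (complete cases) for one subject; the object name is hypothetical.
library(nFactors)

ev <- eigen(cor(items))               # eigenvalues of the correlation matrix
ap <- parallel(subject = nrow(items), # random-data eigenvalues for
               var = ncol(items),     # Horn's parallel analysis
               rep = 1000, cent = 0.05)
ns <- nScree(x = ev$values, aparallel = ap$eigen$qevpea)
summary(ns)     # factors retained by OC, AF, PA and Kaiser's rule
plotnScree(ns)  # the graphical comparison shown in Figures 1, 5 and 9
```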
Results
Learning strategies
EFA was applied to the learning strategies items in the areas of English, biology and geography (ie
question 11, except item f). Figure 1 presents, for each subject area, a comparison of the tests used
to determine the number of factors to retain: optimal coordinates, acceleration factor, parallel
analysis, and Kaiser's rule. The number of factors retained by each test is shown in parentheses. The
different tests, including parallel analysis, quite consistently indicated the presence of three factors
in the learning strategies across the three subject areas. The results are also consistent with the
number of factors proposed by Marsh et al (2006).
Figure 1. Learning strategies: tests to determine number of factors
[Scree plots for English (n=750), biology (n=540) and geography (n=381), comparing optimal coordinates, acceleration factor, parallel analysis and Kaiser's rule.]
Table 2 reports factor loadings (>0.3) for the EFA solution, together with each item's theoretical
construct. The empirical results reflected the latent structure of three factors postulated by Marsh
et al (2006): memorisation, elaboration, and control strategies. The three factors have been
labelled accordingly in Table 2.

In all three subjects, items loaded on their corresponding theoretical constructs. Additionally, some
items loaded on two constructs. Two items loaded consistently on more than one construct. One is
item i, 'I made sure that I remembered the most important points in the revision material', which
was expected to reflect control strategies but also loaded on memorisation strategies in English
and biology. Since the item includes the word 'remembered', students may have answered thinking
more about memorisation strategies, even though the item in itself also involves a control strategy:
the students exercised control when they made sure that they remembered. From theory, we
also know that some of the elaboration and control strategies overlap, and therefore cross-loadings
on some of these items were expected. Another example is item h, which corresponds to
elaboration strategies but also loaded on the control strategies construct in biology and geography.
Also, item m in geography loaded on control strategies in addition to its corresponding
memorisation construct. In general, however, the factor structure is very consistent with Marsh et
al (2006).
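For reference, a three-factor EFA of this kind can be sketched with the psych package. The extraction and rotation methods shown (minimum residual with an oblimin rotation) are assumptions, as the paper does not state which were used, and the object and column names are hypothetical.

```r
# Minimal sketch, assuming 'items' holds the learning strategy items for one
# subject; extraction and rotation methods are assumed, not taken from the paper.
library(psych)
library(GPArotation)  # required by psych for the oblimin rotation

efa <- fa(items, nfactors = 3, rotate = "oblimin", fm = "minres")
print(efa$loadings, cutoff = 0.3)  # hide loadings below 0.3, as in Table 2

# Alpha reliability of one theoretical construct (as in Table 3), using
# hypothetical column names for the memorisation items a, e, k and m:
alpha(items[, c("q11a", "q11e", "q11k", "q11m")])
```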
Table 2. Learning strategies: EFA solution (loadings > 0.3)

Subject samples: English (n=750), biology (n=540), geography (n=381). Each item's theoretical construct is given in square brackets (M = memorisation, E = elaboration, C = control; marked with coloured circles in the original). Loadings below 0.3 are omitted.

k) I tried to memorise as much of the revision material as possible [M]. English: M 0.76; Biology: M 0.82; Geography: M 0.63
e) I tried to learn my notes by heart [M]. English: M 0.66; Biology: M 0.60; Geography: M 0.65
a) I tried to memorise all the material that I was taught [M]. English: M 0.63; Biology: M 0.59; Geography: M 0.68
m) I tried to memorise what I thought was important [M]. English: M 0.59; Biology: M 0.56; Geography: M 0.41, C 0.39
g) I figured out how the information might be useful in the real world [E]. English: E 0.71; Biology: E 0.69; Geography: E 0.55
c) I tried to relate new information to knowledge from other subjects [E]. English: E 0.59; Biology: E 0.53; Geography: E 0.51
h) I tried to understand the revision material better by relating it to what I already knew [E]. English: E 0.58; Biology: E 0.58, C 0.34; Geography: E 0.52, C 0.34
n) I studied material that went beyond what is expected for the exam [E]. English: E 0.39; Biology: no loading above 0.3; Geography: E 0.33
i) I made sure that I remembered the most important points in the revision material [C]. English: M 0.39, C 0.34; Biology: M 0.41, C 0.42; Geography: C 0.62
d) I checked if I understood what I had read [C]. English: C 0.47; Biology: C 0.53; Geography: C 0.39
j) If I did not understand something, I looked for additional information to clarify it [C]. English: C 0.67; Biology: C 0.68; Geography: C 0.36
l) I tried to figure out which ideas I had not really understood [C]. English: C 0.53; Biology: C 0.56; Geography: C 0.36 plus a cross-loading of 0.34
b) I started by figuring out exactly what I needed to learn [C]. English: no loading above 0.3; Biology: C 0.32; Geography: C 0.40
Table 3 reports alpha reliability coefficients for the theoretical constructs as well as the percentage of
the variance of the learning strategies data explained by the three constructs. Overall, the three factors
accounted for 52% of the variance in English, 55% in biology and 48% in geography. In the PISA 2000
study on reading literacy, alpha coefficients across countries ranged from 0.69 to 0.81 (Marsh et al
2006).4 The reliability estimates in our analysis are similar but on the low side.
Table 3. Learning strategies: alpha coefficients and explained variance

                        English (n=750)   Biology (n=540)   Geography (n=381)
memorisation                 0.76              0.75              0.71
elaboration                  0.68              0.69              0.56
control                      0.64              0.69              0.62
% explained variance          52%               55%               48%
The distribution of the learning strategies scales for English, biology, and geography is presented in
Figures 2, 3 and 4.
Figure 2. English: distribution of learning strategies scales
[Density plots of the memorisation, elaboration and control strategies scales (EFA).]

4 The average reliability of the scales used in PISA 2000 varied between countries. Norway, the United States and Finland had higher reliability (M Alphas = 0.81, 0.81, 0.81) while countries such as Latvia, Mexico and Brazil had lower reliabilities (M Alphas = 0.69, 0.70, 0.73).
Figure 3. Biology: distribution of learning strategies scales
[Density plots of the memorisation, elaboration and control strategies scales (EFA).]

Figure 4. Geography: distribution of learning strategies scales
[Density plots of the memorisation, elaboration and control strategies scales (EFA).]
Views on predictability
As with the learning strategies items, EFA was applied to the items reflecting students' views on
predictability (ie question 12 and item f of question 11). Tests to determine the number of factors
were applied to the Likert-type items (see Figure 5). Parallel analysis, the most reliable test,
indicated three factors in English and geography and four in biology. For consistency, it was decided
to retain three factors in every subject.
Figure 5. Views on predictability: tests to determine number of factors
[Scree plots for English (n=749), biology (n=536) and geography (n=387), comparing optimal coordinates, acceleration factor, parallel analysis and Kaiser's rule.]
Table 4 reports the EFA solution. The loading patterns were quite consistent across subjects with
practically no overlap between items and constructs. Only item h in geography loaded on two
constructs.
The first latent construct reflected views of students that they will be able to use what they have
learned for the future (item h), that they need to adapt what they know to do well in the exam (item d),
that the exam tests the right kind of learning (item c), that a broad understanding of the subject is
important to do well in the exam (item f), and that remembering is not more important than
understanding (item b). The second latent construct distinguished students who said they were able to
English (n=749) Biology (n=536)
15
predict the exam questions well (item i), felt they knew what the examiners wanted this year (item a),
and were not surprised by the exam questions this year (item e). And the third construct indicated
students who chose not to study some topics as they thought they would not come up (item f from
question 11) and students who left a lot of topics out of their revision and still think they will do well
(item g). In general, it seemed the first factor reflected valuable learning views, the second factor
predictability views, and the third factor narrowing the curriculum views. Accordingly, these factors
have been labelled 'valuable', 'predictable', and 'narrow' in Table 4. The valuable factor reflects a
positive view of the examinations and the narrow factor a negative impact (where the scores are
high in each case). Note, however, that the predictable factor does not necessarily reflect
problematic or negative aspects of predictability; it can reflect desired aspects of predictability as
well.
Table 4. Views on predictability: EFA solution (loadings > 0.3)

Subject samples: English (n=749), biology (n=536), geography (n=387). Factors: V = valuable, P = predictable, N = narrow. Loadings below 0.3 are omitted.

h) I think I will be able to use what I learned for this exam in the future. English: V 0.56; Biology: V 0.50; Geography: V 0.45 plus a cross-loading of 0.34
d) To do well in this exam I need to think and adapt what I know. English: V 0.52; Biology: V 0.58; Geography: V 0.56
c) The exam tests the right kind of learning. English: V 0.46; Biology: V 0.54; Geography: V 0.45
f) To do well in this exam, I need a broad understanding of the subject, across many topics. Loadings of V 0.44 and V 0.39 (the remaining subject showed no loading above 0.3)
b) To do well in this exam, remembering is more important than understanding. English: V -0.47; Biology: V -0.39; Geography: V -0.41
i) I predicted the exam questions well. English: P 0.66; Biology: P 0.58; Geography: P 0.66
a) I felt I knew what the examiners wanted this year. English: P 0.50; Biology: P 0.60; Geography: P 0.48
e) I was surprised by the questions on the exam this year. English: P -0.40; Biology: P -0.54; Geography: P -0.50
g) I left a lot of topics out of my revision and still think I will do well. English: N 0.99; Biology: N 0.75; Geography: N 0.99
q11f) I chose not to study some topics as I thought they would not come up. English: N 0.43; Biology: N 0.57; Geography: N 0.43
j) I can do well in this exam even if I do not fully understand the topics. A single loading of -0.31
Table 5 reports alpha coefficients and explained variances for the proposed constructs. The three
constructs explained 48% of the total variance in the item data for the three subjects. Alpha coefficients
ranged between 0.53 and 0.62.
Table 5. Views on predictability: alpha coefficients and explained variance

                        English (n=749)   Biology (n=536)   Geography (n=387)
valuable                     0.62              0.53              0.55
predictable                  0.53              0.61              0.58
narrow                       0.61              0.61              0.59
% explained variance          48%               48%               48%
Figures 6, 7 and 8 show the distribution of the views on predictability scales for English, biology,
and geography. The valuable learning scale and the predictability scale are smoothly distributed.
The narrowing the curriculum scale is less continuous and appears to be multimodal, reflecting the
fact that only two items loaded on the scale.
Figure 6. English: distribution of views on predictability scales
[Density plots of the valuable learning, predictability and narrowing the curriculum scales (EFA).]
Figure 7. Biology: distribution of views on predictability scales
[Density plots of the valuable learning, predictability and narrowing the curriculum scales (EFA).]

Figure 8. Geography: distribution of views on predictability scales
[Density plots of the valuable learning, predictability and narrowing the curriculum scales (EFA).]
Learning support
Students were surveyed on the learning support they received for the Leaving Certificate
examination (see question 13, Appendix A). Question 13 included 14 items for each subject area on
the different kinds of support students received. Tests to determine the number of factors to
extract were carried out on the item data. The results are presented in Figure 9.
Figure 9. Learning support: tests to determine the number of factors
[Scree plots for English (n=746), biology (n=541) and geography (n=383), comparing optimal coordinates, acceleration factor, parallel analysis and Kaiser's rule.]
Guided by the parallel analysis results, four latent constructs were identified for English and biology,
and three for geography. In unreported analyses the four-factor solution produced uninterpretable
results, while the three-factor solution seemed to group students according to whether they
received learning support from the school (F1), from external sources such as the internet, parents
and friends (F2), and from grinds schools (F3). Since it is not an objective of this paper to analyse
learning support in depth, it was decided to consider only two dimensions of learning support:
school support and external support. The Rasch model was preferred over EFA for scale
development because it is better suited to the binary data of the learning support items (ie received
support or not). Items a, b, d, e and g were considered indicators of school support and items c, f, h,
i, j, k, l, m and n indicators of external support. A single learning support scale using all the item
data was also created.
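As an illustration of this step, the sketch below fits the Rasch model to the binary support items with the eRm package. The choice of eRm is an assumption (the paper does not name its software), and the matrix and column names are hypothetical.

```r
# Minimal sketch, assuming 'support' is a 0/1 matrix of the question 13 items
# for one subject, with columns named a to n; eRm is an assumed package choice.
library(eRm)

school   <- support[, c("a", "b", "d", "e", "g")]  # school support items
external <- support[, c("c", "f", "h", "i", "j", "k", "l", "m", "n")]

rasch_school <- RM(as.matrix(school))  # Rasch model for the binary school items
rasch_school$betapar                   # item easiness parameters (Table 6 reports
                                       # weights on the difficulty scale)
pp <- person.parameter(rasch_school)   # person estimates form the scale scores
itemfit(pp)                            # outfit/infit MSQ statistics, as in Table 7
```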
Table 6 reports item weights resulting from Rasch analysis for the school learning support scale. In all
subjects item e (‘I was given past papers’) has the lowest weight. That is, most students reported that
they were given past papers. As the Rasch analyses were conducted separately for each subject, the
values of the estimates are not comparable across subjects in Tables 6 to 9.
Table 6. School learning support: item weights and standard errors

                                            English (n=746)    Biology (n=542)    Geography (n=384)
                                            Estimate (SE)      Estimate (SE)      Estimate (SE)
b) Marking criteria were explained to me     0.47 (0.10)        0.02 (0.12)        0.08 (0.16)
d) Model answers were given to me            0.46 (0.10)        2.72 (0.14)        0.11 (0.16)
e) I was given past papers                  -1.12 (0.15)       -2.58 (0.24)       -0.86 (0.20)
g) The exam format was explained to me      -0.78 (0.13)       -0.83 (0.14)       -0.45 (0.18)
Table 7 records outfit and infit statistics for the constituent items of the school learning support
scale. Fit statistics were within acceptable ranges [0.8–1.2], except for item g and item b in biology,
with values lower than 0.7. There was overfit for these items; that is, the pattern of responses did
not vary as much as expected under the Rasch model, with most students ticking that they had
received these kinds of support. However, the inclusion of these items in the scale was not
degrading for construct development.
Table 8. External learning support: item weights and standard errors

                                               English (n=746)    Biology (n=541)    Geography (n=384)
                                               Estimate (SE)      Estimate (SE)      Estimate (SE)
f) I have textbooks to help with my study      -2.60 (0.10)       -3.82 (0.19)       -3.82 (0.21)
h) I used revision guides                      -1.01 (0.08)       -1.33 (0.10)       -1.48 (0.12)
i) I looked at past papers on the internet     -1.59 (0.08)       -1.86 (0.11)       -1.65 (0.12)
j) My parents helped me with my studies         0.61 (0.09)        1.45 (0.13)        1.08 (0.15)
k) Friends helped me to prepare for the exams  -0.32 (0.08)       -0.62 (0.10)       -0.57 (0.12)
l) I used revision apps                         1.46 (0.11)        1.50 (0.13)        1.53 (0.17)
m) I took one-to-one or small-group grinds      1.36 (0.11)        1.68 (0.13)        1.79 (0.18)
n) I attended a grinds school                   2.10 (0.14)        2.39 (0.17)        2.79 (0.25)
The results in Table 8 show, consistently across subjects, that attending a grinds school (item n) was
the least frequently reported form of external support and having textbooks to help with study
(item f) the most frequently reported.
Table 9. External learning support: item fit statistics (outfit and infit MSQ)

                                               English (n=746)    Biology (n=541)    Geography (n=384)
                                               Outfit  Infit      Outfit  Infit      Outfit  Infit
c) I used material from grinds websites        0.93    0.97       0.70    0.81       0.72    0.81
f) I have textbooks to help with my study      1.13    0.97       1.40    0.72       0.63    0.77
h) I used revision guides                      0.84    0.89       0.77    0.86       0.87    0.90
i) I looked at past papers on the internet     0.90    0.93       1.01    0.93       0.76    0.87
j) My parents helped me with my studies        0.82    0.91       0.78    0.97       0.89    0.99
k) Friends helped me to prepare for the exams  0.93    0.96       1.08    1.02       0.89    0.94
l) I used revision apps                        0.67    0.82       0.54    0.77       0.84    0.83
m) I took one-to-one or small-group grinds     0.81    0.86       0.63    0.84       0.68    0.83
n) I attended a grinds school                  0.71    0.88       0.86    0.84       0.65    0.85
Fit statistics for English were within acceptable ranges (see Table 9). Some items introduced misfit in
biology and geography. For example, fit statistics lower than 0.7 for items l and m in biology and item n
in geography indicated overfit to the Rasch model.
Tables 10 and 11 report item weights and fit statistics for the combined learning support scale. Item
weight estimates in Table 10 were quite consistent across the three subject areas. The last three
items of question 13 exerted the greatest weight on the learning support scale: attending a grinds
school (n), taking one-to-one or small-group grinds (m), and using revision apps (l). These forms of
support were the least commonly reported. Conversely, having been given past papers (e), having
had the exam format explained (g), and having textbooks to help with learning (f) had the lowest
weights.
Table 10. Learning support scale: item weights and standard errors

                                               English (n=746)    Biology (n=541)    Geography (n=383)
                                               Estimate (SE)      Estimate (SE)      Estimate (SE)
b) Marking criteria were explained to me       -1.46 (0.10)       -1.29 (0.11)       -1.95 (0.16)
c) I used material from grinds websites         1.00 (0.08)        1.25 (0.10)        1.43 (0.13)
d) Model answers were given to me              -1.47 (0.10)        1.02 (0.10)       -1.95 (0.16)
e) I was given past papers                     -2.81 (0.15)       -3.26 (0.20)       -2.82 (0.22)
f) I have textbooks to help with my study      -1.50 (0.10)       -2.88 (0.17)       -2.55 (0.19)
g) The exam format was explained to me         -2.52 (0.14)       -1.99 (0.13)       -2.40 (0.18)
h) I used revision guides                       0.04 (0.08)       -0.58 (0.10)       -0.31 (0.12)
i) I looked at past papers on the internet     -0.52 (0.08)       -1.09 (0.10)       -0.48 (0.12)
j) My parents helped me with my studies         1.63 (0.09)        2.06 (0.13)        2.20 (0.15)
k) Friends helped me to prepare for the exams   0.73 (0.08)        0.08 (0.09)        0.56 (0.12)
l) I used revision apps                         2.46 (0.12)        2.11 (0.13)        2.61 (0.17)
m) I took one-to-one or small-group grinds      2.37 (0.11)        2.29 (0.13)        2.86 (0.18)
n) I attended a grinds school                   3.09 (0.15)        2.98 (0.17)        3.85 (0.26)
In general, fit statistics for the combined learning support scale were within acceptable ranges (see
Table 11).
Table 11. Learning support scale: item fit statistics (outfit and infit MSQ)

                                                            English (n=746)   Biology (n=541)   Geography (n=383)
                                                            Outfit  Infit     Outfit  Infit     Outfit  Infit
a) Which topics were likely to come up was explained to me  0.97    0.92      0.89    0.96      0.89    0.97
b) Marking criteria were explained to me                    0.81    0.88      0.80    0.89      1.53    0.86
c) I used material from grinds websites                     1.08    1.02      0.96    0.91      1.02    0.89
d) Model answers were given to me                           1.13    0.93      0.94    0.97      1.05    0.93
e) I was given past papers                                  0.92    0.82      1.05    0.78      0.63    0.79
f) I have textbooks to help with my study                   1.01    0.98      0.74    0.88      0.74    0.85
g) The exam format was explained to me                      0.70    0.80      0.92    0.88      0.59    0.75
h) I used revision guides                                   0.89    0.93      0.82    0.90      0.88    0.92
i) I looked at past papers on the internet                  0.89    0.94      0.92    0.95      0.91    0.98
j) My parents helped me with my studies                     0.92    0.89      0.99    0.98      0.83    0.93
k) Friends helped me to prepare for the exams               1.07    0.98      1.00    0.98      0.88    0.93
l) I used revision apps                                     0.76    0.85      0.91    0.88      0.86    0.86
m) I took one-to-one or small-group grinds                  0.81    0.86      0.84    0.87      0.94    0.84
n) I attended a grinds school                               1.17    0.88      0.93    0.88      0.51    0.81
Figure 10 presents the distribution of the learning support scale for the three subject areas.
Figure 10. Learning support scale: distribution
[Density plots of the learning support scale (IRT) for English, biology and geography.]
Family SES
The dichotomous data on home possessions and the ordinal data on parental education and
number of books were summarised into a single family SES scale using the partial credit model. The
model was applied to the full sample of students, not the subject-specific samples. Item-weight
estimates of the final SES model are presented in Table 12.
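A corresponding sketch for the partial credit model, again assuming eRm and a hypothetical response matrix ses whose items are coded 0, 1, 2, ...:

```r
# Minimal sketch, assuming 'ses' mixes 0/1 home-possession items with ordinal
# parental education and books items coded 0, 1, 2, ...; eRm is assumed.
library(eRm)

pcm <- PCM(as.matrix(ses))           # partial credit model (Rasch for 0/1 items)
thresholds(pcm)                      # category thresholds, as reported in Table 12
ses_scale <- person.parameter(pcm)   # person estimates form the SES scale
```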
Table 12. SES partial credit model: item weights and standard errors

SES items (n=919)                                                                    Estimate (SE)
Mother's education: b) Primary education                                             -2.63 (0.35)
Mother's education: c) Lower secondary education (Junior/Inter Cert or equivalent)   -2.57 (0.33)
Mother's education: d) Upper secondary education (Leaving Cert or equivalent)        -0.14 (0.34)
Mother's education: e) Post-secondary non-tertiary (eg PLC)                           0.01 (0.32)
Mother's education: f) Non-degree (certificate/diploma)                               1.02 (0.32)
Mother's education: g) Bachelor's degree                                              3.28 (0.34)
Mother's education: h) Postgraduate degree (Masters or PhD)                           5.00 (0.38)
Father's education: a) Did not go to school                                          -2.57 (0.43)
Father's education: b) Primary education                                             -3.26 (0.40)
Father's education: c) Lower secondary education (Junior/Inter Cert or equivalent)   -2.25 (0.37)
Father's education: d) Upper secondary education (Leaving Cert or equivalent)         0.52 (0.38)
Father's education: e) Post-secondary non-tertiary (eg PLC)                           0.25 (0.33)
Father's education: f) Non-degree (certificate/diploma)                               1.50 (0.33)
Father's education: g) Bachelor's degree                                              3.20 (0.33)
Father's education: h) Postgraduate degree (Masters or PhD)                           4.46 (0.34)
Home possessions: a) A TV                                                            -3.67 (0.31)
Home possessions: b) A car                                                           -2.09 (0.15)
Home possessions: c) A dishwasher                                                    -0.41 (0.10)
Home possessions: d) A room of your own                                              -1.21 (0.11)
Home possessions: e) A quiet place to study                                          -0.30 (0.09)
Home possessions: f) A computer or laptop you can use for school work                -1.48 (0.12)
Home possessions: g) Internet access                                                 -2.28 (0.17)
Home possessions: h) An iPad or other tablet of your own                              2.62 (0.10)
Home possessions: i) A smartphone (for example, iPhone, Blackberry, or Android) of your own   -0.03 (0.09)
Home possessions: j) A mobile phone of your own                                      -1.30 (0.12)
Home possessions: k) A PlayStation, X-box, or Wii                                    -0.35 (0.09)
Home possessions: l) Classic literature (for example, W.B. Yeats, James Joyce, or Maria Edgeworth)   1.28 (0.09)
Home possessions: m) A dictionary                                                    -1.79 (0.14)
Number of books: 0–10 books                                                           0.04 (0.16)
Number of books: 26–100 books                                                        -0.23 (0.17)
Number of books: 101–200 books                                                        1.04 (0.20)
Number of books: 201–500 books                                                        2.39 (0.24)
Number of books: More than 500 books                                                  3.70 (0.29)
The threshold parameters more or less consistently indicated greater weights for higher categories of
parental education and number of books. Item weights for the home possessions items indicated that
having an iPad exerts the greatest weight on SES and a TV the lowest weight. Item fit statistics are
presented in Table 13.
Table 13. SES partial credit model: item fit statistics

SES items (n=919)                                                          Outfit MSQ   Infit MSQ
Mother's education                                                            0.78         0.78
Father's education                                                            1.07         0.89
Home possessions: a) A TV                                                     0.90         0.89
Home possessions: b) A car                                                    0.83         0.92
Home possessions: c) A dishwasher                                             0.93         0.96
Home possessions: d) A room of your own                                       1.02         0.97
Home possessions: e) A quiet place to study                                   0.93         0.93
Home possessions: f) A computer or laptop you can use for school work         0.79         0.87
Home possessions: g) Internet access                                          0.60         0.85
Home possessions: h) An iPad or other tablet of your own                      0.96         0.98
Home possessions: i) A smartphone (for example, iPhone, Blackberry, or Android) of your own   1.07   1.02
Home possessions: j) A mobile phone of your own                               1.11         1.02
Home possessions: k) A PlayStation, X-box, or Wii                             1.01         1.00
Home possessions: l) Classic literature (for example, W.B. Yeats, James Joyce, or Maria Edgeworth)   0.85   0.88
Home possessions: m) A dictionary                                             0.64         0.85
Number of books                                                               0.89         0.87
Item fit statistics were within acceptable ranges, except for items g, ‘internet access’, and m, ‘a
dictionary’, with outfit values of 0.60 and 0.64, respectively.
Figure 11 presents the distribution of the SES scale.
Figure 11. SES distribution
[Density plot of the SES scale (IRT scores).]
Analysis of research questions
Research question 3 – how predictable are examination questions in the
Leaving Certificate in Ireland?
This question is addressed with information reported by students on their experiences with the exam
and views on predictability (see question 12, Appendix A). Students were asked to report on a Likert
scale (ie strongly disagree, disagree, agree, strongly agree) their agreement with different statements
regarding the exam. Table 14 presents a summary of responses. Categories ‘agree’ and ‘strongly agree’
have been combined into a single category, ie ‘agree’. The percentage of the combined ‘agree’ category
is reported together with the total number of valid responses.
Table 14. Views on the exam by subject area: percentage agreeing (%) and valid responses (n)

a) I felt I knew what the examiners wanted this year. English: 63% (n=760); Biology: 47% (n=544); Geography: 58% (n=395)
b) To do well in this exam, remembering is more important than understanding. English: 47% (n=760); Biology: 55% (n=546); Geography: 62% (n=395)
c) The exam tests the right kind of learning. English: 34% (n=760); Biology: 45% (n=546); Geography: 42% (n=396)
d) To do well in this exam I need to think and adapt what I know. English: 82% (n=759); Biology: 72% (n=546); Geography: 80% (n=394)
e) I was surprised by the questions on the exam this year. English: 32% (n=761); Biology: 73% (n=546); Geography: 49% (n=396)
f) To do well in this exam, I need a broad understanding of the subject, across many topics. English: 69% (n=760); Biology: 88% (n=548); Geography: 84% (n=394)
g) I left a lot of topics out of my revision and still think I will do well. English: 38% (n=762); Biology: 29% (n=549); Geography: 44% (n=395)
h) I think I will be able to use what I learned for this exam in the future. English: 36% (n=761); Biology: 72% (n=547); Geography: 56% (n=395)
i) I predicted the exam questions well. English: 69% (n=760); Biology: 31% (n=549); Geography: 49% (n=395)
j) I can do well in this exam even if I do not fully understand the topics. English: 37% (n=760); Biology: 32% (n=548); Geography: 42% (n=394)
A considerable number of students reported that they predicted the exam questions well. The
percentages varied by subject: 69% in English, 49% in geography and 31% in biology. Interestingly,
72% of students reported that they believed they would be able to use what they had learned for
the exam in the future in biology, whilst only 36% believed the same about English. In other words,
judging by students' beliefs, there seem to be positive aspects to the biology exam compared with
the other subjects. This is reinforced by the finding that only 32% of students believed it possible to
do well in the biology exam without fully understanding the topics, while 88% agreed with the
statement 'To do well in this exam, I need a broad understanding of the subject, across many
topics'. This is, again, the highest reported agreement among the three subjects, indicating that the
biology exam is less predictable, examines a broader kind of understanding, and is valued because
the knowledge is seen as useful for the future.
Research question 4 – which aspects of this predictability are helpful and which
engender unwanted approaches to learning?
Different analyses are considered to address this question. One is factor analysis of the views on
predictability items presented before (see Table 4). Another is the association between the examination
scores and agreement with these items. The learning strategies items also contribute to addressing this
question. The association between the memorisation strategies and average examination scores is
presented, as well as the correlation between the learning strategies scales and the examination scores.
The factor solution of the views on predictability items produced three factors that we labelled
‘valuable’, ‘predictable’ and ‘narrow’ (see Table 4). The first factor reflected helpful aspects of
predictability, or that preparing for the exam is a valuable learning process. Grouped in this factor are
students who reported they will be able to use what they have learned for the future, that the exam
tests the right kind of learning, and that a broad understanding of the subject is important to do well in
the exam. The second factor reflected views that the exam is predictable, for example, that it was
possible to predict exam questions well and that students were not surprised by exam questions. Unlike
the first factor that reflects valuable learning, it is not clear whether this factor reflects helpful or
unwanted aspects of predictability, as some level of predictability is expected and desired but
predictability due to memorisation strategies, for example, can be problematic for learning. The third
factor reflected views about narrowing the curriculum for exam preparation. For example, it shows the
extent to which students chose not to study some topics because they thought they would not come up
in the exam, and the number of students who left a lot of topics out of their revision and still think they
will do well.
Table 15 reports average exam scores for the views on predictability items for two combined
categories, ‘agree’ (ie ‘agree’ and ‘strongly agree’) and ‘disagree’ (ie ‘strongly disagree’ and ‘disagree’).
Scores are derived from the data on student grades using the Central Applications Office (CAO) scheme.
The scale of score points ranges from 0 to 100 for the higher level examination.
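For illustration, the conversion can be sketched as a simple lookup; the mapping below is an assumption based on the pre-2017 CAO points scale for higher level papers, which the paper does not spell out.

```r
# Sketch of the grade-to-points conversion; the values assume the pre-2017
# CAO points scale for higher level papers, which the paper does not list.
cao_points <- c(A1 = 100, A2 = 90, B1 = 85, B2 = 80, B3 = 75,
                C1 = 70, C2 = 65, C3 = 60, D1 = 55, D2 = 50, D3 = 45)

grades <- c("B2", "A1", "C3")   # hypothetical grade data
scores <- cao_points[grades]    # grades below D3 would carry no points
```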
Table 15. Views on predictability and exam scores: average scores (M) and valid responses (n)

An asterisk (*) indicates a statistically significant difference in mean scores between the combined 'disagree' and 'agree' groups at the 95% confidence level, using the Bonferroni correction for multiple comparisons.

Predictability scale
i) I predicted the exam questions well. English: disagree M 69.07 (n=182), agree M 70.44 (n=427); Biology: disagree 69.71 (297), agree 71.43 (143); Geography: disagree 71.85 (146), agree 73.85 (161)
a) I felt I knew what the examiners wanted this year. English: disagree 70.30 (218), agree 69.82 (393); Biology: disagree 67.21 (229), agree 73.94* (208); Geography: disagree 70.20 (122), agree 74.57 (186)
e) I was surprised by the questions on the exam this year. English: disagree 70.89 (419), agree 68.02 (192); Biology: disagree 69.96 (113), agree 70.31 (325); Geography: disagree 73.86 (158), agree 71.77 (150)
j) I can do well in this exam even if I do not fully understand the topics. English: disagree 70.76 (376), agree 68.82 (234); Biology: disagree 69.76 (291), agree 71.28 (149); Geography: disagree 71.98 (172), agree 74.07 (134)

Narrowing of the curriculum scale
g) I left a lot of topics out of my revision and still think I will do well. English: disagree 71.52 (364), agree 67.73* (247); Biology: disagree 72.56 (309), agree 64.89* (131); Geography: disagree 73.91 (161), agree 71.78 (146)
11f) I chose not to study some topics as I thought they would not come up. English: disagree 71.72 (329), agree 67.96* (285); Biology: disagree 73.45 (297), agree 64.52* (147); Geography: disagree 72.99 (162), agree 72.66 (145)

Valuable learning scale
h) I think I will be able to use what I learned for this exam in the future. English: disagree 68.82 (391), agree 72.12 (219); Biology: disagree 65.49 (122), agree 72.06 (316); Geography: disagree 73.39 (127), agree 72.56 (180)
d) To do well in this exam I need to think and adapt what I know. English: disagree 68.56 (108), agree 70.32 (501); Biology: disagree 70.33 (120), agree 70.39 (318); Geography: disagree 73.42 (57), agree 72.67 (249)
c) The exam tests the right kind of learning. English: disagree 69.64 (399), agree 70.66 (212); Biology: disagree 68.03 (238), agree 72.79 (201); Geography: disagree 73.98 (171), agree 71.42 (137)
f) To do well in this exam, I need a broad understanding of the subject, across many topics. English: disagree 69.29 (190), agree 70.31 (420); Biology: disagree 66.46 (48), agree 70.75 (391); Geography: disagree 72.45 (49), agree 72.96 (257)
b) To do well in this exam, remembering is more important than understanding. English: disagree 71.13 (319), agree 68.78 (291); Biology: disagree 71.53 (190), agree 69.68 (248); Geography: disagree 71.27 (110), agree 73.76 (197)
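The significance tests behind the asterisks can be sketched as follows. The use of a two-sample t-test is an assumption (the paper states only that group means were compared with a Bonferroni correction), and the variable names are hypothetical.

```r
# Minimal sketch, assuming 'score' holds CAO points and 'agree' a two-level
# factor (combined agree/disagree) for one item in one subject; the t-test
# itself is an assumed choice, the paper names only the Bonferroni correction.
n_tests <- 10                           # family of tests: ten items per subject
tt <- t.test(score ~ agree)             # Welch two-sample comparison of means
p_bonf <- min(tt$p.value * n_tests, 1)  # Bonferroni-adjusted p-value
p_bonf < 0.05                           # significant at the 95% level?
```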
In English and biology, significant differences were found between students who agreed and disagreed
with the item 'I left a lot of topics out of my revision and still think I will do well', relating to the
narrowing of the curriculum scale. Differences are not statistically significant for geography. This item is
important in that it tells us whether students believe it is possible to narrow their revision before the
exam. The higher performing students in English and biology tended not to believe that this is
possible. Similarly, students who agreed with the statement 'I chose not to study some topics as I
thought they would not come up' performed significantly worse in the English and biology exams.
Narrowing the curriculum thus seems to engender unwanted approaches to learning in English and
biology. It may be that the extent of question choice in geography means that narrowing the
curriculum operated differently in that subject.
In biology and geography, students scored higher if they agreed with the statement that they felt they knew what the examiners wanted this year; the difference is statistically significant for biology only, while almost no difference was found among students who sat the English exam.
In English and biology, students who agreed with the statement ‘To do well in this exam, remembering is more important than understanding’ scored lower than those who disagreed, whereas in geography they scored higher; in no case are the differences statistically significant. Also, in general, students who agreed with the statement ‘To do well in this exam, I need a broad understanding of the subject, across many topics’ performed better, but again the differences are not statistically significant.
Table 16 presents average exam scores for the memorisation strategies items for two combined
categories, ‘now and then’ (ie ‘almost never’ and ‘now and then’) and ‘often’ (ie ‘often’ and ‘always’).
Table 16. Memorisation strategies and exam scores: average scores (M) and sample size (n)

                                                          English               Biology               Geography
                                                          Now and then  Often   Now and then  Often   Now and then  Often
a) I tried to memorise all the material that I was
   taught                                            M    71.11        68.80    62.08        72.92*   68.55        74.72*
                                                     n      316          296      106          339       93          214
e) I tried to learn my notes by heart                M    70.56        69.33    67.84        71.58    72.71        73.04
                                                     n      330          282      148          297      107          199
k) I tried to memorise as much of the revision
   material as possible                              M    71.01        69.51    65.25        71.63    69.05        73.82
                                                     n      237          376       79          365       63          245
m) I tried to memorise what I thought was important  M    70.10        70.07    64.22        70.98    70.43        73.04
                                                     n       98          514       32          412       23          285
Note: * indicates a statistically significant difference in mean scores between the ‘often’ and ‘now and then’ combined groups at the 95% confidence level, using the Bonferroni correction for multiple comparisons.
Students who said they often tried to memorise all the material they were taught scored significantly higher in biology and geography than other students. For the remaining items, memorisation strategies also appear to be associated with better performance in biology and geography, but the differences are not statistically significant. In English, students applying memorisation strategies tended to perform slightly worse, but again the differences are not statistically significant.
Table 17 reports correlation coefficients for the learning strategies scales and the exam scores in the
three subject areas.
Table 17. Correlations between learning strategy scores and exam scores
Memorisation Elaboration Control
English -.05 -.05 .11*
Biology .15* .13* .36*
Geography .18* .03 .24*
* Correlation is significant at the 0.01 level (2-tailed)
The results are consistent with Table 16. The memorisation factor was positively related to the exam scores for biology and geography but not for English. Additionally, the results in Table 17 show that control strategies were even more important than memorisation strategies for obtaining higher scores in the exam, especially in biology. Elaboration strategies, in contrast, are positively related to exam scores only in biology.
From the literature we expected to find a lower correlation between memorisation strategies and language scores, and a higher correlation between control strategies and language scores (Donker et al, 2014). The differences found for biology and geography must also be understood in terms of the tasks set in the Leaving Certificate, which ask students to recall and explain a number of issues from the curriculum. Theory also led us to expect a stronger correlation with achievement for control strategies than for memorisation strategies. Our study confirms this, but it is also worth noting that for biology the correlation between achievement and control strategy use is above r = 0.3, which is considered strong in strategy research. In PISA, the correlation between control strategy use and achievement is often found to be around r = 0.2 (Marsh et al, 2006).
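A sketch of how the coefficients in Table 17 can be reproduced in R, using Pearson correlations with two-tailed tests; the data frame `df` and its column names are hypothetical placeholders for the factor scores and exam scores.

```r
# Correlate each learning-strategy factor score with the exam score;
# `df` and its column names are hypothetical.
strategies <- c("memorisation", "elaboration", "control")

results <- sapply(strategies, function(s) {
  ct <- cor.test(df[[s]], df$score)   # Pearson r, 2-tailed by default
  c(r = unname(ct$estimate), sig_01 = ct$p.value < 0.01)
})
round(t(results), 2)   # one row per strategy, as in Table 17
```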
Research question 7 – what kinds of examination preparation strategies do
students use?
This question is addressed with information reported by students on the learning strategies they used
and the kinds of support they received for preparing for the exam.
Learning strategies
We showed in Table 2 that learning strategies can be grouped quite consistently across subjects in
three categories: memorisation, elaboration, and control strategies. Table 18 presents student
responses on their learning strategies for the exam. Students reported the frequency with which they
applied different learning strategies on a Likert scale (ie (1) almost never, (2) now and then, (3) often
and (4) always). Categories ‘often’ and ‘always’ have been combined into a single category, ‘often’. The
percentage of the combined ‘often’ category and the total number of valid responses are reported.
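The collapsing of response categories can be sketched in R as follows; the data frame and item column are hypothetical, with responses assumed to be coded 1 (almost never) to 4 (always).

```r
# Percentage answering 'often' or 'always' and valid n for one item;
# `df$strat_k` is a hypothetical column coded 1-4.
often <- df$strat_k >= 3                    # combine 'often' and 'always'
pct   <- round(100 * mean(often, na.rm = TRUE))
n     <- sum(!is.na(df$strat_k))            # total valid responses
c(percent_often = pct, n = n)
```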
Learning strategies varied by subject. For example, students tended to use memorisation strategies more often in biology and geography than in English. More than 80% of students tried to memorise as much as possible of the revision material in biology and geography, while about 63% did so for the English exam. Similarly, more than 60% of students tried to learn their notes by heart in biology and geography, while 48% did so for the English exam.
Interestingly, even though students reported that it is not possible to predict what will come up in the biology exam, and the majority agree it assesses the right kind of learning, 77% of students agreed with the statement ‘I tried to memorise all the material that I was taught’. This percentage is 70% for geography and 49% for English. It can be argued that biology is a subject in which students need to memorise a lot of material, and tasks are designed to assess factual knowledge. The analysis of the biology exams in Ireland also revealed that students are asked to demonstrate factual knowledge, such as naming certain structures of a cell. In this respect, it is reasonable to use memorisation strategies.
Table 18. Learning strategies: percentages of ‘often’ (%) and total valid responses (n)

                                                                         English     Biology     Geography
                                                                         %     n     %     n     %     n
Memorisation strategy
k) I tried to memorise as much of the revision material as possible     63%   762   83%   553   81%   396
e) I tried to learn my notes by heart                                   48%   763   68%   553   65%   393
a) I tried to memorise all the material that I was taught               49%   763   77%   554   70%   396
m) I tried to memorise what I thought was important                     85%   762   94%   553   92%   397
Elaboration strategy
g) I figured out how the information might be useful in the real world  21%   763   54%   553   41%   394
c) I tried to relate new information to knowledge from other subjects   30%   762   53%   551   56%   392
h) I tried to understand the revision material better by relating it
   to what I already knew                                               56%   764   70%   551   69%   396
n) I studied material that went beyond what is expected for the exam    18%   762   26%   552   17%   396
Control strategy
i) I made sure that I remembered the most important points in the
   revision material                                                    91%   764   92%   554   91%   395
d) I checked if I understood what I had read                            80%   763   87%   552   84%   394
j) If I did not understand something, I looked for additional
   information to clarify it                                            62%   764   75%   553   67%   396
l) I tried to figure out which ideas I had not really understood        51%   763   72%   551   59%   396
b) I started by figuring out exactly what I needed to learn             79%   764   80%   549   83%   395
Similarly, students seemed to use elaboration strategies more often for the biology and geography exams than for the English exam. For example, about 70% of students tried to relate the revision material to what they already knew for the biology and geography exams, while 56% did so for the English exam. Also, in biology and geography more than 50% of students tried to relate new information to knowledge from other subjects, while 30% did so for English. Differences between subjects in the use of control strategies are less pronounced.
Learning support
Students received different kinds of support for preparing for the exam. Table 19 reports the
percentage of students who received support, by kind of support activity and the total number of valid
responses.
Table 19. Support for learning: percentages with support (%) and total valid responses (n)

                                                              English     Biology     Geography
                                                              %     n     %     n     %     n
a) Which topics were likely to come up was explained to me   75%   748   67%   544   75%   384
b) Marking criteria were explained to me                     81%   747   77%   542   87%   386
c) I used material from grinds websites                      34%   750   27%   543   27%   387
d) Model answers were given to me                            82%   747   31%   543   87%   386
e) I was given past papers                                   94%   748   95%   543   94%   385
f) I have textbooks to help with my study                    82%   748   93%   542   92%   385
g) The exam format was explained to me                       92%   747   86%   542   91%   385
h) I used revision guides                                    54%   747   64%   541   62%   386
i) I looked at past papers on the internet                   66%   748   74%   542   65%   385
j) My parents helped me with my studies                      23%   751   16%   545   17%   386
k) Friends helped me to prepare for the exams                39%   749   50%   543   44%   387
l) I used revision apps                                      13%   752   15%   544   12%   387
m) I took one-to-one or small-group grinds                   14%   752   13%   545   11%   389
n) I attended a grinds school                                 8%   752    8%   545    5%   388
As can be seen from the table and was also discussed earlier, students were less likely to attend grinds
schools, take one-to-one or small-group grinds, and use revision apps compared with other kinds of
support. In contrast, the large majority of students were given past papers and report that the exam
format was explained to them. In general, there are no substantial differences between subjects.
However, important differences are found for item d, where only 31% of students report that model
answers were given to them for biology, while more than 80% do so for English and geography.
When it comes to support from family, only 23% report this in English, 16% in biology and 17% in geography. Support from friends is more common. Here again we find subject differences: half of the biology students report having had help from friends, with lower percentages in the other two subjects.
Regression analysis
Regression analyses were conducted for the examination scores and the predictability scales. Regression results provide evidence of associations, but they cannot be interpreted in terms of causation.
Examination scores model
Regressions of examination scores on family SES, gender, the learning strategies scales, the learning
support scales and the predictability scales are estimated stepwise for English (see Table 20), biology
(see Table 21) and geography (see Table 22).
The results indicate a positive association between the examination scores and family SES. The control
strategies scale is positively related to the exam scores in all three subjects even after controlling for
family SES. In biology and geography the memorisation strategy is also positively related to the exam
scores irrespective of family SES.
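The stepwise structure of Tables 20–22 can be sketched in R with nested lm() models, each step adding a block of predictors. All variable names below are hypothetical placeholders for the survey variables.

```r
# Nested OLS models mirroring the four columns of Tables 20-22;
# `df` and all variable names are hypothetical.
m1 <- lm(score ~ ses + female, data = df)                        # background
m2 <- update(m1, . ~ . + memorisation + elaboration + control)   # strategies
m3 <- update(m2, . ~ . + school_support + external_support)      # support
m4 <- update(m3, . ~ . + predictability + valuable + narrowing)  # views
summary(m4)   # unstandardised coefficients, SEs and significance codes
```

Note that with listwise deletion each step is fitted on fewer cases, which is consistent with the declining n across the columns of the tables.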
Table 20. Regression model of English scores (unstandardised coefficients and standard errors)

                                   Model 1:          Model 2:             Model 3:          Model 4: Views on
                                   Background        Learning strategies  Learning support  predictability
                                   (n=579)           (n=563)              (n=408)           (n=400)
Intercept                          62.83 (1.42) ***  63.84 (1.44) ***     64.52 (2.18) ***  65.37 (2.24) ***
Family SES                          6.16 (1.03) ***   5.44 (1.04) ***      5.96 (1.24) ***   5.59 (1.27) ***
Female                              1.98 (1.28)       1.73 (1.30)          2.56 (1.57)       2.53 (1.63)
Memorisation strategy scale                          -0.74 (0.71)         -1.07 (0.86)      -1.17 (0.86)
Elaboration strategy scale                           -1.49 (0.79) .       -1.23 (0.96)      -1.69 (1.01) .
Control strategy scale                                1.82 (0.81) *        2.14 (0.95) *     1.43 (1.00)
School learning support scale                                             -0.75 (0.68)      -0.88 (0.70)
External learning support scale                                           -0.52 (0.57)      -0.38 (0.57)
Predictability scale                                                                         1.40 (1.01)
Valuable learning scale                                                                      1.55 (1.04)
Narrowing of the curriculum scale                                                           -2.08 (0.77) **
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Table 21. Regression model of biology scores (unstandardised coefficients and standard errors)

                                   Model 1:          Model 2:             Model 3:          Model 4: Views on
                                   Background        Learning strategies  Learning support  predictability
                                   (n=423)           (n=412)              (n=247)           (n=238)
Intercept                          59.15 (2.56) ***  62.40 (2.49) ***     60.38 (3.85) ***  58.88 (3.93) ***
Family SES                          9.68 (1.74) ***   7.67 (1.66) ***      8.65 (2.24) ***   8.99 (2.24) ***
Female                              2.90 (2.25)       1.77 (2.19)          2.81 (2.96)       3.88 (3.09)
Memorisation strategy scale                           2.41 (1.14) *        1.83 (1.45)       1.15 (1.43)
Elaboration strategy scale                            1.97 (1.27)          1.91 (1.64)      -0.51 (1.87)
Control strategy scale                                7.74 (1.31) ***      9.07 (1.73) ***   7.38 (1.75) ***
School learning support scale                                              0.26 (0.79)       0.38 (0.80)
External learning support scale                                           -1.91 (0.90) *    -2.49 (0.88) **
Predictability scale                                                                         4.21 (1.72) *
Valuable learning scale                                                                      2.88 (1.89)
Narrowing the curriculum scale                                                              -4.29 (1.74) *
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Table 22. Regression model of geography scores (unstandardised coefficients and standard errors)

                                   Model 1:          Model 2:             Model 3:          Model 4: Views on
                                   Background        Learning strategies  Learning support  predictability
                                   (n=294)           (n=283)              (n=198)           (n=196)
Intercept                          64.45 (1.74) ***  66.45 (1.59) ***     73.46 (2.21) ***  73.48 (2.20) ***
Family SES                          7.85 (1.45) ***   6.03 (1.32) ***      3.72 (1.28) **    2.97 (1.28) *
Female                              3.42 (1.64) *     3.17 (1.50) *        1.62 (1.51)       2.23 (1.50)
Memorisation strategy scale                           2.20 (0.92) *        2.53 (0.96) **    2.03 (0.96) *
Elaboration strategy scale                            0.12 (0.99)         -0.02 (1.01)       0.68 (1.07)
Control strategy scale                                3.29 (1.02) **       2.73 (1.04) **    2.97 (1.02) **
School learning support scale                                             -0.23 (0.72)       0.05 (0.71)
External learning support scale                                           -0.22 (0.47)       0.05 (0.47)
Predictability scale                                                                         2.16 (1.04) *
Valuable learning scale                                                                     -2.81 (1.09) *
Narrowing the curriculum scale                                                              -1.32 (0.78) .
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
The narrowing the curriculum scale is negatively related to the exam scores consistently across the three subjects. There is also evidence that the predictability scale is positively related to the exam scores in biology and geography. It is important to note that not only can views on predictability affect performance in the exam, but exam results can also influence views on predictability. One should therefore be careful in interpreting the direction of causation in these results.
Predictability model
We now look at associations with students’ views on the three predictability scales. Regressions of the views-on-the-exam scales on family SES, gender and the examination scores are estimated for the English (see Table 23), biology (see Table 24) and geography (see Table 25) samples. For ease of interpretation, the coefficients of the examination scores were multiplied by 100.
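A sketch of one of these models in R follows; dividing the 0–100 exam score by 100 before entering it is equivalent to multiplying its coefficient by 100, as in the tables. All variable names are hypothetical.

```r
# Views-on-predictability model; rescaling the exam score reproduces the
# 'multiplied by 100' coefficients reported. All names are hypothetical.
m1 <- lm(predictable ~ ses + female, data = df)
m2 <- update(m1, . ~ . + I(score / 100))   # Model 2 adds the exam score
summary(m2)
```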
For the predictable scale, the results indicate no significant association with family SES in biology and geography (in English there is a small positive association), and in all three subjects girls are less likely to share the view that the exam is predictable. There is a slight positive association between the examination scores and the predictable scale for biology and geography. Family SES is positively related to the valuable learning scale for English and negatively for geography. No significant association with gender is found for the valuable learning scale. The narrowing the curriculum scale is not significantly associated with family SES, but an association with gender is apparent in all three subjects: girls are less likely to use narrowing the curriculum strategies. Also, the examination scores are negatively associated with the narrowing the curriculum scale (significantly so in English and biology) even after controlling for family SES. That is, students who score higher in the exam tend to use narrowing the curriculum strategies less often, independently of their family SES.
Table 23. English: views on predictability regression models (unstandardised coefficients and standard errors)

                     Predictable scale                    Valuable learning scale              Narrowing the curriculum scale
                     Model 1 (n=697)   Model 2 (n=564)    Model 1 (n=697)   Model 2 (n=564)    Model 1 (n=697)    Model 2 (n=564)
Intercept             0.10 (0.07)       0.05 (0.15)       -0.16 (0.07) *    -0.48 (0.16) **     0.29 (0.09) **     0.76 (0.20) ***
Family SES            0.10 (0.05) *     0.14 (0.05) **     0.12 (0.05) *     0.07 (0.06)       -0.08 (0.06)       -0.05 (0.07)
Female               -0.30 (0.06) ***  -0.36 (0.07) ***    0.03 (0.06)       0.04 (0.07)       -0.35 (0.08) ***   -0.36 (0.09) ***
English exam score                      0.08 (0.21)                          0.51 (0.23) *                        -0.66 (0.28) *
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Table 24. Biology: views on predictability regression models (unstandardised coefficients and standard errors)

                     Predictable scale                    Valuable learning scale              Narrowing the curriculum scale
                     Model 1 (n=502)   Model 2 (n=475)    Model 1 (n=502)   Model 2 (n=475)    Model 1 (n=502)    Model 2 (n=475)
Intercept             0.25 (0.09) **    0.00 (0.14)        0.04 (0.09)      -0.10 (0.11)        0.29 (0.09) **     0.68 (0.12) ***
Family SES            0.00 (0.06)      -0.04 (0.06)        0.00 (0.06)      -0.03 (0.06)       -0.11 (0.06) .      0.02 (0.06)
Female               -0.36 (0.08) ***  -0.41 (0.08) ***   -0.07 (0.08)      -0.08 (0.08)       -0.26 (0.08) **    -0.29 (0.08) ***
Biology exam score                      0.47 (0.18) **                       0.46 (0.18)                          -0.80 (0.18) ***
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Table 25. Geography: views on predictability regression models (unstandardised coefficients and standard errors)

                     Predictable scale                    Valuable learning scale              Narrowing the curriculum scale
                     Model 1 (n=366)   Model 2 (n=287)    Model 1 (n=366)   Model 2 (n=287)    Model 1 (n=366)    Model 2 (n=287)
Intercept             0.10 (0.09)      -0.40 (0.23)        0.04 (0.09)       0.32 (0.24)        0.22 (0.11) *      0.60 (0.31)
Family SES            0.03 (0.07)      -0.02 (0.08)       -0.15 (0.07) *    -0.14 (0.09)       -0.09 (0.09)       -0.10 (0.11)
Female               -0.22 (0.08) **   -0.25 (0.09) **     0.14 (0.08) .     0.16 (0.09) .     -0.25 (0.11) *     -0.29 (0.12) **
Geography exam score                    0.82 (0.33) *                       -0.38 (0.35)                          -0.39 (0.44)
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
References
Allan, A (1997) Begging the questionnaire: instrument effect on readers’ response to a self-report checklist. Language Testing, 12, 133–156

Cattell, R B (1966) The scree test for the number of factors. Multivariate Behavioral Research, 1, 245–276

Courtney, M G (2013) Determining the number of factors to retain in EFA: using the SPSS R-Menu v2.0 to make more judicious estimations. Practical Assessment, Research & Evaluation, 18(8), 1–14

Donker, A S, de Boer, H, Kostons, D, Dignath van Ewijk, C C, & van der Werf, M P C (2014) Effectiveness of learning strategy instruction on academic performance: a meta-analysis. Educational Research Review, 11, 1–26

Fabrigar, L R, Wegener, D T, MacCallum, R C, & Strahan, E J (1999) Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272–299

Horn, J L (1965) A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179–185

Kaiser, H F (1960) The application of electronic computers to factor analysis. Educational & Psychological Measurement, 20, 141–151

Karabenick, S A, Woolley, M E, Friedel, J M, Ammon, B V, Blazevski, J, Bonney, C R, & De Groot, E (2007) Cognitive processing of self-report items in educational research: do they think what we mean? Educational Psychologist, 42(3), 139–151

Marsh, H W, Hau, K-T, Artelt, C, & Baumert, J (2006) OECD’s brief self-report measure of educational psychology’s most useful affective constructs: cross-cultural, psychometric comparisons across 25 countries. International Journal of Testing, 6(4), 311–360

Masters, G N & Wright, B D (1997) The partial credit model. In W J van der Linden & R K Hambleton (Eds), Handbook of Modern Item Response Theory. New York: Springer, 101–122

Pintrich, P R, Smith, D A F, Garcia, T, & McKeachie, W J (1991) A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ). Ann Arbor: University of Michigan, National Center for Research to Improve Postsecondary Teaching and Learning

Raiche, G (2010) nFactors: an R package for parallel analysis and non-graphical solutions to the Cattell scree test. R package version 2.3.3

Raiche, G, Riopel, M, & Blais, J G (2006) Non Graphical Solutions for the Cattell’s Scree Test. Paper presented at the International Annual Meeting of the Psychometric Society, Montreal. Retrieved December 10, 2013, from http://www.er.uqam.ca/nobel/r17165/RECHERCHE/COMMUNICATIONS/2006/IMPS/IMPS_PRESENTATION_2006.pdf

Rasch, G (1960) Probabilistic Models for Some Intelligence and Attainment Tests. Copenhagen, Denmark: Nielsen and Lydiche
Ruscio, J, & Roche, B (2012) Determining the number of factors to retain in an exploratory factor
analysis using comparison data of a known factorial structure. Psychological Assessment, 24(2),
282–292
Samuelstuen, M & Braten, I (2007) Examining the validity of self-reports on scale