© 2005 Blackwell Publishing Ltd.
Learning in Health and Social Care, 4, 2, 47–52

Editorial

Uncertainty in research
Learning, the central focus of this journal, is one of the most difficult areas to investigate, especially when the content is complex and/or the setting is a workplace rather than a classroom. The thought-provoking article by Cooper, Braye & Geyer (2004), in the December 2004 issue of Learning in Health and Social Care, reminded me of the extent to which researchers are expected to ignore complexity and underestimate the uncertainty created by their procedures and assumptions. Some of this uncertainty arises during the collection of evidence and some during its analysis and interpretation; and much of it applies to both qualitative and quantitative methods. The principal difficulties relate to replication and generalization, criteria that are not universally accepted but still have some relevance. One problem, I believe, is that most researchers tend to assume that people have considered views on the questions they are asked, and do not simply make up answers as they go along; moreover, that they have coherent, consistent views on a wide range of issues, events and experiences. But is this just a figment of the social scientist’s imagination? The pollsters are more cautious about fixed views, and the politicians and press are all too aware that a chance remark taken out of context can have a significant effect.
This editorial will focus on uncertainties associated with the collection of evidence, leaving uncertainties associated with analysis and interpretation for the next issue. I will start with features of the context that can affect a person’s answer to a question, which include location and timing, the mode and style of questioning, situational factors and personal factors. Then, there are apparently more random effects, such as the thoughts that questions trigger and what people remember at the time. Finally, there are things that are not said, because they concern tacit knowledge, accustomed forms of evasive discourse, or reluctance to talk about controversial issues (Eraut 2004).
I will discuss the memory factor only briefly, because it is not an area where I have expertise. However, I am aware that it needs careful monitoring, and that uncertainties in this area can, to some extent, be reduced. The essence of the problem is the potential dominance of what people first remember, because features of a context may trigger early responses, which then take precedence over other lines of thought that might, on reflection, be considered more important. In conversations, what first comes to mind often sets the tone. Thus, in focus groups and unstructured interviews, the first substantial comments may not only start the ball rolling but determine its trajectory. If one is concerned about this almost random influence, yet still determined not to impose one’s own agenda, then it may be useful to start with a period of agenda-setting and prioritizing before embarking on the main conversation, and to periodically stop, summarize and explore the range of views on a proffered statement and the contexts to which it might or might not apply.
My guiding principle throughout this tour of uncertainties will be the advice of the American psychologist George Miller, well known for his aphorisms, who once said:

If someone tells you something, you should assume that it is true; your problem is to find out what it is true of.
How often, we may ask, does a researcher know the evidence on which an opinion or judgement is based? Yet, without this knowledge, they may make erroneous assumptions about the meaning of what was said. Your informants may have a similar problem, because their experiences may have been significantly determined by the nature of their roles. People with different roles in the same organization or community obtain access to different kinds of experience and, even when co-present, may interpret what they hear somewhat differently. If they are in a position of high status, they may hear fewer opinions that differ from their own. If they are in a position of low status, they may not have understood the purpose or wider picture behind the policies or events they were discussing. This does not negate their statements, but it may give them a rather different meaning.
The main focus of this editorial is the role of uncertainty in research, so I want to explore the factors that cause uncertainty, and investigate the extent to which areas of uncertainty can be detected, reduced, shared with others, and more publicly appreciated and understood. Thus, we need to distinguish between uncertainties that are capable of being reduced, if not removed, and uncertainties of which we need to be diligently aware when reporting research, in order to avoid drawing overconfident or unwarranted conclusions. My first concern is that pressures for over-generalization may prevent, rather than promote, honest appraisals of uncertainty or attempts to check possible sources of misinterpretation. My second concern is that both the micropolitics and the craft of research are neglected in the training of researchers in favour of repetitious paradigm wars and recipe-type approaches to the design and piloting of research protocols and instruments.
Uncertainties in interviewing
I shall start with a discussion of uncertainties encountered in interviewing, because interviewing is commonly regarded as an authentic and flexible method of inquiry, but possibly less capable of standardization than some other methods. It certainly offers more opportunities for fine-tuning to individual respondents, adjusting protocols as opportunities emerge, for probing, on-the-spot cross-checking of one’s evidence and for exploring anticipated areas of uncertainty. Hence I will not be referring to interviews that are so structured and standardized that they might better be described as ‘live questionnaires’. Issues relating to these and other methods that present similar problems, but with fewer opportunities to redress them, will be discussed later.
The number of interviews in a research study is normally restricted by the time available to the researcher(s), so some kind of sampling strategy is often needed. Hence, the researcher has to choose among a random sample, a blanket call for volunteers, targeted invitations to a selected sample, a snowball sample that relies on early interviewees to recommend others, or an opportunistic sample determined by whom the researcher meets or notices and invites on the spot. Criteria for selecting a targeted sample may include demographic variables, positions in an organization, or reported allegiance to particular factions, policies or ideas. Whatever sample is finally assembled, it is still necessary to find out as much as possible about its composition. Comparisons may be made with the population as a whole for some variables, and with members of chosen subgroups for other variables. In addition, one can ask questions during interviews about the respondents’ estimates of the balance of opinion on certain issues. They may not all agree, but it should be possible to gain either some reduction in uncertainty or evidence that the informants are not well informed about their colleagues’ opinions – a not insignificant finding. There may even be degrees of stereotyping that suggest dysfunctional divisions and attitudes. However, such questions need to be asked carefully if the anonymity of respondents is to be preserved.
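The editorial suggests no particular statistic for checking a sample’s composition against the wider population, but one conventional option is a chi-square goodness-of-fit comparison. The sketch below is purely illustrative: the role categories, population shares and counts are hypothetical, and the choice of test is my own, not the author’s.

```python
# Hypothetical check of an interview sample against known population
# proportions, using a chi-square goodness-of-fit statistic computed
# with the standard library only. All figures are invented for illustration.

population_share = {"nurse": 0.60, "manager": 0.15, "therapist": 0.25}
sample_counts = {"nurse": 21, "manager": 9, "therapist": 10}

def chi_square(observed, expected_share):
    """Chi-square statistic for observed counts against expected proportions."""
    total = sum(observed.values())
    stat = 0.0
    for category, share in expected_share.items():
        expected = share * total          # expected count under the population mix
        stat += (observed[category] - expected) ** 2 / expected
    return stat

stat = chi_square(sample_counts, population_share)
# Compare against the critical value for 2 degrees of freedom at p = 0.05
# (5.991, from standard tables) before treating the sample as balanced.
print(f"chi-square = {stat:.3f}")  # → chi-square = 1.875
```

A statistic below the critical value would not prove representativeness, of course; it only fails to flag an imbalance on the variables actually measured.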
Sometimes there is an opportunity for intelligence gathering before embarking on a set of interviews. This can be very helpful for finding out the various ways by which the members of the research population identify each other, and the subgroups which they recognize and use in conversation. Information about current issues and concerns will also alert an interviewer to:
• how their interviewees may interpret the purpose and expectations of the study;
• how they may interpret some of the questions they are asked;
• their expectations of the interviewer and his or her interests; and
• possible meanings of what the interviewees themselves say and ask.
All of these factors will have a considerable effect on what they choose to say to this particular interviewer on this particular occasion, which may be influenced by the way in which the researcher contacts the interviewees, the explanations s/he gives and the way that s/he introduces the interview and particular phases of questioning.
Recent events, both local and national, also contribute to the context of an interview. This was dramatically illustrated by some research I carried out on how parents viewed the primary schools attended by their children. Reference to recent incidents, such as a torn garment, a lost possession or participation in a play, concert or sporting occasion, frequently gave a positive or negative gloss to most of the subsequent conversation. The most dramatic influence, however, was the role of the mass media at the time of a ‘back to basics’ campaign. About 50% of the parents interviewed from seven different schools told us that those schools did not teach either spelling or tables, when we had independent evidence from another part of our project that all the schools concerned not only taught spelling and tables but tested them weekly. Ironically, the schools were much more like what the parents wanted than what the parents envisaged (Becher, Eraut & Knight 1981). Sometimes, it is the other way round: researchers are familiar with survey respondents who combine positive views of their local hospital with negative views of hospitals in general.
There are several reasons why learning in particular is a difficult area to investigate. First, the default context for questions about learning is the traditional teaching of a formal curriculum in a classroom. Learning at work, at home and in the community does not easily come to mind. Specific reminders may lead respondents to consider learning in other contexts, but only in the particular contexts covered by those reminders. Even in the classroom context, questions are much more likely to elicit the learning of content than the learning of skills, particularly thinking skills or study skills, because content knowledge is usually assessed explicitly, while many skills are only assessed implicitly as a general quality factor, often without any accompanying feedback. This points to a second problem. People are usually learning several things at a time, some deliberately, some accidentally but consciously, and some implicitly without even being aware of it. Some of this may be part of the formal curriculum, some part of the informal anti-curriculum (subversive knowledge) and some may be neither. Moreover, this learning occurs on different timescales, and people tend to be more aware of short-term learning than long-term learning.
This brings us to the seemingly intractable problem of the tacit knowledge that underpins huge areas of professional performance, but is neither clearly articulated nor associated with learning except through rather vague references to learning from experience. People quickly forget what it is like to be a novice or a newcomer, and all the things they had to learn that were neither in classes nor in books. Hence, researchers into learning have to start by observing or gathering descriptions of performance, then inquiring about aspects of performance and what differentiates the more proficient from the less proficient performers. Then it becomes possible to ask about recent additions or improvements to their own performance and how these had been acquired or learned. But the integrated, complex nature of much professional practice makes it very difficult to deconstruct aspects of performance that involve different types of knowledge and were learned in different places at different times. Both the significance of tacit knowledge and our ability to track it down and articulate it explicitly are persistently exaggerated, probably because people feel so uncomfortable about uncertainty.
Another confounding factor in interviews focused on practice is that the natural response to questions from strangers, or indeed anyone but a few trusted colleagues, is to launch into the discourse of justification, focused on the respondent’s espoused theory (Argyris & Schon 1974), rather than the discourse of description. In George Miller’s terms, this is true of
their preferred self-view of their actions, rather than their self-observation. To avoid this, the researcher has to engage in some preliminary observation and start the interview by discussing what is observed, or else specifically ask for a narrative description of a recent period of work, task or encounter with a client. This can then be extended to further narrative descriptions of other events and their frequency and typicality, before asking about the respondent’s own performance and how that has changed over time. Only then does it become possible to raise questions about how they became able to perform in that way, that is, how they learned their practice. Without that detailed, concrete, descriptive base, eliciting their views about how and what they learned will be difficult, if not impossible.
Not only does the mode of discourse set the tone for what follows, but individual questions can also have a similar effect by setting up trains of thought. This can be an advantage when it enables the interviewer to pursue an issue in greater depth and to discuss detailed examples, which might reveal what underpins their respondents’ attitudes. It also enables an interviewer to plan how best to introduce a difficult question for which an evasive reply might be anticipated. This often occurs when addressing particularly challenging aspects of professional practice, where the respondent might lack confidence or feel concerned about their limited progress. The best tactic is often to design an appropriate sequence of questions that makes the difficult question seem obvious rather than threatening when it eventually gets asked. For example, one might begin by asking about aspects of practice that people find easy or difficult to learn, then follow with questions about what they thought made them easy or difficult to learn, citing particular examples. Then, one can ask about the extent to which their colleagues had overcome these difficulties, and whether they themselves felt the same way. The way is then clear to ask how they themselves had tackled this challenge, and whether they had any advice to offer to others.
While there is still much to be learned about researching complex performance and how it is acquired or constructed, it is important to remain modest in one’s expectations of such research. Many aspects of learning are likely to remain hidden from both the interviewer and the respondent. Hence, we have to challenge, rather than accept, the views of those who claim that all, or even most, tacit knowledge can be made explicit, as in the often-cited book by Nonaka & Takeuchi (1995). In particular, the field of knowledge management not only fails to understand the complexities of practice and the processes through which it is learned, but also threatens workers by seeking to reduce their competitive value through the appropriation of their knowledge. This causes even greater resistance to attempts to learn by sharing knowledge, as suggested in my December 2004 editorial (Eraut 2004). The irony is that this goal is rarely feasible, because the understanding of how to use the knowledge stored on a knowledge management system is itself usually tacit.
Uncertainties in using questionnaires
Let us now turn to questionnaires, which have both advantages and disadvantages. On the positive side, they enable researchers to use larger samples, the questions are standardized and their answers are easier to analyse, and they offer respondents complete anonymity when the proper procedures are used. However, errors of misinterpretation are more likely when responses are restricted, and there is no personal interaction to explore the reasons for those responses. Indeed, the main difficulty with more standardized approaches can be a false sense of security, simply because areas of uncertainty are difficult to detect. The first source of uncertainty is the external context where the questionnaire is completed. Questions about learning that are answered in a classroom will tend to be treated as pertaining to a classroom environment and associated with the teacher in that classroom, regardless of whether or not s/he is the researcher. Classroom administration is a good way to achieve a high return rate, but may lead to unreflective responses and/or convey unintended messages about the researcher’s expectations and purpose.
Postal questionnaires are less bound to the classroom context but achieve lower return rates and hence create uncertainty about the extent to which the respondents are representative of the original sample. It can be difficult to explain in a covering
letter just how the research might benefit a possibly diverse range of respondents, without discouraging potential respondents by the length and complexity of the introduction. The researcher will often be unaware of recent events or personal variables that might affect the way in which some questions are interpreted, or convey messages that the questionnaire is not relevant for the intended audience. Research method texts offer much useful advice on the structuring and wording of questionnaires, but very little about these contextual factors.

As with interviews, the grouping and sequencing of questions can help to remove some ambiguities and allow the respondent to follow coherent lines of thought without having to continually change their focus. More reflective or more challenging questions come best at the end of a sequence, where they do not take the respondent by surprise.
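The earlier point about low postal return rates can be made concrete with a worst-case calculation. This sketch is my own illustration, not the editorial’s: if only a fraction of questionnaires come back, the non-respondents might all have answered either way, which places simple bounds on the true proportion holding a given view.

```python
# Illustrative worst-case bounds on a population proportion when return
# rates are low. The figures below are hypothetical.

def nonresponse_bounds(observed_proportion, return_rate):
    """Bounds on the true proportion, given the proportion observed among
    respondents and the fraction of the sample that returned the form."""
    lower = observed_proportion * return_rate   # non-respondents all disagree
    upper = lower + (1.0 - return_rate)         # non-respondents all agree
    return lower, upper

# 70% of respondents agree, but only 40% of forms were returned:
low, high = nonresponse_bounds(0.70, 0.40)
print(f"true proportion lies somewhere between {low:.2f} and {high:.2f}")
# → true proportion lies somewhere between 0.28 and 0.88
```

The width of that interval (here 0.60, exactly the non-return rate) is a blunt but honest measure of the uncertainty a low return rate introduces, before any assumptions about why people failed to respond.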
George Miller’s question ‘What is it true of?’ is particularly appropriate for the interpretation of questionnaires. General answers about learning immediately raise further questions about context variables such as location, timing and the involvement of other people, as well as questions about precisely what is being learned. Learners are often forced to over-generalize or slip into the default positions of classroom learning or studying books in total isolation from friends and colleagues, so that questions about preferred learning styles are assumed to refer to these usually non-preferred learning contexts, and possibly also to an unenticing subject matter. For example, when a colleague and I were conducting an interview-based longitudinal study of approaches to learning of science undergraduates, we decided to administer a well-known instrument on study habits in order to relate our findings to other published work. But in order to understand their responses, we asked them afterwards how they had found the questionnaire. Several of them said that they did not know whether to respond about ‘learning for themselves’ or ‘learning for exams’! Perhaps one should include a question at the end that asks respondents what context and content they had in mind when answering the previous questions.
This brings us back to strategies for reducing these various types of uncertainty. Research methods textbooks discuss piloting questionnaires to remove unhelpful formats or ambiguous questions, but do not often refer to more subtle ambiguities of meaning. There are two other ways of detecting areas of uncertainty. One is to use what I like to call tutorial revision, because I first developed it for piloting learning materials. This involves sitting next to a pilot respondent and asking him or her to talk aloud as they go through the draft questionnaire, then ascertaining what they would like to say and changing the question(s) to enable them to say it. The other way is to ask the pilot respondents to enter queries whenever they were in any doubt about the meaning of a question, thought it was clumsily worded, or felt restrained from saying what they wanted. These queries would then form the basis for a later interview. Naturally, it is important to find pilot respondents from across the sample range. I have also found, in larger studies, that authenticity and validity are improved when certain questions are differently worded for different subgroups, especially those relating to subject content or particular types of context.
The classic advice on reducing uncertainty is to triangulate data obtained by more than one method, but this should be additional to, rather than instead of, lowering the uncertainty within each method. However, even though different data sets may be statistically independent, they are rarely theoretically independent, and for that and other reasons may not be procedurally independent. It is quite common – indeed good practice – to invite respondents to a questionnaire to engage in a follow-up interview in greater depth. This does not invalidate their interview, but can significantly increase the value of the study by providing a more holistic picture of several respondents’ views and preferences and alerting the researcher to variations in respondents’ perspectives of which they were previously unaware. If there are sufficient volunteers, then some attempt can be made to obtain a representative sample or, if there is only time for a few interviews, a theoretical sample of those who completed questionnaires.
In situations where the researcher is not very familiar with the potential respondents, the reverse order might be wiser. This involves conducting a few carefully chosen interviews with potential respondents before designing the questionnaire. Not only
will this alert the researcher to current concerns and issues, but also to appropriate terminology and the micropolitics of the relevant subgroups and their interrelations, provided that the researcher has discussed these matters in the interviews. If appropriate, these informants could be asked to suggest relevant questions and relevant situations, contexts or cases around which questions could be posed, and even whether they would be prepared to comment on a pilot questionnaire. These types of discussion can play a significant role in reducing uncertainty.
Michael Eraut
Editor
References

Argyris C. & Schon D.A. (1974) Organizational Learning: A Theory of Action Perspective. Addison-Wesley, Reading, Mass.

Becher T., Eraut M. & Knight J. (1981) Policies for Educational Accountability. Heinemann, London.

Cooper H., Braye S. & Geyer R. (2004) Complexity and interprofessional education. Learning in Health and Social Care 3, 179–189.

Eraut M. (2004) Sharing practice: problems and possibilities. Learning in Health and Social Care 3, 171–178.

Nonaka I. & Takeuchi H. (1995) The Knowledge-Creating Company. Oxford University Press, New York.