


Psycholinguistic Research Methods

S Garrod, University of Glasgow, Glasgow, UK

© 2006 Elsevier Ltd. All rights reserved.

Psycholinguistics aims to uncover the mental representations and processes through which people produce and understand language, and it uses a wide range of techniques to do this. The preferred psycholinguistic method is to carry out a controlled experiment. This means that the researcher manipulates an independent linguistic variable to control some aspect of language processing and then measures the effect of the manipulation on a dependent variable of interest. For example, consider an experiment aimed at discovering the influence of the age of acquisition of a word (independent variable) on the time it takes an adult to identify that word (dependent variable). The psycholinguist would construct two lists of words, one containing words learned early in life, the other containing words learned later in life, and then measure the effect of age of acquisition (the independent variable) on the time to identify the word (the dependent variable). To make sure that the independent variable is the real cause of any effect observed in the dependent variable, the experimenter needs to carefully control that variable. For example, in the above experiment it is important to make sure that age of acquisition is not confounded with the length of the word or its citation frequency, because we know that longer words and low-frequency words take longer to read.
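As a concrete illustration of this kind of stimulus control, the sketch below pairs early- and late-acquired words so that the two lists come out matched on length and log citation frequency. The words, the frequency values, and the simple greedy matching routine are all invented for illustration; they are not taken from any published norms or study.

    # Sketch: matching early- vs. late-acquired word lists on length and log frequency.
    # Words and values are invented for illustration only.

    def mismatch(a, b):
        # Cost of pairing two items: difference in length plus difference in log frequency.
        return abs(a["len"] - b["len"]) + abs(a["logf"] - b["logf"])

    early = [
        {"word": "dog",   "len": 3, "logf": 2.9},
        {"word": "milk",  "len": 4, "logf": 2.5},
        {"word": "chair", "len": 5, "logf": 2.3},
    ]
    late = [
        {"word": "fee",   "len": 3, "logf": 2.8},
        {"word": "oath",  "len": 4, "logf": 2.4},
        {"word": "mercy", "len": 5, "logf": 2.4},
        {"word": "audit", "len": 5, "logf": 2.2},
    ]

    # Greedily pair each early-acquired word with the closest-matching late-acquired word.
    pairs = []
    remaining = list(late)
    for e in early:
        best = min(remaining, key=lambda l: mismatch(e, l))
        remaining.remove(best)
        pairs.append((e["word"], best["word"]))

    print(pairs)
    # A real study would also check that the matched lists do not differ reliably
    # on length and frequency (e.g., with t-tests) before running participants.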

Once the experiment has been run with a sufficient number of participants and a sufficient range of linguistic materials, the data are analyzed statistically.

This usually involves taking the average of the values of the dependent variable (response latencies) for each value of the independent variable (each list of words acquired at different ages) and establishing whether the differences associated with the different values of the independent variable are statistically reliable. It is now standard in psycholinguistics experiments to test that the effects hold true both across the range of participants and across the range of linguistic materials used in the experiment.
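The by-participants and by-items logic can be sketched as follows. The latencies are simulated, and the paired and independent t-tests simply stand in for the fuller analyses (often reported as F1 and F2) that a published study would run; numpy and scipy are assumed to be available.

    # Sketch of by-participants and by-items analyses on hypothetical lexical decision
    # latencies (ms). rt[participant, item] holds one latency per participant-item cell;
    # items 0-2 are "early-acquired", items 3-5 are "late-acquired".
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subj, n_items = 20, 6
    early_items = [0, 1, 2]
    late_items = [3, 4, 5]

    # Simulated data: late-acquired words ~30 ms slower, plus random noise.
    rt = 550 + rng.normal(0, 40, (n_subj, n_items))
    rt[:, late_items] += 30

    # By participants: average each participant's latencies per condition, then compare.
    subj_early = rt[:, early_items].mean(axis=1)
    subj_late = rt[:, late_items].mean(axis=1)
    res1 = stats.ttest_rel(subj_late, subj_early)

    # By items: average each item's latencies over participants, then compare conditions.
    item_means = rt.mean(axis=0)
    res2 = stats.ttest_ind(item_means[late_items], item_means[early_items])

    print(f"by participants: t={res1.statistic:.2f}, p={res1.pvalue:.3f}")
    print(f"by items:        t={res2.statistic:.2f}, p={res2.pvalue:.3f}")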

Because the psycholinguist is interested in the dynamics of language processing, an important distinction is drawn between on-line techniques, which measure variables that tap into language processing as it happens, and off-line techniques, which measure variables related to the subsequent outcomes of processing. In practice, on-line and off-line techniques complement each other, with off-line techniques used to determine the outcome of interpretation and on-line techniques used to determine its time course.

Another major distinction between psycholinguistic techniques relates to the nature of the dependent variables they measure. Some experiments have behavioral dependent variables, such as those associated with a reader's eye movements while reading; others have neurophysiological dependent variables, such as those associated with electrical brain activity produced while listening to a sentence. The article first considers behavioral methods and then related neurophysiological methods. It starts with methods for the study of spoken and written language comprehension, and then methods for studying language production and dialogue.


Behavioral Methods

Common Assumptions Underlying Behavioral Methods

Although there is a wide range of behavioral psycholinguistic methods, most depend upon the same basic assumptions. One important assumption concerns how measurements of the time to carry out a task relate to inferences about complexity of processing. Whether the timed response be an eye movement or the time to answer 'yes' or 'no' to a question, it is assumed that the complexity of the mental process is reflected in the response latency. For example, if a reader takes longer to read the fragment "... raced past the barn fell" when it is part of the sentence "the horse raced past the barn fell" than the sentence "the horse that was raced past the barn fell," this is taken to reflect greater complexity in the syntactic analysis of the former than the latter. This assumption is used when interpreting results from most on-line techniques.

A more sophisticated timing methodology investigates the trade-off between speed to respond and accuracy of that response. This is called a speed–accuracy trade-off (SAT) method. With SAT, participants are required to make some response to a linguistic stimulus as soon as they hear a tone, presented at different intervals after the presentation of the stimulus. When the interval is short, participants tend to make many errors, and when it is sufficiently long, they become completely accurate. So plotting response latency and accuracy across the range of tone intervals gives an unbiased measure of the rate at which the task can be carried out. SAT techniques have been used to assess the rate at which different kinds of lexical, syntactic, and semantic processing occur. Although SAT is considered a particularly refined technique for establishing processing rates, it has the disadvantage that many thousands of trials have to be run to produce a clear SAT profile, which means that many examples of each kind of material are needed. So there are only a limited number of issues that can easily be investigated with SAT.
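The shape of an SAT function is commonly summarized by fitting a shifted exponential approach to an asymptote, with the asymptote, intercept, and rate as free parameters. The sketch below fits that standard three-parameter function to invented accuracy (d') values at a range of response lags; scipy's curve_fit is assumed to be available, and none of the numbers come from a real experiment.

    # Sketch: fitting a speed-accuracy trade-off (SAT) curve.
    # Accuracy (here d') is modeled as a shifted exponential approach to an asymptote:
    #   d'(t) = lam * (1 - exp(-(t - delta) / beta))  for t > delta, else 0
    # lam = asymptotic accuracy, delta = intercept (earliest above-chance point),
    # beta = rate of rise. Lag values and d' scores below are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    def sat(t, lam, delta, beta):
        return lam * (1.0 - np.exp(-np.maximum(t - delta, 0.0) / beta))

    lags = np.array([0.1, 0.2, 0.3, 0.5, 0.8, 1.2, 2.0, 3.0])    # seconds after stimulus
    dprime = np.array([0.0, 0.1, 0.5, 1.2, 1.9, 2.3, 2.5, 2.6])  # hypothetical accuracy

    params, _ = curve_fit(sat, lags, dprime, p0=[2.5, 0.2, 0.4])
    lam, delta, beta = params
    print(f"asymptote={lam:.2f}, intercept={delta:.3f} s, rate parameter={beta:.3f} s")
    # Comparing the fitted rate across conditions is what yields the unbiased
    # estimate of processing rate described above.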

Another important assumption underlying many behavioral methods concerns the interpretation of priming effects. Priming techniques measure the effect of having previously processed a prime item on the subsequent processing of a target item. The prime might be a word with a particular form or meaning, or it might even be a whole sentence with a particular syntactic structure. The rationale behind priming techniques is that any influence of the prime on the subsequent processing of the target must reflect some relationship between the mental representations of prime and target items. Typically, when a prime boosts the interpretation of the target (e.g., by speeding up the recognition of the target item), this is taken as evidence that there is something in common between the representation of target and prime. For example, if a person is quicker to decide that giraffe is a word having just read the word tiger than having just read the word timer, then it is assumed that this is due to the closer semantic relationship between tiger and giraffe than between timer and giraffe. On the other hand, when the prime hinders the interpretation of the target – this is called negative priming as opposed to positive priming – this is taken to reflect some conflict between the representations of prime and target. Priming techniques are particularly useful for establishing the relationship between different linguistic representations used during processing and are widely used in conjunction with a number of behavioral or neurophysiological measures.
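At its simplest, a priming effect is just the difference between mean target latencies after unrelated and after related primes, as in the sketch below (the per-trial latencies are invented; facilitation comes out positive, interference negative).

    # Sketch: computing a priming effect from hypothetical lexical decision latencies (ms)
    # to the target "giraffe" after a related prime ("tiger") vs. an unrelated prime ("timer").
    related   = [512, 498, 530, 505, 521, 489]    # invented per-trial latencies
    unrelated = [561, 540, 572, 549, 558, 533]

    def mean(xs):
        return sum(xs) / len(xs)

    priming_effect = mean(unrelated) - mean(related)
    print(f"priming effect = {priming_effect:.1f} ms")
    # Positive values indicate facilitation (positive priming); negative values
    # would indicate interference (negative priming) from the prime.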

It is beyond the scope of this article to describe all the behavioral techniques that have been used in psycholinguistics. Instead, the article concentrates on techniques that have had a major impact on the field.

Behavioral Methods for Spoken Language Comprehension

Cross-modal Priming A common priming technique used to tap into spoken language comprehension is what is called cross-modal priming. In cross-modal priming the prime item is usually a word embedded in a spoken sentence or text used to prime a target item presented in written form, which is why it is called cross-modal. Consider the problem of working out how listeners resolve ambiguous words. Cross-modal priming can indicate the immediate interpretation of an ambiguous word, such as bug, in contexts that promote either one or the other meaning of the word (e.g., 'insect' or 'listening device'). As participants listen to bug in the different contexts, they are presented with a written word (ANT or SPY) or a nonword (AST) and have to decide as quickly as possible whether the target is a word or not (this is called lexical decision). The question of interest is whether lexical decision for the targets ANT and SPY is boosted by hearing the prime bug in the different contexts. It turns out that when the visual target is presented immediately after hearing the prime, the prime boosts lexical decision for both targets in either context. So this indicates that both interpretations ('insect' and 'listening device') are immediately activated irrespective of the disambiguating context. However, if the visual target is presented at a slightly later point (about 200 ms after the prime), only the target related to the contextually appropriate interpretation (i.e., ANT for 'insect' or SPY for 'listening device') is primed. Cross-modal priming therefore indicates that all meanings of ambiguous words are accessed immediately on encounter, but then only the contextually appropriate meanings are retained as comprehension proceeds.

Cross-modal priming has been used to investigate many aspects of spoken language comprehension. These include influences of prior discourse context on lexical processing, resolution of anaphoric references (e.g., pronouns), and morphological analysis. The technique has the advantage that it can tap into spoken language comprehension as it occurs and often uncovers aspects of processing of which the subject is completely unaware, such as in the ambiguity experiment described above.

The Gating Technique Another technique for establishing when listeners interpret words in relation to information in the speech stream is gating. The gating procedure involves presenting increasingly long fragments of speech and measuring when listeners can interpret the speech appropriately. For example, if you want to know at what point a listener can accurately recognize the word cathedral, you can present them with the spoken fragments /ca/, /cath/, /cathe/, /cathed/ . . . and record the listener's judgements for each fragment. The shortest fragment that can be correctly identified defines the point at which there is sufficient information in the speech for identification. This technique can be used to estimate the earliest point at which the listener could identify a word and so makes it possible to test whether this point predicts the recognition time for that spoken word.
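Scoring gating data amounts to finding the shortest gate at which listeners' responses reach some identification criterion. The sketch below shows that logic with invented gates and responses for the cathedral example; the criterion of 100% correct is one common but not the only possible choice.

    # Sketch: estimating the identification point from gating data. Each gate is a
    # progressively longer fragment of the spoken word; responses are listeners'
    # guesses at each gate. Fragments and responses are invented for illustration.
    gates = ["ca", "cath", "cathe", "cathed", "cathedr"]
    responses = {
        "ca":      ["cat", "castle", "cab"],
        "cath":    ["cat", "cathedral", "catherine"],
        "cathe":   ["cathedral", "catherine", "cathedral"],
        "cathed":  ["cathedral", "cathedral", "cathedral"],
        "cathedr": ["cathedral", "cathedral", "cathedral"],
    }

    def identification_point(target, gates, responses, criterion=1.0):
        """Return the shortest gate at which the proportion of correct
        identifications reaches the criterion (here, all listeners correct)."""
        for gate in gates:                  # gates are ordered from shortest to longest
            guesses = responses[gate]
            correct = sum(g == target for g in guesses) / len(guesses)
            if correct >= criterion:
                return gate
        return None

    print(identification_point("cathedral", gates, responses))   # -> "cathed"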

The Visual World Paradigm One of the most effective on-line measures is eyetracking (recording the precise pattern of eye fixations during comprehension). Because visual attention is strongly controlled by where a person is currently looking, eyetracking can be used to indicate what a person is attending to at any point during comprehension and for how long they attend to it. The technique can either be used for reading research (see below), in which case the focus of attention corresponds to the word being looked at at any time, or it can be used to measure which part of a scene a participant attends to as they interpret spoken utterances about that scene. The latter technique is usually referred to as the visual world paradigm. A classic example of the use of this technique is in determining how listeners deal with syntactic ambiguities that arise during comprehension. For example, participants listen to instructions that are initially consistent with two syntactic analyses while they view a scene containing a small number of objects. They might be asked to Put the frog on the napkin in the box when there are either one or two frogs in the scene. Analyses of eye movements demonstrate that viewers look at the frog which is on a napkin more if there is also a frog that is not on a napkin than if there is no other frog present. From this it can be inferred that the visual context drives syntactic disambiguation (i.e., it supports the reading in which on the napkin syntactically modifies frog rather than being a syntactic argument of put) at an early stage in processing.
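Analyses like the one just described are usually reported as the proportion of trials on which each object is being fixated in successive time bins after the onset of the critical word. The sketch below computes such proportions from invented fixation records; the object labels, bin width, and analysis window are illustrative assumptions rather than values from a particular study.

    # Sketch: proportion of looks to an object in a visual world experiment, computed
    # in 50 ms bins from the onset of the critical word. Fixation records are invented;
    # each is (object fixated, start ms, end ms) relative to word onset.
    trials = [
        [("target frog", 0, 180), ("empty napkin", 180, 420), ("box", 420, 800)],
        [("distractor frog", 0, 240), ("target frog", 240, 600), ("box", 600, 800)],
        [("target frog", 0, 800)],
    ]

    BIN = 50          # bin width in ms
    N_BINS = 16       # analyze 0-800 ms after word onset

    def object_at(trial, t):
        # Which object is fixated at time t on this trial (None if the record has a gap)?
        for obj, start, end in trial:
            if start <= t < end:
                return obj
        return None

    # For each bin, the proportion of trials on which the target frog is being fixated.
    for b in range(N_BINS):
        t = b * BIN
        looks = [object_at(trial, t) for trial in trials]
        prop = sum(obj == "target frog" for obj in looks) / len(trials)
        print(f"{t:4d} ms: P(fixate target frog) = {prop:.2f}")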

The visual world paradigm has been used to investigate a wide range of issues in spoken language comprehension. These include lexical access, resolution of anaphoric pronouns, development of strategies for semantic and syntactic processing, language development, and even language processing in dialogue. It offers a precise indication of when listeners integrate information from a linguistic utterance with that in the visual world and tends to show that comprehension is both incremental and immediate in relation to most levels of linguistic analysis.

Behavioral Methods for Written Language Comprehension

There are a variety of techniques that tap into the time course of written language comprehension.

Self-paced Reading One class of techniques is self-paced reading. The reader determines the rate at which written material is presented and the experimenter records the rate of presentation. A reader might be required to pace himself or herself sentence by sentence, phrase by phrase, or word by word. For example, in the word-by-word procedure, a word is presented, and as soon as the reader has understood it, he or she presses a key to trigger presentation of the next word. The sequence is then repeated until all the text has been read, and the time to read each word is recorded. This kind of technique has been used to study syntactic analysis, discourse comprehension processes, and in particular resolution of anaphors. It gives a good indication of when a reader encounters difficulty in comprehension, but is limited according to the size of the linguistic unit being presented. Whereas larger units such as whole sentences can be read at a normal rate, smaller units like words tend to be read much more slowly in self-paced reading tasks. Hence, when the technique has high on-line resolution (e.g., when presentation is word by word), it also interferes most with the normal reading process. This is not a problem with the eyetracking technique described below.
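A word-by-word self-paced reading trial can be sketched as a simple presentation loop that times each keypress, as below. This is only a terminal mock-up for illustration: Return stands in for the response key, and a real experiment would use dedicated presentation software with accurate display and response timing.

    # Sketch of a word-by-word self-paced reading trial run in a terminal.
    import time

    sentence = "The horse raced past the barn fell".split()

    def run_trial(words):
        latencies = []
        for word in words:
            shown_at = time.monotonic()
            input(word + "  (press Return when understood) ")
            latencies.append(time.monotonic() - shown_at)
        return latencies

    if __name__ == "__main__":
        rts = run_trial(sentence)
        for word, rt in zip(sentence, rts):
            print(f"{word:8s} {rt * 1000:7.0f} ms")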

Rapid Serial Visual Presentation (RSVP) A slightly different technique for presenting written language uses what is called rapid serial visual presentation (RSVP). With RSVP, readers see sequences of words in the center of a computer screen presented at a fixed fast rate. The experimenter then has the reader carry out an additional task, such as identifying a word in the sequence or trying to recall the sequence of words, which indicates how comprehension is limited by the rate of presentation. This technique has been used to investigate lexical, semantic, and syntactic processing. Like word-by-word self-paced reading, RSVP may well interfere with normal language processing.

Eyetracking During Reading The least interfering on-line behavioral technique for written language comprehension is eyetracking. During reading, the eye moves in a systematic way. There are brief fixations, in which the gaze stays on the same letter, interspersed with fast movements called saccades, during which the gaze moves to another letter or word of the text. For a skilled reader, 9 out of 10 saccades move the gaze from left to right to sample new material from the text, whereas 1 out of 10 saccades return the point of gaze to previously read material (these are called regressions). The duration of fixations and the length and direction of saccades (i.e., forward or backward movement of the gaze) directly reflect the ease or difficulty of the reading process. Furthermore, they indicate the precise word in the text that is causing reading difficulty, because attention is only given to the word currently fixated.

The limited span of attention during reading can be demonstrated using the moving window technique, in which a computer program dynamically controls the window of text presented to the reader as a function of where they are fixating. For example, with an asymmetric 12-letter window, the 4 letters to the left and the 8 letters to the right of where the reader is fixating will be displayed as normal, whereas all the remaining text will be converted into random letters. The window of text together with its surround of random letters then changes as the point of fixation changes. One can vary the size and form of the text window and measure when it begins to affect reading rate. It turns out that normal reading is quite possible when the window only contains the word currently fixated plus the first three letters of the next word on the line. However, there is a proviso that the material around the window must retain the spaces between the words in the original text. When the window arrangement is reversed so that the window contains random letters and the surround contains the normal text, readers encounter difficulty. With a reverse window of only 11 letters in width, reading becomes almost impossible.
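The display manipulation itself can be sketched as a function that, given the current fixation position, keeps the text inside an asymmetric window and replaces everything else with random letters while preserving the spaces between words. The window sizes below follow the 4-left/8-right example above; everything else is illustrative.

    # Sketch: constructing a moving-window display. Letters outside an asymmetric window
    # (4 characters to the left of fixation, 8 to the right) are replaced with random
    # letters, while spaces between words are preserved.
    import random

    def moving_window(text, fixation_index, left=4, right=8):
        out = []
        for i, ch in enumerate(text):
            inside = (fixation_index - left) <= i <= (fixation_index + right)
            if inside or ch == " ":
                out.append(ch)            # keep the windowed text and all word spaces
            else:
                out.append(random.choice("bcdfghjklmnpqrstvwxyz"))
        return "".join(out)

    line = "the horse raced past the barn fell"
    print(line)
    print(moving_window(line, fixation_index=10))   # fixating the 'r' in "raced"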

Moving window studies indicate that readers only take in information from a limited region of text at any time during reading. This means that any extra time spent fixating a region must reflect processing difficulty associated with that region of text or with previously fixated regions of text not completely processed but still held in memory.

Eyetracking has been used to study a wide range of linguistic processes, including lexical access, resolving lexical ambiguities, syntactic analysis, and various discourse processing phenomena, such as anaphora resolution. It is particularly effective in determining precisely when the reader makes a decision about some aspect of the linguistic input during sentence or discourse processing. For example, when presented with the sentence We like the city that the author wrote unceasingly and with great dedication about, readers spend longer fixating the verb wrote as compared with the same verb in the fragment We like the book that the author wrote unceasingly and with great dedication about. This shows that readers immediately attempt to integrate each word of the sentence with the prior discourse and hence they detect the temporary anomaly produced by wrote in the first but not in the second sentence.
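Eye-movement records are typically reduced to region-based measures before analysis. The sketch below computes one such measure, first-pass reading time (gaze duration) on a critical region, from an invented list of fixations; the region boundaries and fixation values are made up for illustration.

    # Sketch: computing first-pass reading time on a critical region from an
    # eye-movement record. Fixations are invented; each is
    # (character index fixated, duration ms), in the order they occurred.
    fixations = [(3, 210), (12, 240), (21, 305), (27, 190), (22, 160), (33, 230)]

    def first_pass_time(fixations, region):
        """Sum the durations of consecutive fixations inside `region` (a range of
        character indices) from first entering the region until first leaving it."""
        start, end = region
        total, entered = 0, False
        for pos, dur in fixations:
            inside = start <= pos < end
            if inside:
                total += dur
                entered = True
            elif entered:
                break                 # region has been exited: first pass is over
        return total

    critical_region = (20, 26)        # e.g., the character span of the verb "wrote"
    print(f"first-pass time on critical region: {first_pass_time(fixations, critical_region)} ms")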

Eyetracking is a particularly effective technique because it does not interfere with the normal process of reading. A similar claim is made for some neurophysiological techniques, such as ERP, described later.

Behavioral Methods for Spoken Language Production

Until the last decade, language production was not a central topic in psycholinguistics. This was partly because researchers did not have on-line methods for studying production in properly controlled experiments. This meant that the pioneering research on language production depended on off-line techniques, such as the study of speech errors. However, more recent work in language production has been influenced by on-line techniques. We consider both kinds of technique below.

Analysis of Speech Errors Speech error data have been used to draw many interesting conclusions about the nature of language production. For example, it was observed that substitution errors (e.g., saying if I was done to that rather than If that was done to me) tend always to involve the same syntactic classes (e.g., the pronouns me and that). Also, as the example shows, the lexical substitution does not always involve a syntactic substitution. Otherwise, the error would have produced If me was done to that. Such findings provide evidence that in production, words are chosen before they are strung together to make up an utterance and that this occurs before the words are marked for syntactic case.

Speech errors have been used to argue for an overall organization of speech production into separate message planning, lexical and grammatical assembly, and phonological processing components. There have also been some attempts to develop techniques to elicit speech errors experimentally by priming the errors in a similar fashion to that used in tongue twisters. Many of the conclusions drawn from the analysis of speech errors have been supported by results from more recent on-line techniques such as picture naming.

Picture Naming A particularly influential on-line technique for studying language production is to measure the time it takes participants to name pictures of objects or events. Picture naming is commonly combined with some form of priming task (see next section). It can also be combined with other techniques such as eyetracking to give a more precise indication of how words are accessed during language production. For example, a researcher might want to know whether speakers wait until they have all the words available before they start to articulate the first word in an utterance. To find out, the researcher could have participants name two pictured objects A and B in a phrase of the form A and B. If the time to start articulating the name of A is unaffected by the time to access the name for B, as indicated by the time to articulate B in isolation, then the researcher can conclude that the speech articulation process begins before both word forms have been accessed. In fact, the evidence supports this conclusion.

Priming Techniques in Language Production An important issue in language production concerns the extent to which utterances are formulated incrementally, one unit at a time, according to different levels of representation (semantic, syntactic, and phonological). Priming techniques have been used to address this and other related questions. For example, at the phonological level words could be assembled for articulation either as complete packages or incrementally as sequences of distinct phonological units. A so-called implicit priming technique has been developed to test this. The participant has to learn sets of pairs of words, such that when given the first word in the pair he or she names the second word as quickly as possible. Crucially, in one condition the second words always share the same first syllable (e.g., single-loner, place-local, fruit-lotus), whereas in the other condition they do not (e.g., single-loner, signal-beacon, captain-major). The question is whether participants can use the implicit prime of the shared first syllable to speed up articulation. It turns out that they can, and more interestingly, the implicit priming only works for the first syllable in the word. When given a comparable list in which the second syllable of the second word is shared (e.g., single-murder, place-ponder, fruit-boulder), there is no articulatory benefit.

There are also techniques for studying priming at the syntactic level in production, which use a variant of the picture naming procedure. A typical study might involve participants describing a sequence of pictured events using a verb indicated at the bottom of each picture. Interleaved between these descriptions, the participant checks descriptions they hear against another series of pictured events. By using ditransitive verbs such as give, which take either prepositional objects (give the picture to Mary) or double objects (give Mary the picture), it is possible to study syntactic priming independent of semantic priming. The question is whether, having just heard The sailor gave the banana to the nun, participants are more likely to describe their next picture as The clown handed the book to the pirate than as The clown handed the pirate the book. It turns out that they are.
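Scoring such a study reduces to comparing the proportion of prepositional-object descriptions produced after prepositional-object primes with the proportion produced after double-object primes. The sketch below shows that computation on invented trial data.

    # Sketch: scoring a syntactic priming experiment with ditransitive descriptions.
    # Each trial records the structure of the prime sentence the participant heard
    # ("PO" = prepositional object, "DO" = double object) and the structure of the
    # description they then produced. All trial data are invented.
    trials = [
        ("PO", "PO"), ("PO", "PO"), ("PO", "DO"), ("PO", "PO"),
        ("DO", "DO"), ("DO", "PO"), ("DO", "DO"), ("DO", "DO"),
    ]

    def prop_po_after(prime_type, trials):
        relevant = [resp for prime, resp in trials if prime == prime_type]
        return sum(r == "PO" for r in relevant) / len(relevant)

    po_after_po = prop_po_after("PO", trials)
    po_after_do = prop_po_after("DO", trials)
    print(f"P(PO description | PO prime) = {po_after_po:.2f}")
    print(f"P(PO description | DO prime) = {po_after_do:.2f}")
    print(f"priming effect = {po_after_po - po_after_do:+.2f}")
    # A positive difference is the signature of syntactic priming reported above.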

Behavioral Methods for the Study of Dialogue

As with language production, it is only quite recently that psycholinguists have begun the experimental investigation of language processing during dialogue. Again there is what has sometimes been called the problem of exuberant responses. Because dialogue is inherently spontaneous, how can the experimenter exert the control required for a sound experiment?

One way around this problem is to set up a task that controls what interlocutors can talk about. One such task is the referential communication task. One participant is required to describe a series of arbitrary visual patterns such that their partner, who is able to reply, can identify each pattern from the set that they have been given. This technique has been used to investigate how feedback from the listener affects the nature of subsequent references made by an interlocutor. More complex dialogue tasks have had conversational partners describe routes on a map or positions in a maze as part of some other cooperative activity. More recently, researchers have started to record interlocutors' patterns of eye movements while they carry out some version of the referential communication task. In this way, it is possible to combine aspects of the visual world paradigm with those of the interactive referential communication task.


Neurophysiological Methods – ERP, fMRI, and MEG

Techniques for measuring the neurophysiological correlates of language processing have recently added to the psycholinguist's methodological armory. Three particular measurement techniques have been used. The first measures electrical activity at the scalp using electro-encephalography (EEG) to produce what are called event-related brain potentials (ERPs). The second measures changes in brain blood flow associated with neural activity using functional magnetic resonance imaging (fMRI), and the third measures changes in magnetic fields associated with the electrical activity in the brain using magneto-encephalography (MEG). Each technique has its own advantages and disadvantages as a psycholinguistic tool.

fMRI signals give precise information about the area in the brain associated with the particular activity, so fMRI has proved useful for neurolinguistic investigation. The disadvantage with the technique for psycholinguistics is that it is not good for establishing the time course of the neural activity. This is because it takes time for the neural activity to produce changes in the blood flow. By contrast, ERPs provide precise information about the time course of the neural activity, but it is difficult to establish the source of the activity. This is because the ERP signal is affected by all sorts of irrelevant factors, such as the thickness of the skull or interactions between signals from different areas of the brain. Finally, the most recently developed technique, MEG, offers good localization with similar temporal resolution to ERP. Nevertheless, like ERP signals, MEG signals are only sensitive to activity in neural structures with particular orientations, and neither technique is easy to apply when there is contemporaneous motor activity, such as eye movements or articulation during speech.

The most influential psycholinguistic research to date has tended to use ERP, because ERP experiments are relatively cheap to run and the technique offers temporal resolution similar to that of behavioral measures such as eyetracking. This article concentrates mainly on the use of ERPs, but it will also say something about the use of MEG.

Event-related Brain Potentials (ERPs)

ERPs are derived from measurements of small changes in voltage at different points across the scalp. They are called event-related potentials because they are analyzed in relation to the onset of a triggering event. For instance, to derive an ERP that reflects word identification processes, the experimenter presents a word on a computer screen and measures the changes in scalp voltage from the point at which the word was presented. This process is then repeated over a number of trials. Because the data from each trial will contain irrelevant electrical activity, the experimenter takes the average potential across the set of related trials. In this way, the irrelevant 'noise' information can be filtered out. What remains is an ERP waveform with identifiable peaks and troughs of voltage. These peaks and troughs are taken to reflect the activity of bundles of nerve fibers in particular parts of the brain.
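The averaging step can be sketched as follows. The 'EEG' here is simulated: a small deflection that is time-locked to word onset is buried in trial-by-trial noise that is far larger than the signal, and averaging the epochs recovers it. The sampling rate, trial count, and component shape are arbitrary illustrative choices.

    # Sketch: deriving an ERP by averaging EEG epochs time-locked to word onset.
    import numpy as np

    rng = np.random.default_rng(1)
    srate = 250                                  # samples per second
    times = np.arange(-0.1, 0.8, 1 / srate)      # -100 ms to 800 ms around word onset
    n_trials = 200

    # Hypothetical signal: a negative-going component peaking near 400 ms (microvolts).
    signal = -2.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    epochs = signal + rng.normal(0, 20, (n_trials, times.size))    # add trial noise

    erp = epochs.mean(axis=0)    # averaging cancels activity not locked to the stimulus
    peak_time = times[np.argmin(erp)]
    print(f"most negative point of the average at ~{peak_time * 1000:.0f} ms after onset")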

In practice, ERP researchers try to identify the characteristic peaks and troughs and establish how they might relate to concurrent processing. One well-established peak is called the N400, which corresponds to a negative component of the wave occurring approximately 400 ms after presentation of the word that triggers it (it is a peak because ERP researchers conventionally plot negative values upward and positive values downward). The N400 has been associated with processes of conceptual or semantic integration of words into their sentential contexts. For example, when you measure ERPs elicited by the words eat, drink, or cry in the context "The pizza was too hot to eat/drink/cry," then the N400 is larger for cry than drink and larger for drink than the contextually appropriate eat. This pattern is consistent with the idea that the N400 reflects conceptual integration (with eat being integrated into the sentence better than drink) as well as semantic integration (drink is semantically related to eat whereas cry is not).

Another characteristic trough is called the P600 (a positive change occurring about 600 ms after the triggering word). Unlike the N400, the P600 has been associated with syntactic integration processes. For example, the word was when presented in the ungrammatical sentence The doctor forced the patient was lying produces a much larger P600 than was in the sentence The doctor thought the patient was lying. Because these two waveforms, the N400 and the P600, seem to reflect different kinds of processing, ERP can be used to establish the precise time course of these different processes. So ERP complements other on-line measures such as eyetracking, which do not differentiate between different kinds of psycholinguistic processes in this way.
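In practice, N400 and P600 effects are often quantified as mean amplitudes within conventional time windows, compared across conditions. The sketch below does this for invented condition averages; the window boundaries (300-500 ms and 500-800 ms) are typical choices rather than fixed standards.

    # Sketch: quantifying N400 and P600 effects as mean amplitudes in conventional
    # time windows. The ERP arrays are invented stand-ins for condition averages
    # (microvolts), sampled at 250 Hz from stimulus onset.
    import numpy as np

    srate = 250
    times = np.arange(0, 1.0, 1 / srate)

    def mean_amplitude(erp, times, window):
        lo, hi = window
        mask = (times >= lo) & (times < hi)
        return erp[mask].mean()

    # Invented condition averages: the anomalous condition carries an extra negativity
    # around 400 ms (N400), the ungrammatical one an extra positivity around 600 ms (P600).
    baseline = np.zeros_like(times)
    anomalous = baseline - 3.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    ungrammatical = baseline + 4.0 * np.exp(-((times - 0.6) ** 2) / (2 * 0.08 ** 2))

    n400 = mean_amplitude(anomalous, times, (0.3, 0.5)) - mean_amplitude(baseline, times, (0.3, 0.5))
    p600 = mean_amplitude(ungrammatical, times, (0.5, 0.8)) - mean_amplitude(baseline, times, (0.5, 0.8))
    print(f"N400 effect: {n400:+.2f} microvolts (more negative = larger N400)")
    print(f"P600 effect: {p600:+.2f} microvolts (more positive = larger P600)")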

ERPs have been used to address a wide range of questions both about early stages of lexical processing and about more general syntactic and semantic processes in language comprehension. The technique is best suited to the study of processes that immediately follow presentation of the triggering stimulus, because the ERP signal becomes increasingly noisy over time. This means that it is not such a good technique for studying such things as syntactic re-analysis or integration of a sentence into the discourse context.


Magneto-encephalography (MEG)

MEG has only recently begun to be used in psycholinguistics. MEG has some of the advantages of both fMRI and EEG techniques because it enables precise source localization like fMRI and has fine temporal resolution like ERP. It also complements ERP. Whereas ERP electrical signals can only be picked up from nerve bundles that are in particular orientations with respect to the surface of the brain, MEG magnetic field signals can be picked up from nerve bundles that are orthogonal to those giving ERP signals. For these reasons, many researchers are particularly optimistic about MEG as a psycholinguistic method used together with ERP.

One particularly interesting application has been using MEG to establish the relationship between neural representation and linguistic form. The technique depends upon what has been called mismatch negativity. It was observed that as the same items are repeatedly presented to subjects, the MEG signature associated with their processing is automatically reduced (probably because of neuronal habituation). However, when a new item is presented, the signal returns to normal. This happens irrespective of any behavior on the part of the subject. Mismatch negativity can therefore be used to establish the degree to which different items are processed in the same way. The greater the resumption of the activity (i.e., the mismatch negativity), the more different the neurological processing of the new item. In this way, mismatch negativity can be used in a similar fashion to priming techniques to explore the neurological representation of different aspects of a linguistic stimulus.

Summary and Conclusion

Psycholinguistic techniques differ according to the kind of variables measured and the extent to which they tap into language processing as it happens. Behavioral measures, such as eyetracking, and neurophysiological measures, such as ERP, are particularly effective for measuring the time course of language comprehension. For language production studies, picture naming and priming techniques have been especially effective.

See also: Dialogue and Interaction; Evoked Potentials; fMRI Studies of Language; Magnetoencephalography; Psycholinguistics: Overview; Reading Processes in Adults; Speech Errors: Psycholinguistic Approach; Spoken Language Production: Psycholinguistic Approach.

Bibliography

Carreiras M & Clifton C E (eds.) (2004). The on-line study of sentence comprehension: Eyetracking, ERP and beyond. Hove: Psychology Press.
Garrod S (1999). 'The challenge of dialogue for theories of language processing.' In Garrod S & Pickering M J (eds.) Language processing. Hove: Psychology Press. 389–415.
Levelt W J M (1989). Speaking: from intention to articulation. Cambridge, MA: MIT Press.
Osterhout L & Holcomb P J (1995). 'Event-related potentials and language comprehension.' In Rugg M D & Coles M G H (eds.) Electrophysiology of mind: event-related brain potentials and cognition. Oxford: Oxford University Press. 171–216.
Pulvermüller F (2001). 'Brain reflections of words and their meanings.' Trends in Cognitive Sciences 5, 517–524.
Rayner K & Sereno S C (1994). 'Eye movements in reading: Psycholinguistic studies.' In Gernsbacher M A (ed.) Handbook of Psycholinguistics. San Diego: Academic Press. 57–82.
Trueswell J C & Tanenhaus M K (eds.) (2004). Approaches to studying world-situated language use: bridging the language-as-product and language-as-action traditions. Cambridge, MA: MIT Press.

Psycholinguistics: History

G Altmann, University of York, York, UK

© 2006 Elsevier Ltd. All rights reserved.

Our faculty for language has intrigued scholars for centuries. Yet most textbooks assume that psycholinguistics has its origins in the late 1950s and 1960s, and that nothing of note contributed to its evolution before then. In some respects this is true, in that it was only then that psycholinguistics began to proliferate as an identifiable discipline within the psychology literature. This proliferation was marked by the founding in 1962 of the Journal of Verbal Learning and Verbal Behavior (which subsequently, in 1985, became the Journal of Memory and Language). Why the original journal was so titled, and why its title presents us with a historical paradox, will become clearer as this review unfolds. The review's purpose is to consider how the present-day state of the art evolved. In so doing, it will touch briefly on ancient Greek philosophy, 19th century neuroscience, 20th century psycholinguistics, and beyond. It will consider approaches to the brain as practiced in both ancient Egypt and modern neuroscience. It will be necessarily selective, in order to make some sense of the historical developments that contributed to psycholinguistic science.

From the Ancient Egyptians to the Greek Philosophers

The earliest to write about language and the brain were the ancient Egyptians – the first to write about anything at all. A catalog of the effects of head injury (and injuries lower down the body also) exists in what is now referred to as the Edwin Smith Surgical Papyrus, written about 1700 B.C. The writer (believed to have collected together information spanning perhaps another 1000 years before) referred there to what is presumed to be the first recorded case of aphasia – language breakdown following brain trauma. However, the Egyptians did not accord much significance to the brain, which, unlike the other organs of the body, was discarded during mummification (it was scraped out through the nose). They believed instead that the heart was the seat of the soul and the repository for memory, a view largely shared by the Greek philosopher Aristotle (384–322 B.C.) – a somewhat surprising position to take given that he was a student at Plato's Academy and that Plato (427–347 B.C.) believed the brain to be the seat of intelligence.

Plato was possibly the earliest to write at length on language (where others may have spoken, but not written). Certainly, his writings were the most influential with respect to the philosophy of language and the question 'what does a word mean?' Plato, in his Republic, considered the meaning of words in his Allegory of the Cave (as well as in Cratylus). In this allegory, a group of prisoners have been chained all their lives within a cave. All they see are the shadows of objects cast upon a wall by the flames of a fire. They experience only those shadows (in much the same way that we can only experience the results of our sensory percepts), and their language similarly describes only those shadows. Plato noted that when using a word, the prisoners would take it to refer to the shadows before them, when in fact (according to Plato) they would refer not to objects in the shadow world, but (unbeknown to the prisoners) to objects in the real world. Thus, for Plato (and a host of more contemporary philosophers, from Frege to Putnam), the true meaning of a word – its reference – is external to the person who, by using the word, is attributing meaning to it. But why should it matter what a word refers to?

The psycholinguistic endeavor is to uncover the mental processes that are implicated in the acquisition, production, and comprehension of language. Just as psychology is the study of the control of behavior, so psycholinguistics is the study of the control of linguistic behavior. A part of any psycholinguistic theory of mental process is an account of what constitutes the input to the mental process – that is, what information is operated upon by those processes. While Plato was of course correct that the form of the real-world object dictates the form of the sensory image presented to the allegorical prisoner, the mental processes involved in that prisoner's use of language can operate only on mental derivatives of that sensory image. There may be properties of the real-world object (such as color, texture, and density) that are not represented in their shadow-forms, and thus mental processes that might otherwise (outside the cave) develop sensitivity to those properties need never develop such sensitivities if constrained to living a life inside the cave. But while the shadows would not permit the distinction between, say, a tennis ball and an orange, the contexts in which the shadows were experienced, or their names heard, would distinguish between the two – mental sensitivities would develop, but they would not necessarily be grounded in the perceptual domain. These distinctions, between the actual world and our experience of the world, and between an object or word and the context in which that object or word might occur, led other philosophers, most notably Wittgenstein in his Philosophical Investigations, to propose that the meaning of a word is knowledge of its use in the language – that is, knowledge of the contexts in which it would be appropriate to utter that word, where such knowledge is shaped by experience. We return to this theme when we consider in more detail the more recent history (and possible future) of psycholinguistics.

The Earliest Empirical Studies

The pre-history of psycholinguistics (up until the 19th century) was dominated by philosophical conjecture. The term dominated is used loosely here, as there was no systematic and ongoing questioning of the relationship between mind and language, or indeed, brain and language – there was no community of researchers asking the questions. But modern-day psycholinguistics is dominated not by philosophy (although it had its moments), but by experimental investigations that measure reaction times, monitor eye movements, record babies' babbles, and so on. Its pre-history lacks such experimentation. This is not to say that no experiments were performed. Certainly, there were isolated cases, generally of a kind that would not be tolerated in the modern age. Indeed, one of the most widely replicated studies (if one is to believe the historians) is a study that was carried out on at least three and possibly four independent occasions between the 7th century B.C. and the 16th century A.D.

In each case, some number of babies were apparently brought up in isolation (except for carers who were either mute or instructed not to speak), with the aim of the experiment being to discover what language, if any, the children would grow up speaking. The results varied. The Egyptian Pharaoh Psamtik (7th century B.C.) was credited by Herodotus with discovering that they spoke Phrygian. The Holy Roman Emperor and German king Frederick II (1194–1250 A.D.) carried out a similar study, but all the infants died. King James IV (1473–1513 A.D.) is supposed to have performed a similar experiment on the island of Inchkeith, although it is likely that this study never in fact took place (the fact that the children are reported to have emerged from their isolation speaking Hebrew is one reason to doubt the truth of the story). And finally, Akbar the Great (1542–1605), the grandfather of Shah Jahan, who built the Taj Mahal, similarly failed to discover man's 'natural language' (although there is some suggestion that in this case the infants acquired a form of signed language inherited, in part, from the infants' carers).

The 19th Century Emergence of the Cognitive Neuropsychology of Language

The first systematic studies of the relationship between language and brain were conducted in the 19th century. This is probably the earliest point in the history of psycholinguistics from which a progression of studies can be traced, with one author building a case on the basis of earlier studies coupled with newer data. The protagonists at this time were Gall, Bouillaud, Aubertin, Broca, Wernicke, and Lichtheim, to name a few. None of them would be described as 'psycholinguists,' but to the extent that their work (like that of modern-day cognitive neuroscientists) informed accounts of the relationship between brain and language, they are no less a part of the history of psycholinguistics than are the linguists, philosophers, psychologists, and cognitive scientists who have influenced the field through their own, sometimes radically different, perspectives.

Franz Gall is perhaps better known for his work on phrenology, but he believed that language function was localized in the anterior parts of the brain. His student Jean-Baptiste Bouillaud collected clinical evidence in support of Gall's theory, and in turn, Bouillaud's student Ernest Aubertin did the same. It was at a meeting in April of 1861 that Aubertin made his beliefs plain: if a case of speech loss could be found that was not accompanied by a frontal lesion, he would give up his (and his intellectual forebears') belief in the localization of language. In the audience was Paul Broca, after whom are named Broca's aphasia and, within the left frontal lobe, Broca's area. Broca was struck by Aubertin's empirical challenge, but at the same time realized that craniology (Gall's lasting influence on his students) could not provide the proof that was required to establish a link between language loss and cerebral localization – only anatomical inspection of the brain could do that. Coincidentally, within a few days he was presented with a patient suffering from speech loss who died a few days after that. Broca's postmortem analysis of this patient's brain (and the damage to what is now referred to as Broca's area), coupled with earlier observations made by Marc Dax (on right hemiplegia and its correlation with speech loss) but published at the same time, established the anatomical validity of the localization hypothesis. About 10 years later (in 1874), Carl Wernicke published his work on 'sensory aphasia' (deficits in the comprehension of language). This work was considerably enhanced by Wernicke's student Ludwig Lichtheim who, in 1885, produced a schematic (cf. a 'model') of how three interlinked centers in the brain are implicated in aphasia: Broca's (the 'center of motor images'), Wernicke's (the 'center of auditory images'), and a diffusely located 'concept center.' Lesions to each of these areas, or to the connections between them, produce different kinds of aphasias. Most interesting of all, his schematic enabled him to predict disorders that had not yet been described. This ability of a conceptual 'model' to make as yet untested predictions is a theme we shall return to.

The Early 20th Century Influence of Behaviorism

By the end of the 19th century, the study of language began to change, as did the study of psychology more generally. Interest in the psychology (as opposed to philosophy) of language shifted from being primarily (or even solely) concerned with its breakdown to being concerned also with its normal use. Wilhelm Wundt in Die Sprache (published in 1900) stressed the importance of mental states and the relationship between utterances and those internal states. William James similarly (at least early on) saw the advantages of introducing mental states into theories of language use (see his 1890 Principles of Psychology, in which several contemporary issues in psycholinguistics are foreshadowed). But the early 20th century was a turbulent time for psycholinguistics (as it was for psychology): J. B. Watson argued that psychology should be concerned with behavior and behavioral observation, rather than with consciousness and introspection (the Wundtian approach). And whereas Wundt had argued that a psychology of language was as much about the mind as it was about language, behaviorists such as J. R. Kantor argued against the idea that language use implicated distinct mental states. For Kantor, the German mentalist tradition started by Wundt was simply wrong. Even William James turned away from Wundtian psychology. Thus, the behaviorist tradition took hold.

The late 19th and early 20th centuries were a time of great change in linguistics, too. The 19th century had seen the emergence of the Neogrammarians, a group that studied language change. They were interested in how the sounds of different languages were related, and how, within a language, the sounds changed over time. They were less interested in what a language 'looked like' at a particular moment in time. This changed at the beginning of the 20th century when Ferdinand de Saussure brought structure into the study of language. He introduced the idea that every element of language could be understood through its relation to the other elements (he introduced syntactic distinctions that are still central to contemporary linguistics). In the 1930s, the Bloomfieldian school of linguistics was born, with the publication in 1933 of Leonard Bloomfield's Language. Bloomfield reduced the study of language structures to a laborious set of taxonomic procedures, starting with the smallest element of language – the phoneme. In doing so, Bloomfield firmly aligned the linguistics of the day with behaviorism. And just as behaviorism eschewed mental states in its study of psychology, so the Bloomfieldian tradition eschewed psychology in its study of language. The study of language was firmly caught between the proverbial rock and a hard place – between behaviorism on the one hand and taxonomy on the other. Mental states were, the argument went, irrelevant – whether with respect to psychological or linguistic inquiry.

The behaviorist tradition culminated (with respect to language) with B. F. Skinner's publication in 1957 of Verbal Behavior. Here, Skinner sought to apply behaviorist principles to verbal learning and verbal behavior, attempting to explain them in terms of conditioning theory. Verbal behavior (and Verbal Behavior) proved to be the final battleground on which the classical behaviorists and the mentalists would clash.

The Mid-20th Century and the Chomskyan Influence

In 1959, Chomsky published a review of Skinner's Verbal Behavior. He argued that no amount of conditioned stimulus–response associations could explain the infinite productivity or systematicity of language. With Chomsky, out went Bloomfield, and in came mental structures, ripe for theoretical and empirical investigation. Chomsky reintroduced the mind, and specifically mental representation, into theories of language (although his beliefs did not amount to a theory of psychological process, but to an account of linguistic structure). So whereas Skinner ostensibly eschewed mental representations, Chomsky apparently proved that language was founded on precisely such representation. Some later commentators took the view that the Chomskyan revolution threw out the associationist baby with the behaviorist bathwater. Behaviorism was founded on associationism. Behaviorism was 'out,' and with it, associationism. Symbolic computation was 'in,' but with it, uncertainty over how the symbolic system was acquired. It was not until the mid-1980s that a new kind of revolution took place, in which the associationist baby, now grown up, was brought back into the fold. The intervening 20 years were typical teenage years – full of energy, punctuated by occasional false hopes that nonetheless proved essential to the maturation process.

Two years before his review of Verbal Behavior, Chomsky had published Syntactic Structures, a monograph devoted to exploring the notion of abstract grammatical rules as the basis for generating sentential structure. According to Blumenthal in his 1970 account of the history of psycholinguistics, Chomsky's departure from the Bloomfieldian school was too radical for an American publisher to want to publish a lengthy volume that Chomsky had written outlining the new approach, and only Mouton, a European publisher (and presumably more sympathetic to the tradition that Chomsky was advocating), would publish a shorter monograph based on an undergraduate lecture series he taught at MIT. In fact, this is not quite accurate (N. Chomsky, personal communication); Chomsky had indeed written a longer volume (subsequently published in 1975), and it is true that initial reactions to the manuscript were negative (but, according to Chomsky, not unreasonable), but Syntactic Structures was not a compromise brought about through Chomsky's search for a publisher; he had not, in fact, intended to publish it. Instead, Cornelis van Schooneveld, a Dutch linguist and acquaintance of Chomsky's who was visiting MIT and happened to edit a series for Mouton, suggested that Chomsky write up his class notes and publish them. This he did, and modern linguistics was born. Psycholinguistics became caught up, almost immediately, in its wake.

Chomsky's influence on psycholinguistics cannot be overstated. He drew an important distinction between 'competence,' or the knowledge we have about a language, and 'performance,' the use of that language (a distinction that was reminiscent of Saussure's earlier distinction between langue and parole). Both, he claimed, arise through the workings of the human mind – a mind which, furthermore, is innately enabled to learn the structures of human language (although not everyone agreed with the arguments for a language acquisition device akin to a mental organ – a concise summary of the counterarguments was written by Bates and Goodman (1999)). It is perhaps surprising that against the backdrop of Syntactic Structures and Chomsky's review of Skinner's Verbal Behavior, a new and influential journal dedicated to research into the psychology of language should nonetheless, in 1962, give itself a title (the Journal of Verbal Learning and Verbal Behavior) that firmly placed it in the behaviorist tradition.

From Linguistic Competence to Psychological Performance

Chomsky's theories of grammar were theories of competence, not performance. And yet, his work on transformational grammar initiated a considerable research effort in the early 1960s to validate the psychological status of syntactic processing (the construction of representations encoding the dependencies between the constituents of a sentence). Many of these studies attempted to show that perceptual complexity, as measured using a variety of different tasks, was related to linguistic complexity (the so-called Derivational Theory of Complexity). However, whereas the syntactic structures postulated by transformational grammar did have some psychological reality, the devices postulated for building those structures (e.g., the transformations that formed a part of the grammatical formalism) did not. It soon became apparent that the distinction between competence and performance was far more important than originally realized – the linguists' rules, which formed a theory of competence, did not make a theory of psychological process.

Subsequently, the emphasis shifted toward examination of the psychological, not linguistic, mechanisms by which syntactic dependencies are determined (a process referred to as parsing). In a seminal paper published in 1970, Thomas Bever pointed out that in cases of ambiguity, when more than one structure (i.e., dependency relation) might be permissible, there appear to be consistent preferences for one interpretation rather than another. This consistency appeared to hold not only across different examples of the same kind of ambiguity, but across different people, too.

Thus, despite the grammaticality of 'the horse raced past the barn fell' (cf. 'the car driven past the garage crashed'), the preference to interpret 'raced' as a main verb (instead of as a past participle equivalent to 'driven') is so overwhelming that the sentence is perceived as ungrammatical (and the preference is then said to induce a 'garden path' effect). Evidently, grammaticality and processability are distinct mental phenomena.

On the Influence of the Digital Computer

The 1970s saw enormous growth in psycholinguistics. Advances were made across a wide range of phenomena, including the identification of both printed and spoken words, the reading process, sentence comprehension (with much of the emphasis on the resolution of ambiguities of the 'garden path' kind), and the mental representation of texts. Whether psycholinguistics itself experienced a distinctive 'spurt' in the number of publications is contentious, because although there undeniably was such a spurt, the whole of psychology experienced the same rapid growth. It would be wrong, however, to attribute all this advancement to the influence of Chomsky. The demise of behaviorism played a part (and certainly Chomsky played a part in that demise), but so did the advent in the 1950s of the digital computer. The 'mind-as-computer' metaphor had a subtle but pervasive influence on both psycholinguistics and the study of cognition generally. Computer programs worked by breaking down complex behaviors into sequences of simpler, more manageable (and hence more understandable) behaviors. They relied on symbol manipulation and the control of information flow. They distinguished between different levels of explanatory abstraction (the high-level programming language, the assembly code, and the flow of electrical currents around the hardware). And perhaps most important of all to the empirical psychologist, they enabled novel predictions to be made that might not otherwise have been foreseen had the 'model' not been implemented in full; complex interactions among the components of a program were not easy to foresee.

The influences of the digital computing revolution were felt in different ways. Some were direct, with researchers building computer simulations of mental behavior (in the growing field of Artificial Intelligence, several language 'understanding' programs were written, some of which are still relevant 35 years later – e.g., Terry Winograd's SHRDLU program, written in 1968–1970). Other influences were indirect, coming to psycholinguistics via philosophy. One such example was Jerry Fodor's Modularity of Mind hypothesis (from 1983). One simplified interpretation of this hypothesis (it was interpreted in different ways by different researchers) was that there are two alternative ways of theorizing about the mind: one is to assume it is incredibly complex and that multiple sources of information interact in multiple ways, and the other is to assume that it can be broken down into a number of modules, each of which performs some particular function and is 'blind' to the workings of the other modules (perhaps taking as input the output of one or more of those other modules). Fodor argued that certain aspects of cognition were modular (the input systems), and certain others were not (central processes). This hypothesis had considerable influence in psycholinguistics, and for a time (the mid-1980s to early 1990s), hypotheses were evaluated according to whether they were modular or not. There seemed little agreement, however, on where one drew the boundaries (for example, was spoken language recognition a part of an input system? If it was, how could 'higher-level' knowledge of the context in which the language was being interpreted influence the modular and encapsulated recognition process? – some argued it could not, while others argued it could). It was about this time, in seeming opposition to the trend toward symbolic computation, that a new computationally motivated approach to cognition emerged in the mid 1980s, apparently eschewing symbolic computation and modularity.

The Late 20th Century Emergence of Connectionism: Statistical Approaches to Language

In 1986, David Rumelhart and Jay McClelland published Parallel Distributed Processing. This edited volume described a range of connectionist, or neural network, models of learning and cognition, and marked a 'coming of age' for connectionism. It was, for many researchers in psycholinguistics, their first introduction to a wide range of research in this emerging field. Of particular interest were the facts that 'knowledge' in connectionist networks is encoded as patterns of connectivity distributed across the neural-like units, and 'processing' is manifest as spreading patterns of activation. These networks can learn complex associative relations largely on the basis of simple associative learning principles (based primarily on work published in 1949 by Donald Hebb, a student of Lashley's). Various algorithms exist to set the 'strengths' of the connections between the units automatically, so that a given input pattern of activation across some set of units will spread through the network and yield a desired output pattern across some other set of units. Indeed, multiple input-output pairings can be learned by the same network. Importantly, and in contrast to the ideals of the behaviorist traditions, neural networks can develop internal representations.

Several connectionist models had profound effects on developments in psycholinguistics. TRACE, for example, developed by McClelland and Jeff Elman in the 1980s, was a model of spoken word recognition that formed the focus of empirical research for a good 20 years after its inception. But TRACE did not learn anything – it was hardwired. An extremely influential model that did learn by itself was described by Elman (1990), who showed how a particular kind of network could learn the dependencies that constrain the sequential ordering of elements (e.g., phonemes or words) through time. In effect, it learned which kinds of word could follow which other kinds of word (hence, it was a statistical model, because it encoded the statistics of the language it was trained upon). Interestingly, it developed internal representations that appeared to resemble grammatical knowledge; words that occurred in similar sentential contexts came to evoke similar internal representations (that is, internal patterns of activity when the word was presented to the network) – and because words of the same grammatical category tend to occur in the same sentential contexts, different 'clusters' of words emerged, with each cluster representing a different category of word.
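The flavor of such a network can be conveyed with a minimal sketch of an Elman-style simple recurrent network trained to predict the next word. The toy vocabulary, corpus, and layer sizes below are invented for illustration, and the training regime is deliberately simplified (single-step gradients rather than backpropagation through time), so this is a sketch of the idea rather than a reconstruction of Elman's (1990) simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and corpus of word-index sequences; which words can follow
# which is what the network must learn (all items here are invented).
vocab = ["boy", "girl", "dog", "sees", "chases", "."]
V, H = len(vocab), 8                       # vocabulary size, hidden units
corpus = [[0, 3, 2, 5], [1, 4, 2, 5], [2, 3, 0, 5], [0, 4, 1, 5]]

W_xh = rng.normal(0, 0.1, (H, V))          # input -> hidden weights
W_hh = rng.normal(0, 0.1, (H, H))          # context (previous hidden) -> hidden
W_hy = rng.normal(0, 0.1, (V, H))          # hidden -> next-word prediction

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for epoch in range(500):
    for seq in corpus:
        h = np.zeros(H)                    # context units start empty
        for t in range(len(seq) - 1):
            x, target = one_hot(seq[t]), one_hot(seq[t + 1])
            h_new = np.tanh(W_xh @ x + W_hh @ h)
            y = softmax(W_hy @ h_new)
            # One-step gradients of the cross-entropy loss (no backprop
            # through time, a simplification relative to full training).
            dz = y - target
            dh = (W_hy.T @ dz) * (1.0 - h_new ** 2)
            W_hy -= lr * np.outer(dz, h_new)
            W_xh -= lr * np.outer(dh, x)
            W_hh -= lr * np.outer(dh, h)
            h = h_new

# Words that occur in similar contexts come to evoke similar hidden patterns,
# so the toy 'nouns' and 'verbs' drift toward separate clusters.
for i, w in enumerate(vocab):
    print(w, np.round(np.tanh(W_xh @ one_hot(i)), 2))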

Not surprisingly, the entire connectionist enterprise came under intense critical scrutiny from the linguistics and philosophy communities, not least because it appeared to reduce language to a system of statistical patterns, was fundamentally associationist, nonmodular, and eschewed the explicit manipulation of symbolic structures (because the internal representations that emerged as a result of the learning process were not symbolic in the traditional sense). Within the context of the symbolic-connectionist debate there developed what became perhaps one of the longest surviving disputes in contemporary psycholinguistics: between those who believe that word formation (e.g., the formation of 'walked' from 'walk,' 'ran' from 'run,' and 'went' from 'go') is driven by knowledge of rules and exceptions to those rules, and those who believe it is driven by statistical regularity (which can apparently capture, in the right model, both the regularly and irregularly formed words). The debate shows little sign of abating, even 20 years later.

Critics notwithstanding, statistical approaches to language (both with respect to its structure and its mental processing) are becoming more prevalent, with application to issues as diverse as the 'discovery' of words through the segmentation of the speech input, the emergence of grammatical categories, and even the emergence of meaning as a consequence of statistical dependencies between a word and its context (cf. Wittgenstein's views on the meaning of words). Empirically also, the statistical approach has led to investigation of issues ranging from infants' abilities to segment speech and to induce grammar-like rules to adult sentence processing. The reason that such approaches have proved so appealing is that statistics are agnostic as to the nature of the real-world objects over which the statistics are calculated – thus, fundamentally the same algorithm can be applied to sequences of phonemes, words, or sentences. Their implementation within a neural network is similarly agnostic – the same network and the same algorithms that enable that network to induce the appropriate statistics can be applied to many different domains. Connectionism opened up experience-based learning to a range of psychological domains, not just the linguistic domains. And experience-based learning was attractive not least because it required fewer assumptions about the existence of innately specified domain-specific faculties (and in a multi-authored volume published in 1996, Jeff Elman teamed up with a variety of developmental psychologists to argue that connectionism was attractive precisely because it enabled a new perspective on how innate constraints on learning and neural structure might be an important component of human language acquisition (Elman et al., 1996)).
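To illustrate the point that the same simple statistics can be computed over any kind of sequence, the sketch below segments a continuous 'stream' of syllables by looking for dips in forward transitional probability, in the spirit of the infant segmentation work alluded to above. The toy words, the syllable stream, and the crude local-dip boundary rule are all invented for the example.

```python
import random
from collections import Counter

random.seed(1)

# Toy bisyllabic 'words' and a continuous syllable stream built by sampling
# them at random with no pauses (all items are invented for illustration).
words = [["bi", "da"], ["ku", "pa"], ["do", "lu"]]
stream = []
for _ in range(300):
    stream.extend(random.choice(words))

# Forward transitional probability P(next | current) for adjacent syllables.
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def tp(a, b):
    return pair_counts[(a, b)] / first_counts[a]

# Posit a word boundary wherever the transitional probability dips relative
# to the preceding transition (a crude local-dip rule).
boundaries = [0]
for i in range(1, len(stream) - 1):
    left, here = tp(stream[i - 1], stream[i]), tp(stream[i], stream[i + 1])
    if here < left:
        boundaries.append(i + 1)

segmented = ["".join(stream[b:e]) for b, e in zip(boundaries, boundaries[1:])]
print(segmented[:10])   # mostly recovers 'bida', 'kupa', 'dolu'
```

Because the procedure only counts adjacent pairs, exactly the same code could be run over letters, phonemes, or words, which is the sense in which such statistics are agnostic about their input.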

Neural networks can be criticized for being (among other things) too unconstrained – they can, in principle, do more than might be humanly possible – but the opposite criticism, that they are too small and do not necessarily 'scale up,' is also often heard. Neural networks as currently implemented are just the 'medium' on which the statistics are offered up. To misuse a common adage, the proof will be in the pudding, not in the plate that serves it up. There is little doubt, from the historical perspective, that although the emergence of connectionism has offered a powerful theoretical tool, its emergence has also polarized sections of the psycholinguistic community, between 'connectionists' on the one hand, and 'symbolists' on the other. This polarization is not unique to psycholinguistics, however, but pervades the study of cognition more broadly. And as if to further muddy the theoretical waters, the beginning of the 21st century has seen renewed interest in yet another (no less controversial) paradigm – one that grounds language (and cognition) in action.

The Early 21st Century and the Grounding of Language in Action and the Brain

Traditional theories in cognition suppose that the job of the perceptual system is to deliver to the cognitive system a representation of the external world. The job of the cognitive system is then to reconstruct, mentally, that external world. This reconstruction subsequently forms the basis for 'commands' sent to, for example, the motor system. Cognition thus mediates between perception and action. An alternative approach, termed 'embodied cognition,' is that cognition and action are encoded within the same representational medium. Cognition is thus rooted in the same motoric and sensory representations that support interaction with the external world. Or, put another way, cognition is grounded in the same neural substrates that support sensory-motoric interaction with the external world. One consequence of this view is that language, a component of cognition, should, like the other components of cognition, be studied in the context of (i) the interactions it causes between the hearer and the world, and (ii) the neural substrates that support those interactions. Coincidentally, the 1990s saw a boom in research into the neural substrate of language, in part due to the increased availability of neuroimaging technologies (predominantly PET and fMRI, with EEG and more recently MEG also proving influential). It also saw increased research into the relationship between language and action. Taken together, these two streams of research provided increasing evidence for embodied cognition.

With respect to imaging, a variety of studies demonstrated what Lichtheim had alluded to a century earlier – that concepts are not represented in some discrete location within the brain, but are distributed across different regions. For example, words whose meanings implicate tool use (e.g., 'hammer') activate regions of the brain responsible for controlling motoric action (during the use of the tool) and other regions involved in the recognition of object form (during perception of the tool). Color words (e.g., 'yellow') and words referring to non-manipulable artefacts (e.g., 'house') do not activate motoric areas to the same extent, but they do activate regions close to those implicated in form perception and, for color words, color perception. Importantly, there is no single region that is primarily active; rather, words and concepts activate complex patterns of activity that are distributed and overlapping within different parts of the brain that are known to have (other) motoric and sensory functions. The meanings of (at least some) words do appear, then, to be grounded in those neural substrates that support sensory-motoric interaction.

About the same time that increased attention was focusing on neuroimaging, new techniques for studying language and its effects on action were also being developed. One of these involved the monitoring of eye movements as participants listened to commands to manipulate objects in front of them, or as they listened to descriptions of events that might unfold within the scene before them (one can view language-mediated eye movements as central to the relationship between language and action, because eye movements signal overt shifts in attention, and because attention to something necessarily precedes (deliberate) action upon it). It was found that eye movements were closely synchronized with processes implicated in both spoken word recognition and sentence processing, and that much could be gleaned about what kinds of information were recruited at what point during a word or sentence in order to interpret the unfolding language with respect to the scene in front of the participant (it is not without some irony that in L. N. Fowler's famous phrenology bust, from about 1865, the faculty for language is located just below the left eye). Another technique involved measuring motoric responses to different kinds of linguistic stimuli – for example, words or sentences referring to movements toward or away from the body were found to interfere with responses in a judgment task (e.g., 'does this sentence make sense?') that involved moving a finger toward or away from a response button. A range of studies, some involving TMS (Transcranial Magnetic Stimulation – a method for either temporarily stimulating or suppressing a part of the brain, such as parts of motor cortex), have confirmed this motoric component to language comprehension.

It is noteworthy, with respect to the embodiment approach to cognition, that some of its basic tenets have been around since the earliest days of (contemporary) psycholinguistics. Winograd's SHRDLU program, for example, viewed the meaning of a word such as 'place' or 'lift' as that part of the program that caused placing or lifting – language comprehension within that program was grounded in sensory-motoric representation – and as such, SHRDLU followed in the Wittgensteinian tradition of treating meaning as use. Similarly, it is noteworthy that although most of the neuroimaging of language has been carried out independently of theories of embodied cognition, much of the work converges on the same theme – that aspects of language are represented in the same representational substrates that control our sensory-motoric interactions with the external world.

Epilogue

And that, broadly speaking, is where the field is now. In the space available, it is impossible to document all the trends that have influenced contemporary psycholinguistics, and which have influenced not just what kinds of language behavior we study (e.g., language breakdown, normal language use, ambiguity resolution, and so on), but also how we study those behaviors (through studying aphasia, neuroimaging, language-mediated eye movements, and so on). And we have still to see the full influences of connectionism, statistical learning, embodied cognition, and the neuroscience of language. What we can be sure of is that the boundaries between the study of language and the study of other aspects of cognition are wearing thinner (the eye movement research mentioned above is at the interface of language and vision, for example). No doubt there are already developments in 'neighboring' fields of study (e.g., the computational sciences and non-cognitive neurosciences) that will also have an impact, but have yet to emerge as quantifiable influences on psycholinguistics. For example, researchers are already using computational techniques coupled with detailed neuroanatomical research on the neural structure of the brain to attempt to understand the kinds of 'computation' that distinct parts of the brain may be capable of. Such research promises greater understanding of the brain's ability to learn, represent, and deploy language. And although the history of psycholinguistics is relevant to understanding where the field is today, perhaps of greater interest is where the field will be tomorrow.

Acknowledgments

This article is a substantially expanded version of a short, 1500-word section on the history of psycholinguistics that appeared as part of a broader review of issues in psycholinguistics (Altmann, 2001). The reader is referred to this paper for in-depth review of (some of) the topics that constitute the field of psycholinguistics. The author would like to thank the publishers of this article for donating his fee to the charity Medecins sans Frontieres, and Silvia Gennari for advice on topics ranging from Plato to neuroimaging.

See also: Cognitive Science: Overview; fMRI Studies of Language; Human Language Processing: Connectionist Models; Human Language Processing: Symbolic Models; Language, Visual Cognition and Motor Action; Modularity of Mind and Language; Psycholinguistics: Overview; Sentence Processing.

Bibliography

Altmann G T M (1997). The ascent of Babel: An exploration of language, mind, and understanding. Oxford: Oxford University Press.


Altmann G T M (2001). 'The language machine: Psycholinguistics in review.' British Journal of Psychology 92, 129–170.

Bates E & Goodman J C (1999). 'On the emergence of grammar from the lexicon.' In MacWhinney B (ed.) The emergence of language. Mahwah, NJ: Lawrence Erlbaum Associates. 29–80.

Bever T G (1970). 'The cognitive basis for linguistic structures.' In Hayes J R (ed.) Cognition and the development of language. New York: John Wiley and Sons.

Blumenthal A L (1970). Language and psychology: Historical aspects of psycholinguistics. New York: John Wiley & Sons.

Chomsky N (1957). Syntactic structures. Mouton.

Chomsky N (1959). 'Review of Skinner's Verbal Behaviour.' Language 35, 26–58.

Elman J L (1990). 'Finding structure in time.' Cognitive Science 14, 179–211.

Elman J L, Bates E A, Johnson M H, Karmiloff-Smith A, Parisi D & Plunkett K (1996). Rethinking innateness: A connectionist perspective on development. Cambridge, Mass: MIT Press/Bradford Books.

Fodor J A (1983). Modularity of mind. Cambridge, Mass: MIT Press.

Marcus G F (2003). The algebraic mind. Cambridge, Mass: MIT Press/Bradford Books.

Martin A & Chao L L (2001). 'Semantic memory and the brain: structure and processes.' Current Opinion in Neurobiology 11, 194–201.

McClelland J L & Elman J L (1986). 'The TRACE model of speech perception.' Cognitive Psychology 18, 1–86.

Price C J (1999). 'The functional anatomy of word comprehension and production.' Trends in Cognitive Sciences 2(8), 281–288.

Rumelhart D E & McClelland J L (eds.) (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Cambridge, MA: MIT Press.

Saffran E M (2000). 'Aphasia and the relationship of language and brain.' Seminars in Neurology 20(4), 409–418.

Skinner B F (1957). Verbal behavior. New York: Appleton-Century-Crofts.

Winograd T (1972). 'Understanding natural language.' Cognitive Psychology 3(1), 1–191.

Psycholinguistics: Overview

A Anderson, University of Glasgow, Glasgow, UK

© 2006 Elsevier Ltd. All rights reserved.

Emergence of Psycholinguistics in the Late 1950s and 1960s from the Chomskyan Revolution

Although the study of language has been part of psychology from its earliest years, including for example in the work of Wilhelm Wundt, the father of psychology, a distinct field of psycholinguistics emerged in the late 1950s largely in response to the impact of Chomsky. In the preceding decades, notably in the United States, psychology had been dominated by the behaviorist approach of researchers such as B. F. Skinner. They treated language as a form of verbal behavior, which, like all other behavior, they believed was governed by simple stimulus–response associations. Chomsky demonstrated the shortcomings of the behaviorist approach in explaining the productivity of language and its complexity, and his work, notably Syntactic structures (1957), provided a major impetus for a new kind of psychological investigation of language. This was driven by an interest in the mental representation of language in general and syntactic structures in particular (see Psycholinguistics: History).

Psychology since the demise of behaviorism has again been concerned with understanding the way that people accomplish various information-processing tasks. In the field of psycholinguistics this means a concern with the cognitive processes by which a string of sounds in an utterance, or marks on a page, are processed to identify individual words and sentences, and how this emerging structure becomes mentally represented as a meaningful concept. The goal of such research is to derive models that account for how people achieve this so rapidly and successfully, given what we know about the general limitations of human cognitive processing. To oversimplify: the psychologist is concerned with how the linguistic units are processed and represented; the linguist is concerned with the description of the structures that emerge from any such processes.

Research Topics in the Early Years of Psycholinguistics

In its early years psycholinguistics reflected the concerns of linguistics and the central role of syntax. Psychologists such as Miller and Isard (1963) showed that syntax influences the way people interpret sentences, and even how many words people can remember from a string of words that makes no sense. More words are remembered from a 'sentence' like 'Accidents carry honey between the house' than from strings with no syntactic structure such as 'On trains hive elephants the simplify.'

The focus on syntax and its importance in language processing led many psycholinguists to try to test the psychological reality of Chomsky's theory of generative grammar. Experiments were designed to explore the notion that when people process sentences, what they are doing is retrieving the deep syntactic structure as described in Chomsky's transformational grammar. So initially, a number of studies seemed to show that the relative ease or difficulty with which a reader or listener could process a given sentence was directly related to its syntactic complexity. Chomsky's kernel sentences, equivalent to active affirmative declarative sentences, were recalled most easily and processed most quickly, while sentences including one or more transformations such as negative or passive forms were more difficult to process. These kinds of study led to the so-called derivational theory of complexity. This was superseded as it became clear that the results of many of the studies that apparently provided support for this purely syntactic view of how people process sentences could also be accounted for by the influence of semantic factors. Although sentence processing remains one of the most significant topics in psycholinguistics, the range of language phenomena that are studied has broadened considerably since the early 1960s. The assumption that the role of psycholinguistics is to demonstrate the psychological validity of any particular syntactic theory has also been overtaken.

Models of Sentence Comprehension

Many of the models of sentence comprehension that have been developed in psycholinguistics try to elucidate the cognitive processes that are involved when a reader or listener interprets a sentence. There is considerable experimental evidence that sentence comprehension is incremental, that is, that an interpretation is built up on a moment-by-moment basis from the incoming linguistic information. The evidence for this incremental processing is particularly striking in the way listeners recognize spoken words (see below), which are often identified before all the acoustic information has been heard.

Even in written language processing we have clear signs of the incremental nature of linguistic processing. This is illustrated by the difficulties most readers have with sentences like the following: 'The horse raced past the barn fell' (Bever, 1970). This is known as a garden path sentence, because nearly all readers interpret it while they are reading with 'raced' as an active verb and so expect the sentence to end after 'barn.' They do not realize that the sentence could have an equivalent interpretation to 'The horse that raced past the barn, fell.' The incremental way that sentences seem to be interpreted by readers or listeners is a major source of potential ambiguities of interpretation. Many sentences in a language have potentially more than one interpretation as they are processed, yet the reader or listener is usually not aware of any problem in arriving at one clear interpretation of a sentence.

The cognitive architecture that underpins such an achievement has been a source of much debate in psycholinguistics. Some researchers have held that in the frequent cases where more than one analysis of a sentence is possible, the reader or listener computes all possible analyses in parallel. The difficulty of garden path sentences has led others to propose serial models, where it is assumed that a single analysis is computed and corrected later if this is needed. Models have been proposed to account for the empirical evidence on sentence-processing difficulties. These often involve various versions of parallel analyses, where candidate analyses are only active for a brief period of time or are ranked according to frequency in the language or plausibility given the context. One of the most influential of these accounts is the constraint-based model of MacDonald et al. (1994). The weightings attached to each candidate analysis are based on the frequency of a syntactic structure in the language, the plausibility of the words in the sentence in their assigned syntactic roles, etc. So this model is an example of an interactive model where many different sorts of information – syntactic, semantic, pragmatic, contextual, and frequency-based – can all play simultaneous roles by activating alternative interpretations of the incoming linguistic information. In models of this type, semantic factors can override syntactic processing biases. This model is attractive to psychologists for a number of reasons. It is amenable to modeling by connectionist approaches (see Cognitive Science: Overview) and it avoids the problem of having to base cognitive processes on syntactic rules, which for psychologists appear to change arbitrarily with changes in linguistic theories.
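A schematic way to picture such a constraint-based account is as a weighted combination of evidence for each candidate analysis. The constraint names, numerical values, and weights below are invented purely for illustration; they are not taken from MacDonald et al. (1994).

```python
# A toy constraint-based ranking of two candidate analyses of an ambiguous
# fragment such as "The defendant examined ...". All numbers are invented.

candidates = {
    "main-verb reading":        # "the defendant examined (something)"
        {"structural frequency": 0.8, "semantic plausibility": 0.7, "context fit": 0.5},
    "reduced-relative reading": # "the defendant (who was) examined ..."
        {"structural frequency": 0.2, "semantic plausibility": 0.7, "context fit": 0.5},
}

weights = {"structural frequency": 0.5, "semantic plausibility": 0.3, "context fit": 0.2}

def activation(constraints):
    """Weighted sum of constraint support for one candidate analysis."""
    return sum(weights[name] * value for name, value in constraints.items())

scores = {parse: activation(c) for parse, c in candidates.items()}
total = sum(scores.values())
for parse, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{parse}: activation {score:.2f} ({score / total:.0%} of total)")
```

Lowering the semantic plausibility of the main-verb reading (as with an inanimate subject such as 'the evidence') would shift the balance toward the reduced-relative analysis, which is the intuition behind the constraint-based interpretation of the experimental contrasts discussed below.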

In contrast, one of the other most influential models of sentence processing is the garden path model (Frazier, 1979). This uses only syntactic principles in its initial stage. An analysis is computed based on two syntactic preferences, the most important being the principle of minimal attachment, the other being the principle of late closure. The first principle means that the parsing of the sentence that produces the simplest parse tree, with fewest nodes, takes precedence. So a sentence like 'Mary watched the man with the binoculars' is usually interpreted to mean that Mary (not the man) was using binoculars. According to Frazier's interpretation of phrase structure rules, this interpretation involves one node fewer than the alternative and so demonstrates the principle of minimal attachment in action. This principle takes precedence over the principle of late closure, which is the preference to attach incoming materials to the current phrase or clause. This latter principle is used to explain the preference for interpreting sentences like 'John said he will leave this morning' to mean that the phrase 'this morning' relates to the verb 'leave,' not the verb 'said.'
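The node-counting logic of minimal attachment can be made concrete with a small sketch that compares simplified parse trees for the two readings of the ambiguous sentence above. The bracketed structures are deliberately coarse simplifications chosen for the example, not Frazier's actual phrase structure analyses.

```python
# Compare node counts for two simplified parse trees of
# "Mary watched the man with the binoculars". Trees are nested tuples of the
# form (label, child1, child2, ...); leaves are plain strings (the words).

# Reading 1: the PP attaches to the VP (Mary uses the binoculars).
vp_attached = ("S",
               ("NP", "Mary"),
               ("VP", ("V", "watched"),
                      ("NP", "the man"),
                      ("PP", "with the binoculars")))

# Reading 2: the PP attaches inside the object NP (the man has the binoculars),
# which requires an extra NP node dominating "the man" and the PP.
np_attached = ("S",
               ("NP", "Mary"),
               ("VP", ("V", "watched"),
                      ("NP", ("NP", "the man"),
                             ("PP", "with the binoculars"))))

def count_nodes(tree):
    """Count non-terminal nodes in a nested-tuple tree."""
    if isinstance(tree, str):
        return 0
    return 1 + sum(count_nodes(child) for child in tree[1:])

print("VP attachment:", count_nodes(vp_attached), "nodes")   # fewer nodes
print("NP attachment:", count_nodes(np_attached), "nodes")   # one extra NP
```

Minimal attachment simply prefers whichever candidate structure yields the smaller count, which in this simplified comparison is the VP-attached (instrumental) reading.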

As part of the ongoing debate about the adequacy of different models of the parsing process, there has been an active discussion in the experimental literature over several years about the extent to which semantic factors can override or guide the analysis of syntactic structure. This has been explored in several studies focusing on the ease or difficulty with which sentences containing reduced relative clauses can be processed. Several studies have tested how people interpret sets of sentences like the following:

(1a) The defendant examined by the lawyer turned out to be unreliable.

(1b) The defendant that was examined by the lawyer turned out to be unreliable.

(2a) The evidence examined by the lawyer turned out to be unreliable.

(2b) The evidence that was examined by the lawyer turned out to be unreliable.

Sentences like (1a), which contain a reduced relative clause, are more difficult to process than their equivalent full relative clause (1b). Readers initially treat 'examined' as a main verb whose subject is 'the defendant.' They then have to reanalyze this garden path when they reach the phrase 'by the lawyer.' The argument is about the extent to which structurally similar sentences (e.g., [2a]) cause readers to have equivalent processing problems. This is what might be expected from a purely syntactic view of parsing. In contrast, in an interactive constraint-based model, the semantic implausibility of interpreting an inanimate noun such as 'evidence' as the subject of the verb 'examined' should protect the reader from the need to reanalyze an initial incorrect syntactic structure.

Trueswell et al. (1994) seemed to show just such a pattern. This was considered powerful evidence in support of interactive constraint-based models of sentence processing. More recently, Clifton et al. (2003) have challenged the evidence that semantic factors override syntactic processing in the initial stages of parsing. They used more sophisticated techniques for monitoring and analyzing eye movements to determine the processing difficulties experienced by readers. They found that reduced relative clauses caused disruption to processing, irrespective of the semantic plausibility of the relationship between the apparent subject and main verb. Semantic factors, however, influenced how quickly the readers recovered from their wrong analysis of the syntax of the sentence. Clifton et al. (2003) stressed that the key thing for psycholinguistic models of sentence processing is not whether the data on processing reduced relative clauses support garden path or constraint-based models. They claim that the important goal is to develop parsing models that deal both with the task of creating structure and with evaluating the structure that is created.

Although there have been numerous studies of sentence processing conducted by psycholinguists over the decades, the vast majority of these have focused on how readers interpret written sentences. A few studies have tackled the issue of how listeners use the cues in spoken language during parsing. The minimal attachment strategy can be shown to be overcome by the prosodic cues in real spoken sentences. Similarly, the principle of late closure, which can cause syntactic ambiguities in written sentences, can be less problematical in spoken materials because of clear prosodic cues to the intended interpretation. Even the apparent errors in spoken sentences, such as the disfluency 'uh,' can have an impact during the parsing process. Bailey and Ferreira (2003) presented sets of sentences to listeners such as the following:

(3a) Sandra bumped into the busboy and the uh uh waiter told her to be careful.

(3b) Sandra bumped into the busboy and the waiter uh uh told her to be careful.

These sentences are ambiguous up to the point when the listeners hear 'told.' 'Sandra' could have bumped into 'the busboy' or 'the busboy and the waiter.' Yet the listeners who heard the materials most often interpreted sentences like (3a) to mean that 'the waiter' was the subject of a new clause, i.e., that it was the subject of the verb 'told.' This shows that disfluencies can systematically influence the way listeners parse incoming sentences. The same effect was observed when the interruption was not a disfluency but an environmental sound such as a telephone ringing.

Speech Production and Speech Errors

These studies represent a welcome aspect of the broadening of the psycholinguistic research agenda to include more consideration of the production and comprehension of spoken as well as written language. The study of speech disfluencies is one part of this. Speech disfluencies encompass a range of phenomena, including pauses in speech such as silences, filled pauses, and fillers such as 'uh' and 'um,' as well as speech errors such as slips of the tongue, spoonerisms, and malapropisms. When we speak we aim to produce a grammatically well-formed utterance with no noticeable hesitations. Yet this ideal delivery cannot always be achieved. It is estimated that around 5% of words in speech are disfluent in some way. Yet these disfluencies are not random in their patterns of occurrence. Even the similar-sounding disfluencies 'uh' and 'um' have been shown to have systematic contexts of use. Speakers use 'uh' before a short delay in their speech production but use 'um' before a more significant delay. Speakers seem to become disfluent because they are experiencing some kind of problem in planning and producing their utterance. Speakers have been found to pause more before unpredictable words, suggesting they might be experiencing word-finding difficulties. Speakers also pause more at the start of an intonation unit, which suggests that pausing is related to the speech-planning process (see Pauses and Hesitations: Psycholinguistic Approach).

Speech errors have also been studied by psycholinguists, who have classified them according to the assumed units of processing and types of mechanism involved in their production. For example, speech errors can relate to the phonemic features of the word, the syllabic structure, or the phrase or sentence being produced. This is illustrated in one of the famous errors reportedly produced by Dr Spooner in the 19th century when rebuking one of his students: 'you have tasted the whole worm' when he presumably intended to say 'you have wasted the whole term.' Here the initial phonemes are swapped but the rest of the morphemic and syntactic structure of the target utterance is preserved. Errors of this type became known as 'spoonerisms' as a result.

The kinds of error that occur can tell us a good deal about how speech is produced. Speech errors are very varied. They can reflect many different linguistic levels. Errors can involve the sounds of the words involved, for example saying 'the lust list' for 'the lush list.' They can relate to the intended words in a phrase, 'the pin of a head' being said in place of 'the head of a pin.' Errors may also focus on the semantic relations of the intended words, so a speaker may produce 'I like berries with my fruit' rather than 'I like berries with my cereal,' among many other forms of errors.

Some types of errors, however, do not occur, and these patterns of occurrence and nonoccurrence have been used to help understand the speech production process. Content and function words are not substituted for one another, and indeed substitute words are usually the same part of speech as the intended target word. When the wrong sound is produced, however, the substituted phoneme seems to have no grammatical relation with the intended target sound. The spacing between the errors is also informative for models of production. Errors of word substitution usually involve words that are a phrase apart. Yet sound errors seem to involve neighboring words. This sort of evidence has been used by several researchers to develop general models of the speech production process (see Speech Errors: Psycholinguistic Approach for more details).

Speech Recognition

Psycholinguists have also been concerned with exploring the processes involved in speech perception. Jusczyk and Luce (2002) summarized over 50 years of research on this topic. They described key research issues in the domain as understanding invariance, constancy, and perceptual units. In speech there are no invariant acoustic features that map directly to corresponding phonetic segments. The acoustic properties of sounds vary widely depending on the surrounding linguistic context. To make matters even more complex for the listener, there is also wide variability in the way phonetic segments are produced by different speakers depending on age, sex, and individual speaker characteristics. It is not even easy to determine what are the basic perceptual units that listeners use to recognize speech. Some research studies seem to suggest the phoneme as the basic building block of perception while others show advantages of the syllable over the phoneme. Conversational speech is therefore a very variable signal that does not even provide clear cues to boundaries between words. Yet understanding words in speech is an effortless and successful process for listeners with normal hearing. How is it done?

One of the key research challenges for psycholinguists was therefore to produce models of spoken word recognition. One of the most influential models is the Cohort Model (Marslen-Wilson and Welsh, 1978; Marslen-Wilson, 1989). In this account, when a listener hears an initial sound, all the words known to start with that sound become activated. This cohort of candidate words is gradually whittled down to a single word, as more acoustic information is processed and candidate words are eliminated. An important feature of the model is the uniqueness point. This occurs when the listener has heard enough acoustic information to reduce the cohort to a single candidate, i.e., there are no other words known to the listener with that particular sequence of phonemes. Syntactic and semantic context from the surrounding discourse can also play a role in rejecting potential candidate words. In later versions of the model, this can only occur after the uniqueness point, though in earlier versions, context could also be used to eliminate possible words before the uniqueness point. The recognition point occurs when a single item remains in the cohort. This may be before the end of a word.

One of the strengths of the model is the way it can account for the speed with which spoken word recognition often occurs, frequently before the word offset. In later versions of the model the activation of candidate words is a graded process. Candidate words are not completely eliminated as acoustic information accumulates, but rather they have their activation level reduced. This can rise again if later acoustic information matches a rejected candidate word. This is important, as one of the problems of conversational speech is that individual words or phonemes are often not articulated clearly and the listener must be able to recover and recognize words whose initial sounds were mispronounced or misheard in a noisy environment.
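The core cohort-reduction idea can be sketched in a few lines. The tiny lexicon below, and the simplification of treating each letter as a 'heard' segment, are assumptions made purely for illustration.

```python
# A minimal sketch of cohort reduction and the uniqueness point.
# The lexicon is invented, and letters stand in for incoming phonemes.
lexicon = ["trek", "tread", "treat", "treatise", "trespass", "trestle"]

def cohort_trace(spoken_word):
    """Print the shrinking cohort as each successive segment is 'heard'."""
    for i in range(1, len(spoken_word) + 1):
        heard = spoken_word[:i]
        cohort = [w for w in lexicon if w.startswith(heard)]
        print(f"heard '{heard}': cohort = {cohort}")
        if len(cohort) == 1:
            print(f"uniqueness point reached after '{heard}' "
                  f"(recognition can precede word offset)")
            break

cohort_trace("trespass")
```

Running the trace shows the cohort collapsing to 'trespass' after 'tresp', well before the end of the word, which is the sense in which the recognition point can precede the word offset.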

The main competitors to such speech recognition models are various connectionist accounts, notably Trace (McClelland and Elman, 1986). This is a connectionist or parallel distributed processing (PDP) account of spoken language processing. In such models, researchers were attempting to produce computational models of language processes that were inspired by what was known about the structure and processes of the human brain (see Cognitive Science: Overview). In spoken word recognition models, this meant that the feature units were simple units with dense series of interconnections, which processed information by sending many messages in parallel. Such structures were developed as analogous to the neural architecture of the brain with its many nerves and interconnections.

The Trace model has three layers of units, corresponding at the lowest level to features, then at the next layer to individual phonemes, then at the top layer to complete words. All have dense arrays of interconnections between them, which like nerve pathways can be excitatory or inhibitory. The connections between levels are excitatory and connections within levels are inhibitory. Connections between levels operate in both directions, so both top-down and bottom-up processing can occur. The connectivity of Trace means that evidence is boosted and plausible hypotheses about the possible words emerge strongly. So a feature such as voicing will energize the voice feature units. These will then transmit activation to all the voiced phonemes at the phoneme level, which will in turn activate all the words that begin with these phonemes. At each level the activated units inhibit competitors at the same level, so reducing possible competitors. A word is recognized when in the end a single active unit remains.
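A toy update loop in the spirit of this interactive-activation scheme is sketched below for the word level only. The lexicon, the bottom-up 'evidence' values, and all rate parameters are invented for illustration, and the sketch omits the feature and phoneme layers (and Trace's time-slice architecture) entirely.

```python
import numpy as np

# Word units competing for recognition, loosely in the spirit of Trace's
# within-level inhibition and bottom-up excitation. All values are invented.
words = ["cat", "cap", "can", "dog"]
# Bottom-up support after hearing something like /kat/: 'cat' matches best,
# 'cap' and 'can' share two segments, 'dog' shares none.
evidence = np.array([0.9, 0.6, 0.6, 0.05])

act = np.zeros(len(words))           # resting activation of the word units
excite, inhibit, decay = 0.10, 0.05, 0.05

for step in range(50):
    lateral = act.sum() - act        # inhibition received from all competitors
    act = act + excite * evidence - inhibit * lateral - decay * act
    act = np.clip(act, 0.0, 1.0)     # keep activations in a bounded range

for w, a in zip(words, act):
    print(f"{w}: {a:.2f}")           # 'cat' ends up dominating its competitors
```

Because the best-supported unit suppresses its rivals as its own activation grows, the competition settles with one clear winner, which is the qualitative behavior the prose above describes.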

Trace is a very interactive model. It gives context a bigger role in recognition than the cohort model does. As a computational model it has the virtues of specificity. Trace models can be built, simulations run, and the results then compared with the experimental data from human listeners. These comparisons have generally shown that Trace can cope with some of the problems of variability in the production of features and phonemes, can account for the context effects on spoken word recognition that have been reported in the literature, and can cope with the kind of degraded acoustic input that is so typical of real conversational speech. The main criticisms that have been leveled at Trace concern the large role that context is given, which may be an overstatement of how this operates in human recognition. The other limitation is the less than elegant way that the time course of speech recognition is modeled, with duplication of the levels and nodes over successive time periods.

Other connectionist models have also been developed. Despite the competing architectures of the models, there is general agreement on key aspects of the speech recognition process. This involves activation of multiple candidate words, followed by competition among those known lexical items that share a similar sound profile; these processes have to be able to cope with less than ideal acoustic input and deliver perceptions of words very rapidly (for further details see Speech Recognition: Psychology Approaches).

Discourse Processing

The gradual broadening of the psycholinguistic research agenda has not been limited to the inclusion of speech alongside written language. Psycholinguists have also shown a growing interest in the processes involved in the interpretation not just of single sentences, but of complete texts. One of the first psychologists to explore the complexities of this process was Bartlett (1932). In his research on the way people remembered and reproduced stories that they had heard, he highlighted key research themes in discourse processing that are still current research topics. Bartlett noted how quickly the surface form of the story is lost and how an individual's own interpretation of the text is what is remembered. He introduced the concept of a 'schema,' which was used when readers recalled a narrative. This consisted of an organized set of information based on prior experiences that is used in interpretation and recall. The interpretation of a narrative that is retained consists of a mix of input from the text and from schemata. In some of Bartlett's studies, British students listened to North American folktales. When the students were asked to recall the stories accurately, strange narrative details relating to the activities of ghosts were unconsciously altered and supplemented by details from the participants' world knowledge, so that the recalled versions of the stories became more coherent by conventional Western standards.

More recently, a growing body of psycholinguistic research has been addressing the challenges of how readers build up a coherent mental representation to create a sense of the narrative world. One of the key concerns of psycholinguistic studies of text or discourse processing is the problem of inferences. Since Bartlett's seminal studies, it has been known that readers expand on what is in the text by drawing inferences based on their knowledge of the world. But what are the time course and limits of such inferencing? Many studies have been concerned with addressing such questions. Experimental studies have shown that some inferences seem to be made automatically as we read a text while some are made later to resolve apparent problems or inconsistencies. This was demonstrated in a study by Sanford and Garrod (1981), who presented readers with pairs of sentences such as the following:

(4a) Mary was dressing the baby.
(4b) The clothes were made of wool.

(5a) Mary was putting the clothes on the baby.
(5b) The clothes were made of wool.

The participants read the second sentence just as quickly in both cases. This suggests that verbs such as 'dress' cause readers to automatically draw the inference concerning 'clothes.' In contrast, Haviland and Clark (1974) found that some inferences took a small but significant amount of time during reading. In their experiment they used pairs of sentences like the following:

(6a) Harry took the picnic things from the trunk.
(6b) The beer was warm.

(7a) Harry took the beer from the trunk.
(7b) The beer was warm.

They found that sentences like (6b) took longer to read than (7b), because the readers had to make the backward inference that the beer was part of the picnic supplies.

Researchers have therefore wished to determine which kinds of inferences are made immediately and automatically and which are made later. Clearly, there must be limits on the amount of background knowledge readers activate, and one of the research goals is to understand what these limits are and the cognitive processes which support this. Models have been developed that attempt to specify the way inferences are made and the way coherent mental representations are derived from texts (for a fuller account of inferencing see Coherence: Psycholinguistic Approach).

Although there are significant differences between the models, there are several agreed features of how discourse processing operates: the way readers update their mental representations of a text, the way some information is held in the foreground of processing while other information remains in the background, and the way that certain inferences are drawn automatically to maintain a coherent account of the text. For details of the various models of discourse processing that have been proposed, see Discourse Processing.

Reading as a Developmental and Educational Process

Before an interpretation of a written text can be made, the words on the page have to be read. Although apparently effortless for the skilled adult reader, the processes of identifying letters, recognizing words, and thus distinguishing the meaning conveyed in even a simple sentence are complex. One key to unraveling how this is accomplished has been to study readers' eye movements. These have been shown to be very systematic and to consist of three main types: short forward movements of around 6–9 letters called saccades, which last on average 20–50 ms; backward movements called regressions; and the pauses or fixations when the readers' eyes rest on a word for around 250 ms.

Rayner and colleagues have shown there are consistent patterns in these movements (see e.g., Rayner, 1998). When reading more difficult texts, fixations grow longer, saccades grow shorter, and regressions become more frequent. Even within a single text, readers will spend more time fixating relatively uncommon words compared to familiar ones. Eye movements are designed to keep the middle of our visual field, where our vision is best, aimed at new areas of interest. However, we are able to distinguish quite a lot of information within a single fixation. A skilled adult reader of English will usually be able to identify 15 letters to the right of the fixation point but only three or four letters to the left.

When children begin to learn to read they fixate words for longer than skilled readers. They also have a shorter perceptual span than adults, which means that in a single fixation they are able to identify fewer letters. Gradually these patterns approach those of adults as reading proficiency increases. The typical English reader's asymmetric perceptual span starts to appear in most young readers within a year of starting to learn to read. This seems to reflect the left-to-right nature of reading English, as readers of Hebrew, which is read right to left, show the opposite pattern in their perceptual spans. Like much of psycholinguistics, the study of skilled reading has largely been the study of skilled English reading, but more recently interesting studies have been conducted on other languages, notably on nonalphabetic scripts such as Chinese and Japanese (see Reading Processes in Adults for more details).

Visual Word Recognition

One aspect of the reading process has received a great deal of research attention in psycholinguistics: the process of visual word recognition. A whole set of phenomena has been identified in the processes of recognizing written words. These include the process of priming. Words are recognized more quickly if they have been read previously. This is known as repetition priming. Words are also recognized more quickly if a word of similar meaning has just been presented, so 'butter' is recognized more quickly if 'bread' has just been read. This is known as semantic priming. Priming can also occur between words which do not seem to have a direct semantic relationship, such as 'music' and 'kidney,' which are linked by an intermediate word 'organ,' which was not presented to the readers.

Other phenomena which researchers have identified in word recognition include the fact that words which are common in the language, such as 'road,' are recognized more quickly than similar words that are less common, such as 'rend.' This word frequency effect is a strong influence on recognition speeds, with even fairly small differences in word frequency influencing reaction times. It is the most robust effect in studies of word recognition, appearing in many different studies using a wide variety of research methods.

These and other phenomena about how words are recognized have been used to develop a variety of models of word recognition. These can be grouped into families of related accounts. Some of the proposed models are direct access models, where perceptual information goes straight to feature counters or units. Other accounts propose that perceptual information is used to trigger a search through the mental lexicon, that is, the stored representation of all the words known to the reader.

One of the best-known serial search models was proposed by Forster (1976). In this account, the perceptual input is used to build a representation of the word to be recognized, which is then checked in two stages by comparisons with a series of access files, which are analogous to the cards in a library index system. Once an input string is matched to an access file it is then linked to the master files, analogous to the books on the shelf, which contain the full lexical entries for each word. The files are organized to speed up the process of word recognition, with groups of files being arranged in bins that contain similar words. Within a bin, files are organized by word frequency. The details of the model are described to account for the observed features of the recognition process. For example, there are cross-references between master files that would support semantic priming. Despite these features it is not clear that this kind of serial search model can convincingly account for the speed with which words are recognized. It has also been criticized as being based on a rather dated analogy with the cognitive system as a digital computer rather than as a neural system.

Very recently, however, Murray and Forster (2004) have published an account of a series of word recognition studies that are claimed to show strong support for the serial search model. They claim that the structure within bins, notably the rank ordering of words within a bin in terms of frequency, accounts for the pattern of experimental results on word frequency effects more parsimoniously than alternative direct access models.
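The logic of a frequency-ordered search within bins can be sketched as follows. The access-code scheme, the bin contents, and the frequency counts are all invented for illustration and are not Forster's (or Murray and Forster's) actual proposals.

```python
# Toy frequency-ordered serial search within 'bins' of similar words.
# The crude access code (first letter, last letter, length), the lexicon,
# and the frequencies are invented purely for illustration.

lexicon = {
    "road": 5000, "read": 4800, "reed": 150, "rend": 12,
    "dog": 900, "dig": 400,
}

def access_code(word):
    return (word[0], word[-1], len(word))

# Build bins of similar words, each ordered by descending frequency.
bins = {}
for word, freq in lexicon.items():
    bins.setdefault(access_code(word), []).append((word, freq))
for entries in bins.values():
    entries.sort(key=lambda wf: -wf[1])

def search_rank(word):
    """Return the serial position at which the word is found in its bin
    (a stand-in for recognition latency: later rank, slower recognition)."""
    for rank, (entry, _) in enumerate(bins.get(access_code(word), []), start=1):
        if entry == word:
            return rank
    return None

print("road:", search_rank("road"))   # high-frequency word found early
print("rend:", search_rank("rend"))   # low-frequency word found later
```

On such an account it is a word's rank within its bin, rather than its raw frequency, that determines how long the search takes, which is the contrast Murray and Forster use against direct access models.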

More popular models of word recognition involve direct access from the sensory information to the lexical units. One of the most influential of the early accounts of this sort was the Logogen Model (Morton, 1969). In this model, perceptual information, either from visual or auditory analysis, feeds directly into the logogen system. This consists of a series of units, logogens, which represent known words. Logogens act as feature counters, and when a logogen has accumulated sufficient evidence to reach a threshold it fires. A word then becomes available to the output buffer, is recognized, and can be articulated. The logogens receive input from the cognitive system as well as the sensory input routes, and so the resting threshold of a logogen can be varied by, for example, prior experience of a word, or by the sentence or discourse context. Common words with which an individual has had a lot of prior experience will have a lower threshold and will fire with less sensory input and hence be recognized more quickly. Similarly, words that are highly predictable from context will also have their thresholds lowered and so will be recognized rapidly.
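A minimal sketch of this evidence-accumulation idea is given below. The words, thresholds, evidence increments, and the simple rule for lowering thresholds through frequency and contextual predictability are all invented for illustration rather than taken from Morton (1969).

```python
# Toy logogen-style evidence accumulation: each unit counts featural evidence
# and fires when its threshold is reached; frequent or contextually
# predictable words get lowered thresholds. All numbers are invented.

class Logogen:
    def __init__(self, word, base_threshold=10.0):
        self.word = word
        self.threshold = base_threshold
        self.count = 0.0

    def lower_threshold(self, amount):
        """Frequency or supportive context makes the unit easier to fire."""
        self.threshold = max(1.0, self.threshold - amount)

    def add_evidence(self, amount):
        """Accumulate sensory evidence; return True once the unit fires."""
        self.count += amount
        return self.count >= self.threshold

doctor = Logogen("doctor")
doctor.lower_threshold(3.0)   # a common word
doctor.lower_threshold(2.0)   # predictable after "The nurse called the ..."

yacht = Logogen("yacht")      # rarer, unsupported by context

recognized = {}
for step in range(1, 11):     # identical sensory evidence arrives for both
    for unit in (doctor, yacht):
        if unit.word not in recognized and unit.add_evidence(1.0):
            recognized[unit.word] = step

print(recognized)             # e.g. {'doctor': 5, 'yacht': 10}
```

With the same stream of sensory evidence, the unit whose threshold has been lowered fires sooner, which is the model's account of frequency and context effects on recognition speed.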

The Logogen Model has been used as the basis of computational models of word recognition, notably the interactive activation model (IAM) proposed by McClelland and Rumelhart (1981). This was one of the early connectionist or PDP accounts of language processing. Researchers were trying to produce computational models of language processes which were inspired by what was known about the structure and processes of the human brain (see Cognitive Science: Overview). In word recognition models, this meant that the feature units were simple units with dense series of interconnections, which processed information by sending many messages in parallel. The IAM consisted of three layers of units, corresponding at the lowest level to the visual features of letters, then at the next layer to individual letters, then at the top layer to complete words, all with dense arrays of interconnections between them, which, like nerve pathways, could be excitatory or inhibitory. The model was able to account for a wide range of experimental observations on word recognition. (For more details see Word Recognition, Written.)
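The following toy sketch conveys the flavor of this kind of interactive activation, though it is not the published model: for brevity it collapses the feature layer and keeps only position-specific letter units and word units, with assumed parameter values. Letter units excite the words that contain them, words compete through mutual inhibition, and active words feed activation back to their letters.

```python
# Toy, hypothetical interactive-activation sketch in the spirit of
# McClelland and Rumelhart (1981), with the feature layer omitted.

WORDS = {"ROAD", "READ", "REND"}

def step(letter_act, word_act, excite=0.2, inhibit=0.1, decay=0.05):
    new_word = {}
    for word in WORDS:
        # Bottom-up excitation from position-specific letter units.
        bottom_up = sum(letter_act.get((i, ch), 0.0) for i, ch in enumerate(word))
        # Within-level competition: other active words inhibit this one.
        competition = sum(word_act[w] for w in WORDS if w != word)
        new_word[word] = max(0.0, word_act[word] * (1 - decay)
                             + excite * bottom_up
                             - inhibit * competition)
    # Top-down feedback: active words re-excite their component letters.
    new_letter = dict(letter_act)
    for word in WORDS:
        for i, ch in enumerate(word):
            new_letter[(i, ch)] = new_letter.get((i, ch), 0.0) + 0.1 * excite * new_word[word]
    return new_letter, new_word

# Ambiguous input: clear R _ A D with a degraded second letter (O or E).
letters = {(0, "R"): 1.0, (1, "O"): 0.5, (1, "E"): 0.5, (2, "A"): 1.0, (3, "D"): 1.0}
words = {w: 0.0 for w in WORDS}
for _ in range(10):
    letters, words = step(letters, words)

print(sorted(words.items(), key=lambda kv: -kv[1]))   # ROAD and READ lead; REND trails
```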

Models of this type have become very widely used in psycholinguistics to explore a wide variety of language processes. This trend towards testing computational models against the existing experimental literature, in a way that can link to new insights into the neural processes involved in language and cognition, is a popular approach in current psycholinguistics.

Dialogue and Gesture

In parallel to this concern with understanding the cognitive processes and the possible neurological architecture involved in language processing has been a growing interest in language processing in context. This means not only the growth in studies of how extended texts are processed but also a developing field of psycholinguistic studies of interactive language use. These studies have focused on language in its natural setting, interactive dialogue.

Dialogue represents the most ubiquitous form of language use. As young children, we learn to use language through dialogues with our parents and caregivers. Even educated adults spend a great deal of their time in conversations with family, friends, and colleagues. Spoken dialogue is still used to obtain many forms of goods and services. In the many non-literate cultures in the world, dialogue is the main or only form of linguistic interaction.

Yet until recently dialogue received rather little research attention in psycholinguistics. One of the challenges for psycholinguists who wished to study dialogue was to devise methods of exploring the phenomenon which would produce testable and generalizable research questions and findings. One experimental method which has been used to allow the study of comparable dialogues from many pairs of speakers is the referential communication paradigm, developed by Krauss and Weinheimer (1964). In this paradigm, pairs of speakers are presented with an array of cards depicting abstract shapes. The speakers have to interact to determine which card a speaker is referring to at a given point in the dialogue. These early studies revealed key aspects of dialogue, including the way that, over the course of a dialogue, the descriptions of even complex and abstract stimuli become much shorter and more concise. The interaction between the speakers was found to be crucial in this process. If this interaction was disrupted, the speakers were not able to reduce their descriptions to the same extent.

Later studies on referential dialogues were conducted by Clark and colleagues. In one influential paper, Clark and Wilkes-Gibbs (1986) developed a collaborative model of dialogue. From studying many pairs of speakers engaged in dialogues, they highlighted the way speakers work together through their contributions to dialogue to arrive at a shared and mutual way of referring to the things they wish to discuss. This means that the speaker and the listener both take responsibility for ensuring that what has been said is mutually understood or ‘grounded’ before the dialogue proceeds.

In this view of dialogue, speakers follow the principle of least collaborative effort and try to minimize their overall effort in arriving at an agreed description. This is done iteratively over a number of turns of speaking, often with each speaker contributing part of the description. This process makes use of the opportunities and limitations of spoken dialogue in a way that highlights its differences from written language. In text, the writer can take as much time as is needed to produce a description that she thinks the reader will understand. In dialogue, however, the speaker is under time pressure, as conversation rarely allows long gaps in which to plan an utterance. The speaker therefore has to produce an immediate description and may not be able to retrieve the most appropriate description or judge what way of describing a referent will be interpretable by the listener. The advantage of dialogue, however, is that the speaker and the listener can work together to refine or clarify the speaker's initial description until they both share a common understanding.

The highly collaborative nature of this process is reinforced in two key studies by Clark and colleagues. Clark and Schaefer (1987) showed how contributing to dialogue is characterized by two phases. Each time a speaker wishes to make a contribution, they produce a stretch of speech that is the content of what they wish to contribute. This is called the presentation phase. They then require an acceptance phase, in which their listener gives evidence of understanding the previous contribution. So the dialogue involves two activities: content specification and grounding, that is, attempting to ensure that both speakers understand the content sufficiently for their current conversational purposes.
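As a rough illustration of this two-phase structure, the following hypothetical sketch treats a contribution as a tiny state machine: it is grounded only when a presentation is followed by positive evidence of understanding; otherwise it returns for re-presentation. The class, states, and example utterances are invented for illustration and are not Clark and Schaefer's formalism.

```python
# Hypothetical rendering of the presentation/acceptance phases of a contribution.

from enum import Enum, auto

class Phase(Enum):
    PRESENTATION = auto()   # speaker produces the content of the contribution
    ACCEPTANCE = auto()     # listener must give evidence of understanding
    GROUNDED = auto()       # both parties accept the content as understood

class Contribution:
    def __init__(self, content):
        self.content = content
        self.phase = Phase.PRESENTATION

    def present(self):
        self.phase = Phase.ACCEPTANCE
        return f"A: {self.content}"

    def respond(self, evidence_of_understanding):
        # Positive evidence grounds the contribution; otherwise it goes back
        # to the presentation phase for repair or reformulation.
        if evidence_of_understanding:
            self.phase = Phase.GROUNDED
            return "B: uh-huh"
        self.phase = Phase.PRESENTATION
        return "B: which one do you mean?"

c = Contribution("the one that looks like an ice skater")
print(c.present())
print(c.respond(evidence_of_understanding=False))   # needs repair
print(c.present())                                  # speaker reformulates
print(c.respond(evidence_of_understanding=True))
print(c.phase)                                      # Phase.GROUNDED
```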

The importance of the ability to actively contribute to the dialogue interaction was demonstrated in a study by Schober and Clark (1989). Using the referential paradigm, they had pairs of speakers complete the task. An additional participant was included in each interaction who overheard everything that was said but did not take part in the dialogue. This overhearer had a much harder time trying to identify the intended referents of the descriptions than the conversational participant, despite having the same pictures and having heard everything that was said. The suggested explanation is that the descriptions were not grounded for the overhearers. They had no chance to collaborate and ensure that they understood each description as it emerged during the dialogue.

So there is clear evidence that dialogue is an interactive and collaborative process that involves speakers and listeners attempting to cooperate and achieve mutual understanding. The detailed mechanisms that underpin these general processes of adaptation to the interlocutor are now the focus of a good deal of psycholinguistic study. There is some controversy over the extent to which speakers are able to adjust and adapt their output to their listener's needs. In terms of the forms of referring expressions chosen by speakers, there is evidence of adjustment to the listener's general level of knowledge of the domain. When it comes to adjusting the intelligibility of their articulation, however, speakers seem to be largely egocentric. They reduce the clarity of their word production in terms of what is familiar to them as speakers rather than modeling their listeners' needs. The time course of adaptation is also debated, with some studies showing that speakers initially produce utterances from their own perspective and only later monitor their listeners and adapt, while other studies show adaptations to the listener from the start of speaking (for more details see Dialogue and Interaction).

As language processing is studied in more natural contexts of use, it becomes clear that speakers and listeners do not just communicate using the verbal channel. Visual signals from the mouth, face, hands, and eyes are all important features of communication. Researchers have begun to explore the way the visual channel is used by speakers and listeners and the relationship between verbal and visual signals.

More generally, in a dialogue, what we say, how much we say, and even how clearly the words are spoken have all been shown to change depending on whether speakers have access to visual signals. Speakers who can see one another need to say less to complete a task, use more gestures, exchange turns of speaking more smoothly, and articulate their words less clearly than when they cannot see one another.

The important role of visual signals in the perception of speech, and how these are integrated with acoustic information, is a fascinating research area. This was highlighted in a seminal study by McGurk and MacDonald (1976). They demonstrated that when a listener hears a phoneme such as ‘ba’ while watching a face mouthing ‘ga,’ the sound which is heard is a fusion, ‘da.’ This is a powerful illusion that occurs even with knowledgeable listeners/viewers and has been demonstrated with young babies (see Audio-visual Speech Processing).

The role of visual signals in the production and comprehension of more extended stretches of discourse has also been the subject of considerable study. From studies of conversation and storytelling, the important role of gestures and their relationship to the accompanying speech has been established. For some kinds of gestures there is a close temporal relationship with the accompanying speech. Listeners also seem to fuse information presented visually and verbally. If they are told a story by a speaker who uses speech and gesture and are then asked to retell the story later, information originally presented by gesture, such as the speed of an action or the manner of leaving, is often relayed in speech, and vice versa (for more information see Gesture and Communication).

Future Directions in Psycholinguistics

Several trends seem apparent in psycholinguistics. Some of these seem to be the result of improvements and developments in the research methods available to psycholinguists. One is an increased interest in detailed investigations of language in richer, more naturalistic contexts. New research techniques, such as improved methods of tracking a speaker's or listener's eye movements, mean, for example, that dialogue, or the relationship of a speaker's production to the surrounding context, can be studied with the precision that used to be possible only in studies of isolated word recognition or sentence processing.

Improvements in the ease and accessibility of various brain-imaging techniques mean that these are being used not only to advance our understanding of the neural substrate of different language processes. Techniques such as ERP (event-related brain potentials) can now also be used more and more as a means to explore the precise time course of language processing. Newer techniques such as MEG (magneto-encephalography) are beginning to offer psycholinguists not just good information about the temporal patterns of language processing but also detailed information about the location of the associated brain activity. (For more information see Psycholinguistic Research Methods.) The growing interest in the neural substrates which support language processing has received a major boost from the development of these new forms of brain imaging.

The interest in how to build neurologically plausible models was of course one of the drivers behind the expansion over the last 20 years in connectionist models of language. These have made a major contribution to our understanding of how a wide variety of language processes, such as spoken or written word recognition, might operate and be learned. The challenge in the future will be to see whether connectionist models can be implemented for more extensive language processing, such as text comprehension or contribution to dialogues. The way such models can or cannot be scaled up to simulate more complex language processing will be one of the key challenges for the next few years.

In the future, psycholinguistics will also need to address its undoubted Anglocentric bias. The vast majority of studies of language processing are in fact studies of English language processing. In a number of areas a few studies are emerging which consider other languages, but this effort needs to be greatly increased. Over the last 50 years psycholinguistics has expanded dramatically and made considerable progress in understanding a wide variety of language processes. With new research techniques and a more balanced research portfolio, in terms of the languages studied and the research effort applied to production as well as comprehension, and to spoken as well as written language, future progress seems assured.

See also: Audio-visual Speech Processing; Cognitive Science: Overview; Coherence: Psycholinguistic Approach; Dialogue and Interaction; Discourse Processing; Gesture and Communication; Pauses and Hesitations: Psycholinguistic Approach; Psycholinguistic Research Methods; Psycholinguistics: History; Reading Processes in Adults; Sentence Processing; Speech Errors: Psycholinguistic Approach; Speech Production; Speech Recognition: Psychology Approaches; Word Recognition, Written.

Bibliography

Bailey K & Ferreira F (2003). ‘Disfluencies affect the parsing of garden-path sentences.’ Journal of Memory and Language 49, 183–200.

Bartlett F C (1932). Remembering: a study in experimental and social psychology. Cambridge: Cambridge University Press.

Bever T G (1970). ‘The cognitive basis for linguistic structure.’ In Hayes J R (ed.) Cognition and the development of language. New York: John Wiley. 270–362.

Chomsky N (1957). Syntactic structures. The Hague: Mouton.

Clark H H & Schaefer E (1989). ‘Contributing to discourse.’ Cognitive Science 13, 259–294.

Clark H H & Wilkes-Gibbs D (1986). ‘Referring as a collaborative process.’ Cognition 22, 1–39.

Clifton C, Traxler M & Mohamed M (2003). ‘The use of thematic role information in parsing: syntactic processing autonomy revisited.’ Journal of Memory and Language 49, 317–334.

Ferreira F & Clifton C (1986). ‘The independence of syntactic processing.’ Journal of Memory and Language 25, 348–368.

Forster K I (1976). ‘Accessing the mental lexicon.’ In Wales R W E (ed.) New approaches to language mechanisms. Amsterdam: North Holland. 257–287.

Frazier L (1979). On comprehending sentences: syntactic parsing strategies. West Bend: Indiana University Linguistics Club.

Haviland S E & Clark H H (1974). ‘What’s new? Acquiring new information as a process in comprehension.’ Journal of Verbal Learning and Verbal Behavior 13, 512–521.

Jusczyk P & Luce P (2002). ‘Speech perception and spoken word recognition: past and present.’ Ear and Hearing 23, 2–40.

Krauss R & Weinheimer S (1964). ‘Changes in reference phrases as a function of frequency of use in social interactions: a preliminary study.’ Psychonomic Science 1, 113–114.

MacDonald M, Pearlmutter N & Seidenberg M (1994). ‘Lexical nature of syntactic ambiguity resolution.’ Psychological Review 101, 676–703.

Marslen-Wilson W (1989). ‘Access and integration: projecting sound onto meaning.’ In Marslen-Wilson W (ed.) Lexical access and representation. Cambridge, MA: Bradford. 3–24.

Marslen-Wilson W & Welch A (1978). ‘Processing interactions and lexical access during word recognition in continuous speech.’ Cognitive Psychology 10, 29–63.

McClelland J & Elman J (1986). ‘The TRACE model of speech perception.’ Cognitive Psychology 18, 1–86.

McClelland J & Rumelhart D (1981). ‘An interactive activation model of context effects in letter perception 1: An account of the basic findings.’ Psychological Review 88, 375–407.

McGurk H & MacDonald J (1976). ‘Hearing lips and seeing voices.’ Nature 264, 746–748.

Miller G & Isard S (1963). ‘Some perceptual consequences of linguistic rules.’ Journal of Verbal Learning and Verbal Behavior 2, 217–228.

Morton J (1969). ‘Interaction of information in word recognition.’ Psychological Review 76, 165–178.

Murray W & Forster K (2004). ‘Serial mechanisms in lexical access: the rank hypothesis.’ Psychological Review 111, 721–756.

Rayner K (1998). ‘Eye movements in reading and information processing: 20 years of research.’ Psychological Bulletin 124, 372–422.


Sanford A J & Garrod S (1981). Understanding written language. Chichester, England: John Wiley.

Schober M & Clark H H (1989). ‘Understanding by addressees and overhearers.’ Cognitive Psychology 21, 211–232.

Trueswell J, Tanenhaus M & Garnsey S (1994). ‘Semantic influences on parsing: use of thematic role information in syntactic disambiguation.’ Journal of Memory and Language 33, 285–318.

Psychosis and Language

M Simard, Université Laval, Quebec City, Canada

Y Turgeon, Campbellton Regional Hospital, Campbellton, Canada

© 2006 Elsevier Ltd. All rights reserved.

Several psychiatric disorders are included under the broad definition of psychosis. The most common disorders are schizophrenia (which is prototypical of the psychoses), schizophreniform disorder, schizoaffective disorder, and delusional disorder (APA, 1994). These other psychotic disorders share some distinctive features with schizophrenia (e.g., a schizoaffective disorder is a disturbance in which a mood episode and characteristic symptoms of schizophrenia occur together). Furthermore, an array of psychiatric and medical conditions also presents with psychotic attributes and is often characterized by presenting typical symptoms of schizophrenia.

The contemporary consensual definitions of schizophrenia and the most common psychotic disorders of the Diagnostic and statistical manual have resulted from more than 100 years of conceptual work. The current criteria for schizophrenia derive from the early works of Emil Kraepelin (1899), Eugen Bleuler (1911), and Kurt Schneider (1957).

Background History of the Contemporary Diagnostic Criteria for Schizophrenia, Autism, and Asperger's Syndrome

Emil Kraepelin meticulously described the symptoms of schizophrenia, which he named dementia praecox, referring to the disruption of emotional and cognitive features as well as to the deteriorating course of the disorder. He also tentatively proposed a pathophysiological localization, suggesting that abnormalities in the frontal lobe would be the substrate of problems with reasoning and volition, whereas abnormalities in the temporal lobe would be the substrate of delusions and hallucinations (Kraepelin, 1899).

In his conception of schizophrenia, Eugen Bleuler emphasized the importance of a formal thought disorder (disorder of associations or ‘splitting’). To overcome the limitations of Bleuler's approach in the observation of the disorder, the concept of thought disorder was transformed into problems of language and communication behavior in the work of Andreasen (1979) and into disorganized speech in the criteria for schizophrenia in the Diagnostic and statistical manual of mental disorders (4th edition) (DSM-IV) (APA, 1994).

In the mid-20th century, Kurt Schneider (1957) described the primary disorders of the experience of thought, representing the current core features of the positive symptoms. This characterization is known as the Schneiderian First-Rank symptoms, which include hearing voices speaking one's thoughts aloud, hearing voices arguing about oneself, hearing voices commenting on one's actions as they are occurring, having bodily sensations imposed from outside, attributing one's feelings to external sources, experiencing one's drives as originating from powerful outside forces, moving and acting as a result of external controls, having one's thoughts withdrawn from the mind, having one's thoughts inserted into one's mind, broadcasting of thoughts, and attributing special personal significance to one's perceptions. Schneider's symptoms were thought to be pathognomonic signs, any of which was strongly indicative of a schizophrenic disorder.

Psychosis is not a unitary concept. The ‘psychotic’ designation is often applied to symptom presentations that may vary considerably, both biologically and behaviorally. The wide variety of psychiatric and medical conditions that present with ‘psychotic’ features attests to the heterogeneity of psychotic disorders. It is now recognized that schizophrenia is at least clinically a heterogeneous disorder (Ragland, 2003), and this characteristic necessarily has an impact on the presentation of language and speech impairments that may arise from that condition.

The heterogeneity of autism is also so well acknowledged (Eigsti and Shapiro, 2003) that the term autistic spectrum disorder (Rapin and Dunn, 2003) is frequently used to describe the different extents of severe functional deficits in sociability, communicative language, imaginative play, and range of interests that principally characterize this condition as per the DSM-IV (APA, 1994).
