
Cognition 112 (2009) 181–186

Contents lists available at ScienceDirect

Cognition

journal homepage: www.elsevier.com/locate/COGNIT

Brief article

Sound to meaning correspondences facilitate word learning

Lynne C. Nygaard *, Allison E. Cook, Laura L. Namy
Department of Psychology, Emory University, 4532 Kilgo Circle, Atlanta, GA 30322, USA

a r t i c l e i n f o

Article history: Received 26 January 2009; Revised 28 March 2009; Accepted 2 April 2009

Keywords: Spoken language; Arbitrariness; Sound symbolism; Word learning

0010-0277/$ - see front matter © 2009 Elsevier B.V. All rights reserved.
doi:10.1016/j.cognition.2009.04.001

* Corresponding author. E-mail address: [email protected] (L.C. Nygaard).

a b s t r a c t

A fundamental assumption regarding spoken language is that the relationship between sound and meaning is essentially arbitrary. The present investigation questioned this arbitrariness assumption by examining the influence of potential non-arbitrary mappings between sound and meaning on word learning in adults. Native English-speaking monolinguals learned meanings for Japanese words in a vocabulary-learning task. Spoken Japanese words were paired with English meanings that: (1) matched the actual meaning of the Japanese word (e.g., "hayai" paired with fast); (2) were antonyms for the actual meaning (e.g., "hayai" paired with slow); or (3) were randomly selected from the set of antonyms (e.g., "hayai" paired with blunt). The results showed that participants learned the actual English equivalents and antonyms for Japanese words more accurately and responded faster than when learning randomly paired meanings. These findings suggest that natural languages contain non-arbitrary links between sound structure and meaning and, further, that learners are sensitive to these non-arbitrary relationships within spoken language.

© 2009 Elsevier B.V. All rights reserved.

1. Introduction

A fundamental assumption regarding spoken language is that the sound structure of words bears an essentially arbitrary relationship to meaning (de Saussure, 1959; Hockett, 1977). Arbitrariness is considered a hallmark of spoken language that sets it apart from other types of communication systems, both human and non-human, providing the compositional power that gives spoken language its flexibility and productivity (Gasser, 2004; Monaghan & Christiansen, 2006). Indeed, linguistic pairings of sound to meaning are largely arbitrary. Across languages, for example, a variety of word forms (e.g., dog, chien, perro, cane, kelb) can refer to the same object, action, or concept (e.g., dog), exhibiting seemingly little relationship between particular sounds and properties of the referent.



Despite this apparent arbitrariness, reliable correspondences between sound structure and linguistic category exist in natural language, and listeners seem to be sensitive to these relationships. One of the most obvious examples is onomatopoeia. Resemblance between a word and the sound that it represents (e.g., boom, meow) is common across languages, although different languages capture the sounds using different sets of phonemes (e.g., woof woof, au au, ham ham, gav gav). Likewise, languages such as Japanese utilize mimetics – words that resemble their meanings and are often used in interactions with children and in poetry (e.g., goro goro meaning "thunder" or "stomach rumbling"; Imai, Kita, Nagumo, & Okada, 2008; Kita, 1997). Another example is the existence of phonesthemes, particular sound sequences that are used across a range of words within a language to reflect semantically related meanings (e.g., the consonant cluster "gl" to imply things that shine: glisten, gleam, glitter, glow; Bergen, 2004; Hutchins, 1998). Although these phenomena are common across natural languages, they are not systematic or pervasive within any specific language and may well be special cases.


A growing literature suggests a class of subtler, but more pervasive, sound-to-meaning correspondences that fundamentally challenge assumptions regarding the exclusive arbitrariness of natural language. One of these sound-to-meaning links consists of correspondences between phonological form and lexical category (Cassidy & Kelly, 2001; Farmer, Christiansen, & Monaghan, 2006; Kelly, 1992; Sereno & Jongman, 1990). For example, listeners displayed a response-time advantage in processing sentences that contained nouns and verbs with category-typical phonological structure over sentences that contained nouns and verbs with atypical structure (Farmer et al., 2006). Shi, Werker, and Morgan (1999) found that even newborns are sensitive to phonological properties that differentiate function and content words.

Sound–meaning pairings may also include non-arbitrary relationships between sound and semantics (Nuckolls, 1999), suggesting that systematic relationships also exist between what words sound like and what they mean (Bergen, 2004; Berlin, 1994; Cassidy, Kelly, & Sharoni, 1999; Kunihira, 1971; Nygaard, Herold, & Namy, 2009). Köhler (1947) and, more recently, Ramachandran and Hubbard (2001), Maurer, Pathman, and Mondloch (2006), and Westbury (2005) have found that English-speaking adults and children interpret nonwords such as 'maluma' and 'bouba' as referring to round, amoeboid shapes and words like 'takete' and 'kiki' as referring to angular figures. These findings suggest that a resemblance or relationship between the sound structure of these nonwords and perceptual properties of the figures constrains meaning, and raise the possibility that these relationships may facilitate both word learning and lexical processing.

Although correspondences between phonological characteristics and linguistic categories call into question the exclusively arbitrary nature of language, these relationships may reflect within-language conventions that have evolved over time through generalization and borrowing within a language. However, cross-linguistic tasks suggest that sensitivity to these correspondences is not entirely the result of conventional mappings learned by speakers of a particular language (Berlin, 1994; Brown, Black, & Horowitz, 1955; Brown & Nuttall, 1959; Imai et al., 2008; Tsuru & Fries, 1933; Weiss, 1966). For example, Kunihira (1971) reported that when native English speakers were presented with Japanese antonym pairs (e.g., "ue" and "shita") and their English translations (e.g., up and down), they were able to match the two antonyms with the correct English translations. This type of cross-linguistic matching suggests that some aspect of the sound properties of the unfamiliar words systematically relates to meaning and that language users recognize these correspondences.

The ability to recruit apparent sound symbolism implies that there are cross-linguistic sound-to-meaning correspondences to which listeners from unrelated language backgrounds are sensitive. However, these findings do not necessarily reveal what function sound symbolic relations might serve in spoken language processing. Do these relationships, if consistent across tasks, facilitate processing or aid learning relative to entirely arbitrary sound-to-meaning pairings? The current experiment investigated whether non-arbitrary links between sound and meaning, that is, sound symbolism, facilitate word learning cross-linguistically. We examined the contribution of sound symbolism to a novel word-learning task in which monolingual native English speakers were introduced to new vocabulary items in Japanese. Using Japanese antonyms from Kunihira (1971), we examined the influence of sound–meaning relationships on learners' ability to link unfamiliar words to English meanings. Vocabulary learning was tested in three conditions using a repeated learn–test paradigm. In the Match condition, participants learned Japanese words that were paired with their actual English equivalents (e.g., "akarui" paired with bright); in the Opposite condition, Japanese words were paired with the English equivalent of their antonym (e.g., "akarui" paired with dark); in the Random condition, Japanese words were randomly paired with English translations of other antonyms from the learning set (e.g., "akarui" paired with wide). At test, learners were asked to identify the meaning of the word that they had learned in a speeded choice task. Response speed served as the dependent measure, allowing us to examine whether putative sound symbolic relationships influenced the fundamental processing efficiency of accessing newly acquired vocabulary items.

In general, we expected that learners would acquire the learned equivalents quickly and, as accuracy reached ceiling, any sensitivity to sound–meaning relationships would be reflected in response times. Thus, if sound symbolic relationships exist in spoken language and word learners are sensitive to non-arbitrary relationships between sound and meaning, response times to identify the meanings of newly acquired words would be faster for Japanese vocabulary words that were paired with their actual English equivalents than for Japanese vocabulary words that were paired with an unrelated English word. We also considered the possibility that the learning of antonyms for the English equivalents would result in a processing benefit as well. Because antonyms highlight the same conceptual domain, differing along a single dimension (Murphy & Andrew, 1993), response times to antonyms might also be expected to benefit from sound symbolic relationships. However, if learners do not utilize sound–meaning correspondences to learn novel words, variations in word–meaning mappings should have little effect on word-learning performance.

2. Method

2.1. Participants

Participants were 104 native speakers of American English who reported no familiarity with the Japanese language and had no known hearing or speech disorders. Participants either received course credit (n = 68) or were paid $10 for their participation (n = 36).


2.2. Stimulus materials

The stimuli consisted of 21 Japanese antonym pairs used by Kunihira (1971) and recorded by a native female Japanese speaker (see Appendix). The Japanese antonyms were divided into three groups of seven words equated for frequency of occurrence of their English equivalents (Kucera & Francis, 1967). Three within-subject experimental conditions were constructed. In the Match condition, Japanese words were paired with their English equivalents. In the Opposite condition, Japanese words were paired with the English equivalent of their antonyms. In the Random condition, Japanese words were randomly paired with the correct meanings of other words in that same learning condition (see Appendix for pairings). Half the subjects learned one of the Japanese antonyms (List 1 in Appendix) from each pair and half learned the other set (List 2 in Appendix). Across participants, Japanese words were rotated through conditions (Match, Opposite, and Random) so that each Japanese word appeared in each condition. Words were presented in random order in each learn–test cycle.
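The rotation of word groups through conditions is a Latin-square-style counterbalancing scheme. A minimal sketch in Python, using illustrative word groups and a hypothetical `assign_conditions` helper (not the study's actual software):

```python
# Sketch of the counterbalancing described above: three word groups
# rotate through the Match, Opposite, and Random conditions across
# participant groups, so each Japanese word appears in each condition.
CONDITIONS = ["Match", "Opposite", "Random"]

def assign_conditions(word_groups, participant_group):
    """Rotate the word groups through the conditions by participant group."""
    n = len(CONDITIONS)
    return {
        CONDITIONS[(i + participant_group) % n]: words
        for i, words in enumerate(word_groups)
    }

# Illustrative placeholder groups (the study used three groups of seven).
groups = [["akarui", "nureta"], ["omoi", "semai"], ["suwaru", "ue"]]
```

Across the three participant groups, each word group cycles through all three conditions, so condition effects cannot be attributed to particular items.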

To create distractors for use in the test phase, in addition to the English words that were learned as equivalents, a second unrelated English word was randomly assigned to each Japanese word, with the caveat that word frequency was equated across pairings. The 21 distractors were English translations of the Japanese antonyms that the participants did not encounter otherwise. The target and distractor words appeared as fixed pairings in all three test blocks.

2.3. Procedure

Three learning phases alternated with three test phases. In each learning phase, participants were asked to listen carefully to each Japanese word and to pay attention to the visually presented English word with which it was paired. Participants were told that they would subsequently be asked to identify the English "translation" for each Japanese word. On each trial, participants saw a fixation cross (500 ms) and then heard a Japanese word through the headphones. The word was then repeated while the English "translation" appeared on the computer screen for 2000 ms.

Following each learning phase, participants completed a test phase. Participants heard the Japanese word once and then saw the two possible English "translations", the target and distractor, side by side on the computer screen. For example, in the Match condition, the Japanese word "ue" was presented auditorily and participants chose between up (target) and walk (distractor). Participants were asked to make their choice as quickly as possible using a standard button box.

3. Results

3.1. Accuracy

Although most participants readily learned the novel words and reached near-ceiling performance across learn–test blocks, a small subset of learners had difficulty with the task. Because our assessment of the processing efficiency of new Japanese vocabulary items required that participants had learned those items, we set a learning criterion of 80% correct across training blocks for inclusion in the study. Based on this criterion, 90 participants were included in the final analysis.
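The inclusion rule can be sketched as a simple filter. Whether the 80% criterion was applied to mean accuracy across the three blocks (as assumed here) or to each block separately is not specified, and the participant IDs and accuracy values below are hypothetical:

```python
# Sketch of the 80%-correct inclusion criterion described above.
def meets_criterion(block_accuracies, threshold=0.80):
    """Include a participant if mean accuracy across training blocks
    reaches the threshold (averaging across blocks is an assumption)."""
    return sum(block_accuracies) / len(block_accuracies) >= threshold

accuracy = {
    "p01": [0.86, 0.93, 0.98],  # hypothetical learner near ceiling
    "p02": [0.55, 0.64, 0.71],  # hypothetical learner below criterion
}
included = sorted(p for p, acc in accuracy.items() if meets_criterion(acc))
```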

In order to evaluate learning accuracy, participant (F1) and item (F2) analyses of variance (ANOVAs) with block (1, 2, and 3) and condition (Match, Opposite, and Random) as factors were conducted on percent correct identification. The analyses revealed a significant main effect of block, F1(2, 178) = 35.19, partial η² = .283, p < .001; F2(2, 82) = 30.38, partial η² = .426, p < .001. As expected, performance significantly improved over the three training cycles (M = 89.2, 93.9, and 95.8 for Blocks 1–3, respectively). A significant main effect of condition was also found for participants, F1(2, 178) = 3.17, partial η² = .034, p < .05, but not for items, F2(2, 82) = 1.93, partial η² = .045, p = .152. No significant interaction was found between block and condition.
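As a quick consistency check, the reported partial η² values follow from the F ratios and their degrees of freedom via the standard identity η²p = F·df1 / (F·df1 + df2):

```python
# Partial eta squared recovered from an F statistic and its dfs.
def partial_eta_squared(f, df1, df2):
    """eta_p^2 = (F * df1) / (F * df1 + df2)."""
    return (f * df1) / (f * df1 + df2)

# Check against the block main effect reported in the text.
by_participants = partial_eta_squared(35.19, 2, 178)  # reported as .283
by_items = partial_eta_squared(30.38, 2, 82)          # reported as .426
```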

Planned comparisons were conducted to evaluate the main effect of condition. Accuracy was significantly better in the Match (M = 94.3) than in the Random condition (M = 91.6) by both participants (p < .01) and items (p < .04). No significant differences were found between the Match and Opposite (M = 93.0) conditions or between the Opposite and Random conditions by either participants or items.

3.2. Response time

Fig. 1. Mean response time (in msec) as a function of training cycle and learning condition. Error bars represent standard error of the mean.

Fig. 1 shows mean response time as a function of condition and training cycle. Response times 2 SDs above or below the mean were removed, which accounted for elimination of less than 5% of the data. Participant (F1) and item (F2) ANOVAs with block and condition as factors revealed a significant main effect of block, F1(2, 178) = 109.42, partial η² = .551, p < .001; F2(2, 82) = 156.59, partial η² = .792, p < .001, indicating an expected overall decrease in response times across the three blocks. A marginally significant main effect of condition was found by participants, F1(2, 178) = 2.91, partial η² = .032, p < .06, but not by items, F2 < 1. However, a significant interaction between block and condition was found, F1(4, 356) = 2.53, partial η² = .028, p < .04; F2(4, 164) = 2.823, partial η² = .064, p < .03, indicating that the pattern of response times across blocks changed as a function of condition (Match, Opposite, or Random). Simple main effects conducted for each block indicated a significant effect of condition in Block 3, F1(2, 178) = 4.37, partial η² = .047, p < .02; F2(2, 82) = 3.04, partial η² = .069, p = .053, but not in Blocks 1 and 2. Planned comparisons between conditions for Block 3 revealed significantly better performance in the Match than in the Random condition, p < .02, by both participants and items. There was a significant difference between the Opposite and Random conditions by participants (p < .05), but not by items. No differences between Match and Opposite were found for either participants or items. Participants responded more quickly overall to Japanese words paired with their correct English equivalents than to Japanese words paired with an unrelated English word. Response times in the Opposite condition were intermediate.
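The 2-SD trimming rule can be sketched as follows. The grain of the trimming (per participant, per condition, or overall) is not specified, so this version simply trims one list of hypothetical RTs:

```python
import statistics

def trim_outliers(rts, n_sd=2):
    """Drop response times more than n_sd sample standard deviations
    from the mean (the choice of sample SD is an assumption)."""
    mean = statistics.mean(rts)
    sd = statistics.stdev(rts)
    return [rt for rt in rts if abs(rt - mean) <= n_sd * sd]

# Hypothetical RTs in msec; the 3000 ms response exceeds the 2-SD cutoff.
rts = [1200, 1250, 1300, 1350, 1200, 1280, 1220, 1310, 1260, 1230, 3000]
trimmed = trim_outliers(rts)
```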

4. Discussion

The current findings suggest that non-arbitrary relationships between sound and meaning exist in natural language and influence the encoding and retrieval of the meaning of unfamiliar words. Japanese words that were paired with their correct English equivalents during a garden-variety vocabulary-learning task were responded to more quickly and recognized more accurately than incorrect English-to-Japanese mappings. During the first learning block, identification performance was more accurate when sounds and meanings matched than when sounds and meanings were randomly paired. Across learn–test blocks, as identification performance neared ceiling, the effect of sound symbolic relationships emerged in the response time measure. Learners responded significantly faster when the word form and meaning matched than when the learned meaning was unrelated. This finding implies that non-arbitrary or sound symbolic properties of speech provide constraints that facilitate on-line acquisition and processing of mappings between sound and meaning. Not only do learners encode and represent the sound properties of language, but they also recruit this sensitivity within the complex, and arguably taxing, cognitive task of associating novel words with meanings.

Interestingly, the results also suggest that Japanese words that were paired with the English equivalent of their antonym may have benefited from non-arbitrary sound-to-meaning mappings as well. Murphy and Andrew (1993) have shown that a word's antonym is highly similar conceptually, differing in meaning only along a single value (see also Keysar & Bly, 1995). Consequently, if sound-to-meaning correspondences facilitate access to a particular semantic domain, then related concepts or meanings, particularly antonyms, should also benefit from that facilitation. Thus, learners in the current task appeared sensitive both to the sound symbolic characteristics of the Japanese words and to the relatedness of learned antonym meanings.

These findings are consistent with other demonstrations of sensitivity to mappings between the sound structure of language and lexical class or meaning (Bergen, 2004; Berlin, 1994; Brown et al., 1955; Farmer et al., 2006; Imai et al., 2008; Kelly, Springer, & Keil, 1990; Kunihira, 1971). However, the particular instantiation of sound symbolic facilitation demonstrated here occurred for mappings that cut across two languages, English and Japanese, from distinct lineages and with unique phonologies. That native English speakers were sensitive to non-arbitrary sound-to-meaning mappings in Japanese suggests that performance on this task cannot be explained exclusively by invoking learned within-language regularities. Although previous work has shown that language users can guess meanings cross-linguistically (Berlin, 1994; Kunihira, 1971), our findings demonstrate a genuine functional role in language learning for these cross-linguistic mappings. In the current task, non-arbitrary correspondences between sound and meaning had an implicit effect on the accuracy and efficiency with which learners identified the learned English equivalents of Japanese words. Sound symbolic relations influenced basic on-line lexical processing and learning in spoken language.

It should be noted that learners may also have been sensitive to reliable cues to meaning in the prosodic contours of the Japanese utterances. Indeed, previous work has shown that listeners can infer word meaning from prosody, suggesting that prosody can convey semantic information both within and outside the domain of emotion (Kunihira, 1971; Nygaard et al., 2009; Shintel, Nusbaum, & Okrent, 2006). Kunihira (1971) found superior performance when his Japanese antonyms were pronounced with "expressive" prosody. However, he also found that printed and "monotone" versions of the Japanese antonyms were matched to their respective referents at above-chance rates, indicating that phonemic content alone was sufficient to disambiguate word meaning. Although it is difficult to rule out a possible contribution of prosody, Kunihira's findings foreshadow our learning effects and suggest that our learners were sensitive to the sound symbolic properties inherent in the Japanese words, in addition to prosodic information. Future research will be needed to determine the relative contribution of these two aspects of spoken language to learners' ability to associate words with their referents.

One potential mechanism by which sound symbolic properties may facilitate word learning cross-linguistically is that these mappings may reflect a sensitivity to general, perceptually based cross-modal links between sounds and aspects of meaning (Marks, 1978; Martino & Marks, 1999; Melara, 1989). Properties of the sound structure of language may activate or resemble characteristics inherent in the object, event, or property to which the sound sequence refers, and language learners may use this inherent cross-modal mapping to facilitate the retrieval of word meaning. Such cross-modal correspondences may be achieved via some literal or figurative resemblance between the sound and meaning (e.g., vowel height may correlate with relative size), or may reflect an embodied representation involving simulation of the actual meaning.

Although it is unclear which sound properties of the Japanese words are related to aspects of word meaning, it is clear that these properties were available to native speakers of English during word learning. Given the task of acquiring and accessing novel words in a large lexicon with a rich, complex semantic space, non-arbitrary relationships may not necessarily be specific. Instead, they may operate in a general and flexible manner to facilitate the mapping between word forms and meanings. Thus, reliable mappings between sound and meaning, whether they be probabilistic or inherently cross-modal, could work to constrain novel word learning and subsequent word retrieval and recognition by guiding processing to properties and meanings within a particular semantic context.

The current investigation provides one of the first demonstrations that learners can use sound symbolism to derive meaning during spoken language processing. These results challenge the traditional assumption that words bear an exclusively arbitrary relationship to their referents and that lexical processing relies solely on this arbitrary relationship. Rather, the sound structure of spoken language may engage cross-modal perceptual-motor correspondences that permeate the form, structure, and meaning of linguistic communication (Martino & Marks, 2001; Ramachandran & Hubbard, 2001). As such, sound symbolic properties of language may be evolutionarily preserved features of spoken language that have functional consequences not only for word learning and processing in adults, but also for the acquisition of spoken language in young children. Although arbitrariness certainly remains a central design characteristic of linguistic structure, these results indicate that language users also can and do exploit non-arbitrary relationships in the service of word learning and retrieval.

Acknowledgements

We thank three anonymous reviewers for helpful comments and suggestions on a previous draft of this paper. We also thank Jessica Alexander and Stig Rasmussen for stimulus and protocol preparation. An Emory College Scholarly Inquiry and Research at Emory (SIRE) undergraduate research grant to the second author supported a portion of this research.

Appendix

List 1
Japanese word    Match     Opposite   Random
akarui           bright    dark       catch
nageru           throw     catch      wide
nureta           wet       dry        bright
omoi             heavy     light      sit
semai*           narrow    wide       up
suwaru           sit       stand      dry
ue               up        down       light
amai             sweet     sour       cool
katai            hard      soft       rise
futoi*           fat       thin       hard
neru             lie       rise       sharp
nibui            blunt     sharp      fat
suzushii         cool      warm       bad
ii               good      bad        sweet
arai             rough     smooth     short
aruku            walk      run        slow
hayai            fast      slow       rough
mijikai          short     long       walk
tetsu            iron      gold       move
tsuyoi           strong    weak       iron
ugoku            move      stop       weak

List 2
Japanese word    Match     Opposite   Random
kurai            dark      bright     wet
toru*            catch     throw      light
kawaita          dry       wet        narrow
karui            light     heavy      catch
hiroi            wide      narrow     blunt
tatsu            stand     sit        down
shita            down      up         dark
suppai           sour      sweet      warm
yawarakai*       soft      hard       lie
hosoi*           thin      fat        blunt
okiru            rise      lie        soft
surudoi          sharp     blunt      bad
atatakai         warm      cool       sweet
warui            bad       good       thin
nameraka         smooth    rough      move
hashiru          run       walk       smooth
osoi*            slow      fast       weak
nagai            long      short      run
kin*             gold      iron       long
yowai            weak      strong     gold
tomaru           stop      move       fast

*Words that were modified from the original list in Kunihira (1971).

References

Bergen, B. K. (2004). The psychological reality of phonaesthemes. Language, 80, 290–311.

Berlin, B. (1994). Evidence for pervasive synaesthetic sound symbolism in ethnozoological nomenclature. In L. Hinton, J. Nichols, & J. Ohala (Eds.), Sound symbolism (pp. 77–93). New York: Cambridge University Press.

Brown, R. W., Black, A. H., & Horowitz, A. E. (1955). Phonetic symbolism in natural languages. Journal of Abnormal and Social Psychology, 50, 388–393.

Brown, R. W., & Nuttall, R. (1959). Method in phonetic symbolism experiments. Journal of Abnormal and Social Psychology, 59, 441–445.

Cassidy, K. W., & Kelly, M. H. (2001). Children's use of phonology to infer grammatical class in vocabulary learning. Psychonomic Bulletin and Review, 8, 519–523.

Cassidy, K. W., Kelly, M. H., & Sharoni, L. J. (1999). Inferring gender from name phonology. Journal of Experimental Psychology: General, 128, 362–381.

de Saussure, F. (1959). Course in general linguistics. New York: Philosophical Library.

Farmer, T. A., Christiansen, M. H., & Monaghan, P. (2006). Phonological typicality influences on-line sentence comprehension. Proceedings of the National Academy of Sciences, 103, 12203–12208.

Gasser, M. (2004). The origins of arbitrariness in language. In Proceedings of the Cognitive Science Society (pp. 434–439). Hillsdale, NJ: Lawrence Erlbaum.

Hockett, C. F. (1977). The view from language: Selected essays 1948–1974. Athens: The University of Georgia Press.

Hutchins, S. S. (1998). The psychological reality, variability, and compositionality of English phonesthemes. Doctoral dissertation, Emory University.

Imai, M., Kita, S., Nagumo, M., & Okada, H. (2008). Sound symbolism facilitates early verb learning. Cognition, 109, 54–65.

Kelly, M. (1992). Using sound to solve syntactic problems: The role of phonology in grammatical category assignments. Psychological Review, 99, 349–364.

Kelly, M., Springer, K., & Keil, F. (1990). The relation between syllable number and visual complexity in the acquisition of word meaning. Memory and Cognition, 18, 528–536.

Keysar, B., & Bly, B. (1995). Intuitions of the transparency of idioms: Can one keep a secret by spilling the beans? Journal of Memory and Language, 34, 89–109.

Kita, S. (1997). Two-dimensional semantic analysis of Japanese mimetics. Linguistics, 35, 379–415.

Köhler, W. (1947). Gestalt psychology (2nd ed.). New York: Liveright.

Kucera, H., & Francis, W. N. (1967). Computational analysis of present-day American English. Providence, RI: Brown University Press.

Kunihira, S. (1971). Effects of the expressive voice on phonetic symbolism. Journal of Verbal Learning and Verbal Behavior, 10, 427–429.

Marks, L. E. (1978). The unity of the senses: Interrelations among the modalities. New York: Academic Press.

Martino, G., & Marks, L. E. (1999). Perceptual and linguistic interactions in speeded classifications: Tests of the semantic coding hypothesis. Perception, 28, 903–923.

Martino, G., & Marks, L. E. (2001). Synesthesia: Strong and weak. Current Directions in Psychological Science, 10, 61–65.

Maurer, D., Pathman, T., & Mondloch, C. J. (2006). The shape of boubas: Sound–shape correspondences in toddlers and adults. Developmental Science, 9(3), 316–322.

Melara, R. D. (1989). Dimensional interactions between color and pitch. Journal of Experimental Psychology: Human Perception and Performance, 15, 69–79.

Monaghan, P., & Christiansen, M. H. (2006). Why form-meaning mappings are not entirely arbitrary in language. In Proceedings of the 28th annual conference of the Cognitive Science Society (pp. 1838–1843). Mahwah, NJ: Lawrence Erlbaum.

Murphy, G. L., & Andrew, J. M. (1993). The conceptual basis of antonymy and synonymy in adjectives. Journal of Memory and Language, 32, 301–319.

Nuckolls, J. B. (1999). The case for sound symbolism. Annual Review of Anthropology, 28, 225–252.

Nygaard, L. C., Herold, D. S., & Namy, L. L. (2009). The semantics of prosody: Acoustic and perceptual evidence of prosodic correlates to word meaning. Cognitive Science, 33, 127–146.

Ramachandran, V. S., & Hubbard, E. M. (2001). Synaesthesia – A window into perception, thought and language. Journal of Consciousness Studies, 8(12), 3–34.

Sereno, J. A., & Jongman, A. (1990). Phonological and form class relations in the lexicon. Journal of Psycholinguistic Research, 19, 387–404.

Shi, R., Werker, J. F., & Morgan, J. L. (1999). Newborn infants' sensitivity to perceptual cues to lexical and grammatical words. Cognition, 72, B11–B21.

Shintel, H., Nusbaum, H. C., & Okrent, A. (2006). Analog acoustic expression in speech communication. Journal of Memory and Language, 55, 167–177.

Tsuru, S., & Fries, H. S. (1933). A problem in meaning. Journal of General Psychology, 8, 281–284.

Weiss, J. H. (1966). A study of the ability of English speakers to guess the meanings of non-antonym foreign words. Journal of General Psychology, 72, 87–106.

Westbury, C. (2005). Implicit sound symbolism in lexical access: Evidence from an interference task. Brain and Language, 93, 10–19.