Noisy-channel models: II Lecture 4 (part 1)
Klinton Bicknell & Roger Levy
LSA Institute 2015 U Chicago
[figure: weighted finite-state automaton representing beliefs about the text over the vocabulary {a, cat, sat}: states 0–3 connected by word arcs with weights (e.g., a/1, cat/3, sat/3) and <eps> arcs]
A new view of reading
2
• rich linguistic knowledge and perceptual input combine to identify text
• trade-off between prior probability and visual input: when linguistic context provides more information, need less perceptual input to be confident
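This trade-off can be made concrete with a toy Bayesian-updating calculation (all numbers below are hypothetical, not from the lecture): suppose each sample of perceptual input contributes a fixed log-likelihood ratio favoring the true word; then a word with a higher contextual prior needs fewer samples to reach a fixed confidence threshold.

```python
import math

def samples_needed(prior, llr_per_sample=0.5, threshold=0.95):
    """Evidence samples (each adding a fixed log-likelihood ratio in
    favor of the true word) needed for its posterior to exceed
    `threshold`, starting from `prior`."""
    log_odds = math.log(prior / (1 - prior))
    target = math.log(threshold / (1 - threshold))
    n = 0
    while log_odds < target:
        log_odds += llr_per_sample
        n += 1
    return n

# A predictable word starts closer to the confidence threshold...
print(samples_needed(prior=0.8))    # 4 samples
# ...than an unpredictable one, which needs far more visual input.
print(samples_needed(prior=0.05))   # 12 samples
```

The asymmetry is the whole point: the same perceptual evidence rate buys confidence much sooner when linguistic context has already done part of the work.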
A new view of reading
3
a new bottleneck in reading: gathering visual input
• eye movement decisions (how long to fixate, where to move next) made to get visual input (cf. Legge et al., 1997)
• get more perceptual information about a word by fixating it longer/more
• language system determines the most useful visual input to get → when/where to move the eyes
• reading as active
• principled solution: optimal control, decision theory
How would this link yield interesting linguistic effects?
4
look at quantitative predictions of this account for shape of linguistic effects via a computational model
trade-off between prior probability and visual input
efficient language users will make fewer and shorter fixations on words with high probability given context
efficient language users will be more likely to skip over words with high probability given context
A computational model
5
probabilistic inference
Bicknell & Levy (2010, 2012)
efficient eye movement control
realistic environment
A computational model
6
probabilistic inference
• combines visual information with probabilistic language knowledge using methods from computational linguistics
Bicknell & Levy (2010, 2012)
• visual information represented as weighted finite-state automata (wFSAs)
• efficient inference: the language model's distributions are closed under intersection with wFSAs, so composition is efficient
[figure: wFSA over letter identities for the input string ‘abb’, e.g., position 1: a/0.7 vs. b/0.3; position 2: a/0.2 vs. b/0.8; position 3: a/0.1 vs. b/0.9, joined by <eps> arcs]
probabilistic language knowledge
PCFG: S -> NP VP / 0.9, S -> S Conj S / 0.05
n-grams: p(York | New) = .4, p(Brunswick | New) = .01
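The intersection of the language model with the visual wFSA computes, in effect, a posterior over word identities proportional to prior × visual likelihood. Here is a minimal sketch of that computation over a toy three-word vocabulary (all probabilities are invented, and plain per-letter dictionaries stand in for the full wFSA machinery):

```python
# Posterior over word identity: contextual prior x per-letter visual
# likelihood (numbers invented; the real model intersects wFSAs).
vocab = ["cat", "sat", "a"]
prior = {"cat": 0.5, "sat": 0.4, "a": 0.1}   # p(word | context)

def visual_likelihood(word, percepts):
    """p(letter percepts | word); zero if lengths mismatch."""
    if len(word) != len(percepts):
        return 0.0
    p = 1.0
    for letter, dist in zip(word, percepts):
        p *= dist.get(letter, 0.0)
    return p

# Blurry first letter: 'c' and 's' equally likely given the visual input.
percepts = [{"c": 0.45, "s": 0.45, "a": 0.10},
            {"a": 0.90, "o": 0.10},
            {"t": 0.95, "d": 0.05}]

unnorm = {w: prior[w] * visual_likelihood(w, percepts) for w in vocab}
z = sum(unnorm.values())
posterior = {w: p / z for w, p in unnorm.items()}
print(posterior)  # "cat" beats "sat" purely because of its higher prior
```

When the visual evidence is ambiguous between two candidates, the linguistic prior decides; that is exactly the prior/likelihood trade-off the model exploits.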
A computational model
7
efficient eye movement control
• determines efficient eye movement behavior in response to incremental comprehension using machine learning methods
• to maximize reward R, a linear function of speed and accuracy
• first-cut accuracy: log probability of sentence string under model beliefs after reading
• parameterized behavior policies control the eyes
• determine parameters that maximize R using reinforcement learning (PEGASUS algorithm, Ng & Jordan, 2000)
Bicknell & Levy (2010, 2012)
model behavior is derived by maximizing reward, not by fitting to human data
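The essence of PEGASUS is to evaluate every candidate policy on the same fixed set of random seeds, turning a stochastic objective into a deterministic one that an ordinary optimizer can search. A toy illustration, not the reading model's actual objective (the quadratic reward and the noise model are invented):

```python
import random

SEEDS = list(range(20))   # fixed "scenarios", reused for every policy

def reward(theta, seed):
    """Noisy reward for policy parameter theta (toy: true optimum at 2).
    Deterministic given the seed -- the PEGASUS trick."""
    rng = random.Random(seed)
    return -(theta - 2.0) ** 2 + rng.gauss(0, 0.5)

def evaluate(theta):
    # Averaging over the SAME seeds makes the objective deterministic,
    # so different theta values are compared on identical noise draws.
    return sum(reward(theta, s) for s in SEEDS) / len(SEEDS)

candidates = [i / 10 for i in range(41)]   # grid over 0.0 .. 4.0
best = max(candidates, key=evaluate)
print(best)  # 2.0: the grid point at the true optimum
```

Because every θ sees the same noise draws, comparisons between policies are exact, and a simple grid search recovers the optimum despite the stochastic simulator.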
A computational model
8
realistic environment
• embed model in realistic environment using findings from psychophysics and oculomotor control
• exponentially decreasing visual acuity
• saccade initiation delays
• normally-distributed motor error
Bicknell & Levy (2010, 2012)
A computational model
9
probabilistic inference
Bicknell & Levy (2010, 2012)
efficient eye movement control
realistic environment
Model simulation results
10
methods
• evaluate model on human effects of word frequency and predictability
• frequency: word's overall rate of occurrence in language
• more frequent: dog
• less frequent: parsnip
• predictability: probability of word in context
• more predictable: The children went outside to play …
• less predictable: My friend really likes to play …
Bicknell & Levy (2012)
Model simulation results
11
methods
• simulate model reading typical psycholinguistic sentences
• analyze three word-based eye movement measures
• only fit one free parameter of model (visual input rate) to match average human word reading time. (others fit to optimize reading efficiency)
• any match to effect shape falls out of principles of efficient identification and linguistic structure
Bicknell & Levy (2012)
Model simulation results
12
frequency predictability
Bicknell & Levy (2012)
gaze duration: total duration of all fixations on word
[figure: human vs. model gaze duration (250–325 ms) as a function of log frequency (−6 to −2) and log predictability (−6 to 0)]
Model simulation results
13
frequency predictability
Bicknell & Levy (2012)
refixation probability: probability of making more than one fixation on the word
[figure: human vs. model refixation probability (0.00–0.25) as a function of log frequency (−6 to −2) and log predictability (−6 to 0)]
Model simulation results
14
frequency predictability
Bicknell & Levy (2012)
skipping probability: probability of not directly fixating word
[figure: human vs. model skipping probability (0.2–0.6) as a function of log frequency (−6 to −2) and log predictability (−6 to 0)]
Model simulation results
15
frequency, predictability
Bicknell & Levy (2012)
[figure: all three measures side by side: gaze duration, refixation probability, and skipping probability, human vs. model, by log frequency and log predictability]
predicts shape of all effects
Part One: Summary
16
• implemented model efficiently controlling eyes to gather perceptual info for text identification
• model well predicts shape of linguistic effects on eye movements, fitting only one parameter to average reading time
• provides motivated reason why linguistic effects appear on eye movements in reading
• reason is not because language processing operations take substantial time (in fact, they are instant in the model)
Reading
17
[diagram: rich probabilistic language knowledge (prior) and perceptual input (likelihood) feed probabilistic inference, yielding beliefs about the identity of the text, which drive rational action (comprehension behavior)]
principle: get more input about words with low prior probability given context [cf. surprisal]
Part Two
18
Viewing reading as visual information gathering solves the puzzle of regressions.
The puzzle
19
[movie by Piers Cornelissen]
Why would it be useful to regress to previous words so often?
~10% of saccades move the eyes back (‘regress’) to previous words
A solution
20
confidence falling
p(jacket)=.9 → p(jacket)=.4
p(racket)=.1 → p(racket)=.6
p(packet)=.0 → p(packet)=.0
confidence high → confidence low
‘From the closet, she pulled out a #acket …’ → ‘From the closet, she pulled out a #acket for the upcoming match’
Bicknell & Levy (2010)
Maybe regressions are an effective way to deal with this?
[figure: accuracy (log probability, −3 to −1) vs. sentence reading time (2.00–3.25 sec) for non-regressive and regressive policies]
A solution
21
simulations
• compare behavior policies that make regressions to those that don't
• regressive policies are strictly more efficient (both faster and more accurate)
Bicknell & Levy (2010)
but do humans make regressions when confidence falls?
Human regressions
22
methods
• Dundee corpus: large corpus of eye movements reading newspaper editorials [Kennedy & Pynte, 2005]
• derived broad coverage measure of confidence falling: ∆c
• ∆c = log frequency - log predictability (ask me about the derivation!)
• logistic mixed-effects regression model predicting whether or not the eyes will make a regression to a previous word
Bicknell & Levy (2011)
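A quick numeric illustration of the ∆c measure (the probabilities are made up): holding the word's frequency constant, ∆c is large, signaling falling confidence, exactly when the word is much less predictable from context than its overall frequency would suggest.

```python
import math

def delta_c(frequency, predictability):
    """Delta-c = log frequency - log predictability (both probabilities)."""
    return math.log(frequency) - math.log(predictability)

# Same word form, two hypothetical contexts:
print(delta_c(1e-4, 1e-2))   # predictable context: large negative delta-c
print(delta_c(1e-4, 1e-6))   # surprising context: large positive delta-c
```

A positive ∆c marks the cases where new visual input is most likely to lower confidence in what was read before, the situations that should trigger regressions.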
Human regressions
23
methods
• predictors
• ∆c for current word (word_n) & previous word (word_n−1), for late-triggered regressions
• variables suggested by other accounts of regressions (frequency, length, landing position, saccade length, fixation duration)
• predict more regressions for high ∆c (on either word)
Bicknell & Levy (2011)
Human regressions
24
[figure: standardized coefficient estimates (−0.4 to 0.8) for predictors of regressions: word n ∆c, (log) word n freq., (log) word n length, word n−1 ∆c, (log) word n−1 freq., (log) word n−1 length, word n land. pos., word n−1 land. pos., word n fix. dur., word n−1 fix. dur., (log) saccade length]
Relevant for today
• more regressions when current word or previous word makes confidence fall
• confidence falling one of most reliable predictors
Bicknell & Levy (2011)
Part Two: Summary
25
explaining regressions
• efficient reader will update uncertain beliefs about prior material on the basis of new input: confidence falling
• confidence falling & regressions
• regressions help make reading more efficient
• confidence falling explains short-range regressions in naturalistic text
• can differentiate types of ‘processing difficulty’
• words of low prior probability -> more/longer fixations
• losing confidence in prior material -> regressions
Reading
26
[diagram: rich probabilistic language knowledge (prior) and perceptual input (likelihood) feed probabilistic inference, yielding beliefs about the identity of the text, which drive rational action (comprehension behavior)]
principles:
1. Get more input about words with low prior probability given context [cf. surprisal]
2. Regress to previous words if confidence about them falls [cf. EIS]
Summary
27
This part of today's lecture argued that comprehension is well understood as
probabilistic inference on noisy perceptual input.
Conclusion
28
what this buys us
• part 1: a solution to the problem of local coherences
• they change beliefs about what readers thought they saw (EIS)
• part 2: a principled reason why reading times on a word should be well described by surprisal
• need less visual information to be confident in identity of words that have high contextual probability
• part 3: a principled reason why readers should make regressions
• language often makes readers doubt what they thought they saw
Conclusion
29
lots more recent work in this area (papers on class website)
• Gibson et al. (2013): evidence for noisy-channel inferences about what sentences mean
• "The ball kicked the girl" can mean "The girl kicked the ball"
• Lewis et al. (2013): evidence that readers rationally change their eye movement control given their goal functions (as predicted by a computational model)
Probabilistic models of language acquisition
Lecture 4 (part 2)
Klinton Bicknell & Roger Levy
LSA Institute 2015, University of Chicago
Situating language acquisition
31
day 1: sound categorization
[figure: likelihood curves (probability density) over VOT for /b/ and /p/, with the input x = 27 marked]
Figure 5.2: Likelihood functions for /b/–/p/ phoneme categorization, with µ_b = 0, µ_p = 50, σ_b = σ_p = 12. For the input x = 27, the likelihoods favor /p/.
[figure: posterior probability of /b/ (falling from 1 to 0) as a function of VOT (−20 to 80)]
Figure 5.3: Posterior probability curve for Bayesian phoneme discrimination as a function of VOT
the conditional distributions over acoustic representations, P_b(x) and P_p(x) for /b/ and /p/ respectively (the likelihood functions), and the prior distribution over /b/ versus /p/. We further simplify the problem by characterizing any acoustic representation x as a single real-valued number representing the VOT, and the likelihood functions for /b/ and /p/ as normal density functions (Section 2.10) with means µ_b, µ_p and standard deviations σ_b, σ_p respectively.

Figure 5.2 illustrates the likelihood functions for the choices µ_b = 0, µ_p = 50, σ_b = σ_p = 12. Intuitively, the phoneme that is more likely to be realized with a VOT in the vicinity of a given input is a better choice for that input, and the greater the discrepancy in the likelihoods, the stronger the categorization preference. An input with non-negligible likelihood for each phoneme is close to the "categorization boundary", but may still have a preference. These intuitions are formally realized in Bayes' Rule:

P(/b/|x) = P(x|/b/) P(/b/) / P(x)    (5.10)

and since we are considering only two alternatives, the marginal likelihood is simply the weighted sum of the likelihoods under the two phonemes: P(x) = P(x|/b/) P(/b/) + P(x|/p/) P(/p/). If we plug in the normal probability density function we get

P(/b/|x) = [ (1/√(2πσ_b²)) exp(−(x−µ_b)²/(2σ_b²)) P(/b/) ] / [ (1/√(2πσ_b²)) exp(−(x−µ_b)²/(2σ_b²)) P(/b/) + (1/√(2πσ_p²)) exp(−(x−µ_p)²/(2σ_p²)) P(/p/) ]    (5.11)

In the special case where σ_b = σ_p = σ we can simplify this considerably by cancelling the shared factor 1/√(2πσ²).

Roger Levy – Probabilistic Models in the Study of Language draft, November 6, 2012, p. 85
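The posterior in equation (5.11) is straightforward to compute directly. A short sketch using the chapter's parameter values (µ_b = 0, µ_p = 50, σ = 12, equal priors):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def posterior_b(x, mu_b=0.0, mu_p=50.0, sigma=12.0, prior_b=0.5):
    """P(/b/ | VOT = x): equation (5.11) with equal variances."""
    num = normal_pdf(x, mu_b, sigma) * prior_b
    den = num + normal_pdf(x, mu_p, sigma) * (1 - prior_b)
    return num / den

print(round(posterior_b(27), 2))   # 0.33: the likelihoods favor /p/
print(round(posterior_b(25), 2))   # 0.5: the categorization boundary
```

With equal priors and variances the boundary sits exactly halfway between the category means, and the input x = 27 lands just on the /p/ side, matching Figure 5.2.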
Situating language acquisition
32
[diagram: category c → sound value S]
• c ~ discrete choice, e.g., p(p) = p(b) = 0.5
• S | c ~ Gaussian(µ_c, σ²_c)
day 1: inferring sound category c from sound token S
Statistical Model
day 2a: sound similarity
Statistical Model
[diagram: category c → target production T → speech sound S]
• choose a category c with probability p(c)
• articulate a target production T with probability p(T | c) = N(µ_c, σ²_c)
• listener hears speech sound S with probability p(S | T) = N(T, σ²_S)
day 2a: inferring target production T from sound token S
Statistical Model
day 2b: incremental parsing
Garden path models 2
• The most famous garden-paths: reduced relative clauses (RRCs) versus main clauses (MCs)
• From the valence + simple-constituency perspective, MC and RRC analyses differ in two places:
The horse raced past the barn fell.
(that was)
…
Statistical Model
[diagram: tree T → words w]
• T ~ PCFG
• words are leaves of the tree
day 2b: inferring syntactic structure T from words w
Statistical Model
day 3a: noisy channel sentence processing
The coach smiled at the player tossed the frisbee
The coach smiled at the player thrown the frisbee
The coach smiled toward the player tossed the frisbee
The coach smiled toward the player thrown the frisbee
Statistical Model
[diagram: tree T → words w → visual input I]
• a tree T ~ PCFG
• words are leaves of the tree
• visual input I ~ noise(w)
day 3a: inferring words w and trees T from perceptual input I
Situating language acquisition
39
day 3b: sentence meaning judgments
Situating language acquisition
40
day 3b: inferring meaning M from linguistic form F and processing pressures P
[diagram: meaning M and processing pressures P → linguistic form F]
F ~ function(M, P)
Situating language acquisition
41
in all these cases
• step 1: identify the relevant sources of information
• step 2: make a generative model in which
• the thing to be inferred was a latent variable
• the relevant information was used to specify a prior for the latent variable or the likelihood of the data given that latent variable
• step 3: apply Bayesian inference (relatively easy here: given prior and likelihood)
Problems in acquisition
42
step 1
let's apply step 1 (identify relevant information sources) to problems in acquisition
Problems in acquisition
43
learning to segment words
[yuwanttusiðəbʊk]?
-> you want to see the book?
[e.g., Goldwater et al., 2009]
Problems in acquisition
44
learning word meanings
[e.g., Frank et al., 2009]
Problems in acquisition
45
learning phonotactics
[blɪk]
[mdwɨ] Polish mdły: 'tasteless'
[e.g., Hayes & Wilson, 2008]
Problems in acquisition
46
learning syntax
The boy is hungry. -> Is the boy hungry?
The boy who is smiling is happy. -> ???
Is the boy who is smiling happy?
*Is the boy who smiling is happy?
[e.g., Perfors et al., 2011]
Problems in acquisition
47
learning the sound categories in the language from the sounds
• are long VOTs [t] and short VOTs [d] functionally different sounds or just natural variation?
[e.g., Pajak et al., 2013]
Problems in acquisition
48
steps 2 and 3: construct a generative model, perform inference
• a bit different than before
• we'll go in depth through an example of sound category learning
• note: much of the work we'll be discussing is by Dr. Bożena Pająk (now a learning scientist at Duolingo), who also made many of the visualizations you'll see
learning sound categories
• how do we learn that [t] and [d] are different categories?
• information source 1: distributional information
Distributional information
49
VOT
experimental evidence
• we know that babies and adults use distributional information to help infer category structure [Maye & Gerken, 2000; Maye et al., 2002]
• Pajak & Levy (2011) performed an experiment replicating this with adults, which I'll describe
Distributional information
50
sound class: FRICATIVES
/s/ (singleton) vs. /ss/ (geminate): the length dimension
51
Experimental data (Pajak & Levy 2011)
▪ Distributional training:
▪ adult English native speakers exposed to words in a new language, where the middle consonant varied along the length dimension
[aja]145ms [ina]205ms [ila]115ms [ama]160ms
…
52
Experimental data (Pajak & Levy 2011)
[figure: familiarization frequency (0–16) across the stimulus length continuum (100–280 ms), bimodal vs. unimodal training, for Expt 1 (sonorants) and Expt 2 (fricatives)]
53
▪ Testing: ▪ participants made judgments about pairs of words
▪ dependent measure: proportion of ‘different’ responses (as opposed to ‘same’) on ‘different’ trials
▪ if learning is successful, we expect:
Example: [ama]-[amma] “Are these two different words in this language or two repetitions of the same word?”
Experimental data (Pajak & Levy 2011)
‘DIFFERENT’ RESPONSES: Bimodal training > Unimodal training
54
Experimental data (Pajak & Levy 2011)
[figure: the same familiarization distributions, with the test stimuli marked along the length continuum]
55
Experimental data (Pajak & Levy 2011)
[figure: proportion of ‘different’ responses (0–1) for Expt 1 trained and untrained sound classes, bimodal vs. unimodal training]
distributional information is used
• adults and babies use distributional information to infer categories
• next steps: put this information into a generative model and perform inference
Distributional information
56
A generative model
57
our model from before of where sounds come from
[diagram: category c → sound value S]
• c ~ discrete choice, e.g., p(p) = p(b) = 0.5
• S | c ~ Gaussian(µ_c, σ²_c)
still pretty appropriate!
a model of where categories and sounds come from
58
[figure: graphical models for (a) a simple mixture model, (b) a Bayesian model with priors over category parameters, and (c) a model learning category probabilities as well]
Figure 9.3: Graphical model for simple mixture of Gaussians
[figure: one-dimensional mixture density P(x) over x ∈ (−5, 10)]
Figure 9.4: The generative mixture of Gaussians in one dimension. Model parameters are: φ₁ = 0.35, µ₁ = 0, σ₁ = 1, µ₂ = 4, σ₂ = 2.
θ = ⟨φ, µ, Σ⟩. This model is known as a mixture of Gaussians, since the observations are drawn from some mixture of individually Gaussian distributions. An illustration of this generative model in one dimension is given in Figure 9.4, with the Gaussian mixture components drawn above the observations. At the bottom of the graph is one sample from this Gaussian mixture in which the underlying categories are distinguished; at the top of the graph is another sample in which the underlying categories are not distinguished. The relatively simple supervised problem would be to infer the means and variances of the Gaussian mixture components from the category-distinguished sample at the bottom of the graph; the much harder unsupervised problem is to infer these means and variances (and potentially the number of mixture components!) from the category-undistinguished sample at the top of the graph. The graphical model corresponding to this unsupervised learning problem is given in Figure 9.3a.
y ~ N(µ_i, Σ_i)   (y is S from before; Σ is σ² from before)
i ~ discrete(φ)   (i is c from before)
n = # observations, m = # categories
φ ~ distribution(), µ_i ~ distribution(), Σ_i ~ distribution()   (priors set in advance)
A generative model: a ‘mixture of Gaussians’
• now we have generative model with prior on variables of interest and well-defined likelihood
• but inference is usually quite hard
• two common classes of methods for inference, which I'll describe now
59
Inference
inference method 1: Expectation Maximization (EM)
• if we knew the category parameters (means, variance, probability), it would be easy to categorize datapoints like before
• if we knew how the datapoints were categorized, it would be easy to find category parameters
• Q: how?
• idea behind EM:
• randomly initialize category assignments of datapoints
• estimate category parameters from current assignments
• now assign datapoints to categories based on category parameters
• estimate category parameters from current assignments
• …
60
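The alternating scheme above can be sketched for a two-component one-dimensional Gaussian mixture (here with standard soft responsibilities rather than hard assignments; the bimodal data values are invented):

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_two_gaussians(data, mu, var=(100.0, 100.0), w=(0.5, 0.5), iters=50):
    """EM for a two-component one-dimensional Gaussian mixture."""
    mu, var, w = list(mu), list(var), list(w)
    for _ in range(iters):
        # E-step: soft responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], var[k]) for k in (0, 1)]
            z = sum(p)
            resp.append([pk / z for pk in p])
        # M-step: re-estimate parameters from the soft assignments
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            w[k] = nk / len(data)
    return mu, var, w

# Invented bimodal "VOT-like" data with modes near 0 and 50:
data = [-3, 0, 2, -1, 4, 1, 47, 50, 52, 49, 54, 51]
mu, var, w = em_two_gaussians(data, mu=(10.0, 40.0))
print(sorted(round(m, 1) for m in mu))   # means recovered near 0 and 50
```

Each iteration alternates between categorizing the datapoints given the current parameters and re-estimating the parameters given the categorization, exactly the two "easy" subproblems on the slide.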
Inference
EM illustrations (on board)
• bimodal data
• two categories
61
Inference
EM issues
• only finds most likely value for variables, not posterior distribution on them
• can get stuck in 'local optima'
62
Inference
inference method 2: Markov chain Monte Carlo
• like EM, the model is in some state at each point in time (with current guesses for latent variables)
• like EM, the model transitions between states
• unlike EM, the model sometimes moves to lower probability states
• if you calculate the transition probabilities correctly, the amount of time the model spends in each state is proportional to its posterior probability
• thus: don't just get most likely variables out, but full posterior distribution
• also: guaranteed to converge (not getting stuck in local optima)
• however, this guarantee is only with infinite time…
63
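A minimal random-walk Metropolis sampler shows the properties listed above: it sometimes accepts moves to lower-probability states, and the time spent at each value tracks the posterior. The target (the posterior of a mean under a Gaussian likelihood and a flat prior) and the data are invented for illustration:

```python
import math
import random

def metropolis(logp, x0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis: visits states in proportion to their
    posterior probability (in the long run)."""
    rng = random.Random(seed)
    x, samples, accepted = x0, [], 0
    for _ in range(n_samples):
        prop = x + rng.gauss(0, step)
        # Accept with probability min(1, p(prop)/p(x)) -- this is where
        # moves to LOWER-probability states can happen (unlike EM).
        if rng.random() < math.exp(min(0.0, logp(prop) - logp(x))):
            x, accepted = prop, accepted + 1
        samples.append(x)
    return samples, accepted / n_samples

# Posterior of a mean mu under an N(mu, 1) likelihood and a flat prior:
data = [0.2, -0.4, 0.9, 0.1, 0.3, -0.2, 0.5, 0.0]
logp = lambda mu: -sum((x - mu) ** 2 for x in data) / 2
samples, rate = metropolis(logp, x0=0.0)
est = sum(samples[1000:]) / len(samples[1000:])
print(round(est, 2), round(rate, 2))   # est close to the data mean 0.175
```

Unlike EM, the output is a full set of posterior samples, so spread and uncertainty come for free rather than just a point estimate.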
Inference
mixture of Gaussian models to learn phonetic categories
• many groups have had success, at least on simple problems (Vallabha et al., 2007; McMurray et al., 2009; Feldman et al., 2009)
• but it seems clear that other information sources are needed too, because categories overlap too much
• next up: work by Pajak and colleagues on another information source
64
Modeling category learning
… normally not considered part of the phonetic inventory of the Bulgarian language. The Bulgarian obstruent consonants are divided into 12 voiced/voiceless pairs on the criterion of sonority. The only obstruent without a counterpart is the voiceless velar fricative /x/. The contrast ‘voiced vs. voiceless’ is neutralized in word-final position, where all obstruents are voiceless (as in most Slavic languages); this neutralization is, however, not reflected in the spelling.
[table: Bulgarian consonant inventory (Wikipedia: Bulgarian phonology), with hard vs. soft (palatalized) rows for nasals, plosives, affricates, fricatives, trills, approximants, and laterals across places of articulation from bilabial to velar, plus footnotes on dental analyses and allophones of /m/, /n/, /x/, and /ɫ/]
Hard and palatalized consonants
BULGARIAN consonants
1
What may help solve this problem
(Wikipedia: Bulgarian phonology)
VOICING PALATALIZATION
§ Learning may be facilitated by languages' extensive re-use of a set of phonetic dimensions (Clements 2003)
2
What may help solve this problem
(Wikipedia: Thai language)
[figures: monophthongs and diphthongs of Thai, from Tingsabadh & Abramson (1993:25)]
[table: Thai vowel inventory: front vs. back, unrounded vs. rounded, with short/long pairs from close /i, iː/, /ɯ, ɯː/, /u, uː/ down to open /a, aː/]
The vowels each exist in long-short pairs: these are distinct phonemes forming unrelated words in Thai, but usually transliterated the same: เขา (khao) means "he" or "she", while ขาว (khao) means "white".
[table: the long-short pairs with example words, e.g., /aː/ ‘to slice’ vs. /a/ ‘to dream’, /iː/ ‘to cut’ vs. /i/ ‘kris’, /uː/ ‘to inhale’ vs. /u/ ‘rearmost’, and so on for /e, ɛ, ɯ, ɤ, o, ɔ/]
The basic vowels can be combined into diphthongs. Tingsabadh & Abramson (1993) analyze those ending in high vocoids as underlyingly /Vj/ and /Vw/; for purposes of determining tone, some are classified as long.
THAI vowels
LENGTH
§ Learning may be facilitated by languages' extensive re-use of a set of phonetic dimensions (Clements 2003)
§ Existing experimental evidence supports this view
§ both infants and adults generalize newly learned phonetic category distinctions to untrained sounds along the same dimension (McClaskey et al. 1983, Maye et al. 2008, Perfors & Dunbar 2010, Pajak & Levy 2011)
3
What may help solve this problem
/s/ /ss/ length
singleton geminate
/n/ /nn/ length
voicing /b/ /p/
voicing /g/ /k/ voiced voiceless
§ What are the mechanisms underlying generalization?
§ How do learners make use of information about some phonetic categories when learning other categories?
/s/ /ss/ length
/n/ /nn/ length
4
How do people learn phonetic categories?
§ Pajak et al.'s proposal:
§ in addition to learning specific categories, people also learn category types
/s/ /ss/ length /n/ /nn/ length
5
singleton <-> geminate
§ Experimental data illustrating generalization across analogous distinctions (from Pajak & Levy 2011)
§ Our computational proposal for how this kind of generalization might be accomplished
§ Simulations of Pajak & Levy's data
outline
7
Experimental data (Pajak & Levy 2011)
/s/ /ss/ length
sound class: FRICATIVES
/n/ /nn/ length
sound class: SONORANTS
singleton geminate
8
Experimental data (Pajak & Levy 2011)
§ Distributional training:
§ adult English native speakers exposed to words in a new language, where the middle consonant varied along the length dimension
[aja]145ms [ina]205ms [ila]115ms [ama]160ms
…
9
[figure: familiarization frequency across the length continuum (100–280 ms), bimodal vs. unimodal, for sonorants ([n]-…-[nn], [m]-…-[mm], [l]-…-[ll], [j]-…-[jj]) and fricatives ([s]-…-[ss], [f]-…-[ff], [ʃ]-…-[ʃʃ], [θ]-…-[θθ])]
this difference reflects natural distributions of length in different sound classes
Experimental data (Pajak & Levy 2011): Training
10
§ Testing:
§ participants made judgments about pairs of words
§ dependent measure: proportion of ‘different’ responses (as opposed to ‘same’) on ‘different’ trials
§ if learning is successful, we expect:
§ to assess generalization, testing included both trained and untrained sound classes (i.e., both sonorants and fricatives)
Example: [ama]-[amma] “Are these two different words in this language or two repetitions of the same word?”
Experimental data (Pajak & Levy 2011)
‘DIFFERENT’ RESPONSES: Bimodal training > Unimodal training
Testing
[figure: training distributions with test stimuli marked; testing included both the trained sound class (e.g., asa-assa) and the untrained sound class (e.g., ama-amma)]
EXPT 1: ALIGNED CATEGORIES
EXPT 2: MISALIGNED CATEGORIES
12
EXPT 1: ALIGNED CATEGORIES; EXPT 2: MISALIGNED CATEGORIES
Experimental data (Pajak & Levy 2011)
[figure: proportion of ‘different’ responses (0–1), bimodal vs. unimodal training, for trained and untrained sound classes in both experiments: learning (trained) and generalization (untrained)]
Computational modeling
① How can we account for distributional learning?
② How can we account for generalization across sound classes?
13
§ Mixture of Gaussians approach (de Boer & Kuhl 2003, Vallabha et al. 2007, McMurray et al. 2009, Feldman et al. 2009, Toscano & McMurray 2010, Dillon et al. 2013)
14
Modeling phonetic category learning
[diagram: phonetic category z_i → datapoint d_i (perceptual token), i ∈ {1..n}]
d_i ~ N(µ_{z_i}, σ²_{z_i})
[illustration: categories /s/ and /ss/ on the length dimension, with parameters (µ_s, σ²_s) and (µ_ss, σ²_ss), and datapoints 120ms, 201ms, 165ms, 182ms, 115ms, …]
inference
[diagram: prior H → G₀ (concentration γ) → phonetic category z_i → datapoint d_i (perceptual token), i ∈ {1..n}]
§ Our general approach (following Feldman et al. 2009):
§ learning via nonparametric Bayesian inference
§ using Dirichlet processes, which allow the model to learn the number of categories from the data
15
Modeling phonetic category learning
H: µ ~ N(µ₀, σ²/κ₀), σ² ~ InvChiSq(ν₀, σ₀²)
G₀ ~ DP(γ, H)
z_i ~ G₀
d_i ~ N(µ_{z_i}, σ²_{z_i})
[illustration: inferred categories /s/ and /ss/ on the length dimension, with parameters (µ_s, σ²_s) and (µ_ss, σ²_ss)]
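The Dirichlet process's ability to learn the number of categories is easiest to see in its Chinese-restaurant-process form: each new datapoint joins an existing category with probability proportional to that category's size, or opens a new one with probability proportional to the concentration parameter (γ in the slides). A prior-only sketch, with toy settings and no likelihood term:

```python
import random

def crp_partition(n, gamma=1.0, seed=42):
    """Sample a partition of n items from the Chinese restaurant process."""
    rng = random.Random(seed)
    counts = []                       # items per category ("table")
    for i in range(n):
        # existing categories weighted by size, a new category by gamma
        weights = counts + [gamma]
        r = rng.uniform(0, sum(weights))
        for table, w in enumerate(weights):
            r -= w
            if r <= 0:
                break
        if table == len(counts):
            counts.append(1)          # the datapoint opens a new category
        else:
            counts[table] += 1
    return counts

counts = crp_partition(100)
print(len(counts), counts)   # number of categories stays small (~log n)
```

In the full model the likelihood term also weighs in, so new categories appear only where the data (e.g., a second mode along the length dimension) demand them.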
Computational modeling
① How can we account for distributional learning?
② How can we account for generalization across sound classes?
16
✓ ① How can we account for distributional learning?
② How can we account for generalization across sound classes?
§ In addition to acquiring specific categories, learners infer category types, which can be shared across sound classes
§ This means that already learned categories can be directly re-used to categorize other sounds
§ To implement this proposal, we use a hierarchical Dirichlet process, which allows for sharing categories across data groups (here, sound classes)
17
Proposal
/s/ /ss/ (length); /n/ /nn/ (length)
singleton <-> geminate
fricatives, sonorants
[diagram: prior H → G₀ (concentration γ) → class-level G_c (concentration α₀) → phonetic category z_ic → datapoint d_ic (perceptual token), i ∈ {1..n_c}, sound class c ∈ C]
18
Modeling generalization: HDP
H: µ ~ N(µ₀, σ²/κ₀), σ² ~ InvChiSq(ν₀, σ₀²)
G₀ ~ DP(γ, H)
G_c ~ DP(α₀, G₀)
z_ic ~ G_c
d_ic ~ N(µ_{z_ic}, σ²_{z_ic})
[illustration: shared singleton/geminate category types reused across sound classes: fricatives (/s/ vs. /ss/, with parameters (µ_fric-sg, σ²_fric-sg) and (µ_fric-gem, σ²_fric-gem)) and sonorants (/n/ vs. /nn/), along the length dimension]
§ But people are able to generalize even when analogous category types are implemented phonetically in different ways
§ We want the model to account for potential differences between sound classes
19
Modeling generalization: HDP
[illustration: fricatives (/s/ vs. /ss/) and sonorants (/n/ vs. /nn/) along the length dimension, with class-specific shifts]
[diagram: prior H → G₀ (γ) → G_c (α₀) → z_ic → d_ic, plus a class-specific offset f_c with variance σ²_f; i ∈ {1..n_c}, c ∈ C]
20
Modeling generalization: HDP, accounting for differences between sound classes:
learnable class-specific ‘offsets’ by which data in a class are shifted along a phonetic dimension (cf. Dillon et al. 2013)
H: µ ~ N(µ₀, σ²/κ₀), σ² ~ InvChiSq(ν₀, σ₀²)
G₀ ~ DP(γ, H)
G_c ~ DP(α₀, G₀)
z_ic ~ G_c
f_c ~ N(0, σ²_f)
d_ic ~ N(µ_{z_ic}, σ²_{z_ic}) + f_c
21
Simulation results
[figure: proportion of 2-category inferences (0–1) by the extended model, bimodal vs. unimodal training, trained and untrained sound classes, for Experiment 1 (aligned categories) and Experiment 2 (misaligned categories), shown alongside the human data (proportion of ‘different’ responses) for comparison]
other work on category learning
• Feldman et al. (2009) add a (latent) lexicon to category learning
66
Modeling category learning
[Figure 3: four panels of 90% ellipses for vowel categories i ɪ e ɛ ɝ æ u ʊ o ɔ a ʌ in First Formant (Hz) × Second Formant (Hz) space — (a) Vowel Categories (Men), (b) Lexical-Distributional Model, (c) Distributional Model, (d) Gradient Descent Algorithm]

Figure 3: Ellipses delimit the area corresponding to 90% of vowel tokens for Gaussian categories (a) computed from men's vowel productions from Hillenbrand et al. (1995) and learned by the (b) lexical-distributional model, (c) distributional model, and (d) gradient descent algorithm.
…boring vowel categories. Positing the presence of a lexicon therefore showed evidence of helping the ideal learner disambiguate overlapping vowel categories, even though the phonological forms contained in the lexicon were not given explicitly to the learner.
Pairwise accuracy and completeness measures were computed for each learner as a quantitative measure of model performance (Table 1). For these measures, pairs of vowel tokens that were correctly placed into the same category were counted as a hit; pairs of tokens that were incorrectly assigned to different categories when they should have been in the same category were counted as a miss; and pairs of tokens that were incorrectly assigned to the same category when they should have been in different categories were counted as a false alarm. The accuracy score was computed as hits / (hits + false alarms) and the completeness score as hits / (hits + misses). Both measures were high for the lexical-distributional learner, but accuracy scores were substantially lower for the purely distributional learners, reflecting the fact that these models mistakenly merged several overlapping categories.
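These pairwise scores can be computed directly from gold and inferred category assignments. A sketch (not the authors' code):

```python
from itertools import combinations

def pairwise_scores(gold, predicted):
    """Pairwise accuracy (precision) and completeness (recall) over token pairs.

    hit: pair together in both gold and predicted categories;
    miss: together in gold but split in predicted;
    false alarm: split in gold but merged in predicted.
    """
    hits = misses = false_alarms = 0
    for i, j in combinations(range(len(gold)), 2):
        same_gold = gold[i] == gold[j]
        same_pred = predicted[i] == predicted[j]
        if same_gold and same_pred:
            hits += 1
        elif same_gold:
            misses += 1
        elif same_pred:
            false_alarms += 1
    accuracy = hits / (hits + false_alarms) if hits + false_alarms else 1.0
    completeness = hits / (hits + misses) if hits + misses else 1.0
    return accuracy, completeness

# a learner that merges two overlapping vowels keeps completeness high
# but loses accuracy:
acc, comp = pairwise_scores(["i", "i", "ɪ", "ɪ"], [0, 0, 0, 0])
# acc = 2/6, comp = 1.0
```

This makes concrete why merging overlapping categories hurts accuracy specifically: every cross-category pair that the learner lumps together becomes a false alarm.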
Results suggest that, as predicted, a model that uses the input to learn word categories in addition to phonetic categories produces better phonetic category learning results than a model that only learns phonetic categories. Note that the distributional learners are likely to show better performance if they are given dimensions beyond just the first two formants (Vallabha et al., 2007) or if they are given more data points during learning. These two solutions actually work against each other: as dimensions are added, more data are necessary to maintain the same learning outcome. Nevertheless, we do
[Figure 4: four panels of 90% ellipses for vowel categories i ɪ e ɛ ɝ æ u ʊ o ɔ a ʌ in First Formant (Hz) × Second Formant (Hz) space — (a) Vowel Categories (All Speakers), (b) Lexical-Distributional Model, (c) Distributional Model, (d) Gradient Descent Algorithm]

Figure 4: Ellipses delimit the area corresponding to 90% of vowel tokens for Gaussian categories (a) computed from all speakers' vowel productions from Hillenbrand et al. (1995) and learned by the (b) lexical-distributional model, (c) distributional model, and (d) gradient descent algorithm.
not wish to suggest that a purely distributional learner cannot acquire phonetic categories. The simulations presented here are instead meant to demonstrate that in a language where phonetic categories have substantial overlap, an interactive system, where learners can use information from words that contain particular speech sounds, can increase the robustness of phonetic category learning.
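A purely distributional learner of the kind contrasted here can be sketched as EM for a Gaussian mixture over formant values, with no lexical information. This is a toy sketch on synthetic data; the means and spreads are hypothetical, not Hillenbrand's measurements, and the actual models compared above are Bayesian, not this EM variant.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic 'formant' data: two overlapping vowel categories in (F1, F2) space
true_means = np.array([[400.0, 2000.0], [550.0, 1800.0]])   # hypothetical
true_sd = np.array([60.0, 150.0])                           # hypothetical
X = np.vstack([rng.normal(m, true_sd, size=(200, 2)) for m in true_means])

def em_gmm(X, k=2, iters=100):
    """EM for a diagonal-covariance Gaussian mixture: a 'purely
    distributional' learner that sees only the token cloud."""
    n, d = X.shape
    means = X[rng.choice(n, k, replace=False)]       # init at random tokens
    sds = np.full((k, d), X.std(axis=0))
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = p(category j | token i)
        log_r = (-0.5 * (((X[:, None, :] - means) / sds) ** 2).sum(-1)
                 - np.log(sds).sum(-1) + np.log(weights))
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reestimate category parameters from soft counts
        nk = r.sum(axis=0)
        means = (r.T @ X) / nk[:, None]
        sds = np.sqrt((r.T @ (X ** 2)) / nk[:, None] - means ** 2)
        weights = nk / n
    return means, sds, weights

means, sds, weights = em_gmm(X)
```

When the two clouds overlap heavily, a learner like this tends to merge them into one category or misplace the boundary; word-level information is one way to break that ambiguity.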
Discussion

This paper has presented a model of phonetic category acquisition that allows interaction between speech sound and word categorization. The model was not given a lexicon a priori, but was allowed to begin learning a lexicon from the data at the same time that it was learning to categorize individual speech sounds, allowing it to take into account the distribution of speech sounds in words. This lexical-distributional learner outperformed a purely distributional learner on a corpus whose categories were based on English vowel categories, showing better disambiguation of overlapping categories from the same number of data points.
Infants learn to segment words from fluent speech around the same time that they begin to show signs of acquiring native language phonetic categories, and they are able to map these segmented words onto tokens heard in isolation (Jusczyk & Aslin, 1995), suggesting that they are performing some sort of rudimentary categorization on the words they hear. Infants may therefore have access to information from words that can help them disambiguate overlapping categories. If information from words can feed back to constrain phonetic category learning, the large degree of overlap be-
probabilistic models in acquisition
• in general: the field is still working out how to incorporate many sources of information into a single model
• it seems that infants, children, and adults use many sources of information to learn language
• one very exciting information source: other languages
67
Conclusions
computational psycholinguistics
• this course gave a very broad overview of many areas: perception, comprehension, production, acquisition
• gave a sense for how probabilistic models can be used
• also covered much of the essential technical knowledge to use probabilistic models
• this is still very much the beginning of probabilistic modeling in the study of language: very many open questions and models yet to build!
• hope you enjoyed!
68