
Semantics via Word Embedding Models

浅川伸一

1 Introduction

As a Japanese-language reference there is Nishio [13]. It is a book built around concrete examples, so you can understand the material by working through it yourself.

The TensorFlow tutorial gives a compact introduction, so I recommend reading it^1. A Japanese translation also exists^2, but if you have no trouble with English it is better to read the original. The following passage, which motivates word embedding models, is quoted from it.

Image and audio processing systems work with rich, high-dimensional datasets encoded as vectors of the individual raw pixel-intensities for image data, or e.g. power spectral density coefficients for audio data. For tasks like object or speech recognition we know that all the information required to successfully perform the task is encoded in the data (because humans can perform these tasks from the raw data). However, natural language processing systems traditionally treat words as discrete atomic symbols, and therefore ’cat’ may be represented as Id537 and ’dog’ as Id143. These encodings are arbitrary, and provide no useful information to the system regarding the relationships that may exist between the individual symbols. This means that the model can leverage very little of what it has learned about ’cats’ when it is processing data about ’dogs’ (such that they are both animals, four-legged, pets, etc.). Representing words as unique, discrete ids furthermore leads to data sparsity, and usually means that we may need more data in order to successfully train statistical models. Using vector representations can overcome some of these obstacles. Vector space models^3 (VSMs) represent (embed) words in a continuous vector space where semantically similar words are mapped to nearby points (’are embedded nearby each other’). VSMs have a long, rich history in NLP, but all methods depend in some way or another on the Distributional Hypothesis^4, which states that words that appear in the same contexts share semantic meaning. The different approaches that leverage this principle can be divided into two categories: count-based methods (e.g. Latent Semantic Analysis^5), and predictive methods (e.g. neural probabilistic language models^6).

This distinction is elaborated in much more detail by Baroni et al.^7, but in a nutshell: Count-based methods compute the statistics of how often some word co-occurs with its neighbor words in a large text corpus, and then map these count-statistics down to a small, dense vector for each word. Predictive models directly try to predict a word from its neighbors in terms of learned small, dense embedding vectors (considered parameters of the model).

Word2vec is a particularly computationally-efficient predictive model for learning word embeddings from raw text. It comes in two flavors, the Continuous Bag-of-Words model (CBOW) and the Skip-Gram model (Chapter 3.1 and 3.2 in Mikolov et al.^8). Algorithmically, these models are similar, except that CBOW predicts target words (e.g. ’mat’) from source context words (’the cat sits on the’), while the skip-gram does the inverse and predicts source context-words from the target words. This inversion might seem like an arbitrary choice, but statistically it has the effect that CBOW smoothes over a lot of the distributional information (by treating an entire context as one observation). For the most part, this turns out to be a useful thing for smaller datasets. However, skip-gram treats each context-target pair as a new observation, and this tends to do better when we have larger datasets. We will focus on the skip-gram model in the rest of this tutorial.

The second-to-last paragraph of the quotation may be hard to parse. According to Baroni et al., count-based methods means the traditional models such as PCA, SVD, LSI, and NMF (broadly, TF/IDF could be included as well), while predictive models means word2vec (skip-gram, CBOW) and GloVe.

1 https://www.tensorflow.org/versions/r0.11/tutorials/word2vec/index.html
2 http://media.accel-brain.com/tensorflow-vector-representations-of-words/
3 https://en.wikipedia.org/wiki/Vector_space_model
4 https://en.wikipedia.org/wiki/Distributional_semantics#Distributional_Hypothesis
5 https://en.wikipedia.org/wiki/Latent_semantic_analysis
6 http://www.scholarpedia.org/article/Neural_net_language_models
7 http://clic.cimec.unitn.it/marco/publications/acl2014/baroni-etal-countpredict-acl2014.pdf
8 https://arxiv.org/pdf/1301.3781.pdf


2 The Mikolov Revolution

2.1 Handed down from days of old

The family of models called word embedding models or vector space models seemed to become a hot topic all at once in 2013, but it can be traced back to the 1990s. More recently, theoretical analysis has also advanced, and one can say that the approach has achieved solid results and become widely known.

Figure 1: Adapted from Fig. 1 of [3].

Figure 2: Thomas Mikolov; on the right, during his talk at NIPS 2015.


2.2 Mikolov's language model

[Figure 3 reproduces the first page of Mikolov, Kombrink, Burget, Cernocky, and Khudanpur, "Extensions of Recurrent Neural Network Language Model" (ICASSP 2011), including its Fig. 1, "Simple recurrent neural network."]

Figure 3: Mikolov's RNNLM. From Mikolov et al. (2011).

• The input layer w and the output layer y have the same dimensionality, equal to the total vocabulary size (roughly 10,000 to 200,000 words).

• The hidden layer s is comparatively low-dimensional (50 to 1,000 neurons).

• U is the weight matrix from the input layer to the hidden layer; V is the weight matrix from the hidden layer to the output layer.

• Without the recurrent weight matrix W, the model is equivalent to a bigram (2-gram) neural network language model.

• The outputs of the hidden-layer and output-layer neurons are

    s(t) = f(U w(t) + W s(t-1))    (1)
    y(t) = g(V s(t))               (2)

where f(z) is the sigmoid function

    f(z) = \frac{1}{1 + \exp(-z)}    (3)

and g(z) is the softmax function

    g(z_m) = \frac{\exp(z_m)}{\sum_k \exp(z_k)}.    (4)

Incidentally, the hardmax function is

    g(z_m) = \operatorname{argmax}_m p(z_m).    (5)

• For training, the cross-entropy loss is used when computing the gradient from the error vector e_o(t) at time t:

    e_o(t) = d(t) - y(t)    (6)

where d(t) is the target vector for the output word, i.e. the input word w(t+1) at time t+1; [1] calls this the 1-of-k representation, and Bengio calls it a one-hot vector.

• The hidden-to-output weight matrix V at time t is updated from the hidden-layer vector s(t) and the output error vector e_o(t) as

    V(t+1) = V(t) + \alpha \, s(t) \, e_o(t)^\top    (7)

where \alpha is the learning rate.

• The error gradient vector of the hidden layer is computed from the output-layer error gradient as

    e_h(t) = d_h(e_o(t)^\top V, t)    (8)

where the error vector is obtained by applying the function d_h(\cdot) to each element,

    d_{hj}(x, t) = x \, s_j(t) \, (1 - s_j(t)).    (9)


• The input-to-hidden weight matrix U is updated at time t using the input vector w(t) as

    U(t+1) = U(t) + \alpha \, w(t) \, e_h(t)^\top.    (10)

The input vector w(t) at time t is zero in all but one element, so the update above changes only the weights attached to the single neuron corresponding to the input word, which keeps the computation fast. (A minimal numerical sketch of equations (1)-(10) follows this list.)
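As a concrete illustration, here is a minimal NumPy sketch of the forward pass (1)-(4) and the updates (6)-(10). The vocabulary size, hidden size, learning rate, and the toy word-id sequence are all made up for illustration; the matrices are stored so that (1)-(2) read as matrix-vector products, so the outer products below correspond to (7) and (10) up to transposition.

    import numpy as np

    rng = np.random.default_rng(0)
    V_SIZE, H_SIZE, ALPHA = 8, 4, 0.1                 # toy vocabulary size, hidden size, learning rate

    U = rng.normal(scale=0.1, size=(H_SIZE, V_SIZE))  # input  -> hidden
    W = rng.normal(scale=0.1, size=(H_SIZE, H_SIZE))  # hidden -> hidden (recurrent)
    V = rng.normal(scale=0.1, size=(V_SIZE, H_SIZE))  # hidden -> output

    def sigmoid(z):                                   # eq. (3)
        return 1.0 / (1.0 + np.exp(-z))

    def softmax(z):                                   # eq. (4)
        e = np.exp(z - z.max())
        return e / e.sum()

    def step(word_id, target_id, s_prev):
        """One RNNLM time step: forward pass (1)-(2), then updates (6)-(10)."""
        global U, V
        w = np.zeros(V_SIZE); w[word_id] = 1.0        # one-hot input w(t)
        s = sigmoid(U @ w + W @ s_prev)               # eq. (1)
        y = softmax(V @ s)                            # eq. (2)
        d = np.zeros(V_SIZE); d[target_id] = 1.0      # target: the next word w(t+1)
        e_o = d - y                                   # eq. (6)
        e_h = (V.T @ e_o) * s * (1.0 - s)             # eqs. (8)-(9), using V before its update
        V += ALPHA * np.outer(e_o, s)                 # eq. (7)
        U += ALPHA * np.outer(e_h, w)                 # eq. (10): only one column of U changes
        return s

    # usage: feed a toy word-id sequence, each id predicting the next one
    s = np.zeros(H_SIZE)
    for cur, nxt in zip([1, 3, 2, 5], [3, 2, 5, 7]):
        s = step(cur, nxt, s)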

2.3 word2vec

The key point of Mikolov's language model is that the weight matrix U in Figure 3 projects a one-hot vector into a vector space whose dimensionality equals the number of hidden-layer neurons. This is what opened the way to word2vec: the word2vec models Mikolov proposed project words into a vector space [8, 9, 10]^9.

[Skip-gram architecture: the center word w(t) is used to predict the surrounding context words w(t-2), w(t-1), w(t+1), w(t+2).]

Skip-gram can be formulated as follows. Given a word sequence w_1, w_2, ..., w_T, the objective

    \ell = \frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c,\; j \ne 0} \log p(w_{t+j} \mid w_t)    (11)

is maximized.
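To make the double sum in (11) concrete, the following sketch enumerates the (w_t, w_{t+j}) pairs over which skip-gram is trained; the toy sentence and the window size c = 2 are purely illustrative.

    # Enumerate the (center, context) pairs that the inner sum of eq. (11) ranges over.
    def skipgram_pairs(tokens, c=2):
        pairs = []
        for t, center in enumerate(tokens):
            for j in range(-c, c + 1):
                if j != 0 and 0 <= t + j < len(tokens):
                    pairs.append((center, tokens[t + j]))    # (w_t, w_{t+j})
        return pairs

    print(skipgram_pairs(["the", "cat", "sits", "on", "the", "mat"]))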

Hierarchical softmax. Let n(w, j) be the j-th node on the path from the root to w, and let L(w) be the length of that path, so that n(w, 1) = root and n(w, L(w)) = w. Let ch(n) denote an arbitrary fixed child of node n, and let [[x]] be 1 when x is true and -1 otherwise. The hierarchical softmax is then

    p(w \mid w_I) = \prod_{j=1}^{L(w)-1} \sigma\bigl( [[\, n(w, j+1) = ch(n(w, j)) \,]] \cdot {v'}_{n(w,j)}^{\top} v_{w_I} \bigr)    (12)

where \sigma(x) = [1 + \exp(-x)]^{-1} is the sigmoid function. It is immediate that \sum_{w=1}^{W} p(w \mid w_I) = 1, and the cost of computing \nabla \log p(w_O \mid w_I) is proportional to L(w_O).
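A minimal sketch of evaluating (12) for a single word, assuming the root-to-word path has already been extracted from the binary tree; the inner-node vectors, the input vector, and the ±1 codes below are made-up toy values.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hierarchical_softmax_prob(v_input, path_vectors, path_signs):
        """Eq. (12): product of sigmoids along the root-to-word path.

        path_vectors: inner-node vectors v'_{n(w,1)}, ..., v'_{n(w,L(w)-1)}
        path_signs:   +1 if n(w, j+1) is the designated child ch(n(w, j)), else -1
        """
        p = 1.0
        for v_node, sign in zip(path_vectors, path_signs):
            p *= sigmoid(sign * np.dot(v_node, v_input))
        return p

    # toy usage: a path of length 2 with 3-dimensional vectors
    rng = np.random.default_rng(1)
    v_wI = rng.normal(size=3)
    print(hierarchical_softmax_prob(v_wI, [rng.normal(size=3), rng.normal(size=3)], [+1, -1]))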

2.4 Negative Sampling

In negative sampling, each \log p(w_O \mid w_I) term is replaced by the objective

    \log \sigma\bigl( {v'}_{w_O}^{\top} v_{w_I} \bigr) + \sum_{i=1}^{K} \mathbb{E}_{w_i \sim P_n(w)} \bigl[ \log \sigma\bigl( -{v'}_{w_i}^{\top} v_{w_I} \bigr) \bigr]    (13)

where the K negative samples w_i are drawn from a noise distribution P_n(w).
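A minimal NumPy sketch of (13), with the expectation approximated by K sampled negatives; the vectors and the value of K are illustrative.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sgns_objective(v_in, v_out_pos, v_out_negs):
        """Sampled version of eq. (13): one observed pair plus K drawn negatives.

        v_in:        input (center) vector v_{w_I}
        v_out_pos:   output vector v'_{w_O} of the observed context word
        v_out_negs:  output vectors v'_{w_i} of K words drawn from P_n(w)
        """
        obj = np.log(sigmoid(np.dot(v_out_pos, v_in)))
        for v_neg in v_out_negs:
            obj += np.log(sigmoid(-np.dot(v_neg, v_in)))
        return obj    # training maximizes this quantity

    # toy usage with 5-dimensional vectors and K = 3 negatives
    rng = np.random.default_rng(2)
    print(sgns_objective(rng.normal(size=5), rng.normal(size=5),
                         [rng.normal(size=5) for _ in range(3)]))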

9 Recurrent Neural Network Language Model: http://www.fit.vutbr.cz/~imikolov/rnnlm/ ; word2vec: https://github.com/dav/word2vec


Figure 4: A sample SGNS embedding: countries (China, Japan, France, Russia, Germany, Italy, Spain, Greece, Turkey, Poland, Portugal) and their capitals (Beijing, Tokyo, Paris, Moscow, Berlin, Rome, Madrid, Athens, Ankara, Warsaw, Lisbon) plotted in two dimensions (both axes range from -2 to 2).

2.5 CBOW

Figure 5: The CBOW and skip-gram architectures (INPUT, PROJECTION, OUTPUT). CBOW sums the context words w(t-2), w(t-1), w(t+1), w(t+2) to predict w(t); skip-gram predicts the context words from w(t).

From [8]: vector(“King”) - vector(“Man”) + vector(“Woman”) = vector(“Queen”).

Suppose a : b = c : d with d unknown, and that the embedding vectors x_a, x_b, x_c have been normalized. Compute y = x_b - x_a + x_c. A word does not necessarily sit at exactly that position, so we look for the nearest word. The RNNLM work [10] uses the cosine similarity (equivalently the correlation coefficient, since each vector is normalized):

    w^{*} = \operatorname{argmax}_w \frac{x_w \cdot y}{\lVert x_w \rVert \, \lVert y \rVert}    (14)

    \mathrm{dist}(a, b) = \cos\theta_{ab} = \frac{a \cdot b}{\lVert a \rVert \, \lVert b \rVert}    (15)

The squared Euclidean distance, by contrast, is

    \mathrm{dist}(a, b) = \lvert a - b \rvert^{2} = \lvert a \rvert^{2} + \lvert b \rvert^{2} - 2 \lvert a \rvert \lvert b \rvert \cos\theta_{ab}    (16)
                        = \lvert a \rvert^{2} + \lvert b \rvert^{2} - 2 (a \cdot b).    (17)
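A minimal sketch of this nearest-neighbor search over a small dictionary of word vectors; the two-dimensional toy vectors below are made up, whereas real embeddings would come from a trained model.

    import numpy as np

    def solve_analogy(embeddings, a, b, c):
        """Solve a : b = c : ? by the nearest cosine neighbor of y = x_b - x_a + x_c (eqs. 14-15)."""
        y = embeddings[b] - embeddings[a] + embeddings[c]
        best_word, best_sim = None, -np.inf
        for word, x in embeddings.items():
            if word in (a, b, c):                        # exclude the query words themselves
                continue
            sim = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
            if sim > best_sim:
                best_word, best_sim = word, sim
        return best_word, best_sim

    # toy usage
    emb = {"king": np.array([0.9, 0.8]), "man": np.array([0.7, 0.1]),
           "woman": np.array([0.2, 0.2]), "queen": np.array([0.4, 0.9])}
    print(solve_analogy(emb, "man", "king", "woman"))    # ideally ('queen', ...)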


3 Results

3.1 Analogy tasks

vec(“Berlin”) - vec(“Germany”) + vec(“France”) → vec(“Paris”)
vec(“quick”) - vec(“quickly”) + vec(“slow”) → vec(“slowly”)

Figure 6: Left: the relation expressing gender for three word pairs. Right: the relation between singular and plural forms. Each word is embedded in a high-dimensional space.

Table 1: Examples of analogy tasks (n = 3,218). The task is to find the fourth word (accuracy roughly 72%).

Newspapers:
    New York : New York Times           Baltimore : Baltimore Sun
    San Jose : San Jose Mercury News    Cincinnati : Cincinnati Enquirer
NHL ice-hockey teams:
    Boston : Boston Bruins              Montreal : Montreal Canadiens
    Phoenix : Phoenix Coyotes           Nashville : Nashville Predators
NBA basketball teams:
    Detroit : Detroit Pistons           Toronto : Toronto Raptors
    Oakland : Golden State Warriors     Memphis : Memphis Grizzlies
Airlines:
    Austria : Austrian Airlines         Spain : Spainair
    Belgium : Brussels Airlines         Greece : Aegean Airlines
Company executives:
    Steve Ballmer : Microsoft           Larry Page : Google
    Samuel J. Palmisano : IBM           Werner Vogels : Amazon

Table 2: Examples of the word pair relationships, using the best word vectors from Table 4 (skip-gram model trained on 783M words with 300 dimensionality); Table 8 of [8].

Relationship           Example 1             Example 2           Example 3
France - Paris         Italy: Rome           Japan: Tokyo        Florida: Tallahassee
big - bigger           small: larger         cold: colder        quick: quicker
Miami - Florida        Baltimore: Maryland   Dallas: Texas       Kona: Hawaii
Einstein - scientist   Messi: midfielder     Mozart: violinist   Picasso: painter
Sarkozy - France       Berlusconi: Italy     Merkel: Germany     Koizumi: Japan
copper - Cu            zinc: Zn              gold: Au            uranium: plutonium
Berlusconi - Silvio    Sarkozy: Nicolas      Putin: Medvedev     Obama: Barack
Microsoft - Windows    Google: Android       IBM: Linux          Apple: iPhone
Microsoft - Ballmer    Google: Yahoo         IBM: McNealy        Apple: Jobs
Japan - sushi          Germany: bratwurst    France: tapas       USA: pizza

The dataset can be downloaded^10.

10 https://code.google.com/p/word2vec/source/browse/trunk/questions-phrases.txt


Table 3: The five semantic and nine syntactic relation types; Table 1 of [8].

Type of relationship                  Word Pair 1            Word Pair 2
Common capital cities                 Athens Greece          Oslo Norway
All capital cities                    Astana Kazakhstan      Harare Zimbabwe
Country and currency                  Angola kwanza          Iran rial
City and state                        Chicago Illinois       Stockton California
Man - woman                           brother sister         grandson granddaughter
Adjective to adverb                   apparent apparently    rapid rapidly
Opposite                              possibly impossibly    ethical unethical
Comparative                           great greater          tough tougher
Superlative                           easy easiest           lucky luckiest
Present participle                    think thinking         read reading
Nationality adjective                 Switzerland Swiss      Cambodia Cambodian
Past tense                            walking walked         swimming swam
Plural nouns                          mouse mice             dollar dollars
Verb, 3rd person singular present     work works             speak speaks

Figure 7: Accuracy on the semantic and syntactic tasks for the CBOW model, as a function of the size of the training dataset (24M to 783M words). The colors distinguish the dimensionality of the embedding layer (50, 100, 300, 600 neurons). Adapted from Table 2 of [8].

4 Relation to other models

Comparisons have been made with Latent Semantic Analysis (LSA) [5, 6, 7], Latent Dirichlet Allocation (LDA) [2], and Principal Component Analysis (PCA) [11].

The comparison in [10] uses an LDC corpus with 320 million words in total and a vocabulary of 82,000 words, with 640 hidden units.

The NNLM performed better than the RNN model (with eight times as many parameters). CBOW outperformed the NNLM on the syntactic questions and performed about as well on the semantic questions. Skip-gram was slightly worse than CBOW on the syntactic questions, but still better than the NNLM, and it was the best on the semantic questions.


[Figure 8 plots semantic, syntactic, and total accuracy for the Collobert-Weston NNLM (50/660M), Huang NNLM (50/990M), Mikolov RNNLM, our NNLM (100/6B), CBOW (300/783M), and skip-gram (300/783M).]

Figure 8: Comparison of publicly available word vectors on the Semantic-Syntactic Word Relationship test set, and word vectors from our models. Full vocabularies are used.

[Figure 9 plots accuracy (roughly 0-60%) for the 4-gram [32], average LSA similarity [32], RNNLMs [19], the log-bilinear model [24], skip-gram, and skip-gram + RNNLMs.]

Figure 9: Comparison and combination of models on the Microsoft Sentence Completion Challenge.

Skip-gram is not better than LSA; incidentally, the state of the art is 58.9%.


Figure 10: Comparison of the RNNLM, NNLM, CBOW, and skip-gram models (percent correct) on the semantic, syntactic, and MSR word-relatedness tasks. Adapted from Table 4 of Mikolov (2013).

5 Implementation

Pythonistas will presumably end up using gensim^11:

$ pip install -U gensim
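A minimal usage sketch (the toy corpus, parameter values, and query word are illustrative; note that the embedding-dimension argument is called size in older gensim releases and vector_size from gensim 4 onward):

    from gensim.models import Word2Vec

    # toy corpus: a list of tokenized sentences
    sentences = [["the", "cat", "sits", "on", "the", "mat"],
                 ["the", "dog", "chases", "the", "cat"]]

    # sg=1 selects skip-gram (sg=0 gives CBOW); negative=5 enables negative sampling
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, negative=5)

    print(model.wv["cat"])                       # the 50-dimensional vector for "cat"
    print(model.wv.most_similar("cat", topn=3))  # nearest neighbors by cosine similarity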

gensim provides not only word2vec but also LSA, LSI, SVD, and LDA (Latent Dirichlet Allocation), and it has become the de facto standard among NLP practitioners. For methods not supported by gensim, older references note that Mikolov's original C++ code was available^12; as is well known, however, that site has been shut down.

Wikipedia has a detailed article on the vector space model^13, which differs from the description on the Japanese Wikipedia^14; this state of affairs (sorry state?) ought to be remedied. The TensorFlow word2vec tutorial is practical^15.

GloVe is a vector embedding model [12] developed in the natural language processing group at Stanford University by Richard Socher, Christopher Manning, and colleagues; its official name is Global Vectors for Word Representation^16. The code is also available on GitHub^17.

There are, in addition, models such as skip-thought [4] and doc2vec.

11 https://github.com/RaRe-Technologies/gensim
12 https://code.google.com/p/word2vec/
13 https://en.wikipedia.org/wiki/Vector_space_model
14 https://ja.wikipedia.org/wiki/%E3%83%99%E3%82%AF%E3%83%88%E3%83%AB%E7%A9%BA%E9%96%93%E3%83%A2%E3%83%87%E3%83%AB
15 https://www.tensorflow.org/versions/r0.11/tutorials/word2vec/index.html
16 http://nlp.stanford.edu/projects/glove/
17 https://github.com/stanfordnlp/GloVe

References

[1] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[2] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[3] Jeffrey L. Elman. Incremental learning, or the importance of starting small. Technical report, University of California, San Diego, San Diego, CA, 1991.
[4] Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. Skip-thought vectors. arXiv:1506.06726, 2015.
[5] Barbara Landau, Linda B. Smith, and Susan Jones. Syntactic context and the shape bias in children's and adults' lexical learning. Journal of Memory and Language, 31:807–825, 1992.
[6] Barbara Landau, Linda B. Smith, and Susan S. Jones. The importance of shape in early lexical learning. Cognitive Development, 3:299–321, 1988.
[7] Thomas K. Landauer, Peter W. Foltz, and Darrell Laham. An introduction to latent semantic analysis. Discourse Processes, 25:259–284, 1998.
[8] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In Yoshua Bengio and Yann LeCun, editors, Proceedings of the International Conference on Learning Representations (ICLR) Workshop, Scottsdale, Arizona, USA, May 2013.
[9] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc., 2013.
[10] Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), Atlanta, Georgia, USA, June 2013.
[11] Karl Pearson. On lines and planes of closest fit to systems of points in space. Philosophical Magazine, 2:559–572, 1901.
[12] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar, October 2014.
[13] 西尾泰和. word2vecによる自然言語処理 (Natural Language Processing with word2vec). O'Reilly Japan, Tokyo, 2014.
