
Page 1: A Mathematical Exploration of Language Models

A Mathematical Exploration of Language Models

Nikunj Saunshi, Princeton University

Center of Mathematical Sciences and Applications, Harvard University, 10th February 2021

Page 2: A Mathematical Exploration of Language Models

Language Models

Language Model: Context s = “I went to the café and ordered a” → Distribution p_{·|s}

[Figure: bar chart of predicted next-word probabilities, e.g. p_{·|s}(“latte”), p_{·|s}(“bagel”), p_{·|s}(“dolphin”), ranging from about 0.3 down to 0.0001]

Next word prediction: For context s, predict what word w would follow it

Cross-entropy objective: Assign high p_{·|s}(w) to observed (s, w) pairs

ℓ_xent = 𝔼_{(s,w)}[−log p_{·|s}(w)], averaged over (s, w) pairs

Unlabeled data: Generate (s, w) pairs from raw sentences
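To make this concrete, here is a minimal sketch (not the talk's code) of generating (s, w) pairs from raw sentences and evaluating the cross-entropy objective; the toy corpus and the uniform model are illustrative placeholders.

```python
# Minimal sketch: build (context, next-word) pairs from unlabeled sentences and
# compute the cross-entropy objective E_{(s,w)}[-log p(w | s)].
import math

corpus = [
    "i went to the cafe and ordered a latte",
    "i would recommend this movie",
]

def context_word_pairs(sentences):
    """Yield (s, w) pairs: every prefix s and the word w that follows it."""
    for sent in sentences:
        words = sent.split()
        for t in range(1, len(words)):
            yield tuple(words[:t]), words[t]

def cross_entropy(lm, pairs):
    """lm(s) returns a dict mapping each word w to p(w | s)."""
    pairs = list(pairs)
    return sum(-math.log(lm(s).get(w, 1e-12)) for s, w in pairs) / len(pairs)

# A (useless) uniform language model over the corpus vocabulary, as a placeholder.
vocab = sorted({w for sent in corpus for w in sent.split()})
uniform_lm = lambda s: {w: 1.0 / len(vocab) for w in vocab}
print(cross_entropy(uniform_lm, context_word_pairs(corpus)))  # = log |V|
```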

Page 3: A Mathematical Exploration of Language Models

Success of Language Models

Language Model: Context s → Distribution p_{·|s}

Examples (all trained using cross-entropy):
• Architecture: Transformer; Parameters: 175 B
• Architecture: Transformer; Parameters: 1542 M
• Architecture: RNN; Parameters: 24 M

Downstream tasks:
• Text Generation: “It was a bright sunny day in …”
• Question Answering: “The capital of Spain is __”
• Machine Translation: “I bought coffee” → “J'ai acheté du café”
• Sentence Classification: “Science” vs “Politics”

Page 4: A Mathematical Exploration of Language Models

Main Question: Why should solving the next word prediction task help solve seemingly unrelated downstream tasks with very little labeled data?

Rest of the talk

More general framework of “solving Task A helps with Task B”

Our results for Language Models, based on the recent paper “A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks”

Saunshi, Malladi, Arora, To Appear in ICLR 2021

Page 5: A Mathematical Exploration of Language Models

Solving Task A helps with Task B

Page 6: A Mathematical Exploration of Language Models

Solving Task A helps with Task B

• Humans can use the “experience” and “skills” acquired from Task A to learn a new Task B efficiently

Language Modeling. Task A: Next word prediction; Task B: Downstream NLP task

• Ride bicycle → Ride motorcycle
• Get a math degree → Do well in law school later
• Do basic chores → Excel at karate (The Karate Kid)

Page 7: A Mathematical Exploration of Language Models

• This idea has been adapted in Machine Learning

• More data efficient than supervised learning
  • Requires fewer labeled samples than solving Task B from scratch using supervised learning

Stage 1: Pretrain model on Task A

Stage 2: Use model on Task B, either by
• initializing a model and fine-tuning it using labeled data,
• extracting features and learning a classifier using labeled data, or
• other innovative ways of using the pretrained model

Solving Task A helps with Task B: Language Modeling
Task A: Next word prediction; Task B: Downstream NLP task

Page 8: A Mathematical Exploration of Language Models

Solving Task A helps with Task B

• Transfer learning (requires some labeled data in Task A)
  • Task A: Large supervised learning problem (ImageNet)
  • Task B: Object detection, disease detection using X-ray images

• Meta-learning (requires some labeled data in Task A)
  • Task A: Many small tasks related to Task B
  • Task B: Related tasks (classify characters from a new language)

• Self-supervised learning, e.g. language modeling (requires only unlabeled data in Task A)
  • Task A: Constructed using unlabeled data
  • Task B: Downstream tasks of interest

“This is the single most important problem to solve in AI today”- Yann LeCun

https://www.wsj.com/articles/facebook-ai-chief-pushes-the-technologys-limits-11597334361

Page 9: A Mathematical Exploration of Language Models

Self-Supervised Learning

Motivated by the following observations:
• Humans learn by observing/interacting with the world, without explicit supervision
• Supervised learning with labels is successful, but human annotations can be expensive
• Unlabeled data is available in abundance and is cheap to obtain

• Many practical algorithms following this principle do well on standard benchmarks, sometimes beating even supervised learning!

Principle: Use unlabeled data to generate labels and construct supervised learning tasks

Page 10: A Mathematical Exploration of Language Models

Self-Supervised Learning

Task A: Constructed from unlabeled data; Task B: Downstream task of interest

Examples in practice:

• Images (just need raw images)
  • Predict color of image from b/w version
  • Reconstruct part of image from the rest of it
  • Predict the rotation applied to an image

• Text (just need a large text corpus)
  • Make representations of consecutive sentences in Wikipedia close
  • Next word prediction
  • Fill in the multiple blanks in a sentence
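As an illustration of the principle above (labels generated from unlabeled data alone), here is a minimal sketch, with a made-up toy corpus, of turning raw sentences into a supervised fill-in-the-blank task:

```python
# Minimal sketch: construct a "fill in the blank" task from unlabeled sentences.
# The input is a sentence with one word blanked out; the label is the blanked word.
import random

corpus = [
    "i went to the cafe and ordered a latte",
    "it was a bright sunny day in spring",
]

def make_blank_examples(sentences, seed=0):
    rng = random.Random(seed)
    examples = []
    for sent in sentences:
        words = sent.split()
        i = rng.randrange(len(words))                    # position to blank out
        masked = words[:i] + ["___"] + words[i + 1:]
        examples.append((" ".join(masked), words[i]))    # (input, label)
    return examples

for x, y in make_blank_examples(corpus):
    print(f"input: {x!r}   label: {y!r}")
```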

Page 11: A Mathematical Exploration of Language Models

Theory for Self-Supervised Learning

• We have very little mathematical understanding of this important problem.

• Theory can potentially help:
  • Formalize notions of “skill learning” from tasks
  • Ground existing intuitions in math
  • Give new insights that can improve/design practical algorithms

• Existing theoretical frameworks fail to capture this setting:
  • Task A and Task B are very different
  • Task A is agnostic to Task B

• We try to gain some understanding for one such method: language modeling

Page 12: A Mathematical Exploration of Language Models

A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks
Saunshi, Malladi, Arora, to appear in ICLR 2021

Page 13: A Mathematical Exploration of Language Models

Theory for Language Models. Task A: Next word prediction; Task B: Downstream NLP task

Language Model: Context s = “I went to the café and ordered a” → Distribution over words p_{·|s}

Why should solving the next word prediction task help solve seemingly unrelated downstream tasks with very little labeled data?

Stage 1: Pretrain language model on next word prediction
Stage 2: Use language model for downstream task

Page 14: A Mathematical Exploration of Language Models

Theoretical setting

Representation Learning Perspective

What are downstream tasks?
✓ Sentence classification
✘ Other NLP tasks (question answering, etc.)

How to use a pretrained model?
✓ Extract features from LM, learn linear classifiers: effective, data-efficient, can do math
✘ Finetuning: Hard to quantify its benefit using current deep learning theory

What aspects of pretraining help? (Role of task & objective)
✓ Why next word prediction (w/ cross-entropy objective) intrinsically helps
✘ Inductive biases of architecture/algorithm: current tools are insufficient

✓ First-cut analysis. Already gives interesting insights

Language Model: Context s → Distribution p_{·|s}

Page 15: A Mathematical Exploration of Language Models

Theoretical setting

Why can language models that do well on the cross-entropy objective learn features that are useful for linear classification tasks?

Language Model: Context s → Distribution p_{·|s} → extract d-dim features f(s)

Downstream task: learn a linear classifier on f(s), e.g.
“I would recommend this movie.” → “Positive”
“It was an utter waste of time.” → “Negative”

Page 16: A Mathematical Exploration of Language Models

Result overview

Key idea: Classification tasks can be rephrased as sentence completion problems, thus making next word prediction a meaningful pretraining task.

Formalization: Show that an LM that is ε-optimal in cross-entropy learns features that linearly solve such tasks up to 𝒪(√ε).

Verification: Experimentally verify the theoretical insights (also design a new objective function).

Why can language models that do well on the cross-entropy objective learn features that are useful for linear classification tasks?

Page 17: A Mathematical Exploration of Language Models

Outline

• Language modeling: Cross-entropy and Softmax-parametrized LMs

• Downstream tasks: Sentence completion reformulation

• Formal guarantees: ε-optimal LM ⇒ 𝒪(√ε)-good on task

• Extensions, discussions and future work

Page 18: A Mathematical Exploration of Language Models

Outline

• Language modeling: Cross-entropy and Softmax-parametrized LMs

• Downstream tasks: Sentence completion reformulation

• Formal guarantees: ε-optimal LM ⇒ 𝒪(√ε)-good on task

• Extensions, discussions and future work

Page 19: A Mathematical Exploration of Language Models

Language Modeling: Cross-entropy

Language Model: Context s = “I went to the café and ordered a” → Predicted dist. p_{·|s} ∈ ℝ^V

[Figure: bar charts comparing the predicted distribution p_{·|s} with the true distribution p*_{·|s} ∈ ℝ^V]

ℓ_xent({p_{·|s}}) = 𝔼_{(s,w)}[−log p_{·|s}(w)]

Optimal solution: the minimizer of ℓ_xent({p_{·|s}}) is p_{·|s} = p*_{·|s}

Proof: Can rewrite ℓ_xent({p_{·|s}}) = 𝔼_s[KL(p*_{·|s}, p_{·|s})] + C, where the constant C does not depend on the model

Training samples (s, w) are drawn from the true distribution. What does the best language model (the minimizer of cross-entropy) learn?
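A small numerical check of this claim on randomly generated toy distributions (an illustrative sketch, not code from the paper): the cross-entropy equals 𝔼_s[KL(p*_{·|s}, p_{·|s})] plus a constant, so it is minimized exactly at p_{·|s} = p*_{·|s}.

```python
# Sketch: verify l_xent({p_s}) = E_s[ KL(p*_s, p_s) ] + C, where C = E_s[ H(p*_s) ]
# does not depend on the model, and hence that p_s = p*_s is the minimizer.
import numpy as np

rng = np.random.default_rng(0)
V, n_contexts = 5, 3
p_true = rng.dirichlet(np.ones(V), size=n_contexts)   # p*_{.|s} for each context s
p_model = rng.dirichlet(np.ones(V), size=n_contexts)  # a candidate model p_{.|s}

def xent(p_star, p):   # E_s E_{w ~ p*_s}[ -log p_s(w) ]
    return np.mean(np.sum(-p_star * np.log(p), axis=1))

def kl(p_star, p):     # E_s[ KL(p*_s, p_s) ]
    return np.mean(np.sum(p_star * np.log(p_star / p), axis=1))

C = xent(p_true, p_true)                               # E_s[ H(p*_s) ]
assert np.isclose(xent(p_true, p_model), kl(p_true, p_model) + C)
assert xent(p_true, p_model) >= xent(p_true, p_true)   # p* minimizes cross-entropy
print("decomposition and optimality verified")
```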

Page 20: A Mathematical Exploration of Language Models

Language Modeling: Softmax

Language Model: Context s → features f(s) ∈ ℝ^d → softmax over Φ^⊤ f(s), with word embeddings Φ ∈ ℝ^{d×V} → Softmax dist. p_{f(s)} ∈ ℝ^V, compared against the true dist. p*_{·|s} ∈ ℝ^V

Objective: min_{f,Φ} ℓ_xent({p_{f(s)}})

Can we still learn p_{f(s)} = p*_{·|s} exactly when d < V?

Optimal solution: For fixed Φ, an f* that minimizes ℓ_xent satisfies Φ p_{f*(s)} = Φ p*_{·|s}

Proof: Use the first-order condition (gradient = 0): ∇_{f(s)} KL(p*_{·|s}, p_{f(s)}) = Φ p_{f(s)} − Φ p*_{·|s}

Only guaranteed to learn p*_{·|s} on the d-dimensional subspace spanned by Φ

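Below is an illustrative sketch of this first-order condition when d < V: for a single context, a random fixed Φ, and plain gradient descent on f, the optimum matches Φ p_{f(s)} to Φ p*_{·|s} even though p_{f(s)} itself generally differs from p*_{·|s}. The dimensions, step size, and random Φ are arbitrary choices here, not the paper's setup.

```python
# Sketch: softmax LM with fixed word embeddings Phi (d x V), d < V, for one context.
# Minimizing KL(p*, softmax(Phi^T f)) over f drives Phi p_f -> Phi p*, but not p_f -> p*.
import numpy as np

rng = np.random.default_rng(1)
d, V = 3, 10
Phi = rng.normal(size=(d, V))           # word embeddings (column phi_w per word)
p_star = rng.dirichlet(np.ones(V))      # true next-word distribution p*_{.|s}

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

f = np.zeros(d)
for _ in range(20000):                  # gradient descent on the feature vector f
    p_f = softmax(Phi.T @ f)
    grad = Phi @ p_f - Phi @ p_star     # grad_f KL(p*_{.|s}, p_{f(s)})
    f -= 0.05 * grad

p_f = softmax(Phi.T @ f)
print("||Phi p_f - Phi p*|| =", np.linalg.norm(Phi @ p_f - Phi @ p_star))  # ~ 0
print("||p_f - p*||         =", np.linalg.norm(p_f - p_star))              # not ~ 0
```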

Page 21: A Mathematical Exploration of Language Models

Language models trained with cross-entropy aim to learn p*_{·|s}

Softmax LMs with word embeddings Φ can only be guaranteed to learn Φ p*_{·|s}

Page 22: A Mathematical Exploration of Language Models

Outline

• Language modeling: Cross-entropy and Softmax-parametrized LMs

• Downstream tasks: Sentence completion reformulation

• Formal guarantees: ε-optimal LM ⇒ 𝒪(√ε)-good on task

• Extensions, discussions and future work

Page 23: A Mathematical Exploration of Language Models

Classification task → Sentence completion

• Binary classification task 𝒯, e.g. {(“I would recommend this movie.”, +1), …, (“It was an utter waste of time.”, −1)}

• Language models aim to learn p*_{·|s} (or its projection onto a subspace). Can p*_{·|s} even help solve 𝒯?

“I would recommend this movie. ___”

Complete the sentence: a positive completion (☺) should be more likely than a negative one (☹), i.e.

p*_{·|s}(☺) − p*_{·|s}(☹) > 0

Equivalently, with v = [+1, …, −1, …, 0]^⊤ (entry +1 for ☺, −1 for ☹, 0 for irrelevant words like “The”) applied to the vector of probabilities (p*_{·|s}(☺), …, p*_{·|s}(☹), …, p*_{·|s}(“The”), …):

v^⊤ p*_{·|s} > 0   (a linear classifier over p*_{·|s})

Page 24: A Mathematical Exploration of Language Models

Classification task → Sentence completion

“I would recommend this movie. ___”

Look at the entire next-word distribution p*_{·|s}, with entries p*_{·|s}(☺), …, p*_{·|s}(☹), …, p*_{·|s}(“The”), …

Page 25: A Mathematical Exploration of Language Models

Classification task → Sentence completion

Prompt: append “This movie was”, giving “I would recommend this movie. This movie was ___”

Now many sentiment-bearing words are grammatically valid completions: positive ones like p*_{·|s}(“good”), p*_{·|s}(“brilliant”), … and negative ones like p*_{·|s}(“garbage”), p*_{·|s}(“boring”), …, while neutral words like “rock” or “hello” carry no signal.

Take v with positive weights on positive words, negative weights on negative words, and 0 elsewhere, e.g. v = [0, 2, 4, …, −3, −2, 0]^⊤, and classify with

v^⊤ p*_{·|s} > 0

Allows for a larger set of words that are grammatically correct completions

Extendable to other classification tasks (e.g., topic classification)
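A minimal sketch of this zero-shot sentence-completion classifier, using a pretrained GPT-2 from the HuggingFace transformers library. The prompt and the positive/negative word lists are illustrative choices made here, not necessarily the ones used in the paper.

```python
# Sketch: classify a review by comparing next-word probabilities of positive vs.
# negative completions after an appended prompt, i.e. the sign of v^T p_{f(s)}.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

positive = [" good", " great", " brilliant"]   # leading spaces match GPT-2's BPE tokens
negative = [" bad", " boring", " garbage"]

def next_word_probs(context):
    """Return the LM's next-word distribution p_{f(s)} for the given context."""
    ids = tokenizer.encode(context, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits[0, -1]      # logits for the next token
    return torch.softmax(logits, dim=-1)

def classify(sentence, prompt=" This movie is"):
    p = next_word_probs(sentence + prompt)
    score = sum(p[tokenizer.encode(w)[0]] for w in positive) \
          - sum(p[tokenizer.encode(w)[0]] for w in negative)
    return "Positive" if score > 0 else "Negative"

print(classify("I would recommend this movie."))
print(classify("It was an utter waste of time."))
```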

Page 26: A Mathematical Exploration of Language Models

Experimental verification

• Verify the sentence completion intuition (p*_{·|s} can solve a task)
• Task: SST, a movie review sentiment classification task
• Learn a linear classifier on a subset of words of p_{f(s)} from a pretrained LM*

With prompt (for SST*): “This movie is”.   *Pretrained LM: GPT-2 (117M parameters).

Accuracy (%) of linear classifiers on different features. f_rand(s): features from a randomly initialized LM; bag-of-words: non-LM baseline.

Task | k | p_{f(s)} (k words) | p_{f(s)} (~20 words) | f(s) (768 dim) | f_rand(s) (768 dim) | Bag-of-words
SST  | 2 | 76.4 | 78.2 | 87.6 | 58.1 | 80.7
SST* | 2 | 79.4 | 83.5 | 89.5 | 56.7 | -

The “k words” features use the probability of one completion per class (e.g. p_{f(s)}(☺), p_{f(s)}(☹)); the “~20 words” features use the probabilities of sentiment words such as “good”, “great”, …, “boring”, “bad”.
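A sketch of this kind of experiment pipeline: use the LM's next-word probabilities restricted to a small list of sentiment words (after the prompt) as features, and fit a linear classifier on a few labeled examples. The word list, prompt, and tiny training set below are illustrative stand-ins, not the paper's exact SST setup.

```python
# Sketch: features = p_{f(s)} restricted to a handful of sentiment words; classifier
# = logistic regression on those features (the linear classifier from the slides).
import torch
from sklearn.linear_model import LogisticRegression
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

words = [" good", " great", " fun", " bad", " boring", " terrible"]  # chosen subset
word_ids = [tokenizer.encode(w)[0] for w in words]

def features(sentence, prompt=" This movie is"):
    ids = tokenizer.encode(sentence + prompt, return_tensors="pt")
    with torch.no_grad():
        p = torch.softmax(model(ids).logits[0, -1], dim=-1)
    return p[word_ids].numpy()          # p_{f(s)} restricted to the chosen words

train = [("I would recommend this movie.", 1),
         ("A brilliant, moving film.", 1),
         ("It was an utter waste of time.", 0),
         ("Dull plot and terrible acting.", 0)]
X = [features(s) for s, _ in train]
y = [label for _, label in train]

clf = LogisticRegression().fit(X, y)    # the linear classifier on top of p_{f(s)}
print(clf.predict([features("One of the best movies I have seen.")]))
```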

Page 27: A Mathematical Exploration of Language Models

Classification tasks can be rephrased as sentence completion problems

This is the same as solving the task using a linear classifier on p*_{·|s}, i.e. v^⊤ p*_{·|s} > 0

Page 28: A Mathematical Exploration of Language Models

Outline

• Language modeling: Cross-entropy and Softmax-parametrized LMs

• Downstream tasks: Sentence completion reformulation

• Formal guarantees: ε-optimal LM ⇒ 𝒪(√ε)-good on task

• Extensions, discussions and future work

Page 29: A Mathematical Exploration of Language Models

Natural task

𝒯 is a τ-natural task if min_v ℓ_𝒯({p*_{·|s}}, v) ≤ τ

𝜏 captures how ”natural” (amenable to sentence completion reformulation) the classification task is

Sentence completion reformulation ⇒ can solve using v^⊤ p*_{·|s} > 0

For any D-dim feature map g and classifier v:  ℓ_𝒯({g(s)}, v) = 𝔼_{(s,y)}[logistic-loss(v^⊤ g(s), y)]

(Here g is any feature map from sentences to ℝ^D, and (s, y) are labeled examples, e.g. “I would recommend this movie.” → “Positive”, “It was an utter waste of time.” → “Negative”.)
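A toy sketch of these definitions (arrays made up for illustration; the classifier v is left unconstrained here for simplicity): the logistic loss ℓ_𝒯 of a linear classifier over a feature map, and τ obtained by minimizing that loss when the features are the true next-word distributions p*_{·|s}.

```python
# Sketch: l_T({g(s)}, v) = E_{(s,y)}[ log(1 + exp(-y * v^T g(s))) ] with y in {-1,+1},
# and tau = min_v l_T({p*_{.|s}}, v) as a measure of how "natural" the task is.
import numpy as np
from scipy.optimize import minimize

def task_loss(G, y, v):
    margins = y * (G @ v)
    return np.mean(np.log1p(np.exp(-margins)))

# Toy "true" next-word distributions for 4 labeled contexts over a 3-word vocabulary.
P_star = np.array([[0.6, 0.3, 0.1],
                   [0.5, 0.4, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.1, 0.8, 0.1]])
y = np.array([+1, +1, -1, -1])

res = minimize(lambda v: task_loss(P_star, y, v), x0=np.zeros(3))
print("tau (approx):", res.fun)
```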

Page 30: A Mathematical Exploration of Language Models

Main Result

Assumptions:

1. f is an LM that is ε-optimal in cross-entropy (does well on next word prediction), where ε = ℓ_xent({p_{f(s)}}) − ℓ_xent({p*_{·|s}})

2. 𝒯 is a τ-natural task (fits the sentence completion view)

3. The word embeddings Φ are nice (assign similar embeddings to synonyms)

Then the logistic regression loss of the d-dimensional features Φ p_{f(s)} satisfies

ℓ_𝒯(Φ p_{f(s)}) ≤ τ + 𝒪(√ε)

where τ reflects the naturalness of the task (sentence completion view) and the 𝒪(√ε) term is the loss due to the suboptimality of the LM. This answers why language models that do well on the cross-entropy objective learn features that are useful for linear classification tasks.

Page 31: A Mathematical Exploration of Language Models

Main Result: closer look

Why can language models that do well on cross-entropy objective learn features that are useful for linear classification tasks?

Guarantees for LM 𝑓 that is 𝜖-optimal in cross-entropy

Use the output probabilities Φ p_{f(s)} as d-dimensional features

Upper bound on logistic regression loss for natural classification tasks

ℓ_𝒯(Φ p_{f(s)}) ≤ τ + 𝒪(√ε)

Page 32: A Mathematical Exploration of Language Models

Conditional mean features: Φ p_{f(s)} = Σ_w p_{f(s)}(w) φ_w, a weighted average of word embeddings.

Accuracy (%) of linear classifiers on different features (AG News*/SST* = with prompt):

Task     | k | p_{f(s)} (k words) | p_{f(s)} (~20 words) | Φ p_{f(s)} (768 dim) | f(s) (768 dim)
AG News  | 4 | 68.4 | 78.3 | 84.5 | 90.7
AG News* | 4 | 71.4 | 83.0 | 88.0 | 91.1
SST      | 2 | 76.4 | 78.2 | 82.6 | 87.6
SST*     | 2 | 79.4 | 83.5 | 87.0 | 89.5

[Plot: downstream task loss ℓ_𝒯(Φ p_{f(s)}) vs. cross-entropy loss ℓ_xent({p_{f(s)}}) across language models. Observing the ε dependence in practice]

New way to extract 𝑑-dimensional features from LM

ℓ_𝒯(Φ p_{f(s)}) ≤ τ + 𝒪(√ε)
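A sketch of computing the conditional mean features Φ p_{f(s)} with GPT-2 from the HuggingFace transformers library, taking the model's (tied) token embedding table as Φ; this is an illustrative choice of Φ and f, not necessarily the paper's exact extraction code.

```python
# Sketch: conditional mean features Phi p_{f(s)} = sum_w p_{f(s)}(w) * phi_w, i.e. the
# probability-weighted average of word embeddings, computed from GPT-2's outputs.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def conditional_mean_features(sentence):
    ids = tokenizer.encode(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model.transformer(ids).last_hidden_state[0, -1]   # f(s), 768-dim
        p = torch.softmax(model.lm_head(hidden), dim=-1)           # p_{f(s)}, V-dim
        Phi = model.transformer.wte.weight                         # V x 768 embedding table
        return p @ Phi                                             # Phi p_{f(s)}, 768-dim

feat = conditional_mean_features("I went to the cafe and ordered a")
print(feat.shape)   # torch.Size([768])
```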

Page 33: A Mathematical Exploration of Language Models

Main take-aways

• Classification tasks → Sentence completion → Solve using v^⊤ p*_{·|s} > 0

• An ε-optimal language model will do 𝒪(√ε) well on such tasks

• Softmax models can hope to learn Φ p*_{·|s}
  • Good to assign similar embeddings to synonyms

• Conditional mean features: Φ p_{f(s)}
  • Mathematically motivated way to extract d-dimensional features from LMs

Page 34: A Mathematical Exploration of Language Models

More in paper

• Connection between f(s) and Φ p_{f(s)}

• Use insights to design new objective, alternative to cross-entropy

• Detailed bounds capture other intuitions

Page 35: A Mathematical Exploration of Language Models

Future work

• Understand why f(s) does better than Φ p_{f(s)} in practice

• Bidirectional and masked language models (BERT and variants)
  • Theory applies when just one token is masked

• Diverse set of NLP tasks
  • Does sentence completion view extend? Other insights?

• Role of finetuning, inductive biases
  • Needs more empirical exploration

• Self-supervised learning

Page 36: A Mathematical Exploration of Language Models

Thank you!

• Happy to take questions

• Feel free to email: [email protected]

• ArXiv: https://arxiv.org/abs/2010.03648
