
  • 1

    4. Boolean and Vector Space Retrieval Models

    Many slides in this section are adapted from Prof. Joydeep Ghosh (UT ECE), who in turn adapted them from Prof. Dik Lee (Univ. of Science and Tech, Hong Kong).

    These notes are based, in part, on notes by Dr. Raymond J. Mooney at the University of Texas at Austin.

  • 2

    Retrieval Models

    • A retrieval model specifies the details of:
      – Document representation
      – Query representation
      – Retrieval function
    • Determines a notion of relevance.
    • The notion of relevance can be binary or continuous (i.e. ranked retrieval).

  • 3

    Classes of Retrieval Models

    1. Boolean models (set theoretic)
    2. Vector space models (statistical/algebraic)
       – Latent Semantic Indexing
    3. Probabilistic models

  • 4

    Other Model Dimensions

    • Logical view of documents
      – Index terms
      – Full text
      – Full text + structure (e.g. hypertext)
    • User task
      – Retrieval
      – Browsing

  • 5

    Common Preprocessing Steps

    • Strip unwanted characters/markup (e.g. HTML tags, punctuation, numbers, etc.).
    • Break into tokens (keywords) on whitespace.
    • Stem tokens to “root” words
      – computational → comput
    • Remove common stopwords (e.g. a, the, it, etc.).
    • Detect common phrases (possibly using a domain-specific dictionary).
    • Build inverted index (keyword → list of docs containing it); a minimal sketch of the whole pipeline follows this list.
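    As a rough illustration only (a sketch, not any particular system’s pipeline), the steps above can be wired together in a few lines of Python; the tiny stopword list and the suffix-stripping crude_stem below are hypothetical stand-ins for real components such as a Porter stemmer:

    import re
    from collections import defaultdict

    STOPWORDS = {"a", "the", "it", "is", "of", "and"}  # toy list; real systems use far larger ones

    def crude_stem(token):
        # Hypothetical stand-in for a real stemmer (e.g. Porter): strip a few common suffixes.
        for suffix in ("ational", "ation", "ing", "s"):
            if token.endswith(suffix) and len(token) > len(suffix) + 2:
                return token[: -len(suffix)]
        return token

    def preprocess(text):
        text = re.sub(r"<[^>]+>", " ", text)           # strip HTML-like markup
        text = re.sub(r"[^a-z\s]", " ", text.lower())  # drop punctuation and numbers
        return [crude_stem(t) for t in text.split() if t not in STOPWORDS]

    def build_inverted_index(docs):
        index = defaultdict(set)  # keyword -> set of ids of docs containing it
        for doc_id, text in docs.items():
            for term in preprocess(text):
                index[term].add(doc_id)
        return index

    docs = {1: "Computational methods.", 2: "The methods of computing."}
    print(dict(build_inverted_index(docs)))  # {'comput': {1, 2}, 'method': {1, 2}}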

  • 6

    1. Boolean Model

    • A document is represented as a set of keywords.
    • Queries are Boolean expressions of keywords, connected by AND, OR, and NOT, including the use of brackets to indicate scope.
      – “Brutus AND Caesar AND NOT Calpurnia”
    • Output: a document is relevant or not. No partial matches or ranking.
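    As a minimal sketch (the toy postings below are invented for illustration), the sample query can be evaluated directly with set operations over an inverted index:

    # Toy inverted index: term -> set of ids of documents containing it.
    index = {
        "brutus":    {1, 2, 4},
        "caesar":    {1, 2, 3, 5},
        "calpurnia": {2},
    }

    # "Brutus AND Caesar AND NOT Calpurnia" as set intersection and difference.
    hits = (index["brutus"] & index["caesar"]) - index["calpurnia"]
    print(sorted(hits))  # [1] -- each document either matches or it doesn't; no ranking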

  • 7

    Boolean Model - Pros

    • Popular retrieval model because:
      – Easy to understand for simple queries.
      – Clean formalism.
    • Primary commercial retrieval tool for over 3 decades.
    • Many professional searchers (e.g., lawyers) still like Boolean queries.
      – You know exactly what you are getting.
    • Boolean models can be extended to include ranking (e.g. chronological order).
    • Reasonably efficient implementations are possible for normal queries (via query optimization).

  • 8

    Boolean Model - Cons

    • Very rigid: AND means all; OR means any.
    • Difficult to express complex user requests.
    • Difficult to control the number of documents retrieved.
      – All matched documents will be returned.
    • Difficult to rank output.
      – All matched documents logically satisfy the query.
    • Difficult to perform relevance feedback.
      – If a document is identified by the user as relevant or irrelevant, how should the query be modified?

  • 9

    • e.g. (Cat OR Dog) AND (Collar OR Leash)
      – Each of the following combinations works, i.e. every combination of the four terms that contains at least one of {Cat, Dog} and at least one of {Collar, Leash} satisfies the query:

    [Table: one column per matching combination, with an “x” marking which of Cat, Dog, Collar, and Leash each combination contains.]

  • 10

  • 11

    Statistical Models

    • A document is typically represented by a bag of words (unordered words with frequencies).
    • Bag = set that allows multiple occurrences of the same element.
    • User specifies a set of desired terms with optional weights:
      – Weighted query terms:
        Q = < database 0.5; text 0.8; information 0.2 >
      – Unweighted query terms:
        Q = < database; text; information >
      – No Boolean conditions specified in the query.

  • 12

    Statistical Retrieval

    • Retrieval based on similarity between query and documents.
    • Output documents are ranked according to similarity to the query.
    • Similarity is based on occurrence frequencies of keywords in the query and the document.
    • Automatic relevance feedback can be supported:
      – Relevant documents “added” to the query.
      – Irrelevant documents “subtracted” from the query.

  • 13

    2. Vector Space Model

    • Vocabulary V = the set of terms left after preprocessing the text (tokenization, stop-word removal, stemming, ...).
    • Each document or query is represented as a |V| = n dimensional vector:
      – dj = [w1j, w2j, ..., wnj]
      – wij is the weight of term i in document j.
      ⇒ The terms in V form the orthogonal dimensions of a vector space.
    • Document = bag of words:
      – The vector representation doesn’t consider the ordering of words:
        “John is quicker than Mary” vs. “Mary is quicker than John” (see the sketch below).
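    To see the bag-of-words point concretely (a minimal sketch), both sentences map to exactly the same vector of counts:

    from collections import Counter

    def bag_of_words(text):
        # Unordered multiset of tokens: word order is discarded, counts are kept.
        return Counter(text.lower().split())

    print(bag_of_words("John is quicker than Mary") ==
          bag_of_words("Mary is quicker than John"))  # True: identical representations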

  • 14

    Document Collection

    • A collection of n documents can be represented in the vector space model by a term-document matrix.
    • An entry in the matrix corresponds to the “weight” of a term in the document; zero means the term has no significance in the document, or simply doesn’t occur in it.

            T1    T2    …    Tt
      D1    w11   w21   …    wt1
      D2    w12   w22   …    wt2
      :     :     :          :
      Dn    w1n   w2n   …    wtn

  • 15

    Document Vectors and Indexes

    • Conceptually, the index can be viewed as a document-term matrix:
      – Each document is represented as an n-dimensional vector (n = number of terms in the dictionary).
      – Term weights represent the scalar value of each dimension in a document.
      – The inverted file structure is an “implementation model” used in practice to store the information captured in this conceptual representation.

    [Example matrix: rows are document ids A–I, each row a document vector; columns are the dictionary terms nova, galaxy, heat, hollywood, film, role, diet, fur; entries are normalized term weights, e.g. A = (1.0, 0.5, 0.3, 0, 0, 0, 0, 0).]

  • 16

    Issues for Vector Space Model

    • How to determine important words in a document?
      – Word sense?
      – Word n-grams (and phrases, idioms, …) → terms
    • How to determine the degree of importance of a term within a document and within the entire collection?
    • How to determine the degree of similarity between a document and the query?
    • In the case of the web, what is the collection, and what are the effects of links, formatting information, etc.?

  • 17

    The Vector-Space Model

    • Assume t distinct terms remain after preprocessing; call them index terms or the vocabulary.
    • These “orthogonal” terms form a vector space.
      Dimensionality = t = |vocabulary|
    • Each term, i, in a document or query, j, is given a real-valued weight, wij.
    • Both documents and queries are expressed as t-dimensional vectors:
      dj = (w1j, w2j, …, wtj)

  • 18

    Graphic Representation

    Example:
      D1 = 2T1 + 3T2 + 5T3
      D2 = 3T1 + 7T2 + T3
      Q  = 0T1 + 0T2 + 2T3

    [3-D plot of D1, D2, and Q on the axes T1, T2, T3.]

    • Is D1 or D2 more similar to Q?
    • How to measure the degree of similarity? Distance? Angle? Projection?

  • 19

    Term Weights: Term Frequency

    • More frequent terms in a document are more important, i.e. more indicative of the topic.

      fij = frequency of term i in document j

    • May want to normalize term frequency (tf) by dividing by the frequency of the most common term in the document:

      tfij = fij / maxi{fij}

  • 20

  • 21

    Term Weights: Inverse Document Frequency

    • Terms that appear in many different documents are less indicative of the overall topic.

      dfi  = document frequency of term i
           = number of documents containing term i
      idfi = inverse document frequency of term i
           = log2(N / dfi)   (N: total number of documents)

    • An indication of a term’s discrimination power.
    • The log is used to dampen the effect relative to tf.

  • 22

    Inverse Document Frequency

    • IDF provides high values for rare words and low values for common words.
    • Examples for a collection of 10,000 documents (log10 is used in the examples):

      log(10000 / 1)     = 4
      log(10000 / 20)    = 2.698
      log(10000 / 5000)  = 0.301
      log(10000 / 10000) = 0

  • 23

    TF-IDF Weighting

    • A typical combined term importance indicator is tf-idf weighting:

      wij = tfij · idfi = tfij · log2(N / dfi)

    • A term occurring frequently in the document but rarely in the rest of the collection is given high weight.
    • Many other ways of determining term weights have been proposed.
    • Experimentally, tf-idf has been found to work well.

  • 24

    Computing TF-IDF -- An Example

    Given a document containing terms with the given frequencies:
      A(3), B(2), C(1)
    Assume the collection contains 10,000 documents, and the document frequencies of these terms are:
      A(50), B(1300), C(250)
    Then:
      A: tf = 3; idf = log2(10000/50)   = 7.6; tf-idf = 3 × 7.6 = 22.8
      B: tf = 2; idf = log2(10000/1300) = 2.9; tf-idf = 2 × 2.9 = 5.8
      C: tf = 1; idf = log2(10000/250)  = 5.3; tf-idf = 1 × 5.3 = 5.3

    NOTE: tf values above corrected on 1/23/2019.
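    A minimal sketch reproducing these numbers (term names, frequencies, and document frequencies are taken from the slide; the idf is rounded to one decimal before multiplying, mirroring the slide’s arithmetic):

    import math

    N = 10_000  # total number of documents in the collection

    # term -> (frequency in the document, document frequency in the collection)
    terms = {"A": (3, 50), "B": (2, 1300), "C": (1, 250)}

    for term, (tf, df) in terms.items():
        idf = round(math.log2(N / df), 1)  # idf_i = log2(N / df_i)
        print(f"{term}: tf = {tf}; idf = {idf}; tf-idf = {tf * idf:.1f}")
    # A: tf = 3; idf = 7.6; tf-idf = 22.8
    # B: tf = 2; idf = 2.9; tf-idf = 5.8
    # C: tf = 1; idf = 5.3; tf-idf = 5.3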

  • 25

    Another tf x idf Example

    The initial Term x Doc matrix (inverted index) — raw term frequencies, with df and idf (N = 6):

            Doc 1  Doc 2  Doc 3  Doc 4  Doc 5  Doc 6   df   idf = log2(N/df)
      T1      0      2      4      0      1      0      3      1.00
      T2      1      3      0      0      0      2      3      1.00
      T3      0      1      0      2      0      0      2      1.58
      T4      3      0      1      5      4      0      4      0.58
      T5      0      4      0      0      0      1      2      1.58
      T6      2      7      2      1      3      0      5      0.26
      T7      1      0      0      5      5      1      4      0.58
      T8      0      1      1      0      0      3      3      1.00

    The tf x idf Term x Doc matrix — documents represented as vectors of words:

            Doc 1  Doc 2  Doc 3  Doc 4  Doc 5  Doc 6
      T1    0.00   2.00   4.00   0.00   1.00   0.00
      T2    1.00   3.00   0.00   0.00   0.00   2.00
      T3    0.00   1.58   0.00   3.17   0.00   0.00
      T4    1.75   0.00   0.58   2.92   2.34   0.00
      T5    0.00   6.34   0.00   0.00   0.00   1.58
      T6    0.53   1.84   0.53   0.26   0.79   0.00
      T7    0.58   0.00   0.00   2.92   2.92   0.58
      T8    0.00   1.00   1.00   0.00   0.00   3.00

  • 26

    tf x idf Normalization

    • Normalize the term weights (so longer documents are not unfairly given more weight):
      – “Normalize” usually means forcing all values to fall within a certain range, usually between 0 and 1, inclusive.
      – This is more ad hoc than normalization based on vector norms, but the basic idea is the same (see the sketch below):

      w_{ik} = \frac{tf_{ik} \, \log(N/n_k)}{\sqrt{\sum_{k=1}^{t} (tf_{ik})^2 \, [\log(N/n_k)]^2}}

      (Here tfik is the frequency of term k in document i, and nk is the number of documents containing term k.)
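    A minimal sketch of this normalization, reusing Doc 1 from the previous slide’s example (the raw tf values and the idf column are copied from that table):

    import math

    def normalized_tfidf(tf_row, idf):
        # tf-idf weights for one document, divided by the vector's Euclidean length.
        weights = [tf * idf_k for tf, idf_k in zip(tf_row, idf)]
        norm = math.sqrt(sum(w * w for w in weights))
        return [w / norm for w in weights] if norm else weights

    tf_doc1 = [0, 1, 0, 3, 0, 2, 1, 0]                         # T1..T8 counts in Doc 1
    idf     = [1.00, 1.00, 1.58, 0.58, 1.58, 0.26, 0.58, 1.00]
    print([round(w, 2) for w in normalized_tfidf(tf_doc1, idf)])
    # [0.0, 0.46, 0.0, 0.81, 0.0, 0.24, 0.27, 0.0] -- all weights now fall in [0, 1]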

  • 27

    Alternative TF.IDF Weighting Schemes

    • Many search engines allow for different weightings for queries vs. documents.
    • A very standard weighting scheme is:
      – Document: logarithmic tf, no idf, and cosine normalization.
      – Query: logarithmic tf, idf, no normalization.
      (A sketch of this scheme follows.)
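    A rough sketch of that scheme (reading “logarithmic tf” as 1 + log10(tf) for tf > 0, which is an assumption; the slide does not spell out the exact form or the log base):

    import math

    def log_tf(tf):
        # Logarithmic term frequency (assumed form): 1 + log10(tf) for tf > 0, else 0.
        return 1 + math.log10(tf) if tf > 0 else 0.0

    def doc_weights(tfs):
        # Document side: logarithmic tf, no idf, cosine normalization.
        w = [log_tf(tf) for tf in tfs]
        norm = math.sqrt(sum(x * x for x in w))
        return [x / norm for x in w] if norm else w

    def query_weights(tfs, idf):
        # Query side: logarithmic tf times idf, no normalization.
        return [log_tf(tf) * i for tf, i in zip(tfs, idf)]

    The retrieval score is then the inner product of doc_weights(...) and query_weights(...).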

  • 28

    Query Vector

    • The query vector is typically treated as a document and also tf-idf weighted.
    • An alternative is for the user to supply weights for the given query terms.

  • 29

  • 30

  • 31

    Similarity Measure

    • A similarity measure is a function that computes the degree of similarity between two vectors.
    • Using a similarity measure between the query and each document:
      – It is possible to rank the retrieved documents in the order of presumed relevance.
      – It is possible to enforce a certain threshold so that the size of the retrieved set can be controlled.

  • 32

    Similarity Measure - Inner Product

    • Similarity between the vector for document dj and query q can be computed as the vector inner product (a.k.a. dot product):

      sim(d_j, q) = d_j \cdot q = \sum_{i=1}^{t} w_{ij} \, w_{iq}

      where wij is the weight of term i in document j, and wiq is the weight of term i in the query.

    • For binary vectors, the inner product is the number of matched query terms in the document (the size of the intersection).
    • For weighted term vectors, it is the sum of the products of the weights of the matched terms.

  • 33

    Properties of Inner Product

    • The inner product is unbounded.

    • Favors long documents with a large number of unique terms.

    • Measures how many terms matched but not how many terms are not matched.

  • 34

    Inner Product -- Examples

    Binary:
      D = (1, 1, 1, 0, 1, 1, 0)
      Q = (1, 0, 1, 0, 0, 1, 1)
      sim(D, Q) = 3

      (Size of vector = size of vocabulary = 7; a 0 means the corresponding term is not found in the document or query.)

    Weighted:
      D1 = 2T1 + 3T2 + 5T3
      D2 = 3T1 + 7T2 + 1T3
      Q  = 0T1 + 0T2 + 2T3

      sim(D1, Q) = 2×0 + 3×0 + 5×2 = 10
      sim(D2, Q) = 3×0 + 7×0 + 1×2 = 2
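    Both computations as a minimal sketch:

    def inner_product(d, q):
        # Sum of products of matching term weights; for 0/1 vectors this counts matched terms.
        return sum(wd * wq for wd, wq in zip(d, q))

    # Binary example
    print(inner_product([1, 1, 1, 0, 1, 1, 0], [1, 0, 1, 0, 0, 1, 1]))  # 3

    # Weighted example: weights of (T1, T2, T3)
    print(inner_product([2, 3, 5], [0, 0, 2]))  # sim(D1, Q) = 10
    print(inner_product([3, 7, 1], [0, 0, 2]))  # sim(D2, Q) = 2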

  • 35

    Cosine Similarity Measure

    • Cosine similarity measures the cosine of the angle between two vectors.
    • It is the inner product normalized by the vector lengths:

      CosSim(d_j, q) = \frac{d_j \cdot q}{|d_j| \cdot |q|} = \frac{\sum_{i=1}^{t} w_{ij} \, w_{iq}}{\sqrt{\sum_{i=1}^{t} w_{ij}^2} \; \sqrt{\sum_{i=1}^{t} w_{iq}^2}}

    [2-D sketch of the vectors D1, D2, and Q, with θ1 and θ2 the angles between the query and the two document vectors.]

      D1 = 2T1 + 3T2 + 5T3    CosSim(D1, Q) = 10 / √((4+9+25)·(0+0+4)) = 0.81
      D2 = 3T1 + 7T2 + 1T3    CosSim(D2, Q) = 2 / √((9+49+1)·(0+0+4)) = 0.13
      Q  = 0T1 + 0T2 + 2T3

    • D1 is 6 times better than D2 using cosine similarity, but only 5 times better using the inner product.
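    And the cosine variant, reproducing the numbers above (a minimal sketch):

    import math

    def cos_sim(d, q):
        # Inner product normalized by the Euclidean lengths of the two vectors.
        dot = sum(wd * wq for wd, wq in zip(d, q))
        return dot / (math.sqrt(sum(w * w for w in d)) * math.sqrt(sum(w * w for w in q)))

    D1, D2, Q = [2, 3, 5], [3, 7, 1], [0, 0, 2]
    print(round(cos_sim(D1, Q), 2))  # 0.81
    print(round(cos_sim(D2, Q), 2))  # 0.13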
