
Anatomy of Google (circa 1999)

Slides from http://www.cs.huji.ac.il/~sdbi/2000/google/index.htm

Project part B due a month from now (10/26)

Some points…

• Fancy hits?
• Why two types of barrels?
• How is indexing parallelized?
• How does Google show that it doesn’t quite care about recall?
• How does Google avoid crawling the same URL multiple times?
• What are some of the memory-saving things they do?
• Do they use TF/IDF?
• Do they normalize? (why not?)
• Can they support proximity queries?
• How are “page synopses” made?

Challenges in Web Search Engines

• Spam
  – Text spam
  – Link spam
  – Cloaking
• Content Quality
  – Anchor text quality
• Quality Evaluation
  – Indirect feedback
• Web Conventions
  – Articulate and develop validation
• Duplicate Hosts
  – Mirror detection
• Vaguely Structured Data
  – Page layout
  – The advantage of making the rendering/content language be the same

Information from searchenginewatch.com

Number of indexed pages, self-reported. Google: 50% of the web?

Search Engine Size over Time

The “Google” paper

Discusses Google’s architecture circa 1999

Google Search Engine Architecture

SOURCE: BRIN & PAGE

URL Server - provides URLs to be fetched
Crawler - distributed
Store Server - compresses and stores pages for indexing
Repository - holds pages for indexing (full HTML of every page)
Indexer - parses documents; records words, positions, font size, and capitalization
Lexicon - list of unique words found
HitList - efficient record of word locations + attributes
Barrels - hold (docID, (wordID, hitList*)*)*, sorted: each barrel has a range of words
Anchors - keep information about links found in web pages
URL Resolver - converts relative URLs to absolute (sketch below)
Sorter - generates the Doc Index
Doc Index - inverted index of all words in all documents (except stopwords)
Links - stores info about links to each page (used for PageRank)
PageRank - computes a rank for each page retrieved
Searcher - answers queries
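The URL Resolver step above (relative URL to absolute URL) maps directly onto the standard library; a minimal Python sketch, assuming the page URL and the extracted hrefs are already available (function names are illustrative, not Google's code):

# Sketch of the URL Resolver step: turn relative links into absolute URLs.
from urllib.parse import urljoin

def resolve_urls(base_url, hrefs):
    """Convert relative hrefs extracted from a page into absolute URLs."""
    return [urljoin(base_url, href) for href in hrefs]

# Example: links found on http://example.com/a/index.html
print(resolve_urls("http://example.com/a/index.html",
                   ["page2.html", "/top.html", "http://other.org/x"]))
# -> ['http://example.com/a/page2.html', 'http://example.com/top.html',
#     'http://other.org/x']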

Major Data Structures

• Big Files
  – virtual files spanning multiple file systems
  – addressable by 64-bit integers
  – handle allocation & deallocation of file descriptors, since the OS does not provide enough
  – support rudimentary compression

Major Data Structures (2)

• Repository
  – tradeoff between speed & compression ratio
  – chose zlib (3 to 1) over bzip (4 to 1)
  – requires no other data structure to access it

Major Data Structures (3)

• Document Index
  – keeps information about each document
  – fixed-width ISAM (Indexed Sequential Access Mode) index
  – includes various statistics
    • pointer to the repository, whether crawled, pointer to info lists
  – compact data structure
  – we can fetch a record in one disk seek during search (sketch below)
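Because the records are fixed width, the byte offset of any record is just docID times the record size, which is why one seek suffices. A minimal sketch of that idea; the 16-byte record layout is invented for illustration, not the actual Google format:

# Sketch of a fixed-width ISAM-style lookup: one seek per docID.
import struct

RECORD_FMT = "<QII"                         # repository offset, doc length, status flags (illustrative)
RECORD_SIZE = struct.calcsize(RECORD_FMT)   # fixed width -> offset = docID * size

def fetch_doc_record(index_file, doc_id):
    index_file.seek(doc_id * RECORD_SIZE)   # single disk seek
    data = index_file.read(RECORD_SIZE)
    repo_offset, doc_len, flags = struct.unpack(RECORD_FMT, data)
    return {"repo_offset": repo_offset, "doc_len": doc_len, "flags": flags}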

Major Data Structures (4)

• Lexicon
  – can fit in memory for a reasonable price
    • currently 256 MB
    • contains 14 million words
    • 2 parts: a list of words and a hash table

Major Data Structures (4)

• Hit Lists
  – include position, font & capitalization
  – account for most of the space used in the indexes
  – 3 alternatives: simple, Huffman, hand-optimized
  – hand encoding uses 2 bytes for every hit

Hit encodings (2 bytes each):
  plain:  cap: 1, font size: 3, position: 12
  fancy:  cap: 1, font size: 7, type: 4, position: 8
  anchor: cap: 1, font size: 7, type: 4, hash: 4, pos: 4
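To see how a hit fits in 2 bytes, here is a sketch that packs a "plain" hit (cap: 1, font size: 3, position: 12) into a 16-bit integer; only the bit widths come from the slide, the field order is an assumption:

# Pack/unpack a "plain" hit into 16 bits: 1 bit capitalization,
# 3 bits relative font size, 12 bits word position.
def pack_plain_hit(cap, font_size, position):
    assert 0 <= font_size < 8 and 0 <= position < 4096
    return (cap & 1) << 15 | (font_size & 0x7) << 12 | (position & 0xFFF)

def unpack_plain_hit(hit):
    return (hit >> 15) & 1, (hit >> 12) & 0x7, hit & 0xFFF

h = pack_plain_hit(cap=1, font_size=3, position=117)
print(hex(h), unpack_plain_hit(h))   # 2 bytes are enough for one hit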

Major Data Structures (4)

• Hit Lists (2)

Forward barrels (total 43 GB): each entry is a docID followed by (wordID: 24 bits, nhits: 8 bits, then the hits) for each word in the document, terminated by a null wordID.

Lexicon (293 MB): (wordID, ndocs) entries.

Inverted barrels (41 GB): for each wordID, a doclist of (docID: 27 bits, nhits: 8 bits, then the hits).

Major Data Structures (5)

• Forward Index
  – partially ordered
  – uses 64 barrels
  – each barrel holds a range of wordIDs
  – requires slightly more storage
  – each wordID is stored as a relative difference from the minimum wordID of its barrel (sketch below)
  – saves considerable time in the sorting
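A minimal sketch of the wordID trick: within a barrel, store each wordID as an offset from the barrel's minimum wordID so that fewer bits are needed. The 24-bit width matches the barrel figure above; the code itself is illustrative:

# Store wordIDs in a forward barrel relative to the barrel's minimum wordID.
def encode_barrel(word_ids, barrel_min):
    rel = [wid - barrel_min for wid in word_ids]
    assert all(0 <= r < 2**24 for r in rel)   # 24 bits per relative wordID
    return rel

def decode_barrel(rel_ids, barrel_min):
    return [rel + barrel_min for rel in rel_ids]

barrel_min = 5_000_000
rel = encode_barrel([5_000_003, 5_104_999], barrel_min)
print(rel, decode_barrel(rel, barrel_min))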

Major Data Structures (6)

• Inverted Index
  – 64 barrels (same as the Forward Index)
  – for each wordID the Lexicon contains a pointer to the barrel that the wordID falls into
  – the pointer points to a doclist of docIDs together with their hit lists
  – the order of the docIDs is important
    • by docID or by doc word-ranking
  – two sets of inverted barrels: the short barrels and the full barrels

Major Data Structures (7)

• Crawling the Web
  – fast distributed crawling system
  – URLserver & crawlers are implemented in Python
  – each crawler keeps about 300 connections open
  – at peak times the rate is about 100 pages / 600 KB per second
  – uses an internal cached DNS lookup
  – synchronized I/O to handle events
  – a number of queues
  – robust & carefully tested

Major Data Structures (8)

• Indexing the Web
  – Parsing
    • must handle errors
      – HTML typos
      – KBs of zeros in the middle of a tag
      – non-ASCII characters
      – HTML tags nested hundreds deep
    • developed their own parser
      – involved a fair amount of work
      – did not cause a bottleneck

Major Data Structures (9)

• Indexing Documents into Barrels
  – turning words into wordIDs
  – in-memory hash table: the Lexicon (sketch below)
  – new additions are logged to a file
  – parallelization
    • shared lexicon of 14 million words
    • log of all the extra words
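A minimal sketch of the word-to-wordID step under these assumptions: an in-memory dict plays the lexicon, and words not already in it are appended to a log so they can be merged in later. All names are invented for illustration:

# In-memory lexicon: word -> wordID; unseen words are logged for a later merge.
class Lexicon:
    def __init__(self, log_path="extra_words.log"):
        self.word_to_id = {}
        self.log = open(log_path, "a", encoding="utf-8")

    def word_id(self, word):
        wid = self.word_to_id.get(word)
        if wid is None:                      # new word: assign the next ID and log it
            wid = len(self.word_to_id)
            self.word_to_id[word] = wid
            self.log.write(word + "\n")
        return wid

lex = Lexicon()
print([lex.word_id(w) for w in "the quick the brown".split()])  # [0, 1, 0, 2]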

Major Data Structures (10)

• Indexing the Web
  – Sorting
    • creating the inverted index
    • produces two types of barrels
      – for titles and anchors (short barrels)
      – for full text (full barrels)
    • sorts every barrel separately
    • runs the sorters in parallel
    • the sorting is done in main memory

Ranking looks at the short barrels first, and then at the full barrels.

Searching

• Algorithm (a sketch in code follows below)
  1. Parse the query
  2. Convert words into wordIDs
  3. Seek to the start of the doclist in the short barrel for every word
  4. Scan through the doclists until there is a document that matches all of the search terms
  5. Compute the rank of that document
  6. If we're at the end of the short barrels, start on the doclists of the full barrels, unless we already have enough results
  7. If we're not at the end of any doclist, go to step 4
  8. Sort the matched documents by rank and return the top K
     • (may jump here after 40k matching pages)
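A compressed sketch of steps 3-8, assuming each barrel is just a mapping from wordID to a doclist and ranking is delegated to a rank() callback; this illustrates the control flow, not the actual implementation (duplicates across the two barrel passes are ignored for brevity):

# Sketch of the search loop: intersect doclists from the short barrels first,
# fall back to the full barrels only if too few matches are found.
def search(word_ids, short_barrel, full_barrel, rank, k=10, enough=40_000):
    matches = []
    for barrel in (short_barrel, full_barrel):          # steps 3 and 6
        doclists = [set(barrel.get(w, ())) for w in word_ids]
        if not doclists or not all(doclists):
            continue
        for doc_id in set.intersection(*doclists):      # step 4: docs matching all terms
            matches.append((rank(doc_id, word_ids), doc_id))   # step 5
            if len(matches) >= enough:                   # may stop early (40k pages)
                break
        if len(matches) >= enough:
            break
    return [d for _, d in sorted(matches, reverse=True)[:k]]    # step 8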

The Ranking System

• The information
  – position, font size, capitalization
  – anchor text
  – PageRank
• Hit types
  – title, anchor, URL, etc.
  – small font, large font, etc.

The Ranking System (2)

• Each hit type has its own weight
  – count-weights increase linearly with counts at first but quickly taper off; this is the IR score of the doc
  – (IDF weighting??)
• The IR score is combined with PageRank to give the final rank (sketch below)
• For a multi-word query
  – a proximity score for every set of hits, with a proximity-type weight
    • 10 grades of proximity
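A sketch of how such a score might be combined, assuming a simple saturating count-weight and a linear mix with PageRank; the specific functions, weights, and mixing constant are illustrative guesses, not the paper's formulas:

# Illustrative scoring: per-hit-type counts taper off, then mix with PageRank.
import math

TYPE_WEIGHTS = {"title": 8.0, "anchor": 6.0, "url": 4.0, "plain": 1.0}  # invented values

def ir_score(hit_counts):
    # counts grow roughly linearly at first, then taper off (log-style saturation)
    return sum(w * math.log1p(hit_counts.get(t, 0)) for t, w in TYPE_WEIGHTS.items())

def final_rank(hit_counts, pagerank, mix=0.5):
    return mix * ir_score(hit_counts) + (1 - mix) * pagerank

print(final_rank({"title": 1, "plain": 12}, pagerank=3.2))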

Feedback

• A trusted user may optionally evaluate the results

• The feedback is saved

• When modifying the ranking function we can see the impact of this change on all previous searches that were ranked

Results

• Produces better results than major commercial search engines for most searches

• Example: query “bill clinton”
  – returns results from whitehouse.gov
  – email addresses of the president
  – all the results are high-quality pages
  – no broken links
  – no Bill without Clinton & no Clinton without Bill

Storage Requirements

• Uses compression on the repository

• About 55 GB for all the data used by the search engine

• Most queries can be answered using just the short inverted index

• With better compression, a high-quality search engine can fit on a 7 GB drive of a new PC

Storage Statistics

Total Size of Fetched Pages: 147.8 GB
Compressed Repository: 53.5 GB
Short Inverted Index: 4.1 GB
Temporary Anchor Data: 6.6 GB
Document Index (incl. variable-width data): 9.7 GB
Links Database: 3.9 GB
Total Without Repository: 55.2 GB

Web Page Statistics

Number of Web Pages Fetched: 24 million
Number of URLs Seen: 76.5 million
Number of Email Addresses: 1.7 million
Number of 404s: 1.6 million

System Performance

• It took 9 days to download 26 million pages
• 48.5 pages per second
• The indexer & crawler ran simultaneously
• The indexer runs at 54 pages per second
• The sorters run in parallel using 4 machines; the whole sorting process took 24 hours

Computing Page Rank

Practicality

• Challenges
  – M is no longer sparse (don’t represent it explicitly!)
  – data too big for memory (be sneaky about disk usage)
• Stanford version of Google:
  – 24 million documents in the crawl
  – 147 GB of documents
  – 259 million links
  – computing PageRank took a “few hours” on a single 1997 workstation
• But how?
  – next, the discussion from the Haveliwala paper…

Efficient Computation: Preprocess

• Remove ‘dangling’ nodes
  – pages with no children (sketch below)
• Then repeat the process
  – since removal creates new danglers
• Stanford WebBase
  – 25 M pages
  – 81 M URLs in the link graph
  – after two prune iterations: 19 M nodes
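A minimal sketch of the pruning step: repeatedly drop nodes with no out-links, since removing one dangler can create new ones. The dict-of-sets graph representation is purely for illustration:

# Iteratively remove 'dangling' nodes (no out-links) until none remain.
def prune_danglers(graph):
    """graph: dict node -> set of successor nodes."""
    while True:
        danglers = {n for n, succs in graph.items() if not succs}
        if not danglers:
            return graph
        graph = {n: succs - danglers for n, succs in graph.items() if n not in danglers}

g = {0: {1}, 1: {0}, 2: set(), 3: {2}}   # 2 is a dangler; removing it makes 3 one too
print(prune_danglers(g))                  # {0: {1}, 1: {0}}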

Representing the ‘Links’ Table

• Stored on disk in binary format
• Size for Stanford WebBase: 1.01 GB
  – assumed to exceed main memory

Each record: source node (32-bit int), outdegree (16-bit int), destination nodes (32-bit ints). Example:

  0 | 4 | 12, 26, 58, 94
  1 | 3 | 5, 56, 69
  2 | 5 | 1, 9, 10, 36, 78

Algorithm 1: Dest = Links (sparse) × Source

∀s: Source[s] = 1/N
while residual > ε {
  ∀d: Dest[d] = 0
  while not Links.eof() {
    Links.read(source, n, dest1, …, destn)
    for j = 1 … n
      Dest[destj] = Dest[destj] + Source[source]/n
  }
  ∀d: Dest[d] = c * Dest[d] + (1-c)/N   /* dampening */
  residual = ||Source – Dest||          /* recompute every few iterations */
  Source = Dest
}
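The same loop in runnable Python, assuming the links are available as (source, outdegree, destinations) records that can be streamed once per iteration; the damping constant, tolerance, and record source are illustrative:

# Runnable rendition of Algorithm 1: stream the Links records each iteration,
# keep only the Source and Dest rank vectors in memory.
def pagerank(read_links, n_nodes, c=0.85, eps=1e-6, max_iter=100):
    """read_links() yields (source, outdegree, [dest, ...]) tuples per pass."""
    source = [1.0 / n_nodes] * n_nodes
    for _ in range(max_iter):
        dest = [0.0] * n_nodes
        for src, outdeg, dests in read_links():           # one sequential pass over Links
            share = source[src] / outdeg
            for d in dests:
                dest[d] += share
        dest = [c * x + (1 - c) / n_nodes for x in dest]   # dampening
        residual = sum(abs(a - b) for a, b in zip(source, dest))
        source = dest
        if residual <= eps:
            break
    return source

links = [(0, 2, [1, 2]), (1, 1, [2]), (2, 1, [0])]
print(pagerank(lambda: iter(links), n_nodes=3))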

Analysis of Algorithm 1

• If memory is big enough to hold Source & Dest
  – I/O cost per iteration is |Links|
  – fine for a crawl of 24 M pages
  – but the web was ~800 M pages as of 2/99 [NEC study]
  – up from 320 M pages in 1997 [same authors]
• If memory is big enough to hold just Dest
  – sort Links on the source field
  – read Source sequentially during the rank-propagation step
  – write Dest to disk to serve as Source for the next iteration
  – I/O cost per iteration is |Source| + |Dest| + |Links|
• If memory can’t hold Dest
  – the random access pattern will make the working set = |Dest|
  – thrash!!!

Block-Based Algorithm

• Partition Dest into B blocks of D pages each
  – if memory = P physical pages
  – D < P - 2, since we need input buffers for Source & Links
• Partition Links into B files (sketch below)
  – Links_i only has some of the dest nodes for each source
  – Links_i only has dest nodes such that
    • DD*i <= dest < DD*(i+1)
    • where DD = the number of 32-bit integers that fit in D pages
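A sketch of the partitioning step: split each link record into B bucket files by destination range, keeping the original outdegree with every partial record so Source[src]/outdegree is still correct per block. The in-memory lists stand in for the B on-disk Links_i files:

# Partition link records into B bucket files by destination-ID range.
def partition_links(read_links, n_nodes, num_blocks):
    dd = (n_nodes + num_blocks - 1) // num_blocks      # dest IDs per block
    blocks = [[] for _ in range(num_blocks)]           # stand-ins for the B Links_i files
    for src, outdeg, dests in read_links():
        for i in range(num_blocks):
            in_block = [d for d in dests if dd * i <= d < dd * (i + 1)]
            if in_block:
                # record: source, full outdegree, num out in this block, dests in this block
                blocks[i].append((src, outdeg, len(in_block), in_block))
    return blocks

links = [(0, 4, [12, 26, 58, 94]), (1, 3, [5, 56, 69]), (2, 5, [1, 9, 10, 36, 78])]
for i, blk in enumerate(partition_links(lambda: iter(links), n_nodes=96, num_blocks=3)):
    print("bucket", i, blk)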

Dest = Links (sparse) × Source, computed block by block.

Partitioned Link File (same example as before, destinations split into buckets 0-31, 32-63, 64-95):

Columns: source node (32-bit int) | outdegree (16-bit) | num out in this bucket (16-bit) | destination nodes (32-bit int)

Bucket 0-31:
  0 | 4 | 2 | 12, 26
  1 | 3 | 1 | 5
  2 | 5 | 3 | 1, 9, 10

Bucket 32-63:
  0 | 4 | 1 | 58
  1 | 3 | 1 | 56
  2 | 5 | 1 | 36

Bucket 64-95:
  0 | 4 | 1 | 94
  1 | 3 | 1 | 69
  2 | 5 | 1 | 78

Block-based Page Rank algorithm

Analysis of Block Algorithm

• I/O cost per iteration =
  – B*|Source| + |Dest| + |Links|*(1+e)
  – e is the factor by which Links increased in size
    • typically 0.1-0.3
    • depends on the number of blocks
• The algorithm is ~ a nested-loops join

Comparing the Algorithms

PageRank Convergence…

PageRank Convergence…

Summary of Key Points

• PageRank iterative algorithm
• Rank sinks
• Efficiency of computation - memory!
  – single-precision numbers
  – don’t represent M* explicitly
  – break arrays into blocks
  – minimize I/O cost
• Number of iterations of PageRank
• Weighting of PageRank vs. doc similarity

2/24

[Un]til I find a steady funder
I'll make do with cheap-a## plunder
Everybody wants a Google..

Wait! You will never never never need it
It's free; I couldn't leave it
Everybody wants a Google shirt

Shameless corp'rate carrion crows
Turn your backs and show your logos
Everybody wants a Google shirt

Shopping at job fairs
Push my resume
[But] jobs aren't what I seek
I will be your walking student advertisement
Can't live on my research stipend
Everybody wants a Google shirt

HP, Amazon
Pixar, Cray, and Ford
I just can't decide
Help me score the most
free pens and free umbrellas
or a coffee mug from Bell Labs
Everybody wants a Google..

("Everybody Wants a Google Shirt" is based on "Everybody Wants to Rule the World" by Tears for Fears. Alternate lyrics by Andy Collins, Kate Deibel, Neil Spring, Steve Wolfman, and Ken Yasuhara.)

Discussion

• What parts of Google did you find to be in line with what you have learned so far?

• What parts of Google were different?

Beyond Google (and Pagerank)

• Are backlinks a reliable metric of importance?
  – It is a “one-size-fits-all” measure of importance…
    • not user specific
    • not topic specific
  – There may be a discrepancy between backlinks and actual popularity (as measured in hits)
    » the “sense” of the link is ignored (this is okay if you think that all publicity is good publicity)
• Mark Twain on classics
  – “A classic is something everyone wishes they had already read and no one actually has..” (paraphrase)
• Google may be its own undoing… (why would I need backlinks when I know I can get to the page through Google?)
• Customization, customization, customization…
  – what Yahoo says about their magic bullet.. (NYT 2/22/04)
  – “If you type in flowers, do you want to buy flowers, plant flowers or see pictures of flowers?”

The rest of the slides on Google as well as crawling were not specifically discussed one at a time, but have been discussed in essence (read: “you are still responsible for them”).

Robot (4)

2. How to extract URLs from a web page?

Need to identify all possible tags and attributes that hold URLs (a sketch follows after the list below).

• Anchor tag: <a href=“URL” … > … </a>

• Option tag: <option value=“URL”…> … </option>

• Map: <area href=“URL” …>

• Frame: <frame src=“URL” …>

• Link to an image: <img src=“URL” …>

• Relative path vs. absolute path: <base href= …>
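A minimal sketch covering these tags with the standard-library parser: it collects href/src/value attributes, resolves them against the page URL (respecting <base href=…>), and is not meant to be exhaustive:

# Extract candidate URLs from <a>, <option>, <area>, <frame>, <img>, honoring <base href>.
from html.parser import HTMLParser
from urllib.parse import urljoin

class URLExtractor(HTMLParser):
    URL_ATTRS = {"a": "href", "area": "href", "frame": "src",
                 "img": "src", "option": "value"}

    def __init__(self, page_url):
        super().__init__()
        self.base, self.urls = page_url, []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "base" and attrs.get("href"):
            self.base = attrs["href"]                 # relative paths resolve against this
        attr = self.URL_ATTRS.get(tag)
        if attr and attrs.get(attr):
            self.urls.append(urljoin(self.base, attrs[attr]))

p = URLExtractor("http://example.com/dir/page.html")
p.feed('<a href="next.html">x</a> <img src="/logo.png">')
print(p.urls)   # ['http://example.com/dir/next.html', 'http://example.com/logo.png']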

Focused Crawling

• Classifier: Is crawled page P relevant to the topic?
  – an algorithm that maps a page to relevant/irrelevant
    • semi-automatic
    • based on page vicinity..
• Distiller: Is crawled page P likely to lead to relevant pages?
  – an algorithm that maps a page to likely/unlikely
    • could be just A/H computation, taking HUBS
  – the Distiller determines the priority of following the links off of P