
Unstructured P2P Overlay

Improving Search in Peer-to-Peer Networks

ICDCS 2002

Beverly Yang, Hector Garcia-Molina

Current Techniques

• Gnutella
– BFS with depth limit D
– wastes bandwidth and processing resources

• Freenet
– DFS with depth limit D
– poor response time

Iterative Deepening

• The basic idea is to reduce the number of nodes that process a query

• Under a policy P = {a, b, c} with waiting time W

• See the example sketched below.
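A minimal sketch of the idea, assuming illustrative helpers (send_bfs, collect_results, satisfied are not from the paper): flood with the smallest depth in P, wait W, and deepen only if the results are still unsatisfactory.

```python
import time

def iterative_deepening(query, policy, wait_time, send_bfs, collect_results, satisfied):
    """Sketch: try successively deeper BFS floods, stopping once satisfied."""
    for depth in policy:                  # e.g. P = [a, b, c], a < b < c
        send_bfs(query, depth)            # flood with TTL = depth
        time.sleep(wait_time)             # wait W seconds for responses
        results = collect_results(query)
        if satisfied(results):
            return results                # answered at a shallow depth:
                                          # fewer nodes processed the query
    return collect_results(query)         # policy exhausted: best effort
```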

Directed BFS

• A source sends query messages to just a subset of its neighbors

• A node maintains simple statistics on its neighbors
– number of results received from each neighbor
– latency of the connection

Candidate nodes

• Returned the highest number of results for previous queries

• Returned response messages that took the lowest average number of hops

• Forwarded the highest number of messages (suggesting a stable, well-connected neighbor)
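A hedged sketch of how a node might rank neighbors from these statistics; the stat field names and the tie-breaking order are assumptions for illustration.

```python
def pick_neighbors(stats, n=1):
    """Rank neighbors by per-neighbor statistics (illustrative fields).

    stats maps neighbor -> dict with 'results' (results returned so far),
    'avg_hops' (average hops of its responses), 'messages' (messages forwarded).
    """
    ranked = sorted(
        stats,
        key=lambda nb: (-stats[nb]['results'],    # most results first
                        stats[nb]['avg_hops'],    # then fewest hops
                        -stats[nb]['messages']),  # then most traffic seen
    )
    return ranked[:n]  # send the query to the top-n neighbors only
```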

Local Indices

• Each node n maintains an index over the data of all nodes within a radius of r hops

• All nodes at depths not listed in the policy simply forward the query.

• Example: policy P = {1, 5}
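A rough sketch of per-node handling under such a policy (lookup and forward helpers are hypothetical): only nodes at depths listed in P consult their r-hop index, and forwarding stops past the deepest policy entry.

```python
def handle_query(query, depth, policy, local_index, neighbors):
    """Local Indices: nodes at depths in the policy process the query
    against their index over all data within r hops; others just forward."""
    hits = []
    if depth in policy:                    # e.g. P = {1, 5}
        hits = local_index.lookup(query)   # index covers data within r hops
    if depth < max(policy):                # no need to go past the last depth
        for nb in neighbors:
            nb.forward(query, depth + 1)
    return hits
```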

Experimental Setup

• For each response, we log:
– the number of hops taken
– the IP address from which the Response message came
– the response time
– the individual results

Experimental results

Routing Indices for P2P Systems

ICDCS 2002

Introduction

• Search in a P2P system
– mechanisms without an index
– mechanisms with specialized index nodes (centralized search)
– mechanisms with indices at each node

• Structured P2P networks
• Unstructured P2P networks

• Parallel vs. sequential search
– response time
– network traffic

Routing Indices (RIs)

• Query model
– documents are on zero or more "topics", and queries request documents on particular topics
– document topics are independent

• Local index
• RI
– each node has a local routing index which contains the following information:
• the number of documents along each path
• the number of documents on each topic of interest
– allows a node to select the "best" neighbors to send a query to

• The RI may be "coarser" than the local indices

• Goodness measure
– the number of results along a path

• Using Routing indices

– Storage space, with:
• N: number of nodes in the P2P network
• b: branching factor
• c: number of categories
• s: counter size in bytes

Centralized index: s · (c+1) · N
Distributed system: s · (c+1) · b (at each node)
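To make the comparison concrete, here is a quick calculation with assumed values (the numbers are illustrative, not from the paper):

```python
# Assumed values: N = 100_000 nodes, b = 4 neighbors,
# c = 10 topic categories, s = 4-byte counters.
N, b, c, s = 100_000, 4, 10, 4

centralized = s * (c + 1) * N   # one row (c topic counts + 1 total) per node
per_node_ri = s * (c + 1) * b   # one row per neighbor, held at each node

print(centralized)   # 4400000 bytes (~4.4 MB) at the central index
print(per_node_ri)   # 176 bytes at each node
```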

• Creating routing indices

• Maintaining routing indices
– trade-off between RI freshness and update cost
– does not require the participation of a disconnecting node

• Discussion
– what if the search topics are dependent?
– can the number of "hops" necessary to reach a document be estimated?

Alternative Routing Indices

• Hop-count RI
– aggregated RIs for each "hop", up to a maximum number of hops, are stored
– search cost
• number of messages
– the goodness of a neighbor
• the ratio between the number of documents available through that neighbor and the number of messages required to get those documents
– assume a regular tree with fanout F
– it takes F^h messages to find all documents at hop h
– storage cost?

• Exponentially aggregated RI
– stores the result of applying the regular-tree cost formula to a hop-count RI
– how do we compute the goodness of a path for a query containing several topics?
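One plausible formalization of the two ideas above, under assumed notation (N_h is the number of documents reachable at hop h through a neighbor, H the horizon); this is a sketch consistent with the slide's regular-tree model, not necessarily the paper's exact formulas:

```latex
% Under the regular-tree assumption, the documents at hop h cost F^h
% messages to reach, so a neighbor's goodness is documents per message:
\[
\mathrm{goodness} = \frac{\sum_{h=1}^{H} N_h}{\sum_{h=1}^{H} F^{h}}
\]
% The exponentially aggregated RI collapses the hop-count RI into one
% value by discounting each hop's count by the fanout:
\[
\mathrm{ERI} = \sum_{h=1}^{H} \frac{N_h}{F^{\,h-1}}
\]
```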

Cycles in the P2P network

Efficient Content Location Using Interest-Based Locality in Peer-to-Peer Systems

Kunwadee Sripanidkulchai

Bruce Maggs

Hui Zhang

IEEE INFOCOM 2003

Motivation

• Although flooding is simple and robust, it is not scalable.

• A content location solution in which peers are organized into an interest-based structure on top of Gnutella.

• The algorithm is called interest-based shortcuts

Interest-based locality

Shortcuts Architecture and Design Goals

• To create additional links on top of a peer-to-peer system’s overlay

• As a separate performance enhancement layer on top of existing content location mechanisms

Content location paths

Shortcut Discovery

• The first lookup returns a set of peers that store the content

• These are potential candidates.

• One peer is selected at random from the set and added as a shortcut

• For scalability, each peer allocates a fixed-size amount of storage to implement shortcuts.

• Alternatives for shortcut discovery
– exchanging shortcut lists between peers

Shortcut selection

• We rank shortcuts based on their perceived utility

• A peer sequentially asks the shortcuts on its list, in ranked order.

Ranking metrics

• Probability of providing content

• Latency of the path to the shortcut

• Load at the shortcut

• A combination of metrics can be used based on each peer’s preference
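A sketch combining discovery and ranked selection, assuming a success-rate utility; the table size, utility values, and helper names (peer.has, flood) are illustrative, not from the paper:

```python
import random

def locate(content, shortcuts, flood, table_size=10):
    """Interest-based shortcuts sketch.

    shortcuts: dict peer -> perceived utility (here: success rate);
    flood(content): the underlying Gnutella-style location mechanism.
    """
    # Ask shortcuts sequentially, best perceived utility first.
    for peer in sorted(shortcuts, key=shortcuts.get, reverse=True):
        if peer.has(content):
            return peer
    # All shortcuts missed: fall back to flooding, then learn a shortcut.
    providers = flood(content)
    if providers:
        new_peer = random.choice(providers)   # one candidate, chosen at random
        shortcuts[new_peer] = 1.0
        if len(shortcuts) > table_size:       # bounded storage for scalability
            shortcuts.pop(min(shortcuts, key=shortcuts.get))
        return new_peer
    return None
```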

Potential and Limitations

• Adding 5 shortcuts at a time produces success rates that are close to the best possible.

• Slightly increasing the shortest path length from 1 to 2 hops yields a better success rate.

Efficient and Scalable Query Routing for Unstructured Peer-to-Peer Networks

A. Kumar, J. Xu, and E.W. Zegura

Overview

• As the distance from the node hosting the object increases, fewer bits are used to represent information about the direction in which the object is located

Design

• Exponential decay Bloom filter (EDBF)
– a Bloom filter is a data structure for approximately answering set-membership questions
• k hash functions and a bit array A
• A[h_i(x)] = 1 for i = 1…k
• θ(x) = |{i : A[h_i(x)] = 1, i = 1…k}|
– the number of x's positions set to 1 in the filter
– θ(x)/k roughly indicates the probability of finding x along a specific link in the overlay
– noise?
– when there is no noise:
• one hop away from the object x, θ(x) is approximately k
• two hops away from the object x, θ(x) is approximately k/d
– decay implementation
• the decay factor is d
• nodes reset each of the bits in the EDBFs received from upstream neighbors with probability 1 − 1/d (i.e., each bit survives with probability 1/d)
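A minimal Python sketch of an EDBF consistent with the description above; the array size, hash construction, and parameter defaults (m, k, d) are assumptions for illustration:

```python
import hashlib
import random

class EDBF:
    """Sketch of an exponential decay Bloom filter (assumed parameters)."""

    def __init__(self, m=1 << 16, k=8, d=4):
        self.m, self.k, self.d = m, k, d   # array size, hashes, decay factor
        self.bits = [0] * m

    def _positions(self, x):
        # k hash positions h_1(x)..h_k(x), one digest per i (assumed scheme).
        return [int(hashlib.sha1(f"{i}:{x}".encode()).hexdigest(), 16) % self.m
                for i in range(self.k)]

    def insert(self, x):
        for p in self._positions(x):
            self.bits[p] = 1

    def theta(self, x):
        # theta(x): how many of x's k positions are set; theta(x)/k roughly
        # estimates the chance of finding x via the link this EDBF describes.
        return sum(self.bits[p] for p in self._positions(x))

    def decayed(self):
        # Propagation step: each set bit survives with probability 1/d,
        # so theta(x) shrinks roughly by a factor of d per hop.
        out = EDBF(self.m, self.k, self.d)
        out.bits = [b if b and random.random() < 1.0 / self.d else 0
                    for b in self.bits]
        return out
```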

• Creation and Maintenance of routing tables

• The initial advertisement is created by taking the union of all advertisements received from neighbors other than the target neighbor

• Decay the combined advertisement by the decay factor d

• Union the result with the local EDBF
– the local EDBF is propagated without attenuation

• Loops
– split horizon with poisoned reverse
• information received from a neighbor j will not be advertised back to j
– exponential decay
• the count-to-infinity problem manifests itself as a "decay to an infinitely small amount of information"

• Query forwarding

• If the query is satisfied locally, it is answered

• Otherwise, if the TTL of the query has not expired:
– if the query was previously seen, it is forwarded to a randomly chosen neighbor
– otherwise, the query is forwarded to the neighbor with the highest θ(x)
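The forwarding rule above as an illustrative sketch; the node and query attributes (has_locally, edbf_for, query.id) are hypothetical stand-ins:

```python
import random

def forward_query(node, query, ttl, seen):
    """SQR-style forwarding sketch: answer locally, else follow the scent."""
    if node.has_locally(query.x):
        return node.answer(query)              # satisfied locally
    if ttl == 0:
        return None                            # TTL expired: drop the query
    if query.id in seen:
        nxt = random.choice(node.neighbors())  # repeat sighting: go random
    else:
        seen.add(query.id)
        # First sighting: pick the neighbor whose EDBF has the most of
        # x's bits set, i.e. the highest theta(x).
        nxt = max(node.neighbors(),
                  key=lambda nb: node.edbf_for(nb).theta(query.x))
    return nxt.receive(query, ttl - 1)
```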

Structured P2P Overlay

Similarity Discovery in Structured P2P Overlays

ICPP

Introduction

• Structured P2P networks
– only support search with a single keyword

• Similarity between two documents
– keyword sets
– vector space
– measure

• Problems
– search problem
– new keywords?

The similarity measure is the angle between the keyword vectors a and b:

θ_ab = cos⁻¹( (a · b) / (‖a‖ ‖b‖) )

Meteorograph

• Absolute angle
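Computing the absolute angle between two keyword vectors is direct; a minimal Python sketch (vectors as plain lists):

```python
import math

def absolute_angle(a, b):
    """theta_ab = arccos( (a . b) / (|a| |b|) )."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp to [-1, 1] to guard against floating-point rounding.
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

# Identical vectors are 0 apart; orthogonal ones are pi/2 apart.
print(absolute_angle([1, 0, 1], [1, 0, 1]))  # 0.0
print(absolute_angle([1, 0, 0], [0, 1, 0]))  # ~1.5708
```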

Publishing and Searching

• Publish
– hash
– publish the item to a node n_p whose hash key is closest to the hash value

• Search problem
– nearest answer
– k-nearest answers
• partial
• comprehensive

• Search strategy

• Discussions

• What happens when the keyword vector is represented by …?

Other issues

• Load balance

• Changes to the vector space
– republish?
– a comprehensive set of keywords?
– other methods?

SWAM: A Family of Access Methods for Similarity-Search in Peer-to-Peer Data Networks

Farnoush Banaei-Kashani, Cyrus Shahabi

(CIKM 2004)

PDN access method

• Defines

• How to organize the PDN topology into an index-like structure

• How to use the index structure

Hilbert space

• Hilbert space (V, L_p)
• Key k = (a_1, a_2, …, a_d)
– d: the dimension of the vector space
– each domain is a contiguous and finite interval of R

• The L_p norm, with p ∈ Z⁺
– the distance function used to measure dissimilarity
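Written out, the L_p distance on keys is the standard norm:

```latex
% L_p distance between keys x and y in a d-dimensional space, p in Z^+:
\[
L_p(x, y) = \Bigl(\sum_{i=1}^{d} \lvert x_i - y_i \rvert^{\,p}\Bigr)^{1/p}
\]
% p = 1 gives Manhattan distance; p = 2 gives Euclidean distance.
```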

Topology

• Topology of a PDN can be modelled as a directed graph G(N, E)

• A(n) is the set of neighbors for node n

• A node maintains
– a limited amount of information about its neighbors, including:
• the keys of the tuples maintained at each neighbor
• the physical addresses of its neighbors

• The processing of the query is completed when all expected tuples in the relevant result set are visited

• Access methods
– Join and Leave for virtual nodes
– Forward, which uses local information to process queries and make forwarding decisions

The small world example

• Grid component

• Random graph component

• The processing of queries (exact, range, kNN) in the high-locality topology

Flat partitioning

• SWAM also employs the space partitioning idea: flat partitioning

Query Processing

• Exact-Match query processing

• Range query processing

• kNN Query processing

Similarity Search in Peer-to-Peer Databases

IEEE International Conference on Distributed Computing Systems 2005

Data and Query Model

• All data objects are unit vectors in a d-dimensional Euclidean space

• Cosine distance

• Can

Design Details

• The indexing scheme
– a locality-sensitive hashing function is used to reduce the dimensionality
• each r_i is a d-dimensional unit vector
• h(x) is the concatenation of the bits b_r1(x), b_r2(x), …, b_rk(x)
– objects with the same hash value belong to the same cluster and are stored at the node which owns the DHT key h(x)
• groups nearby objects into indices with low Hamming distance
• to avoid the situation where nearby objects differ in some bit positions of their index:
– t hashing functions are used (replication)
» this ensures a high probability that two related objects hash onto indices with low Hamming distance in at least one of these sets
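A hedged sketch of the random-hyperplane construction implied above: each bit is the side of x relative to a random vector r_i, and the k bits concatenate into the DHT key. Gaussian (rather than exactly unit) vectors are an assumption; the sign of the dot product is unaffected by normalization.

```python
import random

def make_hash(d, k, seed=0):
    """Build an LSH function: k random vectors r_i; bit b_ri(x) = 1 iff
    r_i . x >= 0; h(x) concatenates the k bits into a k-bit key."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(k)]

    def h(x):
        bits = 0
        for r in planes:
            dot = sum(ri * xi for ri, xi in zip(r, x))
            bits = (bits << 1) | (1 if dot >= 0 else 0)
        return bits                     # k-bit DHT key for x's cluster
    return h

# Nearby unit vectors fall on the same side of most hyperplanes, so their
# indices have low Hamming distance; t functions (different seeds) give
# the replication described above.
```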

• The search algorithm
– node u generates a query (x, …)
– compute h(x)
– compute the set V of all indices whose Hamming distance from h(x) is at most r
– node u queries each of the nodes in V
– nodes in V return all data objects which match u's query
– how to determine r?

• Adaptive replication
– ensures the number of copies of each key in the network is proportional to its popularity
• the number of copies of each key is proportional to the rate at which queries arrive for that key

• Randomized lookup
– the lookup for a specific key terminates uniformly at random at one of the copies of this key
– guarantees that the load is balanced uniformly across all copies of all keys in the system

Discussion

• Search cost?
• What is the cardinality of the set V?

• Availability?

Guaranteeing Correctness and Availability in P2P Range Indices

SIGMOD 2005

Introduction

• Hashing destroys the value ordering among the search key values
– it cannot be used to process range queries efficiently

• Solution
– range indices assign data items to peers directly, based on their search key value
– load balance?

P-ring overview

• Two types of peers
– live peers
• used to store data items
• the amount of data stored in each live peer is between sf and 2·sf (sf: storage factor)
– free peers

• Overflow (> 2·sf)
– a live peer splits its assigned range with a free peer
• Underflow (< sf)
– a live peer merges with its successor in the ring to obtain more entries
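A sketch of the overflow/underflow rules above; the helper methods (num_items, give_half_of_range_to, merge_range_with, successor) are hypothetical:

```python
def rebalance(peer, free_peers, sf):
    """P-Ring load balance sketch: keep each live peer's item count
    between sf and 2*sf."""
    n = peer.num_items()
    if n > 2 * sf and free_peers:            # overflow: split with a free peer
        helper = free_peers.pop()
        peer.give_half_of_range_to(helper)   # helper becomes a live peer
    elif n < sf:                             # underflow: merge with successor
        peer.merge_range_with(peer.successor())
```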

Incorrect query results

• Inconsistent Ring

• Concurrency in the data store

Solution

• Handling ring inconsistency
– two states: joined and joining
• peer p remains in the joining state until all relevant peers know about p
• items are stored only at peers in the joined state

• Handling data store concurrency
– p stays in a locked state until its successor p_succ locks its range

Supporting Complex Multi-dimensional Queries in P2P systems

IEEE International Conference on Distributed Computing Systems 2005

(HW)

Data Indexing in Peer-to-Peer DHT Networks

ICDCS 2004

• Locating data using incomplete information
– how to search for data in a DHT

• Data descriptors and queries
– semi-structured XML data
– queries
• the most specific query for a descriptor d
• relationships between queries

• Given the most specific query, finding the location of the file is simple

• How about less specific queries?

• Solution
– provide a query-to-query service
• for a given query q, the index service returns a list of more specific queries covered by q
– the DHT storage system must be extended
• Insert(q, q_i), where q → q_i, adds a mapping (q, q_i) to the index of the node responsible for key q
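A minimal sketch of the extended interface described above; dht.insert/dht.lookup and generalizations are illustrative stand-ins, not the paper's API:

```python
def index_descriptor(dht, most_specific_q, generalizations):
    """For each less specific query q covering q_i, store the mapping
    (q, q_i) at the node responsible for key q."""
    q_i = most_specific_q
    for q in generalizations(q_i):     # every query that covers q_i
        dht.insert(key=q, value=q_i)   # Insert(q, q_i) on q's node

def resolve(dht, q):
    """Query-to-query service: the more specific queries covered by q."""
    return dht.lookup(key=q)
```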
