KOM - Multimedia Communications Lab
Prof. Dr.-Ing. Ralf Steinmetz
© 2011 author(s) of these slides, including research results from the KOM research network and TU Darmstadt; otherwise the source is specified on the respective slide. 17 May 2011

Perspectives in Benchmarking Overlay Networks for Internet Science
IMDEA, Madrid, Spain, May 18, 2011
Dr.-Ing. Aleksandra Kovacevic

[Figure: benchmarking scenario: a workload is applied to the System under Test (SUT, e.g. Chord, Gnutella, XYZ.KOM); metrics and requirements feed the evaluation]
KOM @ Technische Universität Darmstadt
Internet Science?
KOM Research - Goals
Source: http://www.sycor-asia.com/opencms/as/products_services/complementary_services/Telecommunication/
Future Internet
Seamless Multimedia Communications
KOM Research: Seamless Multimedia Communications
Context-Aware Communication
P2P-Overlay Routing
Testbeds
Network Coding
[Figure: DHT example: H("my data") = 3107; the query for this identifier is routed across the overlay to the responsible peer among nodes such as peer-to-peer.info, planet-lab.org, and berkeley.edu]
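To make the figure concrete, here is a minimal Python sketch (not from the slides) of the DHT idea it illustrates: a hash function maps keys and peer names onto one identifier ring, and the first peer clockwise from a key's identifier is responsible for it. The tiny identifier space and the choice of SHA-1 are illustrative assumptions.

```python
# Minimal DHT sketch: keys and peers share one identifier ring; the first
# peer clockwise from H(key) is responsible. ID space and hash are illustrative.
import hashlib
from bisect import bisect_right

ID_SPACE = 2**16  # illustrative identifier space

def dht_id(name: str) -> int:
    """Hash a peer name or data key onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % ID_SPACE

peers = sorted(
    (dht_id(p), p) for p in ["peer-to-peer.info", "planet-lab.org", "berkeley.edu"]
)

def responsible_peer(key: str) -> str:
    """Return the first peer clockwise from H(key), wrapping around the ring."""
    ids = [pid for pid, _ in peers]
    idx = bisect_right(ids, dht_id(key)) % len(peers)
    return peers[idx][1]

print(responsible_peer("my data"))  # the peer that stores/answers for "my data"
```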
KOM Research: Technologies
- Mobile Communication Technologies
- Multimedia Technologies
- Network Security
- Peer-to-Peer Systems
- Sensor Networking Technologies
- Service-oriented Computing
- Web Technologies
[Figure: service-orientation concept map: the service-orientation design paradigm provides a set of principles that shape the design of services; services can be assembled into service compositions that automate business processes, are selected from a standardized service inventory, and together realize service-oriented solution logic within a service-oriented architecture]
Peer-to-Peer Systems
(Source: imagine-net-tech.com)
Internet Science
The workshop objectives lie mostly in discussing what Internet science is or, perhaps, what it should be:
- Is Internet science just a new name for research in computer networking and distributed systems?
- The intersection with which discipline is likely to give Internet science its next significant breakthrough?
- Can Internet science revolutionize online social networking?
- Which specific analytic techniques and experimental methods from the social and natural sciences can enrich the traditional apparatus of Internet technology scientists?
- Benchmarking … ?
- What are the right metrics for the expanded problem space?
- What are the concrete ways for Internet technologists to contribute to the other engaged disciplines?
- With specialization being a common path to success in science, why do we expect holistic, multidisciplinary Internet science to succeed at all?
Outline
Why benchmark P2P systems?
- Motivation
- Overview
How to benchmark P2P systems?
- Benchmarking process
- Quality aspects
- Metrics
- Benchmarking platform
Which P2P system to benchmark?
- Example: benchmarking P2P overlays for networked virtual environments
Designing and Tuning a P2P Overlay Today
Common difficulties:
- Designing often from scratch
- The design phase often does not consider already existing systems
- Performance and design problems are discovered too late
- Trial-and-error approach
[Figure: trial-and-error design cycle: Idea → 1. Requirement analysis → 2a. (Re)design or 2b. Parameter tuning → 3. Evaluate → requirements fulfilled? If not, back to 2a/2b; if yes, done]
Engineering Approach
Benefits:
- Use components with well-examined behavior
- Reuse the catalogue for future applications
Comparative evaluation is essential for these steps!
[Figure: engineering design cycle: Idea → 1. Requirement analysis → 2. Analyze existing overlays → 3. Create catalogue → 4. Select appropriate mechanisms → 5. Design → 6. Evaluate → requirements fulfilled? If not, iterate; if yes, done]
Comparability in Peer-to-Peer Systems: Ideal
[Pastry] "Pastry: Scalable, decentralized object location and routing for large-scale peer-to-peer systems"
[Chord] "Chord: A scalable peer-to-peer lookup protocol for Internet applications"
Source: [Rowstron2001], [Stoica2001]
Comparability in Peer-to-Peer Systems: Reality
Comparability is not possible:
- No common understanding of quality aspects
- No commonly used workloads and metrics
DFG Research Unit FOR 733: QuaP2P
Phase 1: "Improving Quality of Peer-to-Peer Systems" (2006-2009)
[Figure: taxonomy of the quality of P2P systems across the levels of service provisioning, overlay operations, individual node, complete system, and IP infrastructure; aspects include retrievability, coherence, consistency, correctness, performance, scalability, flexibility, stability, dependability (availability, reliability, robustness/fault tolerance), security (integrity, confidentiality, authentication, non-repudiation, trust), validity, efficiency, adaptability, costs, and fairness]
DFG Research Unit FOR 733: QuaP2P
Phase 2: "Benchmarking of Peer-to-Peer Systems" (2009-2012)
Definition:
- Comparative analysis of different systems offering the same functional interface, against a reference (value) or metric, using a set of standardized tests, so-called benchmarks (illustrated by the sketch below)
Goal:
- Compare: find the system that deals best with the given workload
- Evaluate limits: explore the limits of a system under extreme situations
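As a hedged illustration of this definition, the following sketch applies one standardized workload to several systems that offer the same functional interface and compares a single metric against a reference value; the interface and the stand-in SUT functions are hypothetical.

```python
# Sketch of a benchmark run: same workload for every SUT, one metric,
# one reference value. Stand-in lambdas replace real overlay runs.
from typing import Callable, Dict

def benchmark(systems: Dict[str, Callable[[dict], float]],
              workload: dict, reference: float) -> None:
    """Apply one workload to every SUT; report the metric and a verdict."""
    for name, run in systems.items():
        metric = run(workload)  # e.g. mean lookup delay in seconds
        verdict = "meets" if metric <= reference else "misses"
        print(f"{name}: {metric:.2f}s ({verdict} reference {reference}s)")

# Hypothetical usage; real runs would drive Chord, Gnutella, XYZ.KOM:
benchmark({"Chord": lambda w: 0.42, "Gnutella": lambda w: 0.58},
          workload={"peers": 1000, "requests_per_peer": 10}, reference=0.5)
```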
Benchmarking of P2P Overlays
[Figure: benchmarking scenario: a workload drives the System under Test (SUT, e.g. Chord, Gnutella, XYZ.KOM); metrics and requirements feed the evaluation]
Example: Benchmarking Support for Heterogeneity
- Quality aspect: support for heterogeneity; metric: load-balancing factor (one possible quantification is sketched below)
- Workload parameters: number of peers, number of requests per peer, popularity distribution of queried objects, type of request, …
- SUTs: Chord, Gnutella, XYZ.KOM
Two kinds of results:
- Fulfillment of requirements: "XYZ.KOM is stable up to 50,000 peers and 10 requests per second"
- Comparison of systems: "Given a certain workload A, Gnutella is more stable than Chord"
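One plausible quantification of the load-balancing factor (an assumption; the project's exact definition may differ) is the ratio of the maximum to the mean per-peer load, where 1.0 means perfect balance:

```python
# Load-balancing factor sketch: max per-peer load divided by mean load.
# 1.0 = perfectly balanced; larger values = one or more hot spots.
from statistics import mean

def load_balancing_factor(requests_per_peer: list[int]) -> float:
    return max(requests_per_peer) / mean(requests_per_peer)

print(load_balancing_factor([10, 12, 9, 11]))  # ~1.14, well balanced
print(load_balancing_factor([40, 1, 1, 2]))    # ~3.64, one hot spot
```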
Benchmarking Process: Components
1. For the SUT, requirements are specified:
- Functional requirements indicate for which type of system a benchmark can be used
- Non-functional requirements define interpretation rules that indicate how good or how bad the benchmarking results of the benchmarked system are
2. The workload to be used for the benchmark is defined
3. A set of metrics that have to be measured is specified
[Figure: workload, metrics, and requirements, all derived from the P2P quality aspects]
Benchmarking Process: Hierarchy for Evaluating Quality Aspects (2006-2009)
[Figure: the Phase 1 taxonomy of quality aspects shown above, from service provisioning down to the IP infrastructure]
Benchmarking Process: Hierarchy for Evaluating Quality Aspects (2009-2012)
[Figure: quality aspects mapped to metrics]
- Quality aspects: scalability, stability, robustness, validity, performance, costs, efficiency, fairness
- Atomic/basic metrics: success ratio, completeness, lookup delay, number of hops, battery consumption, relative bandwidth usage, join/leave delay, recovery time
- Derived metrics: Δ of validity/performance/costs (see the sketch below)
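A small sketch of the atomic-versus-derived split: atomic metrics such as the success ratio and the lookup delay are measured per phase, and a derived metric such as the Δ of performance is computed from them. Field names and sample values are illustrative.

```python
# Atomic metrics (success ratio, lookup delay) measured per phase;
# a derived metric is the relative change (Δ) between phases.
from statistics import mean

def success_ratio(lookups: list[bool]) -> float:
    return sum(lookups) / len(lookups)

def delta(before: float, after: float) -> float:
    """Derived metric: relative change of an atomic metric between phases."""
    return (after - before) / before

baseline = {"success": success_ratio([True] * 95 + [False] * 5),
            "delay": mean([0.12, 0.15, 0.11])}
stressed = {"success": success_ratio([True] * 80 + [False] * 20),
            "delay": mean([0.30, 0.27, 0.33])}

print(f"Δ success ratio: {delta(baseline['success'], stressed['success']):+.0%}")
print(f"Δ lookup delay:  {delta(baseline['delay'], stressed['delay']):+.0%}")
```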
Benchmarking Process: Choice of Metrics
Example: metric for costs in search overlays
- In structured overlays (e.g. DHTs): number of hops to reach the destination
- In unstructured overlays: number of peers that received the search message
We need a COMPARABLE quantification of quality aspects (one candidate is sketched below).
[Figure: a request and its response traversing a specific overlay]
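One candidate for such a comparable quantification (an assumption, not the project's prescribed metric) is the total number of overlay messages a request triggers: it reduces to the hop count in a DHT and to the number of forwarded copies under flooding. A sketch with idealized, duplicate-free flooding:

```python
# Unified cost metric sketch: messages per request in both overlay families.
def dht_cost(hops: int) -> int:
    """Structured overlay: one message per routing hop."""
    return hops

def flooding_cost(degree: int, ttl: int) -> int:
    """Unstructured overlay: messages sent by flooding up to a TTL
    (idealized tree-shaped propagation, no duplicate suppression)."""
    return sum(degree ** i for i in range(1, ttl + 1))

print(dht_cost(hops=4))                # 4 messages end to end
print(flooding_cost(degree=3, ttl=2))  # 3 + 9 = 12 messages
```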
Benchmarking Process: Metrics for Benchmarking P2P Systems
Micro metrics:
- Explain the results of the macro metrics
- Crucial in research analysis
- Measure the internal behavior of a mechanism
- Example: metric for the load distribution of an overlay
Macro metrics:
- Measured on the application layer
- Relevant for assessing the quality of a mechanism
- Conclusions are independent of a specific overlay
- Example: recall, response time, freshness, used traffic (in bytes/sec); two of these are sketched below
[Figure: requests and responses to a P2P search overlay; macro metrics observe the interface, micro metrics look inside the specific overlay]
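As a sketch of two of the macro metrics above, recall and response time can be computed directly from an application-level query log; the log format assumed here is illustrative.

```python
# Macro-metric sketch: recall and response time from a query log.
def recall(returned: set, relevant: set) -> float:
    """Fraction of the relevant objects the overlay actually returned."""
    return len(returned & relevant) / len(relevant)

def response_time(issued: float, answered: float) -> float:
    """Seconds between issuing a query and receiving its answer."""
    return answered - issued

print(recall({"a", "b"}, {"a", "b", "c"}))  # ~0.67
print(response_time(10.00, 10.35))          # 0.35 s
```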
Benchmarking Process: Example of Evaluating Robustness and Stability
[Figure: experiment timeline: a phase of regular churn with measurements, then an extreme-churn event (e.g. failure of 50% of the superpeers, or a tree-branch failure) followed by joining queries; performance drops from the reference level P_ref1 and returns to P_ref2 after the recovery time, with the performance variation tracked throughout]
Workloads and metrics:
- Stability: workload = frequent queries, massive leaves; metric = variation of performance/validity/costs (P/V/C)
- Robustness: workload = massive failures; metrics = recovery time, P/V/C variation (both computed as in the sketch below)
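Both metrics can be computed from a sampled performance time series, as in the following sketch; the reference level, event time, and sample values are illustrative assumptions.

```python
# Robustness/stability sketch: recovery time after an extreme-churn event
# and the overall variation of a sampled performance series.
from statistics import mean, pstdev

def recovery_time(samples: list[tuple[float, float]],
                  event_t: float, p_ref: float) -> float:
    """Time from the event until performance is back at the reference level."""
    for t, perf in samples:
        if t >= event_t and perf >= p_ref:
            return t - event_t
    return float("inf")  # never recovered within the experiment

def variation(samples: list[tuple[float, float]]) -> float:
    """Coefficient of variation of performance (stability metric)."""
    perfs = [p for _, p in samples]
    return pstdev(perfs) / mean(perfs)

series = [(0, 0.98), (10, 0.97), (20, 0.41), (30, 0.70), (40, 0.96)]
print(recovery_time(series, event_t=20, p_ref=0.95))  # 20 (seconds)
print(f"{variation(series):.2f}")                     # ~0.28
```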
Benchmarking Process
[Figure: benchmarking toolchain around the SUT: applications and scenarios are measured and modeled (application model, churn model, user model, service model) and serve as input for a workload generator; the quality aspects yield the metrics and requirements; workload, metrics, and requirements form the benchmark description that the benchmarking platform and simulation engine execute against the System under Test (SUT)]
DFG Research Unit FOR 733: QuaP2P
Subprojects:
- Quality aspects of P2P systems; P2P benchmarking methodology
- Quality of P2P search overlays (P2P search overlay layer)
- Quality of P2P document and system management (P2P service layer)
- Quality of P2P management mechanisms (P2P application layer)
- P2P benchmarking platform; measurements
- Workloads: P2P gaming, social knowledge network, first-response scenario
Investigators: N. Liebau, R. Steinmetz, K. Wehrle, W. Effelsberg, A. Buchmann, M. Mühlhäuser, T. Strufe, A. Schürr
Benchmarking Overlays for Networked Virtual Environments (NVE)
Characteristics and Requirements for Networked Virtual Environments
Properties/characteristics:
- Each participant has a limited vision and interaction range, the area of interest (AOI); see the sketch below
- All virtual objects and participants have a spatial and temporal alignment in the virtual space
- Our focus: spatial information dissemination, where participants move in a continuous virtual world
Requirements:
- Consistency of the spatial placement of virtual objects and participants
- Real-time behavior, requiring high responsiveness and low update delays
- Efficient queries for objects and/or dissemination of data in a specific region of the virtual world
[Figure: a virtual world with objects and participants; each participant observes its AOI (area of interest)]
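A minimal sketch of the AOI notion: a participant's neighbors are exactly the peers whose positions fall within its circular area of interest. The 2-D coordinates and the radius are illustrative.

```python
# AOI sketch: neighbors are the peers within `radius` of my position.
import math

def aoi_neighbors(me: tuple[float, float],
                  others: dict[str, tuple[float, float]],
                  radius: float) -> list[str]:
    return [peer for peer, pos in others.items()
            if math.dist(me, pos) <= radius]

peers = {"p1": (2.0, 1.0), "p2": (9.0, 9.0), "p3": (0.5, -1.0)}
print(aoi_neighbors((0.0, 0.0), peers, radius=3.0))  # ['p1', 'p3']
```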
Benchmarking Methodology: Workload
A workload should:
- reflect the most critical situations within the lifetime of a SUT
- drive the SUT to its performance limits to gain insights into performance bottlenecks
Our workloads (assumption: fixed world size and a fixed AOI for each peer):
- Baseline: the typical average workload
- Massive join, massive leave/crash: require reorganization of the neighborhood relations of a large set of peers within a short amount of time
- Heavy churn: exponential churn model with a decreasing mean session length (sketched below)
- Linearly increasing density: density ↑ → number of AOI neighbors ↑ → bandwidth utilization ↑
- Linearly increasing node speed: speed ↑ → neighborhoods change rapidly AND the effort to maintain the neighborhood ↑
- Message loss: loss of position updates → inconsistent views among peers OR collapse of the overlay topology
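The heavy-churn workload names an exponential churn model with a decreasing mean session length; a minimal sketch of such a model, with illustrative parameter values, could look like this.

```python
# Churn sketch: session lengths are exponentially distributed, and the mean
# decays linearly over the experiment so churn intensifies. Values illustrative.
import random

def session_length(t: float, t_end: float,
                   mean_start: float = 600.0, mean_end: float = 60.0) -> float:
    """Draw a session length whose mean shrinks as experiment time t advances."""
    mean = mean_start + (mean_end - mean_start) * (t / t_end)
    return random.expovariate(1.0 / mean)

random.seed(42)
for t in (0, 1800, 3600):  # start, middle, end of a one-hour experiment
    print(f"t={t:4d}s: session ~ {session_length(t, 3600):.0f}s")
```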
Contact us…
http://www.quap2p.de
http://research.spec.org
http://peerfactsim.com
Some References
[Stoica2001] Stoica, I., R. Morris, D. Karger, F. Kaashoek, and H. Balakrishnan. Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications. In Proceedings of SIGCOMM, ACM, 2001.
[Rowstron2001] Rowstron, A., and P. Druschel. Pastry: Scalable, Decentralized Object Location and Routing for Large-Scale Peer-to-Peer Systems. In International Conference on Distributed Systems Platforms, Springer, 2001.
[Gross2011] Gross, C., M. Lehn, and C. Münker. Towards a Common Performance Evaluation of Overlays for Networked Virtual Environments. In Eleventh International Conference on Peer-to-Peer Computing, IEEE, 2011 (under submission).
[Kovacevic2009] Kovacevic, A. Peer-to-Peer Location-Based Search: Engineering a Novel Peer-to-Peer Overlay Network. PhD thesis, Technische Universität Darmstadt, 2009.
[Bharambe2002] Bharambe, A. R., S. Rao, and S. Seshan. Mercury: A Scalable Publish-Subscribe System for Internet Games. In Proceedings of the 1st Workshop on Network and System Support for Games, ACM, 2002.
[Schmieg2008] Schmieg, A., M. Stieler, S. Jeckel, P. Kabus, B. Kemme, and A. Buchmann. pSense: Maintaining a Dynamic Localized Peer-to-Peer Structure for Position-Based Multicast in Games. In Eighth International Conference on Peer-to-Peer Computing, IEEE, 2008.
[Hu2006] Hu, S., and T. Chen. VON: A Scalable Peer-to-Peer Network for Virtual Environments. IEEE Network, 2006.
Questions & Contact