Enabling Flow-level Latency Measurements across Routers in Data Centers
Parmjeet Singh, Myungjin Lee, Sagar Kumar, Ramana Rao Kompella


Page 1:

Enabling Flow-level Latency Measurements across Routers in Data Centers

Parmjeet Singh, Myungjin Lee, Sagar Kumar, Ramana Rao Kompella

Page 2:

Latency-critical applications in data centers

Guaranteeing low end-to-end latency is important:
- Web search (e.g., Google's Instant search service)
- Retail advertising
- Recommendation systems
- High-frequency trading in financial data centers

Operators want to troubleshoot latency anomalies:
- End-host latencies can be monitored locally
- Detection, diagnosis, and localization through the network are harder: routers/switches have no native support for latency measurements

Page 3:

Prior solutions

- Lossy Difference Aggregator (LDA), Kompella et al. [SIGCOMM '09]: aggregate latency statistics
- Reference Latency Interpolation (RLI), Lee et al. [SIGCOMM '10]: per-flow latency measurements

RLI is more suitable here because its measurements are more fine-grained.

Page 4:

Deployment scenario of RLI

Upgrading all switches/routers in a data center network:
- Pros: provides the finest granularity of latency anomaly localization
- Cons: significant deployment cost; possible downtime of an entire production data center

In this work, we consider partial deployment of RLI. Our approach: RLI across Routers (RLIR).

Page 5:

Overview of RLI architecture

Goal: latency statistics on a per-flow basis between interfaces.

Problem setting:
- Timestamps cannot be stored for each packet at ingress and egress due to high storage and communication cost
- Regular packets do not carry timestamps

[Figure: a router with ingress interface I and egress interface E]

Page 6:

Overview of RLI architecture

Premise of RLI: delay locality.

Approach:
1) The injector sends reference packets at regular intervals
2) Each reference packet carries its ingress timestamp
3) Linear interpolation: the latency estimator computes per-packet latency estimates
4) Per-flow estimates are obtained by aggregating the per-packet estimates

[Figure: reference packet injector at ingress I, latency estimator at egress E; delay-vs.-time plot showing a regular packet's delay interpolated on the line between the delays of the two surrounding reference packets (R)]
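The interpolation step above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation; the function names (`interpolate_delay`, `per_flow_estimates`) and the data layout are hypothetical.

```python
from bisect import bisect_right
from collections import defaultdict

def interpolate_delay(t_pkt, ref_before, ref_after):
    """Estimate a regular packet's delay at arrival time t_pkt by linear
    interpolation between two (timestamp, measured_delay) reference samples."""
    t1, d1 = ref_before
    t2, d2 = ref_after
    if t2 == t1:                      # degenerate case: identical timestamps
        return d1
    frac = (t_pkt - t1) / (t2 - t1)   # position between the two references
    return d1 + frac * (d2 - d1)

def per_flow_estimates(packets, refs):
    """Aggregate per-packet estimates into per-flow mean latency.
    `packets` is a list of (flow_id, timestamp); `refs` is a time-sorted
    list of (timestamp, measured_delay) reference samples."""
    sums, counts = defaultdict(float), defaultdict(int)
    ref_times = [t for t, _ in refs]
    for flow, t in packets:
        i = bisect_right(ref_times, t)
        if 0 < i < len(refs):         # only packets bracketed by references
            d = interpolate_delay(t, refs[i - 1], refs[i])
            sums[flow] += d
            counts[flow] += 1
    return {f: sums[f] / counts[f] for f in counts}
```

For example, a packet arriving midway between references with delays 10 and 20 gets an estimated delay of 15.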

Page 7:

Full vs. partial deployment

- Full deployment: 16 RLI sender-receiver pairs
- Partial deployment: 4 RLI senders + 2 RLI receivers

This yields an 81.25% deployment cost reduction.
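The slide's figure is consistent with a simple component-count model, which is an assumption for illustration: a full deployment places one sender and one receiver per pair, while the partial deployment needs only six components in total.

```python
# Deployment cost as a count of installed RLI components (assumed model).
full = 16 * 2               # 16 sender-receiver pairs = 32 components
partial = 4 + 2             # 4 senders + 2 receivers = 6 components
reduction = (full - partial) / full
print(f"{reduction:.2%}")   # 81.25%
```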

[Figure: six-switch topology (Switches 1-6) showing the placement of RLI Senders (Reference Packet Injectors) and RLI Receivers (Latency Estimators)]

Page 8:

Case 1: Presence of cross traffic

Issue: inaccurate link utilization estimation at the sender leads to a high reference packet injection rate.

Approach: we do not actively address this issue; evaluation shows little impact on the increase in packet loss rate (details in the paper).
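To illustrate the issue, here is a minimal sketch of a sender choosing its injection rate from locally observed utilization; the EWMA estimator and the two-level rate policy are assumptions for illustration, not the paper's exact algorithm.

```python
def ewma_utilization(samples, alpha=0.25):
    """Smooth instantaneous link-utilization samples (values in 0.0-1.0)."""
    est = samples[0]
    for s in samples[1:]:
        est = alpha * s + (1 - alpha) * est
    return est

def injection_rate(local_util, low=0.01, high=0.10, threshold=0.9):
    """Inject fewer reference packets when the link looks busy."""
    return low if local_util > threshold else high

# The sender sees only its own traffic (about 40% utilization) and picks
# the high injection rate, even though cross traffic joining downstream
# pushes the real bottleneck utilization past 90%.
local = ewma_utilization([0.40, 0.38, 0.42])
print(injection_rate(local))  # 0.1: too aggressive for the true load
```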

[Figure: six-switch topology; cross traffic joins the bottleneck link downstream of the link utilization estimation performed on Switch 1]

Page 9:

Case 2: RLI sender side

Issue: traffic may take different routes at an intermediate switch.

Approach: the sender sends reference packets to all receivers.

[Figure: six-switch topology; a sender's traffic diverges at an intermediate switch toward different RLI receivers]

Page 10:

Case 3: RLI receiver side

Issue: it is hard to associate reference packets with the regular packets that traversed the same path.

Approaches:
- Packet marking: requires native support from routers
- Reverse ECMP computation: 'reverse'-engineer intermediate routes using the ECMP hash function
- IP prefix matching: applicable in limited situations
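The reverse ECMP idea can be sketched as follows: if the receiver knows the hash function switches use for ECMP, it can recompute which next hop each intermediate switch chose for a flow's 5-tuple, and hence which reference packets shared that flow's path. The CRC-based hash is a stand-in (real switches use vendor-specific hash functions), and the helper names are hypothetical.

```python
import zlib

def ecmp_next_hop(five_tuple, next_hops):
    """Deterministically map a flow 5-tuple to one of the candidate
    next hops, the way an ECMP switch would."""
    key = "|".join(map(str, five_tuple)).encode()
    return next_hops[zlib.crc32(key) % len(next_hops)]

def path_of_flow(five_tuple, hops_per_stage):
    """Replay the ECMP decision at each intermediate stage to recover
    the sequence of links the flow traversed; `hops_per_stage` lists
    the candidate next hops available at each stage."""
    return [ecmp_next_hop(five_tuple, hops) for hops in hops_per_stage]
```

Because the hash is deterministic, replaying it for a flow's 5-tuple always yields the same path, which is what lets the receiver match regular packets to the reference packets that took the same route.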

[Figure: six-switch topology; reference and regular packets arriving at an RLI receiver may have taken different intermediate paths]

Page 11:

Deployment example in fat-tree topology

[Figure: fat-tree deployment example showing RLI Senders (Reference Packet Injectors) and RLI Receivers (Latency Estimators); labels: IP prefix matching; reverse ECMP computation / IP prefix matching]

Page 12:

Evaluation

Simulation setup:
- Trace: regular traffic (22.4M packets) + cross traffic (70M packets)
- Simulator

Results: accuracy of per-flow latency estimates

[Figure: simulation setup; a traffic divider splits the packet trace into regular and cross traffic, which pass through Switch 1 and Switch 2 between an RLI sender and an RLI receiver; reference packets are injected at 10% / 1% injection rates]

Page 13:

Accuracy of per-flow latency estimates

[Figure: CDF of relative error in per-flow latency estimates at 10% and 1% injection rates; bottleneck link utilization: 93%; marked relative errors: 1.2%, 4.5%, 18%, 31%, 67%]

Page 14:

Summary

- Low-latency applications run in data centers, so localizing latency anomalies is important
- RLI provides flow-level latency statistics, but full deployment (i.e., on all routers/switches) is expensive
- We proposed a solution enabling partial deployment of RLI, with little loss in localization granularity (i.e., deployment on every other router)

Page 15:

Thank you! Questions?