
Page 1: Linkerd – Service mesh with service Discovery backend

Linkerd – Service mesh with service Discovery backend

Leandro Totino Pereira – System Engineer

Page 2: Linkerd – Service mesh with service Discovery backend

Orchestration systems for containers

Kubernetes – container orchestration platform based on etcd

Nomad – container orchestration system based on Consul

Item                       Nomad                                                  Kubernetes
Multi-datacenter           Yes (native)                                           No (Federation)
Multitenancy               No                                                     Yes
Workloads                  Containers, Java, commands, LXC and QEMU hypervisors   Containers only
Network abstraction        Port-based (per host/hypervisor)                       Port/IP, Services-context based
Abstractions               Jobs                                                   Replication Controllers (RC)
Load balancer integration  External (consul-template or API)                      Integrated Services (basic load balancing) or API

Page 3: Linkerd – Service mesh with service Discovery backend

Service Discovery

Item                     Consul               etcd
Cluster protocol         Raft (Serf gossip)   Raft
Datacenter-aware         Yes                  No (Kubernetes Federation)
DNS auto-configuration   Yes                  No
Service agents           Yes                  No
KV store                 Yes                  Yes
Handlers and watches     Yes                  No
Events                   Yes                  No

Page 4: Linkerd – Service mesh with service Discovery backend

Benchmark I Result

The tests send 300,000 requests to key/value stores, one using JSON-RPC and the other gRPC. Both the JSON-RPC and gRPC clients use a single TCP connection; a third gRPC case uses one TCP connection shared by multiple clients:

Source: https://blog.gopheracademy.com/advent-2015/etcd-distributed-key-value-store-with-grpc-http2/

Page 5: Linkerd – Service mesh with service Discovery backend

Benchmark II Result

The output shows that Protocol Buffers outperforms JSON and XML in both marshaling and unmarshaling. The results are:

Protocol Buffers Marshal: 819 ns/op

Protocol Buffers Unmarshal: 1163 ns/op

JSON Marshal: 3316 ns/op

JSON Unmarshal: 7196 ns/op

XML Marshal: 9248 ns/op

XML Unmarshal: 30485 ns/op

Source: https://medium.com/@shijuvar/benchmarking-protocol-buffers-json-and-xml-in-go-57fa89b8525
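The ordering above can be reproduced in outline with any language's serializers. A minimal Python sketch, timing only the stdlib json and xml marshalers (Protocol Buffers is omitted because it requires generated message classes; the record below is a hypothetical stand-in, and absolute numbers will differ from those above):

```python
import json
import time
import xml.etree.ElementTree as ET

# Hypothetical record, standing in for the benchmark's test struct.
record = {"id": 42, "name": "hello", "tags": ["a", "b", "c"]}

def marshal_json(r):
    return json.dumps(r)

def marshal_xml(r):
    # Build a trivial XML document from the same fields.
    root = ET.Element("record", id=str(r["id"]), name=r["name"])
    for t in r["tags"]:
        ET.SubElement(root, "tag").text = t
    return ET.tostring(root)

def bench(fn, n=10_000):
    start = time.perf_counter()
    for _ in range(n):
        fn(record)
    return (time.perf_counter() - start) / n  # seconds per op

json_ns = bench(marshal_json) * 1e9
xml_ns = bench(marshal_xml) * 1e9
print(f"JSON marshal: {json_ns:.0f} ns/op, XML marshal: {xml_ns:.0f} ns/op")
```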

Page 6: Linkerd – Service mesh with service Discovery backend

Linkerd

• linkerd is a transparent proxy that adds service discovery, routing, failure handling, and visibility to modern software applications

• Integrates with service discovery backends

• Handles tens of thousands of requests per second per instance with minimal latency overhead. Scales horizontally with ease

• Provides dynamic, scoped, logical routing rules, enabling blue-green deployments, staging, canarying, failover, and more.

• Zipkin, Prometheus and statsd integration

• Multi-container orchestration supported

• Cloud Native Computing Foundation

• 918 commits, 30 contributors, 2,244 stars, 30 releases

• Very active Slack channel

Page 7: Linkerd – Service mesh with service Discovery backend

Linkerd – Integration I

Nomad Integration

Job spec:

env {
  NOMAD_HOST = "$HOSTNAME"
}

Kubernetes Integration

YAML spec:

env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: http_proxy
  value: $(NODE_NAME):4140

Page 8: Linkerd – Service mesh with service Discovery backend

Linkerd Integration II

Container

export http_proxy=$NOMAD_HOST:4140

Test:

For example, if we have a Consul service named hello, we can send a request with the HTTP header "Host: hello":

curl -sI -H 'Host: hello' http://

Or, if http_proxy is not defined, by addressing linkerd directly:

curl -sI -H 'Host: hello' http://$NOMAD_HOST:4140   # or http://$NODE_NAME:4140

Page 9: Linkerd – Service mesh with service Discovery backend

Linkerd – architecture I

Page 10: Linkerd – Service mesh with service Discovery backend

Linkerd – architecture II

1 – Applications in containers register with service discovery as services.

2 – Linkerd gets the services from service discovery.

3 – Applications communicate through linkerd via the http_proxy variable, or address it directly via the NODE_NAME variable.

4 – Containers must connect to the linkerd instance on their own host/hypervisor.

5 – Linkerd load-balances or forwards the connection to another linkerd.
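Step 3 can be sketched from the client side: most HTTP clients honor the http_proxy environment variable, so pointing it at the linkerd on the local node (port 4140, as in the specs earlier) reroutes plain-HTTP traffic with no application change. A small Python illustration (the node name is hypothetical; nothing is actually sent over the network):

```python
import os
import urllib.request

# Point HTTP clients at the linkerd instance on this node (hypothetical name).
os.environ["http_proxy"] = "http://node-1.example.com:4140"

# The stdlib (like curl and most client libraries) reads the proxy from the
# environment, so every http:// request would now pass through linkerd.
proxies = urllib.request.getproxies()
print(proxies["http"])  # http://node-1.example.com:4140
```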

Page 11: Linkerd – Service mesh with service Discovery backend

Namerd

Page 12: Linkerd – Service mesh with service Discovery backend

Dtab and Dentries

Dtabs, or delegation tables, are lists of routing rules (dentries) that rewrite a "logical path", much like URL rewriting.

Dtabs can (and often do) have more than one dentry. For example, we could list several stores:

/smitten => /USA/CA/SF/Octavia/432;
/iceCreamStore => /smitten;
/iceCreamStore => /humphrys;

When we try to resolve a path that matches more than one prefix, bottom dentries take precedence. So the path /iceCreamStore/try/allFlavors would resolve first as /humphrys/try/allFlavors. However, if the address for humphrys is unknown (as in this example), we fall back to /smitten/try/allFlavors, which ultimately resolves to /USA/CA/SF/Octavia/432/try/allFlavors.
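The precedence-and-fallback behavior described above can be sketched as a toy resolver. This is plain Python, not linkerd's actual algorithm; the KNOWN address encodes this example's assumption that only the street address is a concrete, resolvable name:

```python
# Dentries, listed top to bottom as in the dtab above.
dtab = [
    ("/smitten", "/USA/CA/SF/Octavia/432"),
    ("/iceCreamStore", "/smitten"),
    ("/iceCreamStore", "/humphrys"),
]

# Only this address is resolvable in the example (humphrys is unknown).
KNOWN = "/USA/CA/SF/Octavia/432"

def is_concrete(path):
    return path == KNOWN or path.startswith(KNOWN + "/")

def resolve(path):
    # Bottom dentries take precedence, so scan the table in reverse;
    # if a rewrite dead-ends, fall back to the next matching dentry.
    for prefix, target in reversed(dtab):
        if path == prefix or path.startswith(prefix + "/"):
            result = resolve(target + path[len(prefix):])
            if result is not None:
                return result
    return path if is_concrete(path) else None

print(resolve("/iceCreamStore/try/allFlavors"))
# /USA/CA/SF/Octavia/432/try/allFlavors
```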

Page 13: Linkerd – Service mesh with service Discovery backend

Namers – Service discovery

Consul/Nomad config:

namers:
- kind: io.l5d.consul
  host: [consul server]
  port: 8500
  includeTag: true
  useHealthCheck: true

Routing:

dtab: |
  /svc => /#/io.l5d.consul/dc1/prod;

k8s config:

namers:
- kind: io.l5d.k8s
  host: [k8s master ip]
  port: 8001
  labelSelector: version

Routing:

dtab: |
  /svc => /#/io.l5d.k8s/prod/http;

A namer binds a concrete name to a physical address; it is what gives linkerd access to the service discovery backend.

Page 14: Linkerd – Service mesh with service Discovery backend

Zipkin integration

Config:

telemetry:
- kind: io.l5d.zipkin
  host: [zipkin-host]
  port: 9410
  sampleRate: 1.0

Page 15: Linkerd – Service mesh with service Discovery backend

Thank you!

Questions?

More information:

LinkedIn:

https://www.linkedin.com/in/leandro-totino-pereira-06726227

Facebook:

https://www.facebook.com/leandro.totinopereira