Agenda
● Docker Networking
● Kube Basics
● Application Topology
● Networking in & out of Kube
● Q&A
Docker Networking

[Diagram: three hosts on 192.168.1.0/24, 192.168.2.0/24, and 192.168.3.0/24. Host 1 runs mysql (172.16.1.2) and tomcat (172.16.1.1); host 2 runs tomcat02 (172.16.1.1); host 3 runs nginx (172.16.1.1). Each host's containers sit on that host's own private bridge, so container IPs overlap across hosts.]
Docker Networking

[Diagram: the same three hosts, with NAT marked on each host boundary. Cross-host container traffic must pass through NAT, because the private container IPs (e.g. 172.16.1.1 on every host) are not routable between machines.]
Kubernetes
● Cluster / Node
● Name & Namespaces
● Pods
● Labels & Selectors
● Replication Controllers
● Services
● Volumes
Cluster

[Diagram: a user (via a UI or API client) talks to the API Server on the master; the Scheduler also runs on the master; each node runs a kubelet.]
Pod

[Diagram sequence: the user submits a spec (replica: 2, name: nginx, cpu: 1, memory: 2gb) through the API to the API Server; the Scheduler picks nodes; the kubelets on those nodes start the pods; the API reports Success.]
Concept :: Pod

● Small collection of containers
● Run together on the same machine
  – Share resources
  – Share fate
● Assigned an IP
● Share a network namespace
  – IP address
  – localhost

[Diagram: a pod containing tomcat and mysql containers, created through the API]
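The two-container pod in the diagram could be expressed as a single manifest. A minimal sketch, assuming image names, tags, and the mysql environment that the slide does not give:

```yaml
# Hypothetical pod bundling tomcat and mysql; the pod name, mysql image tag,
# and password are assumptions, not from the slides.
apiVersion: v1
kind: Pod
metadata:
  name: tc-db
spec:
  containers:
  - name: tomcat
    image: dockerfile/tomcat      # image reused from the RC example later in this deck
    ports:
    - containerPort: 8080
  - name: mysql
    image: mysql:5.6              # assumed image and tag
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: changeme             # placeholder, not from the slides
```

Because both containers share the pod's network namespace, tomcat can reach mysql at localhost:3306 with no port mapping.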
Networking :: Pod

● Pods can reach each other without NAT
  – Even across machines
● Pod IPs are routable
● Each pod is assigned an IP
● Pods can egress traffic
  – If the firewall allows
● No brokering of port numbers
  – Never deal with port mapping
Networking

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):

● all containers can communicate with all other containers without NAT
● all nodes can communicate with all containers (and vice versa) without NAT
● the IP that a container sees itself as is the same IP that others see it as
Networking : RC
$ cat tcrc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-tc
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: dockerfile/tomcat
        ports:
        - containerPort: 8080
Application Topology : RC
$ kubectl create -f ./tcrc.yaml

$ kubectl get pods -l app=tomcat -o wide
my-tc-6wsf4   1/1   Running   0   2h   e2e-test-node-92mo
my-tc-tr6zt   1/1   Running   0   2h   e2e-test-node-92mo
my-tc-mz1ap   1/1   Running   0   2h   e2e-test-node-92mo

Check your pods' IPs:

$ kubectl get pods -l app=tomcat -o json | grep podIP
"podIP": "10.240.1.1",
"podIP": "10.240.1.2",
"podIP": "10.240.1.3",

10.240.1.1:8080   10.240.1.2:8080   10.240.1.3:8080
Networking :: Service

● Pods are ephemeral
  – They follow the pod lifecycle
● A Service is a group of pods acting as one
  – Sits behind a load balancer
● Gets a stable virtual IP (VIP)
● And ports
Networking : Service

$ cat tcsvc.yaml
apiVersion: v1
kind: Service
metadata:
  name: tcsvc
  labels:
    app: tomcat
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: tomcat
$ kubectl get svc
NAME    LABELS       SELECTOR     IP(S)          PORT(S)
tcsvc   app=tomcat   app=tomcat   10.0.116.146   8080/TCP
Application Topology : Service
$ kubectl describe svc tcsvc
Name:              tcsvc
Namespace:         default
Labels:            app=tomcat
Selector:          app=tomcat
Type:              ClusterIP
IP:                10.0.116.146
Port:              <unnamed> 8080/TCP
Endpoints:         10.240.1.1:8080,10.240.1.2:8080,10.240.1.3:8080
Session Affinity:  None
No events.
$ kubectl get ep
NAME    ENDPOINTS
tcsvc   10.240.1.1:8080,10.240.1.2:8080,10.240.1.3:8080
$ curl 10.0.116.146:8080
........
Networking :: Service
[Diagram: a client hits the service VIP 10.0.116.146:8080; kube-proxy, watching the api-server, forwards the connection to one of the pod endpoints 10.240.1.1:8080, 10.240.1.2:8080, or 10.240.1.3:8080.]
Networking :: Service

[Diagram: the same flow, showing the mechanism: kube-proxy programs iptables DNAT rules from what it learns via the api-server, so TCP/UDP traffic to the VIP 10.0.116.146:8080 is rewritten in the kernel to one of the pod endpoints 10.240.1.1:8080, 10.240.1.2:8080, 10.240.1.3:8080.]
Networking : DNS
$ kubectl get services kube-dns --namespace=kube-system
NAME       LABELS   SELECTOR           IP(S)       PORT(S)
kube-dns   <none>   k8s-app=kube-dns   10.0.0.10   53/UDP 53/TCP
$ cat curlpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: curlpod
spec:
  containers:
  - image: radial/busyboxplus:curl
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: curlcontainer
  restartPolicy: Always
Networking : DNS
And perform a lookup of the tcsvc Service:
$ kubectl create -f ./curlpod.yaml
default/curlpod
$ kubectl get pods curlpod
NAME      READY   STATUS    RESTARTS   AGE
curlpod   1/1     Running   0          18s
$ kubectl exec curlpod -- nslookup tcsvc
Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      tcsvc
Address 1: 10.0.116.146
Types of Service

● Headless Service
  – Sometimes you don't need or want load balancing and a single service IP. In this case, you can create a "headless" service by specifying "None" for the cluster IP (spec.clusterIP).
  – Developers handle discovery in their own way.
● External Service
  – For some parts of your application (e.g. frontends) you may want to expose a Service on an external IP address (outside of your cluster, maybe the public internet).
  – Kubernetes supports two ways of doing this: NodePorts and LoadBalancers.
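A headless variant of the earlier tcsvc Service might look like this sketch. The only substantive change from a normal Service is spec.clusterIP: None; the name here is hypothetical and the rest is assumed to match the tcsvc example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcsvc-headless    # hypothetical name
spec:
  clusterIP: None         # "headless": no VIP, no kube-proxy load balancing
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: tomcat
```

A DNS lookup of a headless Service returns the pod IPs directly instead of a single virtual IP, so clients do their own discovery and balancing.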
Exposing the Service
<<<<<<<<<<<<<<<<<<<<< Type NodePort >>>>>>>>>>>>>>>>>>>>>>
$ kubectl get svc tcsvc -o json | grep -i nodeport -C 5
    {
        "name": "http-alt",
        "protocol": "TCP",
        "port": 8080,
        "targetPort": 8080,
        "nodePort": 32188
    }
$ kubectl get nodes -o json | grep ExternalIP
    {
        "type": "ExternalIP",
        "address": "104.197.63.17"
    }
$ curl http://104.197.63.17:32188
...
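To get a nodePort like the one inspected above, the Service is created with type: NodePort. A minimal sketch based on the tcsvc example (setting nodePort explicitly is optional; if omitted, Kubernetes picks a free port from the cluster's node-port range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcsvc
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 32188    # optional; must fall within the cluster's node-port range
    protocol: TCP
  selector:
    app: tomcat
```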
Exposing the Service
<<<<<<<<<<<<<<<<<<<<< Type LoadBalancer >>>>>>>>>>>>>>>>>>>>>>

$ kubectl delete rc,svc -l app=tomcat
$ kubectl create -f ./tc-app.yaml
$ kubectl get svc -o json | grep -i ingress -A 5
    "ingress": [
        {
            "ip": "104.197.68.43"
        }
    ]
$ curl http://104.197.68.43:8080
...
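The contents of tc-app.yaml are not shown in the deck; its Service section presumably resembles this sketch with type: LoadBalancer, which asks the cloud provider to provision an external load balancer and report its address under status.loadBalancer.ingress:

```yaml
# Hedged reconstruction of the Service part of tc-app.yaml; everything except
# type: LoadBalancer is assumed to match the earlier tcsvc example.
apiVersion: v1
kind: Service
metadata:
  name: tcsvc
  labels:
    app: tomcat
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: tomcat
```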
Additional Resources to tap into (DockYard)

● Manage Images
● Dashboard
● Manage Containers

Apache-licensed open source: https://github.com/bluemeric/dockyard
Additional Resources to tap into (#DevOpsFortnight)

● #DevOpsFortnight from Bluemeric: video demos / training / webinars / industry interviews on DevOps, for free
  • Chef
  • Puppet
  • CI/CD
  • Docker
  • Kube
  • OpenStack
  • SDN
  • Etc...

https://www.youtube.com/channel/UCPUxGV9QCjJUWgSRH5ei5mQ
Additional Resources to tap into (#gopaddlemeetup)

Bangalore, 1st week of September (to be announced).

• Use cases of Docker & Kube
• Industry perspective of DevOps
• goPaddle (demos, hands-on, use cases)
Thanks
Bluemeric Technologies Pvt Ltd
#187, Pearl Wood, AECS Layout, A Block, Bangalore - 560037, India
phone: +91-8
email: [email protected]
web: http://bluemeric.com
twitter: @bluemeric