KubeCon EU 2016: Kubernetes meets Finagle for Resilient Microservices


Kubernetes meets Finagle for resilient microservices
Oliver Gould, CTO, Buoyant

KubeCon EU 2016

Oliver Gould • CTO @ Buoyant
open-source microservice infrastructure

• previously, tech lead @ Twitter: observability, traffic

• core contributor: finagle

• creator: linkerd

• loves: kubernetes, dogs

@olix0r ver@buoyant.io

overview

1. why microservices?

2. finagle: the once and future layer 5

3. resilient rpc

4. introducing linkerd

5. demo

6. questions! answers?

why microservices?

scaling teams

growing software

performance • correctness • debugging • monitoring
security • efficiency • resilience

Resilience is an imperative: our software runs on the truly dismal computers we call datacenters. Besides being heinously complex… they are unreliable and prone to operator error.

Marius Eriksen @marius, RPC Redux

resilience in microservices

software you didn’t write

hardware you can’t touch

network you can’t configure

break in new and surprising ways

and your customers shouldn’t notice

resilient microservices require resilient communication

datacenter

• [1] physical, [2] link: aws, azure, digitalocean, gce, …
• [3] network, [4] transport: kubernetes, calico, …
• [5] session: rpc (http/2, mux, …)
• [6] presentation: json, protobuf, thrift, …
• [7] application: your code (languages, libraries)

layer 5 deals in requests

finagle: THE ONCE AND FUTURE LAYER 5

github.com/twitter/finagle

RPC library (JVM)

asynchronous

built on Netty

scala

functional

strongly typed

first commit: Oct 2010

used by…

programming finagle

// proxy requests on 8080 to the users service
// with a timeout of 1 second
import com.twitter.conversions.time._
import com.twitter.finagle.{Http, Service}
import com.twitter.finagle.http.{Request, Response, Status}
import com.twitter.finagle.util.DefaultTimer
import com.twitter.util.{Timer, TimeoutException}

implicit val timer: Timer = DefaultTimer.twitter // for within()

val users = Http.newService("/s/users")

Http.serve(":8080", Service.mk[Request, Response] { req =>
  users(req).within(1.second).handle {
    case _: TimeoutException => Response(Status.BadGateway)
  }
})
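Timeouts and fallbacks compose as above; retries compose the same way. A minimal sketch, not from the talk, layering Finagle's RetryExceptionsFilter over the users client (retry policy illustrative):

import com.twitter.finagle.http.{Request, Response}
import com.twitter.finagle.service.{RetryExceptionsFilter, RetryPolicy}
import com.twitter.finagle.util.DefaultTimer

// retry a failed call up to 3 times before giving up;
// assumes the `users` client defined above
val retries = new RetryExceptionsFilter[Request, Response](
  RetryPolicy.tries(3), DefaultTimer.twitter)

val usersWithRetries = retries.andThen(users)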

operating finagle (configuration sketch after this list)

service discovery

circuit breaking

backpressure

timeouts

retries

tracing

metrics

keep-alive

multiplexing

load balancing

per-request routing

service-level objectives
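Most of this list is configuration, not application code. A minimal sketch, assuming Finagle's with-style client configuration API (not shown in the talk):

import com.twitter.conversions.time._
import com.twitter.finagle.Http
import com.twitter.finagle.stats.DefaultStatsReceiver

// metrics, timeouts, and service discovery configured on the client
val users = Http.client
  .withLabel("users")                       // names stats and traces
  .withStatsReceiver(DefaultStatsReceiver)  // metrics export
  .withRequestTimeout(1.second)             // per-request timeout
  .newService("/s/users")                   // resolved by service discovery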

real-world motivations for resilient rpc

“It’s slow” is the hardest problem you’ll ever debug.

Jeff Hodges @jmhodges, Notes on Distributed Systems for Young Bloods

the more components you deploy, the more problems you have

😩

l5: load balance requests

lb algorithms (sketch after this list):
• round-robin
• fewest connections
• queue depth
• exponentially-weighted moving average (ewma)
• aperture
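These balancers are pluggable per client in Finagle. A minimal sketch, assuming the Balancers factory API (not shown in the talk):

import com.twitter.finagle.Http
import com.twitter.finagle.loadbalancer.Balancers

// choose latency-aware ewma balancing over the default;
// Balancers also provides p2c(), heap(), and aperture()
val users = Http.client
  .withLoadBalancer(Balancers.p2cPeakEwma())
  .newService("/s/users")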

where are we spending time?

who’s talking?

😎

layer 5 routing

• application configured against a logical name: /s/users
• requests are bound to concrete names: /k8s/prod/http/users
• delegations express routing by rewriting:
  /s => /k8s/prod/http
  /s/l5d-docs => /$/inet/linkerd.io/443

per-request routing

GET / HTTP/1.1
Host: mysite.com
Dtab-local: /s/users => /s/users-v2

GET / HTTP/1.1
Host: mysite.com
Dtab-local: /s/slorbs => /s/debugproxy/s/slorbs
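The same override can be set programmatically. A minimal sketch with finagle-http (the Dtab-local header is from the slides; the request itself is illustrative):

import com.twitter.finagle.http.Request

// route this one request's calls to users-v2
val req = Request("/")
req.headerMap.set("Host", "mysite.com")
req.headerMap.set("Dtab-local", "/s/users => /s/users-v2")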

so all i have to do is rewrite my app in scala?

github.com/buoyantio/linkerd

microservice rpc proxy

layer-5 router

aka l5d

built on finagle

pluggable

kubernetes

consul

zookeeper

make layer 5 great again

transport layer security

service discovery

backpressure

timeouts

retries

stats

tracing

routing

multiplexing

load balancing

circuit breaking

service-level objectives

l5d sidecar

[diagram: books and authors services in pod A and pod B, each fronted by an l5d sidecar]

[diagram: inside each pod, l5d runs an incoming router and an outgoing router; services are resolved through the io.l5d.k8s namer]

l5d.yaml

namers:
- kind: io.l5d.experimental.k8s
  authTokenFile: …/serviceaccount/token

routers:
- protocol: http
  label: incoming
  servers:
  - port: 8080
    ip: 0.0.0.0
  baseDtab: |
    /http/1.1 => /$/inet/127.1/8888;

- protocol: http
  label: outgoing
  servers:
  - port: 4140
  baseDtab: |
    /srv => /io.l5d.k8s/default/http;
    /method => /$/io.buoyant.http.anyMethodPfx/srv;
    /http/1.1 => /method;
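Reading the outgoing dtab at work: a hedged sketch of how a GET to a service named books might resolve, step by step (illustrative, not from the talk):

/http/1.1/GET/books              the router's name for the request
/method/GET/books                via /http/1.1 => /method
/srv/books                       via the anyMethodPfx namer, which strips the method
/io.l5d.k8s/default/http/books   via /srv => /io.l5d.k8s/default/http

from there the io.l5d.k8s namer resolves books to pod addresses in the default namespace.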

svc.yaml.sh

kind: Service
apiVersion: v1
metadata:
  namespace: default
  name: $SERVICENAME
spec:
  selector:
    app: $SERVICENAME
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
    targetPort: 8080
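The $SERVICENAME variables suggest svc.yaml.sh is a shell template rendered once per service, presumably along the lines of (an assumption, not shown in the talk):

SERVICENAME=books ./svc.yaml.sh | kubectl apply -f -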

linkerd roadmap

• use k8s 3rdparty for routing state (kubernetes#18835)
• DaemonSets deployments?
• tighter grpc support (netty#3667)
• cluster-wide routing control
• service-level objectives
• application-level circuit breaking
• more configurable everything

traffic control with linkerd: DEMO

[diagram, built up over several slides: a web service calls books and authors, each fronted by an l5d sidecar; a books-v2 instance, also behind l5d, is added alongside books; the final build adds the label "helium"]
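The shift to books-v2 is expressed as a dtab rewrite, just like the per-request overrides earlier; a hedged sketch (names illustrative):

Dtab-local: /srv/books => /srv/books-v2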

tracing
control ui
play!

<demo video />

more at linkerd.io

slack: slack.linkerd.io

email: ver@buoyant.io

twitter:

• @olix0r

• @linkerd

thanks!