Cloud Native Python

The road to being a first-class Kubernetes application

Floris Bruynooghe

flub@devork.be

@flubdevork | mastodon.social/@flub

Contents

  • K8s background
  • Echo server example
  • Execution environment
  • Logging
  • Container Lifecycle
  • Monitoring

Kubernetes Basics

Echo Server

import zmq

ctx = zmq.Context()
poller = zmq.Poller()

def main():
    echo_sock = create_and_bind(zmq.ROUTER, 'tcp://*:1234')
    events = dict(poller.poll())
    while events:
        if echo_sock in events:
            echo_evt(echo_sock)
        events = dict(poller.poll())

Echo Server

def create_and_bind(socktype, endpoint):
    sock = ctx.socket(socktype)
    sock.bind(endpoint)
    poller.register(sock, zmq.POLLIN)
    return sock

def echo_evt(sock):
    peer, *msg = sock.recv_multipart()
    print('Message: {!r}'.format(msg))
    sock.send_multipart([peer] + msg)
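To see the ROUTER/DEALER round trip behind echo_evt() in action, here is a self-contained loopback sketch: the ROUTER side plays the server, the DEALER side plays a client. The endpoint name `inproc://echo-demo` is made up for this example.

```python
import zmq

ctx = zmq.Context()
server = ctx.socket(zmq.ROUTER)
server.bind('inproc://echo-demo')       # inproc: bind before connect
client = ctx.socket(zmq.DEALER)
client.connect('inproc://echo-demo')

client.send_multipart([b'hello'])
peer, *msg = server.recv_multipart()    # ROUTER prepends the peer identity
server.send_multipart([peer] + msg)     # echo back, as echo_evt() does
reply = client.recv_multipart()         # identity stripped again: [b'hello']
```

The ROUTER socket's automatic identity framing is what lets one server socket serve many clients without any connection bookkeeping.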

Execution Environment

Skip boilerplate

Simple architecture

Make Errors Fatal

apiVersion: apps/v1
kind: Deployment
metadata: {name: echo}
spec:
  replicas: 3
  selector:
    matchLabels: {app: echo}
  template:
    metadata:
      labels: {app: echo}
    spec:
      containers:
        - name: echo
          image: echo:1.0.0
          ports:
            - containerPort: 1234
      restartPolicy: Always

Scale via process model

No Concurrency

Service is your Load Balancer

apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo
  ports:
    - protocol: TCP
      port: 1234

Logging

  • Default: stdout
  • Ops will aggregate
  • Configurable

spec:
  containers:
    - image: ...
      args:
        - --loglevel=DEBUG
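The --loglevel flag above is an assumption about the container's entry point; with the stdlib it could be wired up roughly like this (flag name mirrors the deployment snippet, everything else is illustrative):

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument('--loglevel', default='INFO',
                    choices=['DEBUG', 'INFO', 'WARNING', 'ERROR'])
args = parser.parse_args(['--loglevel=DEBUG'])   # normally parse_args()
# force=True resets any prior handlers so the level always applies
logging.basicConfig(level=getattr(logging, args.loglevel), force=True)
```

Changing the args in the pod spec and letting the Deployment roll the pods is then all Ops needs to turn debug logging on.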

Use Libraries

import sys, logbook

def echo_evt(sock):
    peer, *msg = sock.recv_multipart()
    logbook.info('Message: {!r}', msg)
    sock.send_multipart([peer] + msg)

if __name__ == '__main__':
    handler = logbook.StreamHandler(sys.stdout)
    with handler.applicationbound():
        try:
            main()
        except Exception as err:
            logbook.exception()
            sys.exit(1)
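logbook is one choice; the same stdout-only setup works with the stdlib logging module. A minimal sketch, with log_request() standing in for the logbook.info() call in echo_evt():

```python
import logging
import sys

# One handler, plain records to stdout, so Kubernetes can collect them.
logging.basicConfig(stream=sys.stdout, level=logging.INFO, force=True)
log = logging.getLogger('echo')

def log_request(msg):
    log.info('Message: %r', msg)

log_request([b'hello'])
```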

Health Endpoints: Readiness

Refuse no connections

readinessProbe

containers:
  - name: echo
    ...
    readinessProbe:
      tcpSocket:
        port: 1234
  - name: foobar
    ...
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080

Health Endpoint: Liveness

Restart a stuck pod instead of queueing traffic on it

livenessProbe

containers:
  - name: echo
    ...
    livenessProbe:
      exec:
        command:
        - /usr/bin/python3
        - /opt/liveness.py
  - name: foobar
    ...
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080

livenessProbe

# /opt/liveness.py in container
import sys
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.DEALER)
sock.connect('tcp://localhost:1234')
sock.send_multipart([b'ping'])
evt = sock.poll(500)
if not evt & zmq.POLLIN:
    sys.exit(1)
sys.exit(0)

Termination

Finish your queued requests

Handle SIGTERM

import signal

def sighandler(signo, frame):
    sock = ctx.socket(zmq.DEALER)
    sock.connect('inproc://term')
    sock.send(b'x')

signal.signal(signal.SIGTERM, sighandler)
signal.signal(signal.SIGINT, sighandler)
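For readers without ZeroMQ, the same wake-the-loop trick can be sketched with a stdlib socketpair (all names here are illustrative): the handler writes a byte to one end, the main loop polls the other end alongside its server socket.

```python
import signal
import socket

wakeup_w, wakeup_r = socket.socketpair()
wakeup_w.setblocking(False)

def sighandler(signo, frame):
    wakeup_w.send(b'x')                   # wake the poll loop

signal.signal(signal.SIGTERM, sighandler)
signal.raise_signal(signal.SIGTERM)       # simulate Kubernetes sending SIGTERM
data = wakeup_r.recv(1)                   # b'x'
```

The stdlib's signal.set_wakeup_fd() implements this same idea if you would rather not roll it by hand.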

Handle SIGTERM

def main():
    echo_sock = create_and_bind(zmq.ROUTER, 'tcp://*:1234')
    term_sock = create_and_bind(zmq.DEALER, 'inproc://term')  # <- New
    timeout = None                                  # <- New
    events = dict(poller.poll(timeout))             # <- Changed
    while events:
        if echo_sock in events:
            echo_evt(echo_sock)
        if term_sock in events:                        # <- New
            echo_sock.unbind(echo_sock.LAST_ENDPOINT)  # <- New
            timeout = 5000                             # <- New
        events = dict(poller.poll(timeout))         # <- Changed

Monitoring

Prometheus / HTTP-poll based

import prometheus_client as prom

reqs = prom.Counter('echo_request_total', 'Number of requests')

def main():
    prom.start_http_server(8000)
    ...

def echo_evt(sock):
    peer, *msg = sock.recv_multipart()
    logbook.info('Message: {!r}', msg)
    reqs.inc()
    sock.send_multipart([peer] + msg)
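What Prometheus scrapes from port 8000 is plain text. A hand-rolled sketch of the exposition lines for the counter above (prometheus_client generates this for you; render_counter() is made up for illustration):

```python
def render_counter(name, help_text, value):
    # Minimal Prometheus text exposition format for a single counter.
    return (f'# HELP {name} {help_text}\n'
            f'# TYPE {name} counter\n'
            f'{name} {value}\n')

print(render_counter('echo_request_total', 'Number of requests', 3.0))
```

Keeping metrics as a simple HTTP-polled text page is what lets Prometheus scrape every replica the Service knows about without any push infrastructure.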

Recap

  • Adopt gradually
  • Keep architecture simple
  • Avoid losing requests
  • Instrument and monitor

Thanks! Questions?

http://devork.be/talks/cnpy.html

flub@devork.be

@flubdevork | mastodon.social/@flub