
Bulkheads

Introduction

The cloud engineering world loves ocean and ship analogies: Kubernetes, Docker, containers, Helm, pods, Harbor, Spinnaker, werf, Shipwright, and so on. Resiliency engineering has claimed a word from that vocabulary as well: the bulkhead.

A bulkhead (a.k.a. bulwark) can be viewed as a virtual room with a fixed capacity: the amount of resources you allow to be used at the same time to process a given action. You can define a separate bulkhead for each piece of functionality in your microservice. That ensures one part of the functionality won't work at the expense of another.

There are different ways to implement bulkheads:

  • In multithreaded applications, it may take the form of a queue with a fixed-size worker pool
  • In single-threaded, event-loop-based applications (the Hyx case), it takes the form of concurrency limiting

Hence, the bulkhead is essentially a concurrency limiting mechanism. In turn, concurrency limiting can be seen as a form of rate limiting.

Use Cases

  • Limit the number of concurrent requests in one part of the microservice, so that it doesn't take resources away from other parts when load increases
  • Shed excessive loads off the microservice

Usage

As a decorator:

from fastapi import FastAPI

from hyx.bulkhead import bulkhead

# app is an instance of an async framework like FastAPI
app = FastAPI()


@app.post("/projects/")
@bulkhead(max_concurrency=10, max_capacity=100)
async def create_new_project():
    """
    Create a new project
    """
    ...
As a context manager:

from fastapi import FastAPI

from hyx.bulkhead import bulkhead

# app is an instance of an async framework like FastAPI
app = FastAPI()

limiter = bulkhead(max_concurrency=10, max_capacity=100)


@app.post("/projects/")
async def create_new_project():
    """
    Create a new project
    """
    async with limiter:
        ...
Reference

class hyx.bulkhead.bulkhead(max_concurrency, max_capacity, *, name=None, listeners=None, event_manager=None)

Bulkhead defines a fixed-size action "queue" that constrains how many times the operation executes at the same time, as well as how many requests are postponed (queued). A bulkhead can be seen as a throttling mechanism for parallel executions.

Parameters

  • max_concurrency (int) - Max executions at the same time. If this number is exceeded and max_capacity allows, the remaining executions are queued
  • max_capacity (int) - Overall max number of executions (concurrent plus queued). If this number is exceeded, new executions are rejected
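To make the interplay of the two parameters concrete, here is an illustrative re-implementation of the semantics (not Hyx's actual code): up to max_concurrency executions run, up to max_capacity in-flight executions are tolerated in total, and everything beyond that is rejected immediately. SimpleBulkhead and BulkheadFull are hypothetical names for this sketch:

```python
import asyncio

class BulkheadFull(Exception):
    """Raised when max_capacity would be exceeded (hypothetical error type)."""

class SimpleBulkhead:
    # Sketch of the max_concurrency / max_capacity semantics only.
    def __init__(self, max_concurrency: int, max_capacity: int) -> None:
        self._sem = asyncio.Semaphore(max_concurrency)
        self._capacity = max_capacity
        self._in_flight = 0  # running + queued executions

    async def __aenter__(self) -> "SimpleBulkhead":
        if self._in_flight >= self._capacity:
            raise BulkheadFull  # reject instead of queueing further
        self._in_flight += 1
        await self._sem.acquire()  # wait here until a concurrency slot frees up
        return self

    async def __aexit__(self, exc_type, exc, tb) -> None:
        self._sem.release()
        self._in_flight -= 1

async def main() -> tuple[int, int]:
    bh = SimpleBulkhead(max_concurrency=1, max_capacity=2)
    done = rejected = 0

    async def call() -> None:
        nonlocal done, rejected
        try:
            async with bh:
                await asyncio.sleep(0.01)  # simulate the protected operation
            done += 1
        except BulkheadFull:
            rejected += 1

    # Five simultaneous calls: one runs, one queues, three are rejected.
    await asyncio.gather(*(call() for _ in range(5)))
    return done, rejected

done, rejected = asyncio.run(main())
```

With max_concurrency=1 and max_capacity=2, the first call runs, the second waits for the slot, and the remaining three fail fast, so done ends up at 2 and rejected at 3.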

Adaptive Limiting

Concurrency limits can also be adjusted adaptively, based on statistics about the latency of completed requests and a latency objective.

Note

Hyx doesn't provide an ARC implementation at the moment. Let us know if this would be useful for you.
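For intuition only, here is a toy AIMD-style (additive-increase, multiplicative-decrease) controller for a concurrency limit. It shrinks the limit when observed latency exceeds the objective and grows it slowly otherwise; the function name, constants, and sample latencies are all illustrative, and this is not part of Hyx:

```python
LATENCY_OBJECTIVE = 0.100  # target latency, seconds (illustrative)

def adjust_limit(limit: float, observed_latency: float,
                 objective: float = LATENCY_OBJECTIVE) -> float:
    if observed_latency > objective:
        # Multiplicative decrease: back off quickly under latency pressure
        return max(1.0, limit * 0.9)
    # Additive increase: probe for more capacity slowly when healthy
    return limit + 0.1

limit = 10.0
for latency in (0.05, 0.08, 0.20, 0.25, 0.06):
    limit = adjust_limit(limit, latency)
```

The fractional limit would be rounded down when actually capping concurrent executions; keeping it as a float lets the controller accumulate small additive increases.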