2021-08-31

7 approaches to accelerating Apache Kafka on K8s

In this white paper, we will review seven techniques that can help to reduce latency in high volume, low criticality Kafka solutions running on Kubernetes.

When planning to run a low latency, high volume Apache Kafka solution on MicroK8s, Charmed Kubernetes or another K8s distribution, it is worth spending some time on deployment design upfront. Depending on your workload, a number of design approaches may prove beneficial.

Kubernetes offers many benefits to complex, distributed, microservices-driven applications built around middleware such as Apache Kafka: improved solution availability, better resource utilization, server consolidation, and deployments that are more robust, repeatable and predictable.

However, concerns that need to be addressed include service infrastructure planning, resource scheduling constraints, stateful service requirements, performance tuning and resilience. Depending on the requirements, the use case and the criticality of the data, differing design approaches may be needed.

In this white paper, we will examine aspects of Kafka on Kubernetes service design through the lens of performance, including the following areas, several of which are illustrated in the configuration sketch after this list:

  • Kafka memory considerations
  • Broker IO demands and storage planning
  • Topic replication
  • Retention policies
  • Message sizes
  • Tuning parameters
  • Topic partitioning
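
To make a few of these levers concrete, here is a minimal sketch that uses Kafka's Java AdminClient to create a topic tuned for latency over durability. The bootstrap address, topic name, partition count, retention window and message size cap are illustrative assumptions rather than recommendations from the white paper.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class LowLatencyTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Hypothetical in-cluster service address; replace with your Kafka endpoint on K8s.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "kafka.default.svc.cluster.local:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Example trade-offs for low criticality, latency-sensitive data:
            // more partitions for parallelism, a single replica, short retention
            // and a modest maximum message size.
            NewTopic topic = new NewTopic("telemetry-events", 12, (short) 1)
                    .configs(Map.of(
                            "retention.ms", "3600000",     // keep messages for one hour only
                            "max.message.bytes", "65536",  // cap individual messages at 64 KiB
                            "min.insync.replicas", "1"));  // do not wait on additional replicas

            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```

With a single replica and a one-hour retention window, data can be lost if a broker fails, which is why this style of tuning is only appropriate for low criticality data.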