Install the operator

The operator installs version 3.1.0 of Apache Kafka, and can run on Minikube v0.33.1+ and Kubernetes 1.20.0+.

The operator supports Kafka 2.6.2-3.1.x.


The ZooKeeper and Kafka clusters need persistent volumes (PVs) to store data. Therefore, when installing the operator on Amazon EKS with Kubernetes version 1.23 or later, you must install the EBS CSI driver add-on on your cluster.
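For example, once the EBS CSI driver add-on is installed, a StorageClass like the following can back the persistent volume claims. This is an illustrative sketch: the StorageClass name and the gp3 volume type are example choices, not values required by the operator.

```yaml
# Illustrative StorageClass backed by the EBS CSI driver.
# The name and the gp3 volume type are example choices.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
```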


To install the operator, you need:

  • A Kubernetes cluster (minimum 6 vCPU and 10 GB RAM).

Following the separation-of-concerns principle, Koperator does not install or manage Apache ZooKeeper or cert-manager. If you would like a fully automated and managed Apache Kafka experience on Kubernetes, try Cisco Streaming Data Manager.

Install Koperator and the requirements independently

Install cert-manager

Koperator uses cert-manager for issuing certificates to clients and brokers. Deploy and configure cert-manager if you haven’t already done so.


Supported cert-manager versions:

  • Koperator 0.18.1 and newer supports cert-manager 1.5.3-1.6.x
  • Koperator 0.8.x-0.17.0 supports cert-manager 1.3.x

Install cert-manager and the CustomResourceDefinitions using one of the following methods:

  • Directly:

    # Install the CustomResourceDefinitions and cert-manager itself
    kubectl create -f
  • Using Helm:

    # Add the jetstack helm repo
    helm repo add jetstack
    helm repo update
    # Install the CustomResourceDefinitions
    kubectl apply --validate=false -f
    # Install cert-manager into the cluster
    # Using helm3
    helm install cert-manager --namespace cert-manager --create-namespace --version v1.6.2 jetstack/cert-manager

Verify that the cert-manager pods have been created:

kubectl get pods -n cert-manager

Expected output:

NAME                                      READY   STATUS    RESTARTS   AGE
cert-manager-7747db9d88-vgggn             1/1     Running   0          29m
cert-manager-cainjector-87c85c6ff-q945h   1/1     Running   1          29m
cert-manager-webhook-64dc9fff44-2p6tx     1/1     Running   0          29m

Install ZooKeeper

Kafka requires Apache ZooKeeper. Deploy a ZooKeeper cluster if you don’t already have one.

Note: We recommend creating a separate ZooKeeper deployment for each Kafka cluster. If you want to share the same ZooKeeper cluster across multiple Kafka cluster instances, use a unique ZooKeeper path (zkPath) in each KafkaCluster CR to avoid conflicts (even with previous, defunct KafkaCluster instances).
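For example, a KafkaCluster sharing a ZooKeeper ensemble with other clusters could point at its own path. This is an illustrative fragment: the service address and the path value are example choices.

```yaml
# Fragment of a KafkaCluster spec; the address and path are examples.
spec:
  zkAddresses:
    - "zookeeper-client.zookeeper:2181"
  zkPath: "/kafka-cluster-1"   # use a distinct path per KafkaCluster
```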

  1. Install ZooKeeper using Pravega's ZooKeeper Operator.

    helm repo add pravega
    helm repo update
    helm install zookeeper-operator --namespace=zookeeper --create-namespace pravega/zookeeper-operator
  2. Create a ZooKeeper cluster.

    kubectl create --namespace zookeeper -f - <<EOF
    apiVersion: zookeeper.pravega.io/v1beta1
    kind: ZookeeperCluster
    metadata:
      name: zookeeper
      namespace: zookeeper
    spec:
      replicas: 1
    EOF
  3. Verify that ZooKeeper has been deployed.

    kubectl get pods -n zookeeper

    Expected output:

    NAME                                  READY   STATUS    RESTARTS   AGE
    zookeeper-0                           1/1     Running   0          27m
    zookeeper-operator-54444dbd9d-2tccj   1/1     Running   0          28m

Install Prometheus-operator

Install the Prometheus operator and its CustomResourceDefinitions into the default namespace.

  • Directly:

    kubectl create -n default -f
  • Using Helm:

    Add the prometheus-community repository to Helm:

    helm repo add prometheus-community
    helm repo update

    Install only the Prometheus-operator:

    helm install prometheus --namespace default prometheus-community/kube-prometheus-stack \
    --set prometheusOperator.createCustomResource=true \
    --set defaultRules.enabled=false \
    --set alertmanager.enabled=false \
    --set grafana.enabled=false \
    --set kubeApiServer.enabled=false \
    --set kubelet.enabled=false \
    --set kubeControllerManager.enabled=false \
    --set coreDNS.enabled=false \
    --set kubeEtcd.enabled=false \
    --set kubeScheduler.enabled=false \
    --set kubeProxy.enabled=false \
    --set kubeStateMetrics.enabled=false \
    --set nodeExporter.enabled=false \
    --set prometheus.enabled=false
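The Prometheus operator watches for ServiceMonitor custom resources, which tell Prometheus which services to scrape (step 5 of the Koperator installation below creates such resources). A minimal sketch of a ServiceMonitor follows; the selector labels and port name are illustrative, not the ones shipped with Koperator.

```yaml
# Minimal ServiceMonitor sketch; the selector labels and port name
# are illustrative, not the ones shipped with Koperator.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kafka-metrics
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  endpoints:
    - port: metrics
      interval: 30s
```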

Install Koperator with Helm

You can deploy Koperator using a Helm chart. Complete the following steps.

  1. Install the Koperator CustomResourceDefinition resources (adjust the version number to the Koperator release you want to install). This is performed in a separate step to allow you to uninstall and reinstall Koperator without deleting your installed custom resources.

    kubectl create --validate=false -f
  2. Add the following repository to Helm.

    helm repo add banzaicloud-stable
    helm repo update
  3. Install Koperator into the kafka namespace:

    helm install kafka-operator --namespace=kafka --create-namespace banzaicloud-stable/kafka-operator
  4. Create the Kafka cluster using the KafkaCluster custom resource. You can find various examples for the custom resource in the Koperator repository.


    Note: After the cluster is created, you cannot change the listener configuration without an outage. If a cluster was created with unencrypted (plain-text) listeners and you want to switch to SSL-encrypted listeners (or the other way around), you must manually delete each broker pod. The operator then restarts the pods with the new listener configuration.
    • To create a sample Kafka cluster that allows unencrypted client connections, run the following command:

      kubectl create -n kafka -f
    • To create a sample Kafka cluster that allows TLS-encrypted client connections, run the following command. For details on the configuration parameters related to SSL, see Enable SSL encryption in Apache Kafka.

      kubectl create -n kafka -f
  5. If you have installed the Prometheus operator, create the ServiceMonitors. Prometheus will then be installed and configured properly for Koperator.

    kubectl create -n kafka -f
  6. Verify that the Kafka cluster has been created.

    kubectl get pods -n kafka

    Expected output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    kafka-0-nvx8c                             1/1     Running   0          16m
    kafka-1-swps9                             1/1     Running   0          15m
    kafka-2-lppzr                             1/1     Running   0          15m
    kafka-cruisecontrol-fb659b84b-7cwpn       1/1     Running   0          15m
    kafka-operator-operator-8bb75c7fb-7w4lh   2/2     Running   0          17m
    prometheus-kafka-prometheus-0             2/2     Running   1          16m
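As the warning in step 4 notes, the listener configuration is fixed once the cluster is created, so it is worth reviewing before you apply the KafkaCluster CR. The following fragment sketches where listeners are declared in the spec; the port numbers and listener names are illustrative.

```yaml
# Listener fragment of a KafkaCluster spec; port numbers and
# listener names are illustrative.
spec:
  listenersConfig:
    internalListeners:
      - type: "plaintext"    # switching to "ssl" later requires
        name: "internal"     # manually deleting each broker pod
        containerPort: 29092
        usedForInnerBrokerCommunication: true
    externalListeners:
      - type: "plaintext"
        name: "external"
        containerPort: 9094
        externalStartingPort: 19090
```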

Test your deployment