Client applications outside the Istio mesh

This scenario covers using Kafka ACLs when your client applications are outside the Istio mesh, but in the same Kubernetes cluster as your Kafka cluster. In this scenario, the client applications must present a client certificate to authenticate themselves.

Using Kafka ACLs when your client applications are outside the Istio mesh

Prerequisites

To use Kafka ACLs with Istio mTLS, you need:

  • capability to provision LoadBalancer Kubernetes services
  • a Kafka cluster

Calisti resource requirements

Make sure that your Kubernetes or OpenShift cluster has sufficient resources to install Calisti. The following table shows the resources required on the cluster:

Resource  Required
CPU       - 32 vCPU in total
          - 4 vCPU available for allocation per worker node (if you are testing on a cluster at a cloud provider, use nodes that have at least 4 CPUs, for example, c5.xlarge on AWS)
Memory    - 64 GiB in total
          - 4 GiB available for allocation per worker node for a Kubernetes cluster (8 GiB per worker node for an OpenShift cluster)
Storage   - 12 GB of ephemeral storage on the Kubernetes worker nodes (for Traces and Metrics)

This documentation describes two ways to sign certificates for the clients:

Use the CSR operator

  1. Enable ACLs and configure an external listener using Streaming Data Manager. Complete the following steps.

    1. Verify that your deployed Kafka cluster is up and running:

      smm sdm cluster get --namespace <namespace-of-your-cluster> --kafka-cluster <name-of-your-kafka-cluster> --kubeconfig <path-to-kubeconfig-file>
      

      Expected output:

      Namespace  Name   State           Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
      kafka      kafka  ClusterRunning  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
      
    2. Enable ACLs and configure an external listener. The deployed Kafka cluster has no ACLs, and external access is disabled by default. Enable them by applying the following changes:

      smm sdm cluster update --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file> -f - <<EOF
      apiVersion: kafka.banzaicloud.io/v1beta1
      kind: KafkaCluster
      spec:
        ingressController: "istioingress"
        istioIngressConfig:
          gatewayConfig:
            mode: PASSTHROUGH
        readOnlyConfig: |
          auto.create.topics.enable=false
          offsets.topic.replication.factor=2
          authorizer.class.name=kafka.security.authorizer.AclAuthorizer
          allow.everyone.if.no.acl.found=false
        listenersConfig:
          externalListeners:
          - type: "plaintext"
            name: "external"
            externalStartingPort: 19090
            containerPort: 9094
      EOF
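
      The istioingress controller exposes the external listener through a LoadBalancer service in the kafka namespace. To spot the new LoadBalancer service once the update is applied, a generic check is enough (the exact service name depends on your installation, so this is only a sketch):

      kubectl get services --namespace kafka --kubeconfig <path-to-kubeconfig-file>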
      
    3. The update in the previous step triggers a rolling upgrade of the Kafka cluster. Verify that this is reflected in the state of the cluster:

      smm sdm cluster get --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file>
      

      Expected output:

      Namespace  Name   State                    Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
      kafka      kafka  ClusterRollingUpgrading  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
      
    4. Wait until the reconfiguration is finished and the cluster is in the ClusterRunning state. This can take a while, as the rolling upgrade applies changes on a broker-by-broker basis. See the sketch below for a way to poll the state.
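
      To poll the state without rerunning the smm command, you can read it from the KafkaCluster custom resource directly; a minimal sketch, assuming the kafkaclusters CRD resource name used by Streaming Data Manager:

      kubectl get kafkacluster kafka --namespace kafka --kubeconfig <path-to-kubeconfig-file> -o jsonpath='{.status.state}'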

  2. Create a Kafka user that the client application will use to identify itself. You can create the user manually using KafkaUser custom resources, or using the Streaming Data Manager web interface. Grant this user access to the topics it needs; a topicGrants sketch follows the examples below.

    kubectl create -f - <<EOF
    apiVersion: kafka.banzaicloud.io/v1alpha1
    kind: KafkaUser
    metadata:
      name: external-kafkauser
      namespace: default
    spec:
      clusterRef:
        name: kafka
        namespace: kafka
      secretName: external-kafkauser-secret
      pkiBackendSpec:
        pkiBackend: "k8s-csr"
        signerName: "csr.banzaicloud.io/privateca"
    EOF
    

    Note: By default, the certificate created for the Kafka user with the CSR operator is valid for 86400 seconds (1 day). To generate a certificate with a different validity period, add the "csr.banzaicloud.io/certificate-lifetime" annotation to the spec of the KafkaUser CR. For example, the following CR creates a certificate valid for 604800 seconds (7 days) for the associated Kafka user:

    kubectl create -f - <<EOF
    apiVersion: kafka.banzaicloud.io/v1alpha1
    kind: KafkaUser
    metadata:
      name: external-kafkauser
      namespace: default
    spec:
      annotations:
        csr.banzaicloud.io/certificate-lifetime: "604800"
      clusterRef:
        name: kafka
        namespace: kafka
      secretName: external-kafkauser-secret
      pkiBackendSpec:
        pkiBackend: "k8s-csr"
        signerName: "csr.banzaicloud.io/privateca"
    EOF
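
    To grant topic access in the same custom resource, the KafkaUser spec also accepts a topicGrants list. The following is a minimal sketch; the topic name example-topic is an assumption, and you should verify the field names against the KafkaUser CRD version installed on your cluster:

    kubectl create -f - <<EOF
    apiVersion: kafka.banzaicloud.io/v1alpha1
    kind: KafkaUser
    metadata:
      name: external-kafkauser
      namespace: default
    spec:
      clusterRef:
        name: kafka
        namespace: kafka
      secretName: external-kafkauser-secret
      pkiBackendSpec:
        pkiBackend: "k8s-csr"
        signerName: "csr.banzaicloud.io/privateca"
      topicGrants:
      - topicName: example-topic   # assumed topic name
        accessType: read
      - topicName: example-topic
        accessType: write
    EOF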
    
  3. (Optional) Deploy your client application and test that the configuration is working properly. The following steps use the kcat application as a sample client application.
    1. Deploy the kcat client application into the default namespace, which is outside the Istio mesh.

      kubectl create -f - <<EOF
      apiVersion: v1
      kind: Pod
      metadata:
        name: external-kafka-client
        namespace: default
      spec:
        containers:
        - name: external-kafka-client
          image: edenhill/kcat:1.7.0
          # Just spin & wait forever
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do sleep 3000; done;" ]
          volumeMounts:
          - name: sslcerts
            mountPath: "/ssl/certs"
        volumes:
        - name: sslcerts
          secret:
            secretName: external-kafkauser-secret
      EOF
      
    2. From a shell inside the external-kafka-client pod, list the topics and provide the certificate that represents the previously created external-kafkauser Kafka user. (Without the certificate, Istio automatically rejects the client application.)

      kcat -L -b kafka-all-broker.kafka:29092 \
        -X security.protocol=SSL \
        -X ssl.key.location=/ssl/certs/tls.key \
        -X ssl.certificate.location=/ssl/certs/tls.crt \
        -X ssl.ca.location=/ssl/certs/ca.crt
      

      Expected output:

      Metadata for all topics (from broker -1: ssl://kafka-all-broker.kafka:29092/bootstrap):
       2 brokers:
        broker 0 at kafka-0.kafka.svc.cluster.local:29092 (controller)
        broker 1 at kafka-1.kafka.svc.cluster.local:29092
       1 topics:
        topic "example-topic" with 3 partitions:
          partition 0, leader 0, replicas: 0,1, isrs: 0,1
          partition 1, leader 1, replicas: 1,0, isrs: 0,1
          partition 2, leader 0, replicas: 0,1, isrs: 0,1

      The client application should be able to connect to the Kafka broker and access the topics you have granted it access to.
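
      To run the same check from your workstation instead of a shell in the pod, a one-liner along these lines should work (a sketch, assuming the kcat binary is on the PATH of the edenhill/kcat image):

      kubectl exec -it external-kafka-client --namespace default -- \
        kcat -L -b kafka-all-broker.kafka:29092 \
          -X security.protocol=SSL \
          -X ssl.key.location=/ssl/certs/tls.key \
          -X ssl.certificate.location=/ssl/certs/tls.crt \
          -X ssl.ca.location=/ssl/certs/ca.crt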

Use cert-manager

  1. Enable ACLs and configure an external listener using Streaming Data Manager. Complete the following steps.

    1. Verify that your deployed Kafka cluster is up and running:

      smm sdm cluster get --namespace <namespace-of-your-cluster> --kafka-cluster <name-of-your-kafka-cluster> --kubeconfig <path-to-kubeconfig-file>
      

      Expected output:

      Namespace  Name   State           Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
      kafka      kafka  ClusterRunning  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
      
    2. Enable ACLs and configure an external listener. The deployed Kafka cluster has no ACLs, and external access is disabled by default. Enable them by applying the following changes:

      smm sdm cluster update --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file> -f - <<EOF
      apiVersion: kafka.banzaicloud.io/v1beta1
      kind: KafkaCluster
      spec:
        ingressController: "istioingress"
        istioIngressConfig:
          gatewayConfig:
            mode: PASSTHROUGH
        readOnlyConfig: |
          auto.create.topics.enable=false
          offsets.topic.replication.factor=2
          authorizer.class.name=kafka.security.authorizer.AclAuthorizer
          allow.everyone.if.no.acl.found=false
        listenersConfig:
          externalListeners:
          - type: "plaintext"
            name: "external"
            externalStartingPort: 19090
            containerPort: 9094
      EOF
      
    3. The update in the previous step triggers a rolling upgrade of the Kafka cluster. Verify that this is reflected in the state of the cluster:

      smm sdm cluster get --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file>
      

      Expected output:

      Namespace  Name   State                    Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
      kafka      kafka  ClusterRollingUpgrading  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
      
    4. Wait until the reconfiguration is finished and the cluster is in the ClusterRunning state. This can take a while, as the rolling upgrade applies changes on a broker-by-broker basis.

  2. Install cert-manager.

    1. Install cert-manager on the cluster. The cert-manager application will issue the client certificates for the client applications. If you already have cert-manager installed and configured on the cluster, skip this step.

      kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.11.0/cert-manager.yaml
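
      To verify that cert-manager is up before continuing, check that its pods (cert-manager, cainjector, and webhook) are running:

      kubectl get pods --namespace cert-manager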
      
    2. Specify a cluster issuer for cert-manager that has the same CA or root certificate as the Istio mesh; otherwise, the application’s client certificate won’t be valid for the mTLS enforced by Istio.

      Note: Streaming Data Manager uses the CSR operator as an external CA to provide certificates to Istio.

      1. Create a new secret from the CA certificate used by Istio in a format that works for cert-manager.

        kubectl create -f - <<EOF
        apiVersion: v1
        kind: Secret
        metadata:
          name: ca-key-pair
          namespace: cert-manager
        data:
          tls.crt: $(kubectl --namespace csr-operator-system get secret csr-operator-cacerts -o 'jsonpath={.data.ca_crt\.pem}')
          tls.key: $(kubectl --namespace csr-operator-system get secret csr-operator-cacerts -o 'jsonpath={.data.ca_key\.pem}')
        EOF
        
      2. Use the secret to create a ClusterIssuer (a cluster-scoped Kubernetes resource that represents a CA able to generate signed certificates).

        kubectl create -f - <<EOF
        apiVersion: cert-manager.io/v1
        kind: ClusterIssuer
        metadata:
          name: ca-issuer
        spec:
          ca:
            secretName: ca-key-pair
        EOF
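
        Before creating Kafka users with it, confirm that the issuer is ready:

        kubectl get clusterissuer ca-issuer

        The READY column should show True.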
        

  3. Create a Kafka user that the client application will use to identify itself. You can create the user manually using KafkaUser custom resources, or using the Streaming Data Manager web interface. Grant this user access to the topics it needs.

    kubectl create -f - <<EOF
    apiVersion: kafka.banzaicloud.io/v1alpha1
    kind: KafkaUser
    metadata:
      name: external-kafkauser
      namespace: default
    spec:
      clusterRef:
        name: kafka
        namespace: kafka
      secretName: external-kafkauser-secret
      pkiBackendSpec:
        pkiBackend: "cert-manager"
        issuerRef:
          name: "ca-issuer"
          kind: "ClusterIssuer"
    EOF
    

    Note: The certificate created for the Kafka user with cert-manager is valid for 90 days.
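
    To inspect the validity window of the issued certificate, you can decode it from the user secret; a sketch using openssl:

    kubectl get secret external-kafkauser-secret --namespace default \
      -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -dates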

  4. (Optional) Deploy your client application and test that the configuration is working properly. The following steps use the kcat application as a sample client application.
    1. Deploy the kcat client application into the default namespace, which is outside the Istio mesh.

      kubectl create -f - <<EOF
      apiVersion: v1
      kind: Pod
      metadata:
        name: external-kafka-client
        namespace: default
      spec:
        containers:
        - name: external-kafka-client
          image: edenhill/kcat:1.7.0
          # Just spin & wait forever
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do sleep 3000; done;" ]
          volumeMounts:
          - name: sslcerts
            mountPath: "/ssl/certs"
        volumes:
        - name: sslcerts
          secret:
            secretName: external-kafkauser-secret
      EOF
      
    2. From a shell inside the external-kafka-client pod, list the topics and provide the certificate that represents the previously created external-kafkauser Kafka user. (Without the certificate, Istio automatically rejects the client application.)

      kcat -L -b kafka-all-broker.kafka:29092 \
        -X security.protocol=SSL \
        -X ssl.key.location=/ssl/certs/tls.key \
        -X ssl.certificate.location=/ssl/certs/tls.crt \
        -X ssl.ca.location=/ssl/certs/ca.crt
      

      Expected output:

      Metadata for all topics (from broker -1: ssl://kafka-all-broker.kafka:29092/bootstrap):
       2 brokers:
        broker 0 at kafka-0.kafka.svc.cluster.local:29092 (controller)
        broker 1 at kafka-1.kafka.svc.cluster.local:29092
       1 topics:
        topic "example-topic" with 3 partitions:
          partition 0, leader 0, replicas: 0,1, isrs: 0,1
          partition 1, leader 1, replicas: 1,0, isrs: 0,1
          partition 2, leader 0, replicas: 0,1, isrs: 0,1

      The client application should be able to connect to the Kafka broker and access the topics you have granted it access to.
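
      Beyond listing metadata, you can round-trip a message to confirm that the ACLs allow both producing and consuming. A sketch, assuming the user has been granted read and write access to a topic named example-topic:

      # Produce a single message
      echo "test message" | kcat -P -b kafka-all-broker.kafka:29092 -t example-topic \
        -X security.protocol=SSL \
        -X ssl.key.location=/ssl/certs/tls.key \
        -X ssl.certificate.location=/ssl/certs/tls.crt \
        -X ssl.ca.location=/ssl/certs/ca.crt

      # Consume one message, then exit
      kcat -C -b kafka-all-broker.kafka:29092 -t example-topic -c 1 -e \
        -X security.protocol=SSL \
        -X ssl.key.location=/ssl/certs/tls.key \
        -X ssl.certificate.location=/ssl/certs/tls.crt \
        -X ssl.ca.location=/ssl/certs/ca.crt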