Client applications outside the Istio mesh

This scenario covers using Kafka ACLs when your client applications are outside the Istio mesh, but in the same Kubernetes cluster as your Kafka cluster. In this scenario, the client applications must present a client certificate to authenticate themselves.

Using Kafka ACLs when your client applications are outside the Istio mesh

Prerequisites

To use Kafka ACLs with Istio mTLS, you need:

  • A Kubernetes cluster (version 1.19 or later) with:
    • at least 12 vCPU and 12 GB of memory, and
    • the capability to provision LoadBalancer Kubernetes services.
  • A Kafka cluster.
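
To quickly check the cluster-side requirements, you can run a few standard kubectl commands. The following snippet is only a convenience sketch; the exact capacity you need depends on what else runs on the cluster.

  # Check the Kubernetes server version and the allocatable capacity of the nodes.
  kubectl version
  kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory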

This documentation shows two ways to sign certificates for the clients:

Use the CSR operator

  1. Enable ACLs and configure an external listener using Streaming Data Manager. Complete the following steps.

    1. Verify that your deployed Kafka cluster is up and running:

      smm sdm cluster get --namespace <namespace-of-your-cluster> --kafka-cluster <name-of-your-kafka-cluster> --kubeconfig <path-to-kubeconfig-file>
      

      Expected output:

      Namespace  Name   State           Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
      kafka      kafka  ClusterRunning  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
      
    2. Enable ACLs and configure an external listener. The deployed Kafka cluster has no ACLs, and external access is disabled by default. Enable them by applying the following changes:

      smm sdm cluster update --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file> -f -<<EOF
      apiVersion: kafka.banzaicloud.io/v1beta1
      kind: KafkaCluster
      spec:
        ingressController: "istioingress"
        istioIngressConfig:
          gatewayConfig:
            mode: PASSTHROUGH
        readOnlyConfig: |
          auto.create.topics.enable=false
          offsets.topic.replication.factor=2
          authorizer.class.name=kafka.security.authorizer.AclAuthorizer
          allow.everyone.if.no.acl.found=false
        listenersConfig:
          externalListeners:
            - type: "plaintext"
              name: "external"
              externalStartingPort: 19090
              containerPort: 9094
      EOF
      
    3. The update in the previous step triggers a rolling upgrade of the Kafka cluster. Verify that this is reflected in the state of the cluster:

      smm sdm cluster get --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file>
      

      Expected output:

      Namespace  Name   State                    Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
      kafka      kafka  ClusterRollingUpgrading  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
      
    4. Wait until the reconfiguration is finished and the cluster is in the ClusterRunning state. This can take a while, as the rolling upgrade applies changes on a broker-by-broker basis.
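
      Instead of re-running the command manually, you can poll the state with a simple shell loop. This is just a convenience sketch built from the smm sdm cluster get command shown above, not a dedicated wait feature of the CLI.

      # Re-check the cluster state every 30 seconds until it reports ClusterRunning again.
      until smm sdm cluster get --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file> | grep -q ClusterRunning; do
        echo "cluster is still reconfiguring..."
        sleep 30
      done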

  2. Create a Kafka user that the client application will use to identify itself. You can create the user manually using KafkaUser custom resources, or through the Streaming Data Manager web interface. Grant this user access to the topics it needs (see the topicGrants sketch after the following example).

    kubectl create -f - <<EOF
    apiVersion: kafka.banzaicloud.io/v1alpha1
    kind: KafkaUser
    metadata:
      name: external-kafkauser
      namespace: default
    spec:
      clusterRef:
        name: kafka
        namespace: kafka
      secretName: external-kafkauser-secret
      pkiBackendSpec:
        pkiBackend: "k8s-csr"
        signerName: "csr.banzaicloud.io/privateca"
    EOF
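
    The KafkaUser above only creates the user; it does not grant any topic access yet. You can grant access on the Streaming Data Manager web interface, or declaratively on the KafkaUser resource. The following sketch uses the topicGrants field of the KafkaUser custom resource and assumes a topic named example-topic; adjust the topic names and access types to your own topics.

    kubectl apply -f - <<EOF
    apiVersion: kafka.banzaicloud.io/v1alpha1
    kind: KafkaUser
    metadata:
      name: external-kafkauser
      namespace: default
    spec:
      clusterRef:
        name: kafka
        namespace: kafka
      secretName: external-kafkauser-secret
      pkiBackendSpec:
        pkiBackend: "k8s-csr"
        signerName: "csr.banzaicloud.io/privateca"
      # Grant read and write access to example-topic (assumed topic name).
      topicGrants:
        - topicName: example-topic
          accessType: read
        - topicName: example-topic
          accessType: write
    EOF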
    
  3. (Optional) Deploy your client application and test that the configuration is working properly. The following steps use the kcat application as a sample client application.
    1. Deploy the kcat client application into the default namespace, which is outside the Istio mesh.

      kubectl create -f - <<EOF
      apiVersion: v1
      kind: Pod
      metadata:
        name: external-kafka-client
        namespace: default
      spec:
        containers:
        - name: external-kafka-client
          image: edenhill/kcat:1.7.0
          # Just spin & wait forever
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do sleep 3000; done;" ]
          volumeMounts:
          - name: sslcerts
            mountPath: "/ssl/certs"
        volumes:
        - name: sslcerts
          secret:
            secretName: external-kafkauser-secret
      EOF
      
    2. List the topics with your client application from inside the external-kafka-client pod (for example, using kubectl exec), and provide the certificate that represents the previously created external-kafkauser Kafka user. (Otherwise, Istio automatically rejects the client application.)

      kcat -L -b kafka-all-broker.kafka:29092 -X security.protocol=SSL \
        -X ssl.key.location=/ssl/certs/tls.key \
        -X ssl.certificate.location=/ssl/certs/tls.crt \
        -X ssl.ca.location=/ssl/certs/chain.pem
      

      Expected output:

      Metadata for all topics (from broker -1: ssl://kafka-all-broker.kafka:29092/bootstrap):
       2 brokers:
        broker 0 at kafka-0.kafka.svc.cluster.local:29092 (controller)
        broker 1 at kafka-1.kafka.svc.cluster.local:29092
       1 topics:
        topic "example-topic" with 3 partitions:
          partition 0, leader 0, replicas: 0,1, isrs: 0,1
          partition 1, leader 1, replicas: 1,0, isrs: 0,1
          partition 2, leader 0, replicas: 0,1, isrs: 0,1

      The client application should be able to connect to the Kafka broker and access the topics you have granted it access to.
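
      To verify access beyond listing metadata, you can also produce and consume a message from inside the same pod, reusing the SSL options above. This is a hedged example that assumes the topic example-topic exists and that the external-kafkauser user has both read and write access to it.

      # Produce a test message to example-topic.
      echo "hello from outside the mesh" | kcat -P -b kafka-all-broker.kafka:29092 -t example-topic \
        -X security.protocol=SSL -X ssl.key.location=/ssl/certs/tls.key \
        -X ssl.certificate.location=/ssl/certs/tls.crt -X ssl.ca.location=/ssl/certs/chain.pem

      # Consume it back and exit at the end of the partition (-e).
      kcat -C -b kafka-all-broker.kafka:29092 -t example-topic -e \
        -X security.protocol=SSL -X ssl.key.location=/ssl/certs/tls.key \
        -X ssl.certificate.location=/ssl/certs/tls.crt -X ssl.ca.location=/ssl/certs/chain.pem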

Use cert-manager

  1. Enable ACLs and configure an external listener using Streaming Data Manager. Complete the following steps.

    1. Verify that your deployed Kafka cluster is up and running:

      smm sdm cluster get --namespace <namespace-of-your-cluster> --kafka-cluster <name-of-your-kafka-cluster> --kubeconfig <path-to-kubeconfig-file>
      

      Expected output:

      Namespace  Name   State           Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
      kafka      kafka  ClusterRunning  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
      
    2. Enable ACLs and configure an external listener. The deployed Kafka cluster has no ACLs, and external access is disabled by default. Enable them by applying the following changes:

      smm sdm cluster update --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file> -f -<<EOF
      apiVersion: kafka.banzaicloud.io/v1beta1
      kind: KafkaCluster
      spec:
        ingressController: "istioingress"
        istioIngressConfig:
          gatewayConfig:
            mode: PASSTHROUGH
        readOnlyConfig: |
          auto.create.topics.enable=false
          offsets.topic.replication.factor=2
          authorizer.class.name=kafka.security.authorizer.AclAuthorizer
          allow.everyone.if.no.acl.found=false
        listenersConfig:
          externalListeners:
            - type: "plaintext"
              name: "external"
              externalStartingPort: 19090
              containerPort: 9094
      EOF
      
    3. The update in the previous step triggers a rolling upgrade of the Kafka cluster. Verify that this is reflected in the state of the cluster:

      smm sdm cluster get --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file>
      

      Expected output:

      Namespace  Name   State                    Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
      kafka      kafka  ClusterRollingUpgrading  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
      
    4. Wait until the reconfiguration is finished and the cluster is in the ClusterRunning state. This can take a while, as the rolling upgrade applies changes on a broker-by-broker basis.
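
      You can also follow the rolling upgrade at the pod level. This sketch assumes the default labels the Kafka operator puts on broker pods (app=kafka, kafka_cr=<cluster-name>); adjust them if your deployment uses different labels.

      # Watch the broker pods restart one by one during the rolling upgrade.
      kubectl get pods --namespace kafka -l app=kafka,kafka_cr=kafka --watch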

  2. Install cert-manager.

    1. Install cert-manager on the cluster. The cert-manager application will issue the client certificates for the client applications. If you already have cert-manager installed and configured on the cluster, skip this step.

      kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
      
    2. Specify a cluster issuer for cert-manager that has the same CA or root certificate as the Istio mesh; otherwise, the application’s client certificate won’t be valid for the mTLS enforced by Istio.

      1. Get the CA certificate used by Istio:

        kubectl get secrets -n istio-system istio-ca-secret -o yaml
        

        This secret uses different field names than cert-manager expects, so you cannot reference it from a cert-manager issuer directly.

      2. Create a new secret from its contents in the format cert-manager expects, then create a ClusterIssuer that references it.

        kubectl create -f - <<EOF
        apiVersion: v1
        kind: Secret
        metadata:
          name: ca-key-pair
          namespace: cert-manager
        data:
          tls.crt: <tls-crt-from-istio-ca-secret>
          tls.key: <tls-key-from-istio-ca-secret>
        EOF
        
        kubectl create -f - <<EOF
        apiVersion: cert-manager.io/v1
        kind: ClusterIssuer
        metadata:
          name: ca-issuer
        spec:
          ca:
            secretName: ca-key-pair
        EOF
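
        If you prefer not to copy the base64 values by hand, you can build the ca-key-pair secret directly from the Istio CA secret. This sketch assumes the default self-signed istiod CA, which typically stores the CA certificate and key under the ca-cert.pem and ca-key.pem keys of istio-ca-secret; adjust the key names if your mesh uses a plugged-in CA.

        # Extract the CA certificate and key from the Istio CA secret...
        kubectl get secret -n istio-system istio-ca-secret -o jsonpath='{.data.ca-cert\.pem}' | base64 -d > ca.crt
        kubectl get secret -n istio-system istio-ca-secret -o jsonpath='{.data.ca-key\.pem}' | base64 -d > ca.key
        # ...and create the secret in the layout cert-manager expects (tls.crt/tls.key fields).
        kubectl create secret tls ca-key-pair --cert=ca.crt --key=ca.key --namespace cert-manager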
        

  3. Create a Kafka user that the client application will use to identify itself. You can create the user manually using KafkaUser custom resources, or using the Streaming Data Manager web interface. Grant this user access to the topics it needs.

    kubectl create -f - <<EOF
    apiVersion: kafka.banzaicloud.io/v1alpha1
    kind: KafkaUser
    metadata:
      name: external-kafkauser
      namespace: default
    spec:
      clusterRef:
        name: kafka
        namespace: kafka
      secretName: external-kafkauser-secret
      pkiBackendSpec:
        pkiBackend: "cert-manager"
        issuerRef:
          name: "ca-issuer"
          kind: "ClusterIssuer"
    EOF
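
    Before deploying the client, you can check that cert-manager has issued the client certificate and stored it in the referenced secret. This is an optional verification sketch; it assumes the secret contains the tls.crt field that the kcat example below mounts from /ssl/certs.

    # The secret should exist and contain the client certificate and key.
    kubectl get secret external-kafkauser-secret -n default
    # Inspect the subject and issuer of the issued certificate.
    kubectl get secret external-kafkauser-secret -n default -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer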
    
  4. (Optional) Deploy your client application and test that the configuration is working properly. The following steps use the kcat application as a sample client application.
    1. Deploy the kcat client application into the default namespace, which is outside the Istio mesh.

      kubectl create -f - <<EOF
      apiVersion: v1
      kind: Pod
      metadata:
        name: external-kafka-client
        namespace: default
      spec:
        containers:
        - name: external-kafka-client
          image: edenhill/kcat:1.7.0
          # Just spin & wait forever
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do sleep 3000; done;" ]
          volumeMounts:
          - name: sslcerts
            mountPath: "/ssl/certs"
        volumes:
        - name: sslcerts
          secret:
            secretName: external-kafkauser-secret
      EOF
      
    2. List the topics with your client application from inside the external-kafka-client pod (for example, using kubectl exec), and provide the certificate that represents the previously created external-kafkauser Kafka user. (Otherwise, Istio automatically rejects the client application.)

      kcat -L -b kafka-all-broker.kafka:29092 -X security.protocol=SSL \
        -X ssl.key.location=/ssl/certs/tls.key \
        -X ssl.certificate.location=/ssl/certs/tls.crt \
        -X ssl.ca.location=/ssl/certs/chain.pem
      

      Expected output:

      Metadata for all topics (from broker -1: ssl://kafka-all-broker.kafka:29092/bootstrap):
       2 brokers:
        broker 0 at kafka-0.kafka.svc.cluster.local:29092 (controller)
        broker 1 at kafka-1.kafka.svc.cluster.local:29092
       1 topics:
        topic "example-topic" with 3 partitions:
          partition 0, leader 0, replicas: 0,1, isrs: 0,1
          partition 1, leader 1, replicas: 1,0, isrs: 0,1
          partition 2, leader 0, replicas: 0,1, isrs: 0,1

      The client application should be able to connect to the Kafka broker and access the topics you have granted it access to.
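
      To confirm that the ACLs are actually enforced, you can also try to access a topic that the user has no grant for; the request should be rejected with an authorization error (for example, a TOPIC_AUTHORIZATION_FAILED-style failure). The topic name below is a placeholder.

      # This consume attempt is expected to fail, because external-kafkauser has no grant for this topic.
      kcat -C -b kafka-all-broker.kafka:29092 -t <topic-without-grants> -e \
        -X security.protocol=SSL -X ssl.key.location=/ssl/certs/tls.key \
        -X ssl.certificate.location=/ssl/certs/tls.crt -X ssl.ca.location=/ssl/certs/chain.pem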