Client applications outside the Kubernetes cluster
This scenario covers using Kafka ACLs when your client applications are outside the Kubernetes cluster. As a result, the client applications must connect to the endpoint bound to the Kafka cluster’s external listener. In this scenario, the client applications must present a client certificate to authenticate themselves.
Prerequisites
To use Kafka ACLs with Istio mTLS, you need:
- capability to provision LoadBalancer Kubernetes services
- a Kafka cluster
Calisti resource requirements
Make sure that your Kubernetes or OpenShift cluster has sufficient resources to install Calisti. The following table shows the number of resources needed on the cluster:
| Resource | Required |
|---|---|
| CPU | 32 vCPU in total; 4 vCPU available for allocation per worker node. (If you are testing on a cluster at a cloud provider, use nodes that have at least 4 CPUs, for example, c5.xlarge on AWS.) |
| Memory | 64 GiB in total; 4 GiB available for allocation per worker node for the Kubernetes cluster (8 GiB in case of the OpenShift cluster) |
| Storage | 12 GB of ephemeral storage on the Kubernetes worker nodes (for Traces and Metrics) |
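To check whether a cluster meets the totals above, you can sum the allocatable resources the nodes report. The following Python sketch parses `kubectl get nodes -o json` output; the node values below are made-up sample data, so in practice feed it the live output:

```python
import json

# Made-up sample of `kubectl get nodes -o json` output; in practice,
# replace this with the live output of the command.
sample = json.loads("""
{"items": [
  {"status": {"allocatable": {"cpu": "4", "memory": "16384Mi"}}},
  {"status": {"allocatable": {"cpu": "4", "memory": "16384Mi"}}}
]}
""")

def cpu_to_vcpu(value: str) -> float:
    # Kubernetes reports CPU as whole cores ("4") or millicores ("3920m").
    return int(value[:-1]) / 1000 if value.endswith("m") else int(value)

def mem_to_gib(value: str) -> float:
    # Handle the common Ki/Mi/Gi suffixes; fall back to plain bytes.
    for suffix, factor in (("Ki", 1 / (1024 * 1024)), ("Mi", 1 / 1024), ("Gi", 1)):
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * factor
    return int(value) / (1024 ** 3)

total_cpu = sum(cpu_to_vcpu(n["status"]["allocatable"]["cpu"]) for n in sample["items"])
total_mem = sum(mem_to_gib(n["status"]["allocatable"]["memory"]) for n in sample["items"])
print(f"{total_cpu} vCPU, {total_mem:.0f} GiB allocatable")
```

Compare the totals against the table; note that the per-node minimums (4 vCPU, 4 GiB) must also hold for each worker node individually.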
This documentation covers two ways to sign certificates for the clients:
Use our CSR operator
- Enable ACLs and configure an external listener using Streaming Data Manager. Complete the following steps.
  - Verify that your deployed Kafka cluster is up and running:

        smm sdm cluster get --namespace <namespace-of-your-cluster> --kafka-cluster <name-of-your-kafka-cluster> --kubeconfig <path-to-kubeconfig-file>

    Expected output:

        Namespace  Name   State           Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
        kafka      kafka  ClusterRunning  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
  - Enable ACLs and configure an external listener. The deployed Kafka cluster has no ACLs, and external access is disabled by default. Enable them by applying the following changes:

        smm sdm cluster update --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file> -f - <<EOF
        apiVersion: kafka.banzaicloud.io/v1beta1
        kind: KafkaCluster
        spec:
          ingressController: "istioingress"
          istioIngressConfig:
            gatewayConfig:
              mode: PASSTHROUGH
          readOnlyConfig: |
            auto.create.topics.enable=false
            offsets.topic.replication.factor=2
            authorizer.class.name=kafka.security.authorizer.AclAuthorizer
            allow.everyone.if.no.acl.found=false
          listenersConfig:
            externalListeners:
              - type: "plaintext"
                name: "external"
                externalStartingPort: 19090
                containerPort: 9094
        EOF
  - The update in the previous step reconfigures the Kafka cluster, and the change is applied through a rolling upgrade. Verify that this is reflected in the state of the cluster:

        smm sdm cluster get --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file>

    Expected output:

        Namespace  Name   State                    Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
        kafka      kafka  ClusterRollingUpgrading  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
  - Wait until the reconfiguration is finished and the cluster is in the ClusterRunning state. This can take a while, as the rolling upgrade applies changes on a broker-by-broker basis.
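When scripting this wait, the state can be read from the command's tabular output. A minimal sketch follows; the column layout is assumed from the sample outputs above, and `cluster_state`/`wait_for_running` are hypothetical helpers, not part of the smm CLI:

```python
import subprocess
import time

def cluster_state(output: str) -> str:
    # State is the third whitespace-separated column in the sample output above.
    lines = [l for l in output.strip().splitlines() if l.strip()]
    return lines[1].split()[2]

def wait_for_running(cmd: list, poll_seconds: int = 30) -> None:
    # Poll `smm sdm cluster get ...` until the cluster reports ClusterRunning.
    while True:
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        if cluster_state(out) == "ClusterRunning":
            return
        time.sleep(poll_seconds)

# Demonstrate the parser on a trimmed copy of the expected output:
sample = (
    "Namespace  Name   State           Image\n"
    "kafka      kafka  ClusterRunning  banzaicloud/kafka:2.13-2.5.0-bzc.1\n"
)
print(cluster_state(sample))  # ClusterRunning
```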
- Get the endpoint bound to the external listener of the Kafka cluster:

        kubectl get service -n kafka meshgateway-external-kafka

    Example output:

        NAME                         TYPE           CLUSTER-IP     EXTERNAL-IP                                                                PORT(S)
        meshgateway-external-kafka   LoadBalancer   10.10.44.209   aff4c6887766440238fb19c381779eae-1599690198.eu-north-1.elb.amazonaws.com   19090:32480/TCP,19091:30972/TCP,29092:30748/TCP
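Client configuration needs the EXTERNAL-IP hostname and the listener ports from this output. A sketch of pulling them out; the column positions are assumed from the example above, and the hostname in the sample data is shortened for readability:

```python
def external_endpoint(output: str):
    # The second line holds the service; columns as in the example output above.
    name, svc_type, cluster_ip, external_ip, ports = (
        output.strip().splitlines()[1].split()[:5]
    )
    # "19090:32480/TCP" -> "19090" (the listener port exposed by the gateway)
    listener_ports = [p.split(":")[0] for p in ports.split(",")]
    return external_ip, listener_ports

# Shortened sample of the `kubectl get service` output:
sample = (
    "NAME                        TYPE          CLUSTER-IP    PLACEHOLDER-EXTERNAL-IP  PORT(S)\n"
    "meshgateway-external-kafka  LoadBalancer  10.10.44.209  example.elb.amazonaws.com  "
    "19090:32480/TCP,19091:30972/TCP,29092:30748/TCP\n"
)
host, ports = external_endpoint(sample)
print(host, ports)
```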
- Create a Kafka user that the client application will use to identify itself. You can create the user manually using KafkaUser custom resources, or using the Streaming Data Manager web interface.

        kubectl create -f - <<EOF
        apiVersion: kafka.banzaicloud.io/v1alpha1
        kind: KafkaUser
        metadata:
          name: external-kafkauser
          namespace: default
        spec:
          clusterRef:
            name: kafka
            namespace: kafka
          secretName: external-kafkauser-secret
          pkiBackendSpec:
            pkiBackend: "k8s-csr"
            signerName: "csr.banzaicloud.io/privateca"
        EOF
    Note: By default, the certificate created for the Kafka user with the CSR operator is valid for 86400 seconds (1 day). To generate a certificate with a different validity, add the "csr.banzaicloud.io/certificate-lifetime" annotation to the KafkaUser CR spec. For example, the following CR creates a certificate valid for 604800 seconds (7 days) for the associated Kafka user:

        kubectl create -f - <<EOF
        apiVersion: kafka.banzaicloud.io/v1alpha1
        kind: KafkaUser
        metadata:
          name: external-kafkauser
          namespace: default
        spec:
          annotations:
            csr.banzaicloud.io/certificate-lifetime: "604800"
          clusterRef:
            name: kafka
            namespace: kafka
          secretName: external-kafkauser-secret
          pkiBackendSpec:
            pkiBackend: "k8s-csr"
            signerName: "csr.banzaicloud.io/privateca"
        EOF
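The annotation value is a plain number of seconds encoded as a string, so other lifetimes are easy to derive. A small sketch; the helper name is made up for illustration:

```python
from datetime import timedelta

def certificate_lifetime(days: int) -> str:
    # The csr.banzaicloud.io/certificate-lifetime annotation expects
    # the validity in seconds, encoded as a string.
    return str(int(timedelta(days=days).total_seconds()))

print(certificate_lifetime(1))  # "86400", the 1-day default
print(certificate_lifetime(7))  # "604800", the 7-day value used above
```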
- Grant this user access to the topics it needs.
- Export the public certificate of the CA:
  - If you are using the CSR operator:

        kubectl get secret external-kafkauser-secret -o 'go-template={{index .data "chain.pem"}}' | base64 -D > /var/tmp/ca.crt

    (base64 -D is the macOS spelling of the decode flag; on Linux, use base64 -d.)
  - If you are using cert-manager:

        kubectl get secret external-kafkauser-secret -o 'go-template={{index .data "ca.crt"}}' | base64 -D > /var/tmp/ca.crt
Alternatively, you can download the CA certificate and the client certificate from the Streaming Data Manager web interface.
- Export the client certificate stored in external-kafkauser-secret, which represents the Kafka user external-kafkauser:

        kubectl get secret external-kafkauser-secret -o 'go-template={{index .data "tls.crt"}}' | base64 -D > /var/tmp/tls.crt
        kubectl get secret external-kafkauser-secret -o 'go-template={{index .data "tls.key"}}' | base64 -D > /var/tmp/tls.key
- Use the exported client credentials and the CA certificate in your application to connect to the external listener of the Kafka cluster. (Otherwise, Istio automatically rejects the client application.) The following command shows an example of connecting with the kcat client application:

        kcat -L -b aff4c6887766440238fb19c381779eae-1599690198.eu-north-1.elb.amazonaws.com:29092 \
          -X security.protocol=SSL \
          -X ssl.key.location=/var/tmp/tls.key \
          -X ssl.certificate.location=/var/tmp/tls.crt \
          -X ssl.ca.location=/var/tmp/ca.crt

    Expected output:

        Metadata for all topics (from broker -1: ssl://aff4c6887766440238fb19c381779eae-1599690198.eu-north-1.elb.amazonaws.com:29092/bootstrap):
         2 brokers:
          broker 0 at aff4c6887766440238fb19c381779eae-1599690198.eu-north-1.elb.amazonaws.com:19090 (controller)
          broker 1 at aff4c6887766440238fb19c381779eae-1599690198.eu-north-1.elb.amazonaws.com:19091
         1 topics:
          topic "example-topic" with 3 partitions:
            partition 0, leader 0, replicas: 0,1, isrs: 0,1
            partition 1, leader 1, replicas: 1,0, isrs: 0,1
            partition 2, leader 0, replicas: 0,1, isrs: 0,1
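If your client is built on librdkafka (for example confluent-kafka-python, which is an assumption about your stack; kcat uses the same library), the kcat flags above translate directly into configuration properties:

```python
# librdkafka configuration mirroring the kcat flags above.
# "<external-endpoint>" is a placeholder for the EXTERNAL-IP of the
# meshgateway-external-kafka service.
conf = {
    "bootstrap.servers": "<external-endpoint>:29092",
    "security.protocol": "SSL",
    "ssl.key.location": "/var/tmp/tls.key",
    "ssl.certificate.location": "/var/tmp/tls.crt",
    "ssl.ca.location": "/var/tmp/ca.crt",
}

# With confluent-kafka installed, listing metadata mirrors `kcat -L`:
#   from confluent_kafka.admin import AdminClient
#   print(AdminClient(conf).list_topics(timeout=10).topics)
print(sorted(conf))
```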
Use cert-manager
- Enable ACLs and configure an external listener using Streaming Data Manager. Complete the following steps.
  - Verify that your deployed Kafka cluster is up and running:

        smm sdm cluster get --namespace <namespace-of-your-cluster> --kafka-cluster <name-of-your-kafka-cluster> --kubeconfig <path-to-kubeconfig-file>

    Expected output:

        Namespace  Name   State           Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
        kafka      kafka  ClusterRunning  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
  - Enable ACLs and configure an external listener. The deployed Kafka cluster has no ACLs, and external access is disabled by default. Enable them by applying the following changes:

        smm sdm cluster update --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file> -f - <<EOF
        apiVersion: kafka.banzaicloud.io/v1beta1
        kind: KafkaCluster
        spec:
          ingressController: "istioingress"
          istioIngressConfig:
            gatewayConfig:
              mode: PASSTHROUGH
          readOnlyConfig: |
            auto.create.topics.enable=false
            offsets.topic.replication.factor=2
            authorizer.class.name=kafka.security.authorizer.AclAuthorizer
            allow.everyone.if.no.acl.found=false
          listenersConfig:
            externalListeners:
              - type: "plaintext"
                name: "external"
                externalStartingPort: 19090
                containerPort: 9094
        EOF
  - The update in the previous step reconfigures the Kafka cluster, and the change is applied through a rolling upgrade. Verify that this is reflected in the state of the cluster:

        smm sdm cluster get --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file>

    Expected output:

        Namespace  Name   State                    Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
        kafka      kafka  ClusterRollingUpgrading  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
  - Wait until the reconfiguration is finished and the cluster is in the ClusterRunning state. This can take a while, as the rolling upgrade applies changes on a broker-by-broker basis.
- Install cert-manager.
  - Install cert-manager on the cluster. The cert-manager application issues the client certificates for the client applications. If you already have cert-manager installed and configured on the cluster, skip this step.

        kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.11.0/cert-manager.yaml
  - Specify a cluster issuer for cert-manager that has the same CA or root certificate as the Istio mesh; otherwise, the application's client certificate won't be valid for the mTLS enforced by Istio.

    Note: Streaming Data Manager uses the CSR operator as an external CA to provide certificates to Istio.
  - Create a new secret from the CA certificate used by Istio, in a format that works for cert-manager:

        kubectl create -f - <<EOF
        apiVersion: v1
        kind: Secret
        metadata:
          name: ca-key-pair
          namespace: cert-manager
        data:
          tls.crt: $(kubectl --namespace csr-operator-system get secret csr-operator-cacerts -o 'jsonpath={.data.ca_crt\.pem}')
          tls.key: $(kubectl --namespace csr-operator-system get secret csr-operator-cacerts -o 'jsonpath={.data.ca_key\.pem}')
        EOF
  - Use the secret to create a ClusterIssuer (a Kubernetes resource that represents a CA able to generate signed certificates):

        kubectl create -f - <<EOF
        apiVersion: cert-manager.io/v1
        kind: ClusterIssuer
        metadata:
          name: ca-issuer
          namespace: cert-manager
        spec:
          ca:
            secretName: ca-key-pair
        EOF
- Get the endpoint bound to the external listener of the Kafka cluster:

        kubectl get service -n kafka meshgateway-external-kafka

    Example output:

        NAME                         TYPE           CLUSTER-IP     EXTERNAL-IP                                                                PORT(S)
        meshgateway-external-kafka   LoadBalancer   10.10.44.209   aff4c6887766440238fb19c381779eae-1599690198.eu-north-1.elb.amazonaws.com   19090:32480/TCP,19091:30972/TCP,29092:30748/TCP
- Create a Kafka user that the client application will use to identify itself. You can create the user manually using KafkaUser custom resources, or using the Streaming Data Manager web interface.

        kubectl create -f - <<EOF
        apiVersion: kafka.banzaicloud.io/v1alpha1
        kind: KafkaUser
        metadata:
          name: external-kafkauser
          namespace: default
        spec:
          clusterRef:
            name: kafka
            namespace: kafka
          secretName: external-kafkauser-secret
          pkiBackendSpec:
            pkiBackend: "cert-manager"
            issuerRef:
              name: "ca-issuer"
              kind: "ClusterIssuer"
        EOF
Note: The certificate created for the Kafka user with cert-manager is valid for 90 days.
- Grant this user access to the topics it needs.
- Export the public certificate of the CA:
  - If you are using the CSR operator:

        kubectl get secret external-kafkauser-secret -o 'go-template={{index .data "chain.pem"}}' | base64 -D > /var/tmp/ca.crt
  - If you are using cert-manager:

        kubectl get secret external-kafkauser-secret -o 'go-template={{index .data "ca.crt"}}' | base64 -D > /var/tmp/ca.crt
Alternatively, you can download the CA certificate and the client certificate from the Streaming Data Manager web interface.
- Export the client certificate stored in external-kafkauser-secret, which represents the Kafka user external-kafkauser:

        kubectl get secret external-kafkauser-secret -o 'go-template={{index .data "tls.crt"}}' | base64 -D > /var/tmp/tls.crt
        kubectl get secret external-kafkauser-secret -o 'go-template={{index .data "tls.key"}}' | base64 -D > /var/tmp/tls.key
- Use the exported client credentials and the CA certificate in your application to connect to the external listener of the Kafka cluster. (Otherwise, Istio automatically rejects the client application.) The following command shows an example of connecting with the kcat client application:

        kcat -L -b aff4c6887766440238fb19c381779eae-1599690198.eu-north-1.elb.amazonaws.com:29092 \
          -X security.protocol=SSL \
          -X ssl.key.location=/var/tmp/tls.key \
          -X ssl.certificate.location=/var/tmp/tls.crt \
          -X ssl.ca.location=/var/tmp/ca.crt

    Expected output:

        Metadata for all topics (from broker -1: ssl://aff4c6887766440238fb19c381779eae-1599690198.eu-north-1.elb.amazonaws.com:29092/bootstrap):
         2 brokers:
          broker 0 at aff4c6887766440238fb19c381779eae-1599690198.eu-north-1.elb.amazonaws.com:19090 (controller)
          broker 1 at aff4c6887766440238fb19c381779eae-1599690198.eu-north-1.elb.amazonaws.com:19091
         1 topics:
          topic "example-topic" with 3 partitions:
            partition 0, leader 0, replicas: 0,1, isrs: 0,1
            partition 1, leader 1, replicas: 1,0, isrs: 0,1
            partition 2, leader 0, replicas: 0,1, isrs: 0,1