Operator mode
To use Streaming Data Manager in operator mode, complete the following steps. In this scenario, the reconcile flow runs on the Kubernetes cluster as an operator that watches the ApplicationManifest custom resource (group supertubes.banzaicloud.io). Any change made to the watched custom resource triggers the reconcile flow.
Prerequisites
- This guide assumes that Service Mesh Manager is already installed. For details, see Installation.
- Make sure that the cluster resources meet the minimum requirements for Streaming Data Manager.
Steps
-
Deploy the Streaming Data Manager control plane. There should already be an imagePullSecret configured by Service Mesh Manager (for example, smm-pull-secret) to get Streaming Data Manager images.
```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: supertubes-control-plane
  labels:
    imps.banzaicloud.io/target: "true"
EOF
```

Expected output:

```
namespace/supertubes-control-plane created
```
```
helm install \
  --namespace supertubes-control-plane \
  --set imagePullSecrets\[0\].name=smm-pull-secret \
  --set operator.image.repository="registry.eticloud.io/sdm/supertubes-control-plane" \
  supertubes-control-plane \
  oci://registry.eticloud.io/sdm-charts/supertubes-control-plane --version 1.9.0 \
  --create-namespace \
  --atomic \
  --debug
```

Expected output:

```
install.go:194: [debug] Original chart version: ""
install.go:211: [debug] CHART PATH: /Users/<your-username>/.cache/helm/repository/supertubes-control-plane-1.9.0.tgz
# ...
NAME: supertubes-control-plane
LAST DEPLOYED: Thu Apr 6 18:39:26 2023
NAMESPACE: supertubes-control-plane
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
imagePullSecrets:
- smm-pull-secret
operator:
  image:
    PullPolicy: Always
    tag: v1.9.0-dev.1
  leaderElection:
    namespace: supertubes-control-plane
COMPUTED VALUES:
# ...
```
-
Deploy the initial Streaming Data Manager components: the csr-operator and the istio-operator. Deploy the Streaming Data Manager ApplicationManifest custom resource, which lists the enabled sub-components and their configurations:
For OpenShift:
Note: the controllerSettings.platform value must be openshift, and some components need additional resource requirements and special settings to run Streaming Data Manager in an OpenShift environment.
```
kubectl apply -f - <<EOF
apiVersion: supertubes.banzaicloud.io/v1beta1
kind: ApplicationManifest
metadata:
  name: sdm-applicationmanifest
  namespace: supertubes-control-plane
spec:
  clusterRegistry:
    enabled: false
    namespace: cluster-registry
  controllerSettings:
    platform: openshift
  csrOperator:
    enabled: true
    namespace: csr-operator-system
  imagePullSecretsOperator:
    enabled: false
    namespace: supertubes-system
  istioOperator:
    enabled: false
    namespace: istio-system
  kafkaMinion:
    enabled: false
  kafkaOperator:
    enabled: false
    namespace: kafka
  monitoring:
    grafanaDashboards:
      enabled: false
    prometheusOperator:
      enabled: false
      namespace: supertubes-system
  supertubes:
    enabled: false
    namespace: supertubes-system
  zookeeperOperator:
    enabled: false
    namespace: zookeeper
EOF
```
For Kubernetes:
```
kubectl apply -f - <<EOF
apiVersion: supertubes.banzaicloud.io/v1beta1
kind: ApplicationManifest
metadata:
  name: sdm-applicationmanifest
  namespace: supertubes-control-plane
spec:
  clusterRegistry:
    enabled: false
    namespace: cluster-registry
  csrOperator:
    enabled: true
    namespace: csr-operator-system
  imagePullSecretsOperator:
    enabled: false
    namespace: supertubes-system
  istioOperator:
    enabled: false
    namespace: istio-system
  kafkaMinion:
    enabled: false
  kafkaOperator:
    enabled: false
    namespace: kafka
  monitoring:
    grafanaDashboards:
      enabled: false
    prometheusOperator:
      enabled: false
      namespace: supertubes-system
  supertubes:
    enabled: false
    namespace: supertubes-system
  zookeeperOperator:
    enabled: false
    namespace: zookeeper
EOF
```

Expected output:

```
applicationmanifest.supertubes.banzaicloud.io/sdm-applicationmanifest created
```
Check that the csr-operator is running.
```
kubectl get pods -n csr-operator-system
```

Expected output:

```
NAME                            READY   STATUS    RESTARTS   AGE
csr-operator-7ffc679f5b-2t5zd   2/2     Running   0          5m
```
-
Set up the Istio mesh for Streaming Data Manager.
-
The CSR operator automatically generates a secret (csr-operator-cacerts in the csr-operator-system namespace). From it, create a new secret in the namespace where Istio is installed (istio-system by default), because Istio requires the secret in a different format, without the CA private key.
```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: sdm-istio-external-ca-cert
  namespace: istio-system
data:
  root-cert.pem: $(kubectl --namespace csr-operator-system get secret csr-operator-cacerts -o 'jsonpath={.data.ca_crt\.pem}')
EOF
```

Expected output:

```
secret/sdm-istio-external-ca-cert created
```
-
Deploy the IstioControlPlane CR into your cluster.
For OpenShift:
```
kubectl apply -f - <<EOF
apiVersion: servicemesh.cisco.com/v1alpha1
kind: IstioControlPlane
metadata:
  labels:
    banzaicloud.io/managed-by: supertubes
  name: sdm-icp-v115x
  namespace: istio-system
spec:
  containerImageConfiguration:
    imagePullPolicy: Always
    imagePullSecrets:
    - name: smm-pull-secret
  distribution: cisco
  istiod:
    deployment:
      env:
      # Skip validating that the peer is from the same trust domain when mTLS is enabled in authentication policy
      - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
        value: "true"
      # Indicate to Istiod that we use an external signer (likely to be removed and added to mesh config - from upstream Istio)
      - name: EXTERNAL_CA
        value: ISTIOD_RA_KUBERNETES_API
      # Kubernetes CA signer type (likely to be removed and added to mesh config - from upstream Istio)
      - name: K8S_SIGNER
        value: csr.banzaicloud.io/privateca
      - name: ISTIO_MULTIROOT_MESH
        value: "true"
      image: 033498657557.dkr.ecr.us-east-2.amazonaws.com/banzaicloud/istio-pilot:v1.15.3-bzc.1
  k8sResourceOverlays:
  - groupVersionKind:
      group: apps
      kind: Deployment
      version: v1
    objectKey:
      name: istiod-sdm-icp-v115x
    patches:
    - parseValue: true
      path: /spec/template/spec/volumes/-
      type: replace
      value: |
        name: external-ca-cert
        secret:
          secretName: sdm-istio-external-ca-cert
          optional: true
    - parseValue: true
      path: /spec/template/spec/containers/name=discovery/volumeMounts/-
      type: replace
      value: |
        name: external-ca-cert
        mountPath: /etc/external-ca-cert
        readOnly: true
  # Amend ClusterRole to add permission for istiod to approve certificate signing by custom signer
  - groupVersionKind:
      group: rbac.authorization.k8s.io
      kind: ClusterRole
      version: v1
    objectKey:
      name: istiod-sdm-icp-v115x-istio-system
    patches:
    - parseValue: true
      path: /rules/-
      type: replace
      value: |
        apiGroups:
        - certificates.k8s.io
        resourceNames:
        - csr.banzaicloud.io/privateca
        resources:
        - signers
        verbs:
        - approve
  meshConfig:
    defaultConfig:
      proxyMetadata:
        PROXY_CONFIG_XDS_AGENT: "true"
    enableAutoMtls: true
    protocolDetectionTimeout: 5s
  meshID: sdm
  mode: ACTIVE
  proxy:
    image: 033498657557.dkr.ecr.us-east-2.amazonaws.com/banzaicloud/istio-proxyv2:v1.15.3-bzc-kafka.0
  proxyInit:
    cni:
      binDir: /var/lib/cni/bin
      chained: false
      confDir: /etc/cni/multus/net.d
      confFileName: istio-cni-sdm-icp-v115x-istio-system.conf
      daemonset:
        image: 033498657557.dkr.ecr.us-east-2.amazonaws.com/banzaicloud/istio-install-cni:v1.15.3-bzc.1
        securityContext:
          privileged: true
      enabled: true
    image: 033498657557.dkr.ecr.us-east-2.amazonaws.com/banzaicloud/istio-proxyv2:v1.15.3-bzc-kafka.0
  telemetryV2:
    enabled: true
  version: 1.15.3
EOF
```
For Kubernetes:
```
kubectl create -f - <<EOF
apiVersion: servicemesh.cisco.com/v1alpha1
kind: IstioControlPlane
metadata:
  labels:
    banzaicloud.io/managed-by: supertubes
  name: sdm-icp-v115x
  namespace: istio-system
spec:
  containerImageConfiguration:
    imagePullPolicy: Always
    imagePullSecrets:
    - name: smm-pull-secret
  distribution: cisco
  istiod:
    deployment:
      env:
      # Skip validating the peer is from the same trust domain when mTLS is enabled in authentication policy
      - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
        value: "true"
      # Indicate to Istiod that we use an external signer (likely to be removed and added to mesh config - from upstream Istio)
      - name: EXTERNAL_CA
        value: ISTIOD_RA_KUBERNETES_API
      # Kubernetes CA signer type (likely to be removed and added to mesh config - from upstream Istio)
      - name: K8S_SIGNER
        value: csr.banzaicloud.io/privateca
      - name: ISTIO_MULTIROOT_MESH
        value: "true"
      image: 033498657557.dkr.ecr.us-east-2.amazonaws.com/banzaicloud/istio-pilot:v1.15.3-bzc.1
  k8sResourceOverlays:
  - groupVersionKind:
      group: apps
      kind: Deployment
      version: v1
    objectKey:
      name: istiod-sdm-icp-v115x
    patches:
    - parseValue: true
      path: /spec/template/spec/volumes/-
      type: replace
      value: |
        name: external-ca-cert
        secret:
          secretName: sdm-istio-external-ca-cert
          optional: true
    - parseValue: true
      path: /spec/template/spec/containers/name=discovery/volumeMounts/-
      type: replace
      value: |
        name: external-ca-cert
        mountPath: /etc/external-ca-cert
        readOnly: true
  # Amend ClusterRole to add permission for istiod to approve certificate signing by custom signer
  - groupVersionKind:
      group: rbac.authorization.k8s.io
      kind: ClusterRole
      version: v1
    objectKey:
      name: istiod-sdm-icp-v115x-istio-system
    patches:
    - parseValue: true
      path: /rules/-
      type: replace
      value: |
        apiGroups:
        - certificates.k8s.io
        resourceNames:
        - csr.banzaicloud.io/privateca
        resources:
        - signers
        verbs:
        - approve
  meshConfig:
    defaultConfig:
      proxyMetadata:
        PROXY_CONFIG_XDS_AGENT: "true"
    enableAutoMtls: true
    protocolDetectionTimeout: 5s
  meshID: sdm
  mode: ACTIVE
  proxy:
    image: 033498657557.dkr.ecr.us-east-2.amazonaws.com/banzaicloud/istio-proxyv2:v1.15.3-bzc-kafka.0
  telemetryV2:
    enabled: true
  version: 1.15.3
EOF
```

Expected output:

```
istiocontrolplane.servicemesh.cisco.com/sdm-icp-v115x created
```
-
Check that the IstioControlPlane and its pods are available.
```
kubectl get istiocontrolplanes.servicemesh.cisco.com -n istio-system sdm-icp-v115x
```

Expected output:

```
NAME            MODE     NETWORK    STATUS      MESH EXPANSION   EXPANSION GW IPS   ERROR   AGE
sdm-icp-v115x   ACTIVE   network1   Available                                               5m21s
```
```
kubectl get pods -n istio-system
```

Expected output:

For OpenShift:

```
NAME                                    READY   STATUS    RESTARTS   AGE
istio-cni-node-sdm-icp-v115x-2mkxn      1/1     Running   0          2m37s
istio-cni-node-sdm-icp-v115x-6t6lx      1/1     Running   0          2m37s
istio-cni-node-sdm-icp-v115x-7nxqs      1/1     Running   0          2m37s
istio-cni-node-sdm-icp-v115x-htgzw      1/1     Running   0          2m37s
istio-cni-node-sdm-icp-v115x-mdrvj      1/1     Running   0          2m37s
istio-cni-node-sdm-icp-v115x-mk6vh      1/1     Running   0          2m37s
istio-cni-node-sdm-icp-v115x-mstzx      1/1     Running   0          2m37s
istio-cni-node-sdm-icp-v115x-qwlvz      1/1     Running   0          2m37s
istio-cni-node-sdm-icp-v115x-rjlrz      1/1     Running   0          2m37s
istio-cni-node-sdm-icp-v115x-tk5xv      1/1     Running   0          2m37s
istio-cni-node-sdm-icp-v115x-x88ls      1/1     Running   0          2m37s
istio-cni-node-sdm-icp-v115x-xht6n      1/1     Running   0          2m37s
istio-cni-node-sdm-icp-v115x-xv6gw      1/1     Running   0          2m37s
istio-operator-5d7cb59c9-g2htw          2/2     Running   0          7m29s
istiod-sdm-icp-v115x-54f6c69775-nfs64   1/1     Running   0          2m42s
```

For Kubernetes:

```
NAME                                    READY   STATUS    RESTARTS   AGE
istio-operator-5d7cb59c9-5q6dx          2/2     Running   0          114m
istiod-sdm-icp-v115x-54f6c69775-786bj   1/1     Running   0          5m43s
```
-
Create the istiomesh-ca-trust-extension-script ConfigMap.

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: istiomesh-ca-trust-extension-script
  namespace: supertubes-control-plane
data:
  run.sh: |-
    #!/bin/sh

    # Fill these fields properly----------------------
    export CA_SECRET_NAMESPACE="istio-system"
    export CA_SECRET_NAME="sdm-istio-external-ca-cert"
    # ------------------------------------------------

    export ICP_NAME="sdm-icp-v115x"
    export ICP_NAMESPACE="istio-system"

    export CA_CERT=\$(kubectl get secret -n \$CA_SECRET_NAMESPACE \$CA_SECRET_NAME -o jsonpath='{.data.root-cert\.pem}' | base64 -d | sed '\$ ! s/\$/\\\n/' | tr -d '\n')

    read -r -d '' PATCH << EOF
    {"spec": {"meshConfig": {"caCertificates": [{"pem": "\$CA_CERT"}]}}}
    EOF

    read -r -d '' INSERT_PATCH << EOF
    [{"op": "add", "path": "/spec/meshConfig/caCertificates/-", "value": {"pem": "\$CA_CERT"}}]
    EOF

    kubectl patch istiocontrolplanes.servicemesh.cisco.com \$ICP_NAME -n \$ICP_NAMESPACE --type json --patch="\$INSERT_PATCH" || kubectl patch istiocontrolplanes.servicemesh.cisco.com \$ICP_NAME -n \$ICP_NAMESPACE --type merge --patch="\$PATCH"
EOF
```

Expected output:

```
configmap/istiomesh-ca-trust-extension-script created
```
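The trickiest part of the run.sh script is the CA_CERT pipeline. The following standalone sketch, using a stand-in value rather than a real certificate, shows what the sed and tr combination produces: a single-line string with literal \n separators, suitable for embedding in the JSON patches. (Inside the heredoc above, the backslashes and dollar signs are escaped once more for the heredoc itself.)

```shell
# Stand-in, three-line "PEM" value used only to illustrate the transformation.
PEM=$(printf '%s\n%s\n%s' '-----BEGIN CERTIFICATE-----' 'MIIB...' '-----END CERTIFICATE-----')

# Same pipeline as run.sh: append a literal "\n" to every line except the last,
# then delete the real newlines so the result fits on one line inside JSON.
ONE_LINE=$(printf '%s\n' "$PEM" | sed '$ ! s/$/\\n/' | tr -d '\n')

printf '%s\n' "$ONE_LINE"
# prints: -----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----
```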
-
Create the istiomesh-ca-trust-extension Job.

```
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: istiomesh-ca-trust-extension
  namespace: supertubes-control-plane
spec:
  completions: 1
  template:
    metadata:
      name: istiomesh-ca-trust-extension
    spec:
      containers:
      - command:
        - /scripts/run.sh
        image: lachlanevenson/k8s-kubectl:v1.16.10
        imagePullPolicy: IfNotPresent
        name: istio-trust-extension-job
        volumeMounts:
        - mountPath: /scripts
          name: run
          readOnly: false
      dnsPolicy: ClusterFirst
      restartPolicy: Never
      serviceAccount: supertubes-control-plane
      serviceAccountName: supertubes-control-plane
      volumes:
      - configMap:
          defaultMode: 365
          name: istiomesh-ca-trust-extension-script
        name: run
EOF
```

Expected output:

```
job.batch/istiomesh-ca-trust-extension created
```
Check that the job ran successfully.
```
kubectl get pods -n supertubes-control-plane
```

Expected output:

```
NAME                                        READY   STATUS      RESTARTS   AGE
istiomesh-ca-trust-extension-4r9sr          0/1     Completed   0          19s
supertubes-control-plane-549f55595f-8pd2z   2/2     Running     0          3h46m
```
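A brief aside on the defaultMode: 365 in the Job's ConfigMap volume: Kubernetes takes this field as a decimal integer, and 365 decimal is 0555 octal (r-xr-xr-x), which is what makes the mounted /scripts/run.sh executable inside the Job container. A quick check:

```shell
# 365 (decimal) rendered in octal: 555, i.e. mode r-xr-xr-x for the mounted script.
printf '%o\n' 365
# prints: 555
```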
-
Deploy the rest of the Streaming Data Manager components. Apply the Streaming Data Manager ApplicationManifest custom resource again, this time enabling the remaining sub-components.
For OpenShift:
Note: the controllerSettings.platform value must be openshift, and some components need additional resource requirements and special settings to run Streaming Data Manager in an OpenShift environment.
```
kubectl apply -f - <<EOF
apiVersion: supertubes.banzaicloud.io/v1beta1
kind: ApplicationManifest
metadata:
  name: sdm-applicationmanifest
  namespace: supertubes-control-plane
spec:
  clusterRegistry:
    enabled: false
    namespace: cluster-registry
  controllerSettings:
    platform: openshift
  csrOperator:
    enabled: true
    namespace: csr-operator-system
  imagePullSecretsOperator:
    enabled: false
    namespace: supertubes-system
  istioOperator:
    enabled: false
    namespace: istio-system
  kafkaMinion:
    enabled: true
  kafkaOperator:
    enabled: true
    namespace: kafka
  monitoring:
    grafanaDashboards:
      enabled: true
      label: app.kubernetes.io/supertubes_managed_grafana_dashboard
    prometheusOperator:
      enabled: true
      namespace: supertubes-system
      valuesOverride: |
        prometheus:
          prometheusSpec:
            resources:
              limits:
                cpu: 2
                memory: 4Gi
              requests:
                cpu: 1
                memory: 3Gi
        prometheus-node-exporter:
          service:
            port: 9123
            targetPort: 9123
        prometheusOperator:
          admissionWebhooks:
            createSecretJob:
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop:
                  - "ALL"
            patchWebhookJob:
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop:
                  - "ALL"
          containerSecurityContext:
            capabilities:
              drop:
              - "ALL"
          resources:
            limits:
              cpu: 400m
              memory: 400Mi
            requests:
              cpu: 200m
              memory: 200Mi
  supertubes:
    enabled: true
    namespace: supertubes-system
  zookeeperOperator:
    enabled: true
    namespace: zookeeper
EOF
```
For Kubernetes:
```
kubectl apply -f - <<EOF
apiVersion: supertubes.banzaicloud.io/v1beta1
kind: ApplicationManifest
metadata:
  name: sdm-applicationmanifest
  namespace: supertubes-control-plane
spec:
  clusterRegistry:
    enabled: false
    namespace: cluster-registry
  csrOperator:
    enabled: true
    namespace: csr-operator-system
  imagePullSecretsOperator:
    enabled: false
    namespace: supertubes-system
  istioOperator:
    enabled: false
    namespace: istio-system
  kafkaMinion:
    enabled: true
  kafkaOperator:
    enabled: true
    namespace: kafka
  monitoring:
    grafanaDashboards:
      enabled: true
      label: app.kubernetes.io/supertubes_managed_grafana_dashboard
    prometheusOperator:
      enabled: true
      namespace: supertubes-system
      valuesOverride: |
        prometheus:
          prometheusSpec:
            resources:
              limits:
                cpu: 2
                memory: 2Gi
              requests:
                cpu: 1
                memory: 1Gi
  supertubes:
    enabled: true
    namespace: supertubes-system
  zookeeperOperator:
    enabled: true
    namespace: zookeeper
EOF
```

Expected output:

```
applicationmanifest.supertubes.banzaicloud.io/sdm-applicationmanifest configured
```
Check that the pods have come up successfully.
```
kubectl get pods -n kafka
```

Expected output:

```
NAME                                       READY   STATUS    RESTARTS       AGE
kafka-operator-operator-75c4ff6c9f-n9nlt   3/3     Running   2 (116s ago)   2m9s
```
```
kubectl get pods -n supertubes-system
```

Expected output:

```
NAME                                                      READY   STATUS    RESTARTS        AGE
prometheus-operator-grafana-b9b6b885f-v7jr8               4/4     Running   0               14m
prometheus-operator-kube-state-metrics-659b7c47dd-bzwmc   2/2     Running   2 (14m ago)     14m
prometheus-operator-operator-67c7c96d7f-sddqn             2/2     Running   1 (14m ago)     14m
prometheus-operator-prometheus-node-exporter-2wgpd        1/1     Running   0               14m
prometheus-operator-prometheus-node-exporter-66sm2        1/1     Running   0               14m
prometheus-operator-prometheus-node-exporter-8nb2w        1/1     Running   0               14m
prometheus-operator-prometheus-node-exporter-b4s4q        1/1     Running   0               14m
prometheus-operator-prometheus-node-exporter-l79zm        1/1     Running   0               14m
prometheus-operator-prometheus-node-exporter-lm5s9        1/1     Running   0               14m
prometheus-operator-prometheus-node-exporter-mvvt9        1/1     Running   0               14m
prometheus-operator-prometheus-node-exporter-q2bpv        1/1     Running   0               14m
prometheus-operator-prometheus-node-exporter-q7t5q        1/1     Running   0               14m
prometheus-operator-prometheus-node-exporter-zk8b7        1/1     Running   0               14m
prometheus-prometheus-operator-prometheus-0               3/3     Running   0               9m12s
supertubes-657ffd9c4c-ct2ft                               3/3     Running   2 (5m4s ago)    5m22s
supertubes-ui-backend-779594d67d-hx9ch                    2/2     Running   1 (5m11s ago)   5m22s
```
```
kubectl get pods -n zookeeper
```

Expected output:

```
NAME                                            READY   STATUS      RESTARTS       AGE
zookeeper-operator-547f85c689-dkpd8             2/2     Running     1 (2m2s ago)   2m17s
zookeeper-operator-post-install-upgrade-bwsmj   0/1     Completed   0              2m17s
```
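Besides checking individual pods, you can inspect the aggregated component statuses that the operator records on the ApplicationManifest resource itself (this check is an addition to the original steps):

```
kubectl -n supertubes-control-plane get applicationmanifests.supertubes.banzaicloud.io sdm-applicationmanifest -o yaml
```

The status section of the output lists the per-component states, as shown in the examples in the next section.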
Example: Update the settings of a component
-
The following example edits the ApplicationManifest to set a new password for Grafana.
```
apiVersion: supertubes.banzaicloud.io/v1beta1
kind: ApplicationManifest
metadata:
  name: sdm-applicationmanifest
  namespace: supertubes-control-plane
spec:
  # ...
  monitoring:
    # ...
    prometheusOperator:
      enabled: true
      namespace: supertubes-system
      valuesOverride: |-
        # ...
        grafana:
          adminPassword: my-new-password
        # ...
  # ...
```
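To apply such a change, re-apply the edited manifest or edit the resource in place. The file name below is only an example, not something referenced elsewhere in this guide:

```
kubectl apply -f sdm-applicationmanifest.yaml
# or:
kubectl -n supertubes-control-plane edit applicationmanifests.supertubes.banzaicloud.io sdm-applicationmanifest
```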
-
In the status section, you can see that the status for monitoring has changed to Reconciling.

```
...
status:
  components:
    istioOperator:
      meshStatus: Available
      status: Available
    kafkaOperator:
      status: Available
    monitoring:
      status: Reconciling
    supertubes:
      status: Available
    zookeeperOperator:
      clusterStatus: Available
      status: Available
  status: Reconciling
```
-
After successfully applying the new configuration, the status changes to Available.

```
...
status:
  components:
    istioOperator:
      meshStatus: Available
      status: Available
    kafkaOperator:
      status: Available
    monitoring:
      status: Available
    supertubes:
      status: Available
    zookeeperOperator:
      clusterStatus: Available
      status: Available
  status: Succeeded
```
Example: Create a ZooKeeper cluster and a Kafka cluster
-
Create a ZooKeeper cluster. Streaming Data Manager deploys the zookeeper-operator to manage the ZooKeeper clusters used by Apache Kafka clusters on the same Kubernetes cluster.
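The exact manifest for this step is not included in this excerpt. A minimal ZookeeperCluster custom resource consistent with the expected output below might look like the following; the replica count and any other spec fields are assumptions, so adjust them to your environment:

```
kubectl apply -f - <<EOF
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: zookeeper
  namespace: zookeeper
spec:
  replicas: 1
EOF
```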
Expected output:
```
zookeepercluster.zookeeper.pravega.io/zookeeper created
```
Check that the ZooKeeper cluster pods have come up successfully.
```
kubectl get pods -n zookeeper -l app=zookeeper
```

Expected output:

```
NAME          READY   STATUS    RESTARTS   AGE
zookeeper-0   2/2     Running   0          7m32s
```
-
Create a Kafka cluster (see the Create Kafka cluster guide). Streaming Data Manager deploys Koperator for managing Kafka resources for Apache Kafka clusters on the same Kubernetes cluster.
Uninstall the Streaming Data Manager control plane
If you have used the Streaming Data Manager operator on a cluster and want to delete Streaming Data Manager and the operator, run the following commands.
```
smm sdm uninstall -a
helm uninstall --namespace supertubes-control-plane <supertubes-control-plane-release-name>
```

Note that the chart was installed with Helm 3 (required for OCI registries), which uses helm uninstall rather than the Helm 2 helm del --purge syntax.