Operator mode
The operator mode (also called declarative mode) follows the familiar operator pattern. In operator mode, Streaming Data Manager watches events on the ApplicationManifest custom resource and triggers a reconciliation of all components in order, the same way you can trigger the reconcile command locally.
Note: Unlike in the declarative CLI mode, in operator mode the Streaming Data Manager operator runs inside Kubernetes, not on a client machine. As a result, operator mode is mutually exclusive with the install, delete, and reconcile commands.
Using the operator mode is the recommended way to integrate the Streaming Data Manager installer into a Kubernetes-native continuous delivery solution, for example, Argo, where the integration boils down to applying YAML files to get the installer deployed as an operator.
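For example, a GitOps integration can be as small as an Argo CD Application that points at a repository containing the operator manifests. The following is only a sketch: the repository URL, path, and Application name are hypothetical placeholders and not part of the Streaming Data Manager distribution.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sdm-operator                 # hypothetical Application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/your-org/sdm-gitops.git   # hypothetical Git repository with the operator YAML
    targetRevision: main
    path: sdm/operator               # hypothetical path holding the operator and ApplicationManifest manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: supertubes-control-plane
  syncPolicy:
    automated:
      prune: true
      selfHeal: true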
Existing configurations managed using the reconcile command work out of the box after switching to operator mode.
Install Streaming Data Manager in operator mode
To use Streaming Data Manager in operator mode, complete the following steps. In this scenario, the reconcile flow runs on the Kubernetes cluster as an operator that watches the ApplicationManifest custom resource. Any change made to the watched custom resource triggers the reconcile flow.
-
As all Streaming Data Manager docker images are stored in private ECR registries, you must set up your credentials in the cluster before installing the operator.
-
Create a Kubernetes Secret that contains the following data in the format required by the IMPS operator:
- your AWS credentials (access key and secret key); it is recommended to create a weak IAM user that has only programmatic access privileges and to use its credentials (see the sketch after the Secret manifest below),
- the ECR repository account ID 033498657557, and
- the ECR repository's region us-east-2.
kubectl create -f- <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: supertubes-system
  labels:
    imps.banzaicloud.io/target: "true"
EOF

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: 033498657557-dkr-ecr-us-east-2-login-config
  namespace: supertubes-system
type: banzaicloud.io/aws-ecr-login-config
stringData:
  accessKeyID: <Your AWS AccessKeyID>
  secretKey: <Your AWS SecretAccessKey>
  region: us-east-2 # ECR repository's region to use the token for
  accountID: "033498657557" # ECR repository's account ID to use the token for
EOF
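If you do not already have such a restricted user, the following AWS CLI sketch shows one way to create it in the AWS account that has access to the ECR repositories. The user name is a hypothetical placeholder, and the AmazonEC2ContainerRegistryReadOnly managed policy is one option for granting pull-only ECR access.
# Create a dedicated, low-privilege IAM user (hypothetical name)
aws iam create-user --user-name sdm-ecr-pull

# Grant read-only ECR access via an AWS managed policy
aws iam attach-user-policy \
  --user-name sdm-ecr-pull \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly

# Generate the programmatic access credentials used in the Secret above
aws iam create-access-key --user-name sdm-ecr-pull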
-
Deploy the IMPS operator.
helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com

kubectl create -f- <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: imps
EOF

helm install imps banzaicloud-stable/imagepullsecrets --namespace imps
-
Configure the IMPS operator to use the Kubernetes secret with the AWS credentials created in the previous step.
kubectl apply -f - <<EOF
apiVersion: images.banzaicloud.io/v1alpha1
kind: ImagePullSecret
metadata:
  name: imps
  namespace: supertubes-system
spec:
  registry:
    credentials:
      - name: 033498657557-dkr-ecr-us-east-2-login-config # Name of the Kubernetes secret that holds the AWS IAM credentials that belong to the AWS account that has access to ECR
        namespace: supertubes-system # Namespace of the Kubernetes secret that holds the AWS IAM credentials that belong to the AWS account that has access to ECR
  target:
    namespaces:
      labels:
        - matchLabels:
            imps.banzaicloud.io/target: "true"
  secret:
    name: registry-creds # Name of the Kubernetes secret where the IMPS operator will store the docker image pull secrets for ECR
EOF
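To check that the IMPS operator has generated the pull secret in the labeled namespaces, you can run, for example:
kubectl get secret registry-creds -n supertubes-system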
-
Deploy the Streaming Data Manager control plane. The private Streaming Data Manager Helm charts are hosted in an S3 bucket; install the Helm 3 S3 plugin to access them by running the following commands.
helm plugin install https://github.com/hypnoglow/helm-s3.git
kubectl create -f- <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: supertubes-control-plane
  labels:
    imps.banzaicloud.io/target: "true"
EOF
AWS_REGION=us-east-2 AWS_ACCESS_KEY_ID=<Your AWS AccessKeyID> AWS_SECRET_ACCESS_KEY=<Your AWS SecretAccessKey> helm repo add 'cisco-banzai-s3' "s3://cisco-eti-banzai-charts/charts"

AWS_REGION=us-east-2 AWS_ACCESS_KEY_ID=<Your AWS AccessKeyID> AWS_SECRET_ACCESS_KEY=<Your AWS SecretAccessKey> helm repo update

AWS_REGION=us-east-2 AWS_ACCESS_KEY_ID=<Your AWS AccessKeyID> AWS_SECRET_ACCESS_KEY=<Your AWS SecretAccessKey> helm install supertubes-cp cisco-banzai-s3/supertubes-control-plane \
  --namespace supertubes-control-plane \
  --set operator.leaderElection.namespace="supertubes-control-plane" \
  --set imagePullSecrets={registry-creds}
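You can confirm that the control plane came up by listing the pods in its namespace, for example:
kubectl get pods -n supertubes-control-plane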
-
Deploy the Streaming Data Manager components. Apply the ApplicationManifest custom resource, which lists the enabled sub-components and their configurations:
kubectl apply -n supertubes-control-plane -f- <<EOF
apiVersion: supertubes.banzaicloud.io/v1beta1
kind: ApplicationManifest
metadata:
  name: sdm-applicationmanifest
  namespace: supertubes-control-plane
spec:
  imagePullSecretsOperator:
    enabled: false
    namespace: supertubes-system
  csrOperator:
    enabled: true
    namespace: csr-operator-system
  istioOperator:
    enabled: true
    namespace: istio-system
  kafkaMinion: {}
  kafkaOperator:
    enabled: true
    namespace: kafka
  monitoring:
    grafanaDashboards:
      enabled: true
      label: app.kubernetes.io/supertubes_managed_grafana_dashboard
    prometheusOperator:
      enabled: true
      namespace: supertubes-system
      valuesOverride: |-
        prometheus:
          prometheusSpec:
            alertingEndpoints:
              - namespace: supertubes-system
                name: prometheus-operator-alertmanager
                port: http-web
                pathPrefix: "/"
                apiVersion: v2
        defaultRules:
          rules:
            alertmanager: true
        alertmanager:
          enabled: true
          alertmanagerSpec:
            portName: http-web
          config:
            global:
              resolve_timeout: 5m
            route:
              group_by: ['job']
              group_wait: 30s
              group_interval: 5m
              repeat_interval: 4h
              receiver: 'null'
              routes:
                - match:
                    alertname: Watchdog
                  receiver: 'null'
                - match_re:
                    alertname: UnderReplicatedPartitions|UnbalancedPartitionCount|UnbalancedLeaderCount|FailedPartitionsReplicaFetchCount|MultipleControllersRunningSimultaneously|ControllerChannelBusy|RequestHandlerPoolExhausted|BrokerPodLowOnMemory
                  group_by: [namespace, kafka_cr]
                  receiver: 'null'
                - match_re:
                    alertname: ConsumerLagHigh
                  group_by: [group]
                  receiver: 'null'
                - match_re:
                    alertname: QuorumDown|MinQuorum
                  group_by: [namespace, label_app]
                  receiver: 'null'
                - match_re:
                    alertname: KubePersistentVolumeFillingUp|KubePersistentVolumeErrors|KubePodCrashLooping|KubePodNotReady|KubeDeploymentReplicasMismatch|KubeStatefulSetReplicasMismatch|KubeContainerWaiting|KubeDaemonSetNotScheduled|KubeHpaReplicasMismatch|KubeHpaMaxedOut|CPUThrottlingHigh|KubeQuotaAlmostFull|KubeQuotaExceeded|KubeQuotaFullyUsed
                    namespace: kafka|zookeeper|supertubes-system|istio-system
                  group_by: [namespace]
                  receiver: 'null'
                - match_re:
                    alertname: KubeCPUOvercommit|KubeMemoryOvercommit
                  receiver: 'null'
                - match_re:
                    alertname: KubeletTooManyPods
                  group_by: [node]
                  receiver: 'null'
            receivers:
              - name: 'null'
  supertubes:
    enabled: true
    namespace: supertubes-system
  zookeeperOperator:
    enabled: true
    namespace: zookeeper
EOF
With the above ApplicationManifest, the Streaming Data Manager control plane does not create an Istio mesh itself; instead, it waits for the Istio mesh to be created before proceeding with the installation of components such as zookeeper-operator.
-
Set up the Istio mesh for Streaming Data Manager.
-
Istio requires the CA certificate in a different format, without the CA private key. Therefore, from the secret generated automatically by the CSR-operator ("csr-operator-cacerts" in the "csr-operator-system" namespace), create a new secret in the namespace where Istio is installed ("istio-system" by default), for example by using the extraction sketch shown after the manifest below.
kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: external-ca-cert
  namespace: istio-system
data:
  root-cert.pem: <ca_crt.pem-from-csr-operator-cacerts>
EOF
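If you want to populate the manifest above without copying the value by hand, the following sketch shows one way to do it, assuming the CSR-operator stores the CA certificate under the ca_crt.pem key of the csr-operator-cacerts secret:
# Read the base64-encoded CA certificate from the CSR-operator secret (assumed key: ca_crt.pem)
CA_CERT="$(kubectl get secret csr-operator-cacerts -n csr-operator-system -o jsonpath='{.data.ca_crt\.pem}')"

# Create the Istio-facing secret; kubectl re-encodes the decoded PEM under the root-cert.pem key
kubectl create secret generic external-ca-cert -n istio-system \
  --from-literal=root-cert.pem="$(echo "$CA_CERT" | base64 -d)"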
-
Deploy the IstioControlPlane CR into your cluster.
kubectl create -f - <<EOF
apiVersion: servicemesh.cisco.com/v1alpha1
kind: IstioControlPlane
metadata:
  name: icp-sample-v115x
  namespace: istio-system
  labels:
    banzaicloud.io/managed-by: supertubes
spec:
  version: "1.15.3"
  mode: ACTIVE
  distribution: cisco
  meshID: sdm
  clusterID: <identifier that uniquely identifies the Kubernetes cluster where this istio control plane is deployed to (for example, UID of the kube-system namespace)>
  k8sResourceOverlays:
    - groupVersionKind:
        group: apps
        kind: Deployment
        version: v1
      objectKey:
        name: istiod-icp-sample-v115x
      patches:
        - parseValue: true
          path: /spec/template/spec/volumes/-
          type: replace
          value: |
            name: external-ca-cert
            secret:
              secretName: external-ca-cert
              optional: true
        - parseValue: true
          path: /spec/template/spec/containers/name=discovery/volumeMounts/-
          type: replace
          value: |
            name: external-ca-cert
            mountPath: /etc/external-ca-cert
            readOnly: true
    # Amend ClusterRole to add permission for istiod to approve certificate signing by custom signer
    - groupVersionKind:
        group: rbac.authorization.k8s.io
        kind: ClusterRole
        version: v1
      objectKey:
        name: istiod-icp-sample-v115x-istio-system
      patches:
        - parseValue: true
          path: /rules/-
          type: replace
          value: |
            apiGroups:
              - certificates.k8s.io
            resourceNames:
              - csr.banzaicloud.io/privateca
            resources:
              - signers
            verbs:
              - approve
  containerImageConfiguration:
    imagePullSecrets:
      - name: registry-creds
    imagePullPolicy: Always
  proxy:
    image: 033498657557.dkr.ecr.us-east-2.amazonaws.com/banzaicloud/istio-proxyv2:v1.15.3-bzc-kafka.1
  meshConfig:
    protocolDetectionTimeout: 5s
    enableAutoMtls: true
    defaultConfig:
      proxyMetadata:
        PROXY_CONFIG_XDS_AGENT: "true"
  telemetryV2:
    enabled: true
  istiod:
    deployment:
      image: 033498657557.dkr.ecr.us-east-2.amazonaws.com/banzaicloud/istio-pilot:v1.15.3-bzc.2
      env:
        # Skip validating the peer is from the same trust domain when mTLS is enabled in authentication policy
        - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
          value: "true"
        # Indicate to Istiod that we use an external signer (likely to be removed and added to mesh config - from upstream Istio)
        - name: EXTERNAL_CA
          value: ISTIOD_RA_KUBERNETES_API
        # Kubernetes CA signer type (likely to be removed and added to mesh config - from upstream Istio)
        - name: K8S_SIGNER
          value: csr.banzaicloud.io/privateca
EOF
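Before continuing, you can check that the Istio control plane reconciles successfully; the plural resource name below assumes the istio-operator's default naming:
kubectl get istiocontrolplanes.servicemesh.cisco.com -n istio-system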
-
(Optional) Create a ZooKeeper cluster. Streaming Data Manager deploys zookeeper-operator for managing ZooKeeper clusters to be used by Apache Kafka clusters on the same Kubernetes cluster.
Wait until the zookeeperclusters.zookeeper.pravega.io CRD gets created:
kubectl get crd zookeeperclusters.zookeeper.pravega.io
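If you prefer to block until the CRD is registered instead of polling, kubectl can wait for the Established condition:
kubectl wait --for condition=established --timeout=120s crd/zookeeperclusters.zookeeper.pravega.io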
To create a ZooKeeper cluster, run:
kubectl apply -n zookeeper -f- <<EOF
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: zookeeper-server
  labels:
    app: zookeeper-server
    app.kubernetes.io/component: zookeeper
    app.kubernetes.io/instance: zookeeper-server
spec:
  image:
    repository: pravega/zookeeper
    tag: 0.2.13
    pullPolicy: IfNotPresent
  replicas: 3
  pod:
    resources:
      requests:
        cpu: 1
        memory: "1.5Gi"
      limits:
        cpu: "1500m"
        memory: "1.5Gi"
    env:
      - name: ZK_SERVER_HEAP
        value: "1024"
      - name: SERVER_JVMFLAGS
        value: "-Xms512m"
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 50
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - zookeeper-server
                  - key: kind
                    operator: In
                    values:
                      - ZookeeperMember
              topologyKey: kubernetes.io/hostname
          - weight: 20
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - zookeeper-server
                  - key: kind
                    operator: In
                    values:
                      - ZookeeperMember
              topologyKey: failure-domain.beta.kubernetes.io/zone
  config:
    initLimit: 10
    tickTime: 2000
    syncLimit: 5
    quorumListenOnAllIPs: true
EOF
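Once applied, you can follow the cluster status reported by zookeeper-operator, for example:
kubectl get zookeeperclusters.zookeeper.pravega.io zookeeper-server -n zookeeper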
Example: Update the settings of a component
-
The following example sets a new password for Grafana.
apiVersion: supertubes.banzaicloud.io/v1beta1
kind: ApplicationManifest
metadata:
  name: applicationmanifest-sample
spec:
  clusterRegistry:
    enabled: false
    namespace: cluster-registry
  csrOperator:
    enabled: false
    namespace: csr-operator-system
  istioOperator:
    enabled: true
    namespace: istio-system
  kafkaOperator:
    enabled: true
    namespace: kafka
  supertubes:
    enabled: true
    namespace: supertubes-system
  monitoring:
    grafanaDashboards:
      enabled: true
    prometheusOperator:
      enabled: true
      namespace: supertubes-system
      valuesOverride: |-
        grafana:
          adminPassword: my-new-password
  kafkaMinion:
    enabled: true
  zookeeperOperator:
    enabled: true
    namespace: zookeeper
You can apply it with the following command:
kubectl apply -f path/to/grafana-password.yaml
-
In the status section, you can see that the status of the monitoring component has changed to Reconciling:
...
status:
  components:
    istioOperator:
      meshStatus: Available
      status: Available
    kafkaOperator:
      status: Available
    monitoring:
      status: Reconciling
    supertubes:
      status: Available
    zookeeperOperator:
      clusterStatus: Available
      status: Available
  status: Reconciling
-
After the new configuration is successfully applied, the status changes to Available (to follow these transitions live, see the watch sketch after this example):
...
status:
  components:
    istioOperator:
      meshStatus: Available
      status: Available
    kafkaOperator:
      status: Available
    monitoring:
      status: Available
    supertubes:
      status: Available
    zookeeperOperator:
      clusterStatus: Available
      status: Available
  status: Succeeded
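To follow these status transitions without repeatedly fetching the resource, you can watch the ApplicationManifest custom resources; the plural resource name below is an assumption based on the kind used in the examples above:
kubectl get applicationmanifests.supertubes.banzaicloud.io -A -w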
Uninstall the Streaming Data Manager Control Plane
If you have used the Streaming Data Manager operator on a cluster and want to delete Streaming Data Manager and the operator, run the following commands.
smm sdm uninstall -a
helm uninstall <supertubes-control-plane-release-name> --namespace supertubes-control-plane