How to synchronize SDM resources between Kubernetes clusters
The following example demonstrates how to enable SDM resource synchronization between three Kubernetes clusters.
Install Cluster Registry
Note: Skip the installation steps if Cluster Registry is already installed on your Kubernetes clusters.
- In order to attach Kubernetes clusters together to form a group, Cluster Registry must be installed separately on each Kubernetes cluster. Run the install command on the first cluster (cluster1):
smm sdm clusterregistry install
Important Note: Cluster Registry needs a publicly accessible IP address for the Kubernetes API server in order to function properly. By default, the API server endpoint address queried from the API server is used. If --k8s-api-server-address is set to from-kubeconfig, the API server address is read from the kubeconfig file. --k8s-api-server-address can also be set to a static value of your choice.
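For illustration, assuming the usual flag=value syntax and that the flag accepts the address formats shown (the address below is a placeholder), the two non-default modes described above would look roughly like this:
# Read the API server address from the kubeconfig file instead of querying the API server
smm sdm clusterregistry install --k8s-api-server-address=from-kubeconfig
# Or pin a static, publicly reachable address (203.0.113.10:6443 is a placeholder)
smm sdm clusterregistry install --k8s-api-server-address=https://203.0.113.10:6443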
If the installation was successful you should see logs similar to the following:
INFO namespace:cluster-registry - pending
INFO namespace:cluster-registry - ok
INFO customresourcedefinition.apiextensions.k8s.io:clusters.clusterregistry.k8s.cisco.com - pending
INFO customresourcedefinition.apiextensions.k8s.io:clusters.clusterregistry.k8s.cisco.com - ok
INFO customresourcedefinition.apiextensions.k8s.io:clusterfeatures.clusterregistry.k8s.cisco.com - pending
INFO customresourcedefinition.apiextensions.k8s.io:clusterfeatures.clusterregistry.k8s.cisco.com - ok
INFO customresourcedefinition.apiextensions.k8s.io:resourcesyncrules.clusterregistry.k8s.cisco.com - pending
INFO customresourcedefinition.apiextensions.k8s.io:resourcesyncrules.clusterregistry.k8s.cisco.com - ok
INFO creating local Cluster CR {"cluster name": "cluster1", "API server address": "https://3.13.240.171:6443", "cluster ID": "8ae8f091-f8b1-4b2f-926c-22278994c996"}
INFO cluster.clusterregistry.k8s.cisco.com:cluster1 created
INFO local Cluster CR created {"cluster name": "cluster1", "API server address": [{"serverAddress":"https://3.13.240.171:6443"}], "cluster ID": "8ae8f091-f8b1-4b2f-926c-22278994c996"}
INFO deployment.apps:cluster-registry/cluster-registry-controller - pending
INFO deployment.apps:cluster-registry/cluster-registry-controller - ok
Additionally, you should be able to query for Cluster CRs using kubectl:
kubectl get cluster -o wide
The output should be something like:
NAME       ID                                     STATUS   TYPE    SYNCED   VERSION    PROVIDER   DISTRIBUTION   REGION      STATUS MESSAGE   SYNC MESSAGE
cluster1   8ae8f091-f8b1-4b2f-926c-22278994c996   Ready    Local            v1.19.10   amazon     PKE            us-east-2
In order to check if the Kubernetes API server address was set to the intended value, run the following command:
kubectl get cluster cluster1 -o yaml
The output will look something like this:
apiVersion: clusterregistry.k8s.cisco.com/v1alpha1
kind: Cluster
metadata:
  labels:
    banzaicloud.io/managed-by: supertubes
  name: cluster1
spec:
  authInfo:
    secretRef:
      name: cluster1
      namespace: cluster-registry
  clusterID: 8ae8f091-f8b1-4b2f-926c-22278994c996
  kubernetesApiEndpoints:
  - serverAddress: https://3.13.240.171:6443
status:
  conditions:
  - lastHeartbeatTime: "2022-03-24T02:38:49Z"
    lastTransitionTime: "2022-03-24T02:31:05Z"
    message: cluster is ready
    reason: ClusterIsReady
    status: "True"
    type: Ready
  distribution: PKE
  kubeProxyVersions:
  - v1.19.10
  kubeletVersions:
  - v1.19.10
  locality:
    region: us-east-2
    regions:
    - us-east-2
    zones:
    - us-east-2a
  provider: amazon
  state: Ready
  type: Local
  version: v1.19.10
In the output above there is a field called kubernetesApiEndpoints in the spec. Make sure it is set to the appropriate value.
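If you only want to check the endpoint value itself, a quick way to extract it (a minimal sketch using standard kubectl JSONPath against the field shown above) is:
kubectl get cluster cluster1 -o jsonpath='{.spec.kubernetesApiEndpoints[*].serverAddress}'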
- Repeat the install command on the second cluster (cluster2):
smm sdm clusterregistry install
- Repeat the install command on the third cluster (cluster3):
smm sdm clusterregistry install
Note: the Cluster Registry installation can also be done in operator mode.
Attach Kubernetes clusters
- Attach cluster1 and cluster2 to form a group. On cluster1 run the following command:
smm sdm clusterregistry cluster attach <path/to/cluster2/kubeconfig>
This command does a bi-directional attachment of cluster1 and cluster2. This means that cluster1’s Cluster CR is copied onto cluster2 and cluster2’s Cluster CR is copied onto cluster1. No additional command needs to be executed on cluster2 in order to group cluster1 and cluster2 together.
If the attachment was successful you should see logs similar to the following:
INFO loading peer kubeconfig file {"path": "/Users/admin/Downloads/cluster2.yaml"}
INFO loaded peer kubernetes context {"context-name": "kubernetes-admin@cluster2"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-application-manifest-sink"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-kafka-acls-sink"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-kafka-clusters-sink"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-kafka-resource-selectors-sink"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-kafka-roles-sink"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-kafka-topics-sink"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-kafka-users-sink"}
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-application-manifest-sink created
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-acls-sink created
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-clusters-sink created
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-resource-selectors-sink created
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-roles-sink created
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-topics-sink created
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-users-sink created
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-application-manifest-sink"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-kafka-acls-sink"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-kafka-clusters-sink"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-kafka-resource-selectors-sink"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-kafka-roles-sink"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-kafka-topics-sink"}
INFO resource Sync Rule not found on running cluster {"resourceSyncRuleName": "sdm-core-resources-kafka-users-sink"}
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-application-manifest-sink created
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-acls-sink created
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-clusters-sink created
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-resource-selectors-sink created
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-roles-sink created
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-topics-sink created
INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-users-sink created
INFO copying peer kubernetes cluster CR and secret to local kubernetes cluster...
INFO serviceaccount:supertubes-system/cluster-registry-sdm created
INFO serviceaccount:supertubes-system/cluster-registry-sdm-reader created
INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm created
INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm-reader created
INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm created
INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm-reader created
INFO clusterfeature.clusterregistry.k8s.cisco.com:sdm-core-resources created
INFO serviceaccount:supertubes-system/cluster-registry-sdm - pending
INFO serviceaccount:supertubes-system/cluster-registry-sdm - ok
INFO serviceaccount:supertubes-system/cluster-registry-sdm-reader - pending
INFO serviceaccount:supertubes-system/cluster-registry-sdm-reader - ok
INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm - pending
INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm - ok
INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm-reader - pending
INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm-reader - ok
INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm - pending
INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm - ok
INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm-reader - pending
INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm-reader - ok
INFO clusterfeature.clusterregistry.k8s.cisco.com:sdm-core-resources - pending
INFO clusterfeature.clusterregistry.k8s.cisco.com:sdm-core-resources - ok
INFO secret:cluster-registry/cluster2 created
INFO cluster.clusterregistry.k8s.cisco.com:cluster2 created
INFO secret:cluster-registry/cluster2 - pending
INFO secret:cluster-registry/cluster2 - ok
INFO cluster.clusterregistry.k8s.cisco.com:cluster2 - pending
INFO cluster.clusterregistry.k8s.cisco.com:cluster2 - ok
INFO successfully copied peer kubernetes cluster CR and secret to local kubernetes cluster
INFO copying local kubernetes cluster CR and secret to peer kubernetes cluster...
INFO serviceaccount:supertubes-system/cluster-registry-sdm created
INFO serviceaccount:supertubes-system/cluster-registry-sdm-reader created
INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm created
INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm-reader created
INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm created
INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm-reader created
INFO clusterfeature.clusterregistry.k8s.cisco.com:sdm-core-resources created
INFO serviceaccount:supertubes-system/cluster-registry-sdm - pending
INFO serviceaccount:supertubes-system/cluster-registry-sdm - ok
INFO serviceaccount:supertubes-system/cluster-registry-sdm-reader - pending
INFO serviceaccount:supertubes-system/cluster-registry-sdm-reader - ok
INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm - pending
INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm - ok
INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm-reader - pending
INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm-reader - ok
INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm - pending
INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm - ok
INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm-reader - pending
INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm-reader - ok
INFO clusterfeature.clusterregistry.k8s.cisco.com:sdm-core-resources - pending
INFO clusterfeature.clusterregistry.k8s.cisco.com:sdm-core-resources - ok
INFO secret:cluster-registry/cluster1 created
INFO cluster.clusterregistry.k8s.cisco.com:cluster1 created
INFO secret:cluster-registry/cluster1 - pending
INFO secret:cluster-registry/cluster1 - ok
INFO cluster.clusterregistry.k8s.cisco.com:cluster1 - pending
INFO cluster.clusterregistry.k8s.cisco.com:cluster1 - ok
INFO successfully copied local kubernetes cluster CR and secret to peer kubernetes cluster
Cluster1 should now have two Cluster CRs: one that represents itself and another that represents cluster2. To verify that both exist, execute the following against cluster1 (make sure kubectl is set to query cluster1):
kubectl get cluster -o wide
The output should be something like:
NAME       ID                                     STATUS   TYPE    SYNCED   VERSION    PROVIDER   DISTRIBUTION   REGION      STATUS MESSAGE   SYNC MESSAGE
cluster1   8ae8f091-f8b1-4b2f-926c-22278994c996   Ready    Local            v1.19.10   amazon     PKE            us-east-2
cluster2   68fe4a8a-2fcd-4754-8ac0-282906310edf   Ready    Peer    True     v1.19.10   amazon     PKE            us-east-2                    all participating clusters are in sync
Similarly, cluster2 should also have two Cluster CRs: one that represents itself and another that represents cluster1. To verify that both exist, execute the following against cluster2 (again, make sure kubectl is set to query cluster2):
kubectl get cluster -o wide
The output should be something like:
NAME       ID                                     STATUS   TYPE    SYNCED   VERSION    PROVIDER   DISTRIBUTION   REGION      STATUS MESSAGE   SYNC MESSAGE
cluster1   8ae8f091-f8b1-4b2f-926c-22278994c996   Ready    Peer    True     v1.19.10   amazon     PKE            us-east-2                    all participating clusters are in sync
cluster2   68fe4a8a-2fcd-4754-8ac0-282906310edf   Ready    Local            v1.19.10   amazon     PKE            us-east-2
Important Note: After executing the attach command, the SDM CLI might prompt you with a question similar to the following during the attachment process:
Existing resource secret:cluster-registry/cluster2 is not yet managed by us
This is because the existing cluster2 secret does not have the necessary permissions to modify resources on cluster2; it can only read resources from cluster2. If you select the Skip this resource option, and later need to detach cluster2 using cluster1's Kubernetes API endpoint, cluster1 will not be able to fully complete the detachment of cluster2. Specifically, it won't be able to modify resource sync rules on cluster2 or delete cluster1 resources from cluster2. If you need to be able to detach cluster2 through cluster1's Kubernetes API endpoint in the future, it is best to select the Manage this resource from now on option.
- Grouping the clusters together via SDM has enabled SDM resources (including KafkaCluster) to be synced across cluster1 and cluster2. Let's take a look at the KafkaCluster resources on cluster1. On cluster1, execute the following:
kubectl get kafkacluster -o wide -A
The output should be something like:
NAMESPACE   NAME                  CLUSTER STATE    CLUSTER ALERT COUNT   LAST SUCCESSFUL UPGRADE   UPGRADE ERROR COUNT   AGE
kafka       kafka                 ClusterRunning   0                                               0                     29m
kafka       kafka-cluster2-615d   ClusterRunning   0                                               0                     12m
As you can see, there are now two KafkaCluster objects. The first one, named kafka, represents the Kafka cluster running on the local Kubernetes cluster. The second one, named kafka-cluster2-615d, represents the Kafka cluster running on cluster2 and has been replicated from cluster2. Its original name on cluster2 was kafka, but to prevent naming collisions, the names of replicated resources are modified to include the original cluster name and a 4-digit hash of the cluster name.
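Because replicated names always embed the originating cluster's name, a quick (if informal) way to list only the resources that came from cluster2 is to filter on that suffix:
# List KafkaCluster resources replicated from cluster2, based on the naming convention above
kubectl get kafkacluster -A | grep -- '-cluster2-'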
Now let's take a look at the KafkaCluster resources on cluster2. On cluster2, execute the following:
kubectl get kafkacluster -o wide -A
The output should be something like:
NAMESPACE   NAME                  CLUSTER STATE    CLUSTER ALERT COUNT   LAST SUCCESSFUL UPGRADE   UPGRADE ERROR COUNT   AGE
kafka       kafka                 ClusterRunning   0                                               0                     29m
kafka       kafka-cluster1-a140   ClusterRunning   0                                               0                     13m
As you can see, there are again two KafkaCluster objects; kafka-cluster1-a140 has been replicated from cluster1.
- Attach the third cluster (cluster3) to the group. On cluster3, execute the following:
smm sdm clusterregistry cluster attach <path/to/cluster1/kubeconfig>
After the attachment completes, use kubectl to get all Cluster CRs on cluster3 to verify that the attachment succeeded:
kubectl get cluster -o wide
The output should be something like:
NAME       ID                                     STATUS   TYPE    SYNCED   VERSION    PROVIDER   DISTRIBUTION   REGION      STATUS MESSAGE   SYNC MESSAGE
cluster1   8ae8f091-f8b1-4b2f-926c-22278994c996   Ready    Peer    True     v1.19.10   amazon     PKE            us-east-2                    all participating clusters are in sync
cluster2   68fe4a8a-2fcd-4754-8ac0-282906310edf   Ready    Peer    True     v1.19.10   amazon     PKE            us-east-2                    all participating clusters are in sync
cluster3   92bd8f09-83ab-4c22-926c-93872087acde   Ready    Local            v1.19.10   amazon     PKE            us-east-2
Although cluster3 was only explicitly attached to cluster1, cluster3 also sees the information for cluster2. When a cluster joins a group it automatically gets the cluster information for all Kubernetes clusters already present in the group.
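The reverse also holds: the existing members learn about cluster3 as soon as it joins. For example, pointing kubectl back at cluster1 and listing the Cluster CRs again should now show all three clusters:
# Run against cluster1; expect cluster1 (Local) plus cluster2 and cluster3 (Peer) in the output
kubectl get cluster -o wide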
Detach Kubernetes clusters
In order to detach cluster2 from the cluster group (that was formed by the three clusters), execute the following command on any cluster in the cluster group:
smm sdm clusterregistry cluster detach <path/to/cluster2/kubeconfig>
This command deletes all of the resources that have been synchronized from cluster2, including the Cluster CR that represents cluster2, and stops SDM-related resources (such as KafkaCluster, KafkaTopic, KafkaUser, etc.) from being replicated between cluster2 and the remaining clusters in the cluster group (in this case cluster1 and cluster3).
More specifically, on cluster2 all SDM resources that were synchronized from the peer clusters in the group (in this case cluster1 and cluster3) will be deleted; on cluster2's peer clusters (in this case cluster1 and cluster3), all SDM resources that were synchronized from cluster2 will be deleted.
Note: the detachment of cluster2 from the group is accomplished under the hood by deleting the SDM ClusterFeature and ResourceSyncRule CRs from cluster2.
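If you want to see the objects involved, you can list them on cluster2 by the CRD names registered during the Cluster Registry installation:
# Run against cluster2; these are the objects the detach command removes
kubectl get clusterfeatures.clusterregistry.k8s.cisco.com
kubectl get resourcesyncrules.clusterregistry.k8s.cisco.com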
During the detachment you should see logs similar to the following:
? Are you sure to use the current context? kubernetes-admin@cluster1 (API Server: https://3.138.78.132:6443) Yes
2022-03-28T18:43:56.617-0700 INFO loading detaching peer cluster kubeconfig file {"path": "/Users/admin/Downloads/cluster2.yaml"}
2022-03-28T18:43:56.618-0700 INFO loaded detaching peer cluster kubernetes context {"context-name": "kubernetes-admin@cluster2"}
? Are you sure to detach the cluster with name cluster2? Yes
2022-03-28T18:44:06.931-0700 INFO deleting Supertubes resources from the detaching cluster {"name": "cluster2"}
2022-03-28T18:44:07.447-0700 INFO clusterfeature.clusterregistry.k8s.cisco.com:sdm-core-resources - pending
2022-03-28T18:44:07.547-0700 INFO clusterfeature.clusterregistry.k8s.cisco.com:sdm-core-resources - ok
2022-03-28T18:44:07.547-0700 INFO clusterfeature.clusterregistry.k8s.cisco.com:sdm-core-resources deleted
2022-03-28T18:44:07.747-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-application-manifest-sink - pending
2022-03-28T18:44:07.847-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-application-manifest-sink - ok
2022-03-28T18:44:07.847-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-application-manifest-sink deleted
2022-03-28T18:44:08.047-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-acls-sink - pending
2022-03-28T18:44:08.147-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-acls-sink - ok
2022-03-28T18:44:08.147-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-acls-sink deleted
2022-03-28T18:44:08.347-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-clusters-sink - pending
2022-03-28T18:44:08.447-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-clusters-sink - ok
2022-03-28T18:44:08.447-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-clusters-sink deleted
2022-03-28T18:44:08.647-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-resource-selectors-sink - pending
2022-03-28T18:44:08.747-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-resource-selectors-sink - ok
2022-03-28T18:44:08.747-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-resource-selectors-sink deleted
2022-03-28T18:44:08.947-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-roles-sink - pending
2022-03-28T18:44:09.047-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-roles-sink - ok
2022-03-28T18:44:09.047-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-roles-sink deleted
2022-03-28T18:44:09.247-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-topics-sink - pending
2022-03-28T18:44:09.347-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-topics-sink - ok
2022-03-28T18:44:09.347-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-topics-sink deleted
2022-03-28T18:44:09.547-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-users-sink - pending
2022-03-28T18:44:09.647-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-users-sink - ok
2022-03-28T18:44:09.647-0700 INFO resourcesyncrule.clusterregistry.k8s.cisco.com:sdm-core-resources-kafka-users-sink deleted
2022-03-28T18:44:10.347-0700 INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm - pending
2022-03-28T18:44:10.430-0700 INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm - ok
2022-03-28T18:44:10.430-0700 INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm deleted
2022-03-28T18:44:10.647-0700 INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm-reader - pending
2022-03-28T18:44:10.747-0700 INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm-reader - ok
2022-03-28T18:44:10.747-0700 INFO clusterrole.rbac.authorization.k8s.io:cluster-registry-sdm-reader deleted
2022-03-28T18:44:10.947-0700 INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm - pending
2022-03-28T18:44:11.047-0700 INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm - ok
2022-03-28T18:44:11.047-0700 INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm deleted
2022-03-28T18:44:11.247-0700 INFO serviceaccount:supertubes-system/cluster-registry-sdm - pending
2022-03-28T18:44:11.347-0700 INFO serviceaccount:supertubes-system/cluster-registry-sdm - ok
2022-03-28T18:44:11.347-0700 INFO serviceaccount:supertubes-system/cluster-registry-sdm deleted
2022-03-28T18:44:11.547-0700 INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm-reader - pending
2022-03-28T18:44:11.647-0700 INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm-reader - ok
2022-03-28T18:44:11.647-0700 INFO clusterrolebinding.rbac.authorization.k8s.io:cluster-registry-sdm-reader deleted
2022-03-28T18:44:11.847-0700 INFO serviceaccount:supertubes-system/cluster-registry-sdm-reader - pending
2022-03-28T18:44:11.947-0700 INFO serviceaccount:supertubes-system/cluster-registry-sdm-reader - ok
2022-03-28T18:44:11.947-0700 INFO serviceaccount:supertubes-system/cluster-registry-sdm-reader deleted
? Existing resource applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster1-a140 is not managed by us Delete
2022-03-28T18:44:35.397-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster1-a140 - pending
2022-03-28T18:44:35.497-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster1-a140 - ok
2022-03-28T18:44:35.497-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster1-a140 deleted
? Existing resource applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster3-947d is not managed by us Delete
2022-03-28T18:44:39.430-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster3-947d - pending
2022-03-28T18:44:39.547-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster3-947d - ok
2022-03-28T18:44:39.548-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster3-947d deleted
2022-03-28T18:44:39.947-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster1-a140 - pending
2022-03-28T18:44:40.048-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster1-a140 - ok
2022-03-28T18:44:40.048-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster1-a140 deleted
2022-03-28T18:44:40.348-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster3-947d - pending
2022-03-28T18:44:40.448-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster3-947d - ok
2022-03-28T18:44:40.448-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster3-947d deleted
? Existing resource kafkarole.kafka.banzaicloud.io:consumer-cluster1-a140 is not managed by us Delete
2022-03-28T18:44:58.601-0700 INFO kafkarole.kafka.banzaicloud.io:consumer-cluster1-a140 - pending
2022-03-28T18:44:58.698-0700 INFO kafkarole.kafka.banzaicloud.io:consumer-cluster1-a140 - ok
2022-03-28T18:44:58.698-0700 INFO kafkarole.kafka.banzaicloud.io:consumer-cluster1-a140 deleted
? Existing resource kafkarole.kafka.banzaicloud.io:consumer-cluster3-947d is not managed by us Delete
2022-03-28T18:45:03.001-0700 INFO kafkarole.kafka.banzaicloud.io:consumer-cluster3-947d - pending
2022-03-28T18:45:03.098-0700 INFO kafkarole.kafka.banzaicloud.io:consumer-cluster3-947d - ok
2022-03-28T18:45:03.098-0700 INFO kafkarole.kafka.banzaicloud.io:consumer-cluster3-947d deleted
? Existing resource kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster1-a140 is not managed by us Delete
2022-03-28T18:45:06.248-0700 INFO kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster1-a140 - pending
2022-03-28T18:45:06.348-0700 INFO kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster1-a140 - ok
2022-03-28T18:45:06.348-0700 INFO kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster1-a140 deleted
? Existing resource kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster3-947d is not managed by us Delete
2022-03-28T18:45:11.502-0700 INFO kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster3-947d - pending
2022-03-28T18:45:11.598-0700 INFO kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster3-947d - ok
2022-03-28T18:45:11.598-0700 INFO kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster3-947d deleted
? Existing resource kafkarole.kafka.banzaicloud.io:producer-cluster1-a140 is not managed by us Delete
2022-03-28T18:45:16.398-0700 INFO kafkarole.kafka.banzaicloud.io:producer-cluster1-a140 - pending
2022-03-28T18:45:16.498-0700 INFO kafkarole.kafka.banzaicloud.io:producer-cluster1-a140 - ok
2022-03-28T18:45:16.498-0700 INFO kafkarole.kafka.banzaicloud.io:producer-cluster1-a140 deleted
? Existing resource kafkarole.kafka.banzaicloud.io:producer-cluster3-947d is not managed by us Delete
2022-03-28T18:45:21.148-0700 INFO kafkarole.kafka.banzaicloud.io:producer-cluster3-947d - pending
2022-03-28T18:45:21.248-0700 INFO kafkarole.kafka.banzaicloud.io:producer-cluster3-947d - ok
2022-03-28T18:45:21.248-0700 INFO kafkarole.kafka.banzaicloud.io:producer-cluster3-947d deleted
2022-03-28T18:45:21.248-0700 INFO successfully deleted Supertubes resources from the detaching cluster {"name": "cluster2"}
2022-03-28T18:45:22.224-0700 INFO deleting Supertubes resources that were synced from the detaching cluster to its peer cluster {"peer cluster": "cluster1"}
? Existing resource applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster2-615d is not managed by us Delete
2022-03-28T18:45:27.119-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster2-615d - pending
2022-03-28T18:45:27.204-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster2-615d - ok
2022-03-28T18:45:27.204-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster2-615d deleted
2022-03-28T18:45:27.598-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster2-615d - pending
2022-03-28T18:45:27.698-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster2-615d - ok
2022-03-28T18:45:27.698-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster2-615d deleted
? Existing resource kafkarole.kafka.banzaicloud.io:consumer-cluster2-615d is not managed by us Delete
2022-03-28T18:45:34.898-0700 INFO kafkarole.kafka.banzaicloud.io:consumer-cluster2-615d - pending
2022-03-28T18:45:34.998-0700 INFO kafkarole.kafka.banzaicloud.io:consumer-cluster2-615d - ok
2022-03-28T18:45:34.998-0700 INFO kafkarole.kafka.banzaicloud.io:consumer-cluster2-615d deleted
? Existing resource kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster2-615d is not managed by us Delete
2022-03-28T18:45:41.298-0700 INFO kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster2-615d - pending
2022-03-28T18:45:41.398-0700 INFO kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster2-615d - ok
2022-03-28T18:45:41.398-0700 INFO kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster2-615d deleted
? Existing resource kafkarole.kafka.banzaicloud.io:producer-cluster2-615d is not managed by us Delete
2022-03-28T18:45:44.951-0700 INFO kafkarole.kafka.banzaicloud.io:producer-cluster2-615d - pending
2022-03-28T18:45:45.048-0700 INFO kafkarole.kafka.banzaicloud.io:producer-cluster2-615d - ok
2022-03-28T18:45:45.049-0700 INFO kafkarole.kafka.banzaicloud.io:producer-cluster2-615d deleted
2022-03-28T18:45:45.049-0700 INFO successfully deleted Supertubes resources that were synced from the detaching cluster {"peer cluster": "cluster1"}
2022-03-28T18:45:46.001-0700 INFO deleting Supertubes resources that were synced from the detaching cluster to its peer cluster {"peer cluster": "cluster3"}
? Existing resource applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster2-615d is not managed by us Delete all
2022-03-28T18:46:08.849-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster2-615d - pending
2022-03-28T18:46:08.949-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster2-615d - ok
2022-03-28T18:46:08.949-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster2-615d deleted
2022-03-28T18:46:09.149-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster2-615d-cluster2-615d - pending
2022-03-28T18:46:09.249-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster2-615d-cluster2-615d - ok
2022-03-28T18:46:09.249-0700 INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest-cluster2-615d-cluster2-615d deleted
2022-03-28T18:46:09.549-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster2-615d - pending
2022-03-28T18:46:09.632-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster2-615d - ok
2022-03-28T18:46:09.632-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster2-615d deleted
2022-03-28T18:46:09.857-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster2-615d-cluster2-615d - pending
2022-03-28T18:46:09.949-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster2-615d-cluster2-615d - ok
2022-03-28T18:46:09.949-0700 INFO kafkacluster.kafka.banzaicloud.io:kafka/kafka-cluster2-615d-cluster2-615d deleted
2022-03-28T18:46:10.227-0700 INFO kafkarole.kafka.banzaicloud.io:consumer-cluster2-615d - pending
2022-03-28T18:46:10.311-0700 INFO kafkarole.kafka.banzaicloud.io:consumer-cluster2-615d - ok
2022-03-28T18:46:10.311-0700 INFO kafkarole.kafka.banzaicloud.io:consumer-cluster2-615d deleted
2022-03-28T18:46:10.750-0700 INFO kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster2-615d - pending
2022-03-28T18:46:10.849-0700 INFO kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster2-615d - ok
2022-03-28T18:46:10.849-0700 INFO kafkarole.kafka.banzaicloud.io:idempotent-producer-cluster2-615d deleted
2022-03-28T18:46:11.049-0700 INFO kafkarole.kafka.banzaicloud.io:producer-cluster2-615d - pending
2022-03-28T18:46:11.149-0700 INFO kafkarole.kafka.banzaicloud.io:producer-cluster2-615d - ok
2022-03-28T18:46:11.149-0700 INFO kafkarole.kafka.banzaicloud.io:producer-cluster2-615d deleted
2022-03-28T18:46:11.149-0700 INFO successfully deleted Supertubes resources that were synced from the detaching cluster {"peer cluster": "cluster3"}
2022-03-28T18:46:11.149-0700 INFO successfully detached cluster {"name": "cluster2"}
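Once the detachment completes, you can verify the result with the same queries used earlier. For example, on cluster1 the Cluster CR for cluster2 and the replicated KafkaCluster should be gone:
# Run against cluster1 after the detach; cluster2 should no longer appear in either listing
kubectl get cluster -o wide
kubectl get kafkacluster -o wide -A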
Note: If one of the clusters becomes unreachable during the detachment process, the CLI will error out. Since the command is idempotent, you can complete the process later, once the cluster is reachable again. Alternatively, you can delete the following resources manually to complete the detachment process (see the example commands after the list):
- ClusterFeature: sdm-core-resources
- ResourceSyncRule: sdm-core-resources-application-manifest-sink, sdm-core-resources-kafka-acls-sink, sdm-core-resources-kafka-clusters-sink, sdm-core-resources-kafka-resource-selectors-sink, sdm-core-resources-kafka-roles-sink, sdm-core-resources-kafka-topics-sink, sdm-core-resources-kafka-users-sink
- ClusterRole: cluster-registry-sdm, cluster-registry-sdm-reader
- ClusterRoleBinding: cluster-registry-sdm, cluster-registry-sdm-reader
- ServiceAccount: cluster-registry-sdm, cluster-registry-sdm-reader
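As a rough sketch (assuming direct kubectl access to the affected cluster; the ServiceAccounts live in the supertubes-system namespace, as seen in the attach logs above), the manual cleanup could look like this:
# Delete the SDM ClusterFeature and ResourceSyncRule CRs
kubectl delete clusterfeatures.clusterregistry.k8s.cisco.com sdm-core-resources
kubectl delete resourcesyncrules.clusterregistry.k8s.cisco.com \
  sdm-core-resources-application-manifest-sink sdm-core-resources-kafka-acls-sink \
  sdm-core-resources-kafka-clusters-sink sdm-core-resources-kafka-resource-selectors-sink \
  sdm-core-resources-kafka-roles-sink sdm-core-resources-kafka-topics-sink \
  sdm-core-resources-kafka-users-sink
# Delete the RBAC objects and ServiceAccounts
kubectl delete clusterrole cluster-registry-sdm cluster-registry-sdm-reader
kubectl delete clusterrolebinding cluster-registry-sdm cluster-registry-sdm-reader
kubectl delete serviceaccount -n supertubes-system cluster-registry-sdm cluster-registry-sdm-reader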
Uninstall Cluster Registry
In order to uninstall Cluster Registry, execute the following command on cluster1:
smm sdm clusterregistry uninstall
If it runs successfully you should see logs similar to the following:
2022-03-24T10:47:31.388-0700 INFO applicationManifest.readiness waiting {"cluster-registry": "0.1.3"}
INFO applicationManifest.readiness done {"cluster-registry": "0.1.3"}
INFO applicationManifest.cluster-registry reconciling
INFO applicationManifest.cluster-registry syncing resources
INFO applicationManifest.cluster-registry object eligible for delete {"gvk": "/v1, Kind=ServiceAccount", "namespace": "cluster-registry", "name": "cluster-registry-controller"}
INFO applicationManifest.cluster-registry object eligible for delete {"gvk": "/v1, Kind=ServiceAccount", "namespace": "cluster-registry", "name": "cluster-registry-controller-reader"}
INFO applicationManifest.cluster-registry object eligible for delete {"gvk": "rbac.authorization.k8s.io/v1, Kind=ClusterRole", "namespace": "", "name": "cluster-registry-controller"}
INFO applicationManifest.cluster-registry object eligible for delete {"gvk": "rbac.authorization.k8s.io/v1, Kind=ClusterRole", "namespace": "", "name": "cluster-registry-controller-aggregated"}
INFO applicationManifest.cluster-registry object eligible for delete {"gvk": "rbac.authorization.k8s.io/v1, Kind=ClusterRole", "namespace": "", "name": "cluster-registry-controller-reader"}
INFO applicationManifest.cluster-registry object eligible for delete {"gvk": "rbac.authorization.k8s.io/v1, Kind=ClusterRole", "namespace": "", "name": "cluster-registry-controller-reader-aggregated"}
INFO applicationManifest.cluster-registry object eligible for delete {"gvk": "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding", "namespace": "", "name": "cluster-registry-controller"}
INFO applicationManifest.cluster-registry object eligible for delete {"gvk": "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding", "namespace": "", "name": "cluster-registry-controller-reader"}
INFO applicationManifest.cluster-registry object eligible for delete {"gvk": "rbac.authorization.k8s.io/v1, Kind=Role", "namespace": "cluster-registry", "name": "cluster-registry-controller-leader-election"}
INFO applicationManifest.cluster-registry object eligible for delete {"gvk": "rbac.authorization.k8s.io/v1, Kind=RoleBinding", "namespace": "cluster-registry", "name": "cluster-registry-controller-leader-election"}
INFO applicationManifest.cluster-registry object eligible for delete {"gvk": "/v1, Kind=Service", "namespace": "cluster-registry", "name": "cluster-registry-controller"}
INFO applicationManifest.cluster-registry object eligible for delete {"gvk": "apps/v1, Kind=Deployment", "namespace": "cluster-registry", "name": "cluster-registry-controller"}
INFO applicationManifest.cluster-registry will prune unmmanaged resource {"name": "cluster-registry-controller", "namespace": "cluster-registry", "group": "", "version": "v1", "listKind": "ServiceAccount"}
INFO applicationManifest.cluster-registry will prune unmmanaged resource {"name": "cluster-registry-controller-reader", "namespace": "cluster-registry", "group": "", "version": "v1", "listKind": "ServiceAccount"}
INFO applicationManifest.cluster-registry will prune unmmanaged resource {"name": "cluster-registry-controller", "namespace": "", "group": "rbac.authorization.k8s.io", "version": "v1", "listKind": "ClusterRole"}
INFO applicationManifest.cluster-registry will prune unmmanaged resource {"name": "cluster-registry-controller-aggregated", "namespace": "", "group": "rbac.authorization.k8s.io", "version": "v1", "listKind": "ClusterRole"}
INFO applicationManifest.cluster-registry will prune unmmanaged resource {"name": "cluster-registry-controller-reader", "namespace": "", "group": "rbac.authorization.k8s.io", "version": "v1", "listKind": "ClusterRole"}
INFO applicationManifest.cluster-registry will prune unmmanaged resource {"name": "cluster-registry-controller-reader-aggregated", "namespace": "", "group": "rbac.authorization.k8s.io", "version": "v1", "listKind": "ClusterRole"}
INFO applicationManifest.cluster-registry will prune unmmanaged resource {"name": "cluster-registry-controller", "namespace": "", "group": "rbac.authorization.k8s.io", "version": "v1", "listKind": "ClusterRoleBinding"}
INFO applicationManifest.cluster-registry will prune unmmanaged resource {"name": "cluster-registry-controller-reader", "namespace": "", "group": "rbac.authorization.k8s.io", "version": "v1", "listKind": "ClusterRoleBinding"}
INFO applicationManifest.cluster-registry will prune unmmanaged resource {"name": "cluster-registry-controller-leader-election", "namespace": "cluster-registry", "group": "rbac.authorization.k8s.io", "version": "v1", "listKind": "Role"}
INFO applicationManifest.cluster-registry will prune unmmanaged resource {"name": "cluster-registry-controller-leader-election", "namespace": "cluster-registry", "group": "rbac.authorization.k8s.io", "version": "v1", "listKind": "RoleBinding"}
INFO applicationManifest.cluster-registry will prune unmmanaged resource {"name": "cluster-registry-controller", "namespace": "cluster-registry", "group": "", "version": "v1", "listKind": "Service"}
INFO applicationManifest.cluster-registry will prune unmmanaged resource {"name": "cluster-registry-controller", "namespace": "cluster-registry", "group": "apps", "version": "v1", "listKind": "Deployment"}
INFO reconciliation is in progress
INFO applicationManifest.cluster-registry.readiness waiting
INFO applicationManifest.cluster-registry.readiness done
INFO applicationManifest.cluster-registry.removal waiting
INFO applicationManifest.cluster-registry.removal done
INFO applicationManifest.cluster-registry reconciled
INFO reconciled...re-check (1/3)
INFO reconciled...re-check (2/3)
INFO reconciled...re-check (3/3)
INFO applicationmanifest.supertubes.banzaicloud.io:default/applicationmanifest - ok
Note: the above procedures can also be done in operator mode.
Note: If any of the clusters becomes unreachable during the uninstall process, the CLI will show an error. To complete the uninstall process, you will need to delete the following resources manually (see the example commands after the list):
- ClusterFeature: sdm-core-resources
- ResourceSyncRule: sdm-core-resources-application-manifest-sink, sdm-core-resources-kafka-acls-sink, sdm-core-resources-kafka-clusters-sink, sdm-core-resources-kafka-resource-selectors-sink, sdm-core-resources-kafka-roles-sink, sdm-core-resources-kafka-topics-sink, sdm-core-resources-kafka-users-sink
- ClusterRole: cluster-registry-sdm, cluster-registry-sdm-reader
- ClusterRoleBinding: cluster-registry-sdm, cluster-registry-sdm-reader
- ServiceAccount: cluster-registry-sdm, cluster-registry-sdm-reader
- Deployment: cluster-registry-controller in namespace cluster-registry
- Namespace: cluster-registry
- All Cluster CRs
You may also need to delete resources that were replicated from the cluster you're deleting onto other clusters in the group.
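In addition to the ClusterFeature, ResourceSyncRule, RBAC, and ServiceAccount cleanup sketched in the detach section, removing the remaining objects could look roughly like this (again assuming direct kubectl access to the affected cluster):
# Remove the controller Deployment, every Cluster CR, and finally the cluster-registry namespace
kubectl delete deployment -n cluster-registry cluster-registry-controller
kubectl delete clusters.clusterregistry.k8s.cisco.com --all
kubectl delete namespace cluster-registry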