Create a multi-cluster mesh

Prerequisites

To create a multi-cluster mesh with Service Mesh Manager, you need:

  • At least two Kubernetes clusters, with access to their kubeconfig files.
  • The Service Mesh Manager CLI tool installed on your computer.
  • Network connectivity properly configured between the participating clusters.
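
As a quick sanity check before starting, verify that both kubeconfig files grant access to their clusters. (The <PRIMARY_CLUSTER_KUBECONFIG_FILE> placeholder below is illustrative; the rest of this guide refers only to <PEER_CLUSTER_KUBECONFIG_FILE>.)

kubectl --kubeconfig <PRIMARY_CLUSTER_KUBECONFIG_FILE> get nodes
kubectl --kubeconfig <PEER_CLUSTER_KUBECONFIG_FILE> get nodes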

Create a multi-cluster mesh

To create a multi-cluster mesh with Service Mesh Manager, complete the following steps.

  1. Install Service Mesh Manager on the primary cluster using the following command. This installs all Service Mesh Manager components to the cluster:

    smm install -a

    Note: If you are installing Service Mesh Manager on a managed Kubernetes solution of a public cloud provider (for example, Amazon EKS, AKS, or GKE) or kOps, the cluster name auto-discovered by Service Mesh Manager is incompatible with Kubernetes resource naming restrictions and Istio’s method of identifying clusters in a multicluster mesh.

    In earlier Service Mesh Manager versions, you had to manually use the --cluster-name parameter to set a cluster name that complies with the RFC 1123 DNS subdomain/label format (alphanumeric string without “_” or “.” characters). Starting with Service Mesh Manager version 1.11, non-compliant names are automatically converted using the following rules:

    • Replace ‘_’ characters with ‘-’
    • Replace ‘.’ characters with ‘-’
    • Replace ‘:’ characters with ‘-’
    • Truncate the name to 63 characters
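
    For illustration, these rules are equivalent to the following shell pipeline (a sketch of the conversion only, not necessarily the exact implementation used by Service Mesh Manager):

    echo 'gke_gcp-cluster_region' | tr '_.:' '---' | cut -c 1-63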

    If you experience errors during the installation, try running it in verbose mode: smm install -v

    Calisti supports KUBECONFIG contexts that use the following authentication methods:

    • certfile and keyfile
    • certdata and keydata
    • bearer token
    • exec/auth provider

    Username-password pairs are not supported.
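
    For example, a kubeconfig user entry that uses the exec authentication method looks like the following (a generic sketch based on the AWS CLI; other providers ship their own plugins):

    users:
    - name: example-user
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          command: aws
          args: ["eks", "get-token", "--cluster-name", "example-cluster"]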

    If you are installing Service Mesh Manager in a test environment, you can install it without requiring authentication by running:

    smm install --anonymous-auth -a
    
  2. On the primary Service Mesh Manager cluster, attach the peer cluster to the mesh using one of the following commands.

    Note: To understand the difference between the remote Istio and primary Istio clusters, see the Istio control plane models section in the official Istio documentation. In short, remote Istio clusters do not run their own Istio control plane, while primary Istio clusters do.

    The following commands automate creating the resources necessary for the peer cluster, generating and setting up the kubeconfig for that cluster, and attaching the cluster to the mesh.

    • To attach a remote Istio cluster with the default options, run:

      smm istio cluster attach <PEER_CLUSTER_KUBECONFIG_FILE>
      
    • To attach a primary Istio cluster (one that has an active Istio control plane installed), run:

      smm istio cluster attach <PEER_CLUSTER_KUBECONFIG_FILE> --active-istio-control-plane
      

      Note: If the name of the cluster cannot be used as a Kubernetes resource name (for example, because it contains an underscore, colon, or another special character), you must manually specify a name to use when attaching the cluster to the service mesh. For example:

      smm istio cluster attach <PEER_CLUSTER_KUBECONFIG_FILE> --name <KUBERNETES_COMPLIANT_CLUSTER_NAME> --active-istio-control-plane
      

      Otherwise, the following error occurs when you try to attach the cluster:

      could not attach peer cluster: graphql: Secret "example-secret" is invalid: metadata.name: Invalid value: "gke_gcp-cluster_region": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.'
      
  3. Verify the name that will be used to refer to the cluster in the mesh. To accept the suggested name, press Enter.

    Note: As described in step 1, if you are installing Service Mesh Manager on a managed Kubernetes solution of a public cloud provider (for example, Amazon EKS, AKS, or GKE) or kOps, the auto-discovered cluster name is incompatible with Kubernetes resource naming restrictions and Istio's method of identifying clusters in a multicluster mesh. Starting with Service Mesh Manager version 1.11, non-compliant names are automatically converted to the RFC 1123 format using the rules listed in step 1.

    ? Cluster must be registered. Please enter the name of the cluster (<current-name-of-the-cluster>)
    
  4. Wait until the peer cluster is attached. Attaching the peer cluster takes some time, because it can complete only after the ingress gateway address becomes available. You can verify that the peer cluster is attached successfully with the following command:

    smm istio cluster status
    

    The process is finished when you see Available in the Status field of all clusters.
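
    Under the hood, Istio multicluster discovery relies on remote kubeconfig secrets in the istio-system namespace. Assuming Calisti follows the standard Istio convention of labeling these secrets, you can also inspect them directly on the primary cluster:

    kubectl get secrets -n istio-system -l istio/multiCluster=true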

    To attach other clusters, or to customize the network settings of the cluster, see Attach a new cluster to the mesh.

  5. Deploy the demo application. You can deploy the demo application in a distributed way to multiple clusters with the following commands:

    smm demoapp install -s frontpage,catalog,bookings,postgresql
    smm -c <PEER_CLUSTER_KUBECONFIG_FILE> demoapp install -s movies,payments,notifications,analytics,database,mysql --peer
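
    To verify that the demo workloads started on both clusters (the demo application installs into the smm-demo namespace), you can run:

    kubectl get pods -n smm-demo
    kubectl get pods -n smm-demo --kubeconfig <PEER_CLUSTER_KUBECONFIG_FILE>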
    

    After installation, the demo application automatically starts generating traffic, and the dashboard draws a picture of the data flow. (If it doesn't, run the smm demoapp load start command, or click Generate load on the UI. To stop generating traffic, run smm demoapp load stop.)

    If you want to deploy your own application instead, see Deploy custom application for guidelines.

  6. If you are installing Service Mesh Manager on a managed Kubernetes solution of a public cloud provider (for example, AWS, Azure, or Google Cloud), assign admin roles so that you can tail the logs of your containers from the Service Mesh Manager UI, use Service Level Objectives, and perform various tasks from the CLI that require custom permissions. Run the following command:

    kubectl create clusterrolebinding user-cluster-admin --clusterrole=cluster-admin --user=<gcp/aws/azure username>
    

    CAUTION:

    Assigning administrator roles can be dangerous, because it grants wide access to your infrastructure. Do this only when you are confident in what you are doing.
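
    On GKE, for example, the username is typically the email address of your Google account. If you have the Google Cloud SDK installed, you can look it up with:

    gcloud config get-value account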
  7. Open the dashboard and look around.

    smm dashboard
    
  8. If you have purchased a commercial license for Service Mesh Manager, apply the license. For details, see Paid tier.

Kafka in the multi-cluster service mesh

You can install Streaming Data Manager on the primary Calisti cluster. After the installation, you can make the Kafka brokers on the primary cluster accessible from the peer clusters by setting up DNS resolution for them, as shown in the following section.

Kafka Broker Service DNS resolution

The Kafka brokers are accessible from any cluster in the service mesh. However, workloads in the peer clusters need DNS resolution for the broker services so that their traffic can reach the sidecar proxies. To achieve this, you can either:

  • use Istio proxy DNS capture (which must be set up in the global service mesh configuration, as sketched below), or
  • add Kubernetes services for the Kafka brokers in the peer clusters.
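
For reference, Istio proxy DNS capture is controlled through the mesh configuration. The following snippet shows the upstream Istio setting (a sketch; in Calisti, the mesh configuration may be managed through the Service Mesh Manager control plane resources rather than edited directly):

meshConfig:
  defaultConfig:
    proxyMetadata:
      ISTIO_META_DNS_CAPTURE: "true"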

The following solution uses the cluster-registry-controller to synchronize the services between clusters.

  1. On the primary Calisti cluster with the Kafka brokers, run:

    cat <<EOF | kubectl apply -f -
    apiVersion: clusterregistry.k8s.cisco.com/v1alpha1
    kind: ClusterFeature
    metadata:
      name: kafka-source
    spec:
      featureName: smm.k8s.cisco.com/kafka-source
    EOF
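
    You can confirm that the resource was created (assuming the cluster registry CRDs expose the clusterfeature resource name to kubectl):

    kubectl get clusterfeature kafka-source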
    
  2. On the peer clusters that need access to the Kafka brokers, run:

    kubectl create ns kafka --kubeconfig <PEER_CLUSTER_KUBECONFIG_FILE>
    cat <<EOF | kubectl apply --kubeconfig <PEER_CLUSTER_KUBECONFIG_FILE> -f -
    apiVersion: clusterregistry.k8s.cisco.com/v1alpha1
    kind: ResourceSyncRule
    metadata:
      name: kafka-service-sink
    spec:
      clusterFeatureMatch:
      - featureName: smm.k8s.cisco.com/kafka-source
      groupVersionKind:
        kind: Service
        version: v1
      rules:
      - match:
        - labels:
          - matchLabels:
              app: kafka
              kafka_cr: kafka
          objectKey: {}
        mutations:
          overrides:
          - path: /spec/clusterIP
            type: remove
          - path: /spec/clusterIPs?
            type: remove
    EOF
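
    Once the rule is in place, the matching Kafka broker services from the primary cluster should appear in the kafka namespace of the peer clusters. You can check with:

    kubectl get services -n kafka --kubeconfig <PEER_CLUSTER_KUBECONFIG_FILE>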
    

Turn on Kafka client functionality in demoapp workloads

The demo application is built from a general-purpose traffic generator project called allspark, whose services can be configured to act as Kafka clients. The following commands set up the bookings service to produce to the KafkaTopic recommendations-topic. This topic is automatically created by the demoapp install command if it is run while Streaming Data Manager is installed.

kubectl set env deploy/bookings -c bookings REQUESTS="http://analytics:8080/#1 http://payments:8080/#1 kafka-produce://kafka-all-broker.kafka.svc.cluster.local:29092/recommendations-topic?message=bookings-message#1"

Set up the movies-v3 service in the peer cluster to consume from the recommendations-topic topic.

kubectl set env deploy/movies-v3 -c movies --kubeconfig <PEER_CLUSTER_KUBECONFIG_FILE> KAFKASERVER_BOOTSTRAP_SERVER=kafka-all-broker.kafka.svc.cluster.local:29092 KAFKASERVER_TOPIC=recommendations-topic KAFKASERVER_CONSUMER_GROUP=recommendations-group
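
To confirm that the consumer picked up the new configuration, you can tail the logs of the movies-v3 workload on the peer cluster (a generic kubectl check; it assumes the container logs its Kafka activity):

kubectl logs deploy/movies-v3 -c movies --tail=20 --kubeconfig <PEER_CLUSTER_KUBECONFIG_FILE>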

After the pods restart, enable the smm-demo and kafka namespaces on the Calisti dashboard to see the Kafka client traffic.

Kafka traffic with the demo application in a multi-cluster service mesh

Cleanup

  1. To remove the demo application from a peer cluster, run the following command:

    smm -c <PEER_CLUSTER_KUBECONFIG_FILE> demoapp uninstall
    
  2. To remove a peer cluster from the mesh, run the following command:

    smm istio cluster detach <PEER_CLUSTER_KUBECONFIG_FILE>
    

For details, see Detach a cluster from the mesh.