Install SMM - GitOps - single cluster

This guide details how to set up a GitOps environment for Service Mesh Manager using Argo CD. The same principles can be used for other tools as well.

CAUTION:

Do not push secrets directly into the Git repository, especially if it is a public repository. Argo CD provides solutions to keep secrets safe.

Architecture

The high-level architecture for Argo CD with a single-cluster Service Mesh Manager consists of the following components:

  • A git repository that stores the various charts and manifests,
  • a management cluster that runs the Argo CD server, and
  • the Service Mesh Manager cluster managed by Argo CD.

Service Mesh Manager GitOps architecture

Prerequisites

To complete this procedure, you need:

  • A free registration for the Service Mesh Manager download page
  • A Kubernetes or OpenShift cluster to deploy Argo CD on (called management-cluster in the examples).
  • A Kubernetes or OpenShift cluster to deploy Service Mesh Manager on (called workload-cluster-1 in the examples).

CAUTION:

Supported providers and Kubernetes versions

The cluster must run a Kubernetes version that Service Mesh Manager supports: Kubernetes 1.21, 1.22, 1.23, 1.24.

Service Mesh Manager is tested and known to work on the following Kubernetes providers:

  • Amazon Elastic Kubernetes Service (Amazon EKS)
  • Google Kubernetes Engine (GKE)
  • Azure Kubernetes Service (AKS)
  • Red Hat OpenShift 4.11
  • On-premises installation of stock Kubernetes with load balancer support (and optionally PVCs for persistence)

Calisti resource requirements

Make sure that your Kubernetes or OpenShift cluster has sufficient resources to install Calisti. The following table shows the resources needed on the cluster:

Resource   Required
CPU        - 32 vCPU in total
           - 4 vCPU available for allocation per worker node (if you are testing on a cluster at a cloud provider, use nodes that have at least 4 CPUs, for example, c5.xlarge on AWS)
Memory     - 64 GiB in total
           - 4 GiB available for allocation per worker node on Kubernetes clusters (8 GiB per worker node on OpenShift clusters)
Storage    - 12 GB of ephemeral storage on the Kubernetes worker nodes (for traces and metrics)

These minimum requirements must be available for allocation in your cluster, in addition to the requirements of any other workloads running there (for example, DaemonSets and Kubernetes node agents). If Kubernetes cannot allocate sufficient resources to Service Mesh Manager, some pods will remain in Pending state, and Service Mesh Manager will not function properly.

Enabling additional features, such as High Availability, increases these requirements.

The default installation, when enough headroom is available in the cluster, should be able to support at least 150 running Pods and the same number of Services. To set up Service Mesh Manager for bigger workloads, see scaling Service Mesh Manager.
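
To check how much CPU and memory your nodes can allocate, you can run a quick sanity check (not part of the official procedure) with kubectl custom columns:

    kubectl get nodes -o custom-columns='NODE:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory'

Note that allocatable capacity excludes system reservations but not the requests of Pods that are already running, so subtract those to estimate the actual headroom.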

Procedure overview

The high-level steps of the procedure are:

  1. Install Argo CD and register the clusters
  2. Prepare the Git repository
  3. Deploy Service Mesh Manager

Install Argo CD

Complete the following steps to install Argo CD on the management cluster.

Set up the environment

  1. Set the KUBECONFIG location and context name for the management-cluster cluster.

    MANAGEMENT_CLUSTER_KUBECONFIG=management_cluster_kubeconfig.yaml
    MANAGEMENT_CLUSTER_CONTEXT=management-cluster
    kubectl config --kubeconfig "${MANAGEMENT_CLUSTER_KUBECONFIG}" get-contexts "${MANAGEMENT_CLUSTER_CONTEXT}"
    

    Expected output:

    CURRENT   NAME                 CLUSTER              AUTHINFO   NAMESPACE
    *         management-cluster   management-cluster
    
  2. Set the KUBECONFIG location and context name for the workload-cluster-1 cluster.

    WORKLOAD_CLUSTER_1_KUBECONFIG=workload_cluster_1_kubeconfig.yaml
    WORKLOAD_CLUSTER_1_CONTEXT=workload-cluster-1
    kubectl config --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" get-contexts "${WORKLOAD_CLUSTER_1_CONTEXT}"
    

    Expected output:

    CURRENT   NAME                 CLUSTER              AUTHINFO                                          NAMESPACE
    *         workload-cluster-1   workload-cluster-1
    

    Repeat this step for any additional workload clusters you want to use.

  3. Add the cluster configurations to KUBECONFIG. Include any additional workload clusters you want to use.

    export KUBECONFIG=$KUBECONFIG:$MANAGEMENT_CLUSTER_KUBECONFIG:$WORKLOAD_CLUSTER_1_KUBECONFIG
    
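    To confirm that kubectl now sees both contexts, list them:

    kubectl config get-contexts "${MANAGEMENT_CLUSTER_CONTEXT}" "${WORKLOAD_CLUSTER_1_CONTEXT}"
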
  4. Make sure the management-cluster Kubernetes context is the current context.

    kubectl config use-context "${MANAGEMENT_CLUSTER_CONTEXT}"
    

    Expected output:

    Switched to context "management-cluster".
    

Install Argo CD Server

  1. Create the argocd namespace.

    kubectl create namespace argocd
    

    Expected output:

    namespace/argocd created
    
  2. On OpenShift: Run the following command to grant the service accounts access to the argocd namespace.

    oc adm policy add-scc-to-group privileged system:serviceaccounts:argocd
    

    Expected output:

    clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "system:serviceaccounts:argocd"
    
  3. Deploy Argo CD.

    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
    

    Expected output:

    customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
    customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
    customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
    serviceaccount/argocd-application-controller created
    serviceaccount/argocd-applicationset-controller created
    serviceaccount/argocd-dex-server created
    serviceaccount/argocd-notifications-controller created
    serviceaccount/argocd-redis created
    serviceaccount/argocd-repo-server created
    serviceaccount/argocd-server created
    role.rbac.authorization.k8s.io/argocd-application-controller created
    role.rbac.authorization.k8s.io/argocd-applicationset-controller created
    role.rbac.authorization.k8s.io/argocd-dex-server created
    role.rbac.authorization.k8s.io/argocd-notifications-controller created
    role.rbac.authorization.k8s.io/argocd-server created
    clusterrole.rbac.authorization.k8s.io/argocd-application-controller created
    clusterrole.rbac.authorization.k8s.io/argocd-server created
    rolebinding.rbac.authorization.k8s.io/argocd-application-controller created
    rolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
    rolebinding.rbac.authorization.k8s.io/argocd-dex-server created
    rolebinding.rbac.authorization.k8s.io/argocd-notifications-controller created
    rolebinding.rbac.authorization.k8s.io/argocd-redis created
    rolebinding.rbac.authorization.k8s.io/argocd-server created
    clusterrolebinding.rbac.authorization.k8s.io/argocd-application-controller created
    clusterrolebinding.rbac.authorization.k8s.io/argocd-server created
    configmap/argocd-cm created
    configmap/argocd-cmd-params-cm created
    configmap/argocd-gpg-keys-cm created
    configmap/argocd-notifications-cm created
    configmap/argocd-rbac-cm created
    configmap/argocd-ssh-known-hosts-cm created
    configmap/argocd-tls-certs-cm created
    secret/argocd-notifications-secret created
    secret/argocd-secret created
    service/argocd-applicationset-controller created
    service/argocd-dex-server created
    service/argocd-metrics created
    service/argocd-notifications-controller-metrics created
    service/argocd-redis created
    service/argocd-repo-server created
    service/argocd-server created
    service/argocd-server-metrics created
    deployment.apps/argocd-applicationset-controller created
    deployment.apps/argocd-dex-server created
    deployment.apps/argocd-notifications-controller created
    deployment.apps/argocd-redis created
    deployment.apps/argocd-repo-server created
    deployment.apps/argocd-server created
    statefulset.apps/argocd-application-controller created
    networkpolicy.networking.k8s.io/argocd-application-controller-network-policy created
    networkpolicy.networking.k8s.io/argocd-applicationset-controller-network-policy created
    networkpolicy.networking.k8s.io/argocd-dex-server-network-policy created
    networkpolicy.networking.k8s.io/argocd-notifications-controller-network-policy created
    networkpolicy.networking.k8s.io/argocd-redis-network-policy created
    networkpolicy.networking.k8s.io/argocd-repo-server-network-policy created
    networkpolicy.networking.k8s.io/argocd-server-network-policy created
    
  4. Wait until the installation is complete, then check that the Argo CD pods are up and running.

    kubectl get pods -n argocd
    

    The output should be similar to:

    NAME                                                    READY   STATUS    RESTARTS   AGE
    pod/argocd-application-controller-0                     1/1     Running   0          7h59m
    pod/argocd-applicationset-controller-78b8b554f9-pgwbl   1/1     Running   0          7h59m
    pod/argocd-dex-server-6bbc85c688-8p7zf                  1/1     Running   0          16h
    pod/argocd-notifications-controller-75847756c5-dbbm5    1/1     Running   0          16h
    pod/argocd-redis-f4cdbff57-wcpxh                        1/1     Running   0          7h59m
    pod/argocd-repo-server-d5c7f7ffb-c8962                  1/1     Running   0          7h59m
    pod/argocd-server-76497676b-pnvf4                       1/1     Running   0          7h59m
    
  5. To access the Argo CD UI, set the argocd-server service type to LoadBalancer.

    kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
    

    Expected output:

    service/argocd-server patched
    
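    If your cluster has no load balancer integration, you can instead port-forward the service (as described in the Argo CD getting started guide) and open https://localhost:8080:

    kubectl port-forward svc/argocd-server -n argocd 8080:443
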
  6. Patch the App of Apps health check in the Argo CD configuration so that Argo CD ignores diffs in fields managed by controllers and operators. For details about this patch, see the Resource Health and Diffing Customization sections of the Argo CD documentation.

    Apply the new Argo CD health check configurations:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: argocd
      labels:
        app.kubernetes.io/name: argocd-cm
        app.kubernetes.io/part-of: argocd
    data:
      # App of app health check
      resource.customizations.health.argoproj.io_Application: |
        hs = {}
        hs.status = "Progressing"
        hs.message = ""
        if obj.status ~= nil then
          if obj.status.health ~= nil then
            hs.status = obj.status.health.status
            if obj.status.health.message ~= nil then
              hs.message = obj.status.health.message
            end
          end
        end
        return hs
      # Ignoring RBAC changes made by AggregateRoles
      resource.compareoptions: |
        # if true, differences in RBAC resources caused by aggregated roles are ignored
        ignoreAggregatedRoles: true
    
        # disables status field diffing in specified resource types
        # 'crd' - CustomResourceDefinition-s (default)
        # 'all' - all resources
        # 'none' - disabled
        ignoreResourceStatusField: all
    EOF
    

    Expected output:

    configmap/argocd-cm configured
    
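    Optionally, verify that the settings were stored (the backslashes escape the dots in the key name):

    kubectl get configmap argocd-cm -n argocd -o jsonpath='{.data.resource\.compareoptions}'
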
  7. Get the initial password for the admin user.

    kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
    

    Expected output:

    argocd-admin-password
    
  8. Check the external-ip-or-hostname address of the argocd-server service.

    kubectl get service -n argocd argocd-server
    

    The output should be similar to:

    NAME                                      TYPE           CLUSTER-IP      EXTERNAL-IP               PORT(S)                      AGE
    argocd-server                             LoadBalancer   10.108.14.130   external-ip-or-hostname   80:31306/TCP,443:30063/TCP   7d13h
    
  9. Open the https://external-ip-or-hostname URL and log in to the Argo CD server as the admin user with the password retrieved in the previous step.

    # Exactly one of hostname or IP will be available and used for the remote URL.
    open https://$(kubectl get service -n argocd argocd-server -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}')
    

Install Argo CD CLI

  1. Install Argo CD CLI on your computer. For details, see the Argo CD documentation.

  2. Log in with the CLI:

    # Exactly one of hostname or IP will be available and used for the remote URL.
    argocd login $(kubectl get service -n argocd argocd-server -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}') --insecure --username admin --password $(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
    

    Expected output:

    'admin:login' logged in successfully
    
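    After logging in, consider changing the admin password and deleting the initial secret, as recommended in the Argo CD getting started guide:

    argocd account update-password
    kubectl -n argocd delete secret argocd-initial-admin-secret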

For more details about Argo CD installation, see the Argo CD getting started guide.

Register clusters

  1. Register the clusters that will run Service Mesh Manager in Argo CD. In this example, register workload-cluster-1 using one of the following methods.

    • Register the cluster from the command line by running:

      argocd cluster add --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" "${WORKLOAD_CLUSTER_1_CONTEXT}"
      

      Expected output:

      WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `workload-cluster-1` with full cluster level privileges. Do you want to continue [y/N]? y
      INFO[0005] ServiceAccount "argocd-manager" created in namespace "kube-system"
      INFO[0005] ClusterRole "argocd-manager-role" created
      INFO[0005] ClusterRoleBinding "argocd-manager-role-binding" created
      INFO[0011] Created bearer token secret for ServiceAccount "argocd-manager"
      Cluster 'https://workload-cluster-1-ip-or-hostname' added
      
    • Alternatively, you can register clusters declaratively as Kubernetes secrets. Modify the following command for your environment and apply it. For details, see the Argo CD documentation.

      WORKLOAD_CLUSTER_1_IP="https://workload-cluster-1-IP" ARGOCD_BEARER_TOKEN="authentication-token" ARGOCD_CA_B64="base64 encoded certificate" ; kubectl apply -f - <<EOF
      apiVersion: v1
      kind: Secret
      metadata:
        name: workload-cluster-1-secret
        namespace: argocd
        labels:
          argocd.argoproj.io/secret-type: cluster
      type: Opaque
      stringData:
        name: workload-cluster-1
        server: "${WORKLOAD_CLUSTER_1_IP}"
        config: |
          {
            "bearerToken": "${ARGOCD_BEARER_TOKEN}",
            "tlsClientConfig": {
              "insecure": false,
              "caData": "${ARGOCD_CA_B64}"
            }
          }
      EOF
      
  2. Make sure that the cluster is registered in Argo CD by running the following command:

    argocd cluster list
    

    The output should be similar to:

    SERVER                                      NAME                VERSION  STATUS   MESSAGE                                                  PROJECT
    https://kubernetes.default.svc              in-cluster                   Unknown  Cluster has no applications and is not being monitored.
    https://workload-cluster-1-ip-or-hostname   workload-cluster-1           Unknown  Cluster has no applications and is not being monitored.
    

Prepare Git repository

  1. Create an empty repository called calisti-gitops on GitHub (or another provider that Argo CD supports) and initialize it with a README.md file so that you can clone the repository. Because Service Mesh Manager credentials will be stored in this repository, make it a private repository. Set your GitHub user name and the repository name as environment variables:

    GITHUB_ID="github-id"
    GITHUB_REPOSITORY_NAME="calisti-gitops"
    
  2. Obtain a personal access token for the repository (on GitHub, see Creating a personal access token) that has the following permissions:

    • admin:org_hook
    • admin:repo_hook
    • read:org
    • read:public_key
    • repo
  3. Authenticate git with your personal access token.

    export GH_TOKEN="github-personal-access-token" # Note: export this environment variable so that the git tooling can use it automatically for authentication.
    
  4. Clone the repository into your local workspace.

    git clone "https://github.com/${GITHUB_ID}/${GITHUB_REPOSITORY_NAME}.git"
    

    Expected output:

    Cloning into 'calisti-gitops'...
    remote: Enumerating objects: 144, done.
    remote: Counting objects: 100% (144/144), done.
    remote: Compressing objects: 100% (93/93), done.
    remote: Total 144 (delta 53), reused 135 (delta 47), pack-reused 0
    Receiving objects: 100% (144/144), 320.08 KiB | 746.00 KiB/s, done.
    Resolving deltas: 100% (53/53), done.
    
  5. Add the repository to Argo CD by running the following command. Alternatively, you can add it on the Argo CD Web UI.

    argocd repo add "https://github.com/${GITHUB_ID}/${GITHUB_REPOSITORY_NAME}.git" --name "${GITHUB_REPOSITORY_NAME}" --username "${GITHUB_ID}" --password "${GH_TOKEN}"
    

    Expected output:

    Repository 'https://github.com/github-id/calisti-gitops.git' added
    
  6. Verify that the repository is connected by running:

    argocd repo list
    

    In the output, Status should be Successful:

    TYPE  NAME            REPO                                             INSECURE  OCI    LFS    CREDS  STATUS      MESSAGE  PROJECT
    git   calisti-gitops  https://github.com/github-id/calisti-gitops.git  false     false  false  true   Successful
    
  7. Change into the root directory of the cloned repository and create the following directories.

    cd "${GITHUB_REPOSITORY_NAME}"
    
    mkdir -p apps/demo-app apps/smm-controlplane apps/smm-operator charts demo-app manifests
    

    The final structure of the repository will look like this:

    .
    ├── apps
    │   ├── demo-app
    │   │   └── demo-app.yaml
    │   ├── smm-controlplane
    │   │   └── smm-controlplane-app.yaml
    │   └── smm-operator
    │       └── smm-operator-app.yaml
    ├── charts
    │   └── smm-operator
    │       └── ...
    ├── demo-app
    │   ├── demo-app-ns.yaml
    │   ├── demo-app.yaml
    │   └── smm-demo-nad.yaml
    └── manifests
        ├── cert-manager-namespace.yaml
        └── smm-controlplane.yaml
    
    • The apps folder contains the Argo CD Application definitions for smm-operator, smm-controlplane, and the demo application.
    • The charts folder contains the Helm chart of the smm-operator.
    • The demo-app folder contains the manifest files of the demo application that represents your business application.
    • The manifests folder contains the smm-controlplane file and the cert-manager namespace file.

Prepare the Helm charts

  1. You need an active Service Mesh Manager registration to download the Service Mesh Manager charts and images. You can sign up for free, or obtain Enterprise credentials on the official Cisco Service Mesh Manager page. After registration, you can obtain your username and password from the Download Center. Set them as environment variables.

    CALISTI_USERNAME="<your-calisti-username>"
    
    CALISTI_PASSWORD="<your-calisti-password>"
    
  2. Download the smm-operator chart from registry.eticloud.io into the charts directory of your Service Mesh Manager GitOps repository and extract it. Run the following commands:

    export HELM_EXPERIMENTAL_OCI=1 # Needed prior to Helm version 3.8.0
    
    echo "${CALISTI_PASSWORD}" | helm registry login registry.eticloud.io -u "${CALISTI_USERNAME}" --password-stdin
    

    Expected output:

    Login Succeeded
    
    helm pull oci://registry.eticloud.io/smm-charts/smm-operator --destination ./charts/ --untar --version 1.12.1
    

    Expected output:

    Pulled: registry.eticloud.io/smm-charts/smm-operator:1.12.1
    Digest: sha256:someshadigest
    
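    To verify the extracted chart before committing it, you can inspect its metadata:

    helm show chart ./charts/smm-operator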

Deploy Service Mesh Manager

Deploy the smm-operator application

Complete the following steps to deploy the smm-operator chart using Argo CD.

  1. Create an Argo CD Application CR for smm-operator.

    Before running the following command, edit it if needed:

    • If you are not using a GitHub repository, set the repoURL field to your repository.
    • For multi-cluster setups, the Kubernetes API server address of one cluster must be reachable from the other clusters. On certain providers (for example, OpenShift), the API server address is private and not reachable from other clusters by default. In that case, use the PUBLIC_API_SERVER_ENDPOINT_ADDRESS variable to provide an address that is reachable from the other clusters: either a public address, or one that is routable from them. You can look up the currently registered address as shown below.

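    If you are unsure which API server address is registered in your kubeconfig, one way to look it up (a sketch, assuming the cluster entry is named workload-cluster-1) is:

    kubectl config view --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" -o jsonpath='{.clusters[?(@.name=="workload-cluster-1")].cluster.server}'; echo
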
    ARGOCD_CLUSTER_NAME="${WORKLOAD_CLUSTER_1_CONTEXT}" PUBLIC_API_SERVER_ENDPOINT_ADDRESS="" ; cat > "apps/smm-operator/smm-operator-app.yaml" <<EOF
    # apps/smm-operator/smm-operator-app.yaml
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: smm-operator
      namespace: argocd
      finalizers:
        - resources-finalizer.argocd.argoproj.io
    spec:
      project: default
      source:
        repoURL: https://github.com/${GITHUB_ID}/${GITHUB_REPOSITORY_NAME}.git
        targetRevision: HEAD
        path: charts/smm-operator
        helm:
          parameters:
          - name: "global.ecr.enabled"
            value: 'false'
          - name: "global.basicAuth.username"
            value: "${CALISTI_USERNAME}"
          - name: "global.basicAuth.password"
            value: "${CALISTI_PASSWORD}"
          - name: "apiServerEndpointAddress"
            value: "${PUBLIC_API_SERVER_ENDPOINT_ADDRESS}" # The publicly accessible address of the k8s api server. Some Cloud providers have different API Server endpoint for internal and for public access. In that case the public endpoint needs to be specified here.
      destination:
        name: ${ARGOCD_CLUSTER_NAME}
        namespace: smm-registry-access
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - Validate=false
          - PruneLast=true
          - CreateNamespace=true
          - Replace=true
    EOF
    
  2. Commit and push the calisti-gitops repository.

    git add apps/smm-operator charts/smm-operator
    git commit -m "add smm-operator app"
    

    Expected output:

    [main e6c4b4a] add smm-operator app
    36 files changed, 80324 insertions(+)
    create mode 100644 apps/smm-operator/smm-operator-app.yaml
    create mode 100644 charts/smm-operator/.helmignore
    create mode 100644 charts/smm-operator/Chart.yaml
    create mode 100644 charts/smm-operator/README.md
    create mode 100644 charts/smm-operator/crds/clusterfeature-crd.yaml
    create mode 100644 charts/smm-operator/crds/clusters-crd.yaml
    create mode 100644 charts/smm-operator/crds/crd-alertmanagerconfigs.yaml
    create mode 100644 charts/smm-operator/crds/crd-alertmanagers.yaml
    create mode 100644 charts/smm-operator/crds/crd-podmonitors.yaml
    create mode 100644 charts/smm-operator/crds/crd-probes.yaml
    create mode 100644 charts/smm-operator/crds/crd-prometheuses.yaml
    create mode 100644 charts/smm-operator/crds/crd-prometheusrules.yaml
    create mode 100644 charts/smm-operator/crds/crd-servicemonitors.yaml
    create mode 100644 charts/smm-operator/crds/crd-thanosrulers.yaml
    create mode 100644 charts/smm-operator/crds/crds.yaml
    create mode 100644 charts/smm-operator/crds/health.yaml
    create mode 100644 charts/smm-operator/crds/istio-operator-v1-crds.yaml
    create mode 100644 charts/smm-operator/crds/istio-operator-v2-crds.gen.yaml
    create mode 100644 charts/smm-operator/crds/istiooperator-crd.yaml
    create mode 100644 charts/smm-operator/crds/koperator-crds.yaml
    create mode 100644 charts/smm-operator/crds/metadata-crd.yaml
    create mode 100644 charts/smm-operator/crds/resourcesyncrules-crd.yaml
    create mode 100644 charts/smm-operator/crds/sre.yaml
    create mode 100644 charts/smm-operator/templates/_helpers.tpl
    create mode 100644 charts/smm-operator/templates/authproxy-rbac.yaml
    create mode 100644 charts/smm-operator/templates/authproxy-service.yaml
    create mode 100644 charts/smm-operator/templates/cert-manager-namespace.yaml
    create mode 100644 charts/smm-operator/templates/ecr.deployment.yaml
    create mode 100644 charts/smm-operator/templates/ecr.secret.yaml
    create mode 100644 charts/smm-operator/templates/ecr.service-account.yaml
    create mode 100644 charts/smm-operator/templates/namespace.yaml
    create mode 100644 charts/smm-operator/templates/operator-psp-basic.yaml
    create mode 100644 charts/smm-operator/templates/operator-rbac.yaml
    create mode 100644 charts/smm-operator/templates/operator-service.yaml
    create mode 100644 charts/smm-operator/templates/operator-statefulset.yaml
    create mode 100644 charts/smm-operator/values.yaml
    
    git push
    

    Expected output:

    Enumerating objects: 48, done.
    Counting objects: 100% (48/48), done.
    Delta compression using up to 12 threads
    Compressing objects: 100% (44/44), done.
    Writing objects: 100% (47/47), 282.18 KiB | 1.99 MiB/s, done.
    Total 47 (delta 20), reused 0 (delta 0), pack-reused 0
    remote: Resolving deltas: 100% (20/20), done.
    To github.com:<username>/calisti-gitops.git
       8dd47c2..db9e7af  main -> main
    
  3. Apply the Application manifest.

    kubectl apply -f "apps/smm-operator/smm-operator-app.yaml"
    

    Expected output:

    application.argoproj.io/smm-operator created
    
  4. Verify that the application has been added to Argo CD and is healthy.

    argocd app list
    

    Expected output:

    NAME          CLUSTER             NAMESPACE            PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                             PATH                 TARGET
    smm-operator  workload-cluster-1  smm-registry-access  default  Synced  Healthy  Auto-Prune  <none>      https://github.com/github-id/calisti-gitops.git  charts/smm-operator  HEAD
    
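    The initial sync can take several minutes. Instead of re-running argocd app list, you can block until the application becomes healthy (an optional convenience):

    argocd app wait smm-operator --health --timeout 600
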
  5. Check the smm-operator application on the Argo CD Web UI.

    SMM Operator

Deploy the smm-controlplane application

  1. Create the manifest for the cert-manager namespace that the Service Mesh Manager ControlPlane requires.

    cat > manifests/cert-manager-namespace.yaml <<EOF
    # manifests/cert-manager-namespace.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        argocd.argoproj.io/sync-wave: "1"
      name: cert-manager
    EOF
    
  2. Create the ControlPlane CR for Service Mesh Manager. For OpenShift installations, uncomment the platform: openshift line in the spec section.

    ARGOCD_CLUSTER_NAME="${WORKLOAD_CLUSTER_1_CONTEXT}" ISTIO_MINOR_VERSION="1.15" ; cat > "manifests/smm-controlplane.yaml" <<EOF
    # manifests/smm-controlplane.yaml
    apiVersion: smm.cisco.com/v1alpha1
    kind: ControlPlane
    metadata:
      annotations:
        argocd.argoproj.io/sync-wave: "10"
      name: smm
    spec:
      # platform: openshift # Uncomment for OpenShift installations
      certManager:
        enabled: true
        namespace: cert-manager
      clusterName: ${ARGOCD_CLUSTER_NAME}
      clusterRegistry:
        enabled: true
        namespace: cluster-registry
      log: {}
      meshManager:
        enabled: true
        istio:
          enabled: true
          istioCRRef:
            name: cp-v${ISTIO_MINOR_VERSION/.}x
            namespace: istio-system
          operators:
            namespace: smm-system
        namespace: smm-system
      nodeExporter:
        enabled: true
        namespace: smm-system
        psp:
          enabled: false
        rbac:
          enabled: true
      oneEye: {}
      registryAccess:
        enabled: true
        imagePullSecretsController: {}
        namespace: smm-registry-access
        pullSecrets:
        - name: smm-registry.eticloud.io-pull-secret
          namespace: smm-registry-access
      repositoryOverride:
        host: registry.eticloud.io
        prefix: smm
      role: active
      smm:
        exposeDashboard:
          meshGateway:
            enabled: true
        als:
          enabled: true
          log: {}
        application:
          enabled: true
          log: {}
        auth:
          forceUnsecureCookies: true
          mode: anonymous
        certManager:
          enabled: true
        enabled: true
        federationGateway:
          enabled: true
          name: smm
          service:
            enabled: true
            name: smm-federation-gateway
            port: 80
        federationGatewayOperator:
          enabled: true
        impersonation:
          enabled: true
        istio:
          revision: cp-v${ISTIO_MINOR_VERSION/.}x.istio-system
        leo:
          enabled: true
          log: {}
        log: {}
        namespace: smm-system
        prometheus:
          enabled: true
          replicas: 1
        prometheusOperator: {}
        releaseName: smm
        role: active
        sre:
          enabled: true
        useIstioResources: true
    EOF
    
  3. Create the Argo CD Application CR for the smm-controlplane.

    ARGOCD_CLUSTER_NAME="${WORKLOAD_CLUSTER_1_CONTEXT}" ; cat > "apps/smm-controlplane/smm-controlplane-app.yaml" <<EOF
    # apps/smm-controlplane/smm-controlplane-app.yaml
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: smm-controlplane
      namespace: argocd
      finalizers:
        - resources-finalizer.argocd.argoproj.io
    spec:
      project: default
      source:
        repoURL: https://github.com/${GITHUB_ID}/${GITHUB_REPOSITORY_NAME}.git
        targetRevision: HEAD
        path: manifests
      destination:
        name: ${ARGOCD_CLUSTER_NAME}
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
        - Validate=false
        - CreateNamespace=true
        - PrunePropagationPolicy=foreground
        - PruneLast=true
        - Replace=true
    EOF
    
  4. Commit the changes and push the calisti-gitops repository.

    git add apps/smm-controlplane manifests
    git commit -m "add smm-controlplane app"
    

    Expected output:

    [main 25ba7e8] add smm-controlplane app
    3 files changed, 212 insertions(+)
    create mode 100644 apps/smm-controlplane/smm-controlplane-app.yaml
    create mode 100644 manifests/cert-manager-namespace.yaml
    create mode 100644 manifests/smm-controlplane.yaml
    
    git push
    

    Expected output:

    Enumerating objects: 12, done.
    Counting objects: 100% (12/12), done.
    Delta compression using up to 10 threads
    Compressing objects: 100% (10/10), done.
    Writing objects: 100% (10/10), 2.70 KiB | 2.70 MiB/s, done.
    Total 10 (delta 1), reused 0 (delta 0), pack-reused 0
    remote: Resolving deltas: 100% (1/1), done.
    To github.com:<username>/calisti-gitops.git
      529545a..25ba7e8  main -> main
    
  5. Apply the Application manifest.

    kubectl apply -f "apps/smm-controlplane/smm-controlplane-app.yaml"
    

    Expected output:

    application.argoproj.io/smm-controlplane created
    
  6. Verify that the application has been added to Argo CD and is healthy.

    argocd app list
    

    Expected output:

    NAME              CLUSTER             NAMESPACE            PROJECT  STATUS     HEALTH   SYNCPOLICY  CONDITIONS  REPO                                             PATH                 TARGET
    smm-controlplane  workload-cluster-1                       default  Synced     Healthy  Auto-Prune  <none>      https://github.com/github-id/calisti-gitops.git  manifests            HEAD
    smm-operator      workload-cluster-1  smm-registry-access  default  Synced     Healthy  Auto-Prune  <none>      https://github.com/github-id/calisti-gitops.git  charts/smm-operator  HEAD
    
  7. Check that all pods are healthy and running in the smm-system namespace of workload-cluster-1.

    kubectl get pods -n smm-system --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}"
    

    Expected output:

    NAME                                               READY   STATUS    RESTARTS        AGE
    istio-operator-v115x-85495cd76f-q7n22              2/2     Running   4 (7m36s ago)   17m
    mesh-manager-0                                     2/2     Running   4 (7m35s ago)   18m
    prometheus-smm-prometheus-0                        4/4     Running   0               15m
    smm-7f95479ff7-rzh2g                               2/2     Running   0               16m
    smm-7f95479ff7-v52vp                               2/2     Running   0               16m
    smm-als-8487fdf4f7-ddklg                           2/2     Running   0               16m
    smm-authentication-7888dfc6d7-w7tdq                2/2     Running   0               16m
    smm-federation-gateway-84f9fbf54d-7glvp            2/2     Running   0               16m
    smm-federation-gateway-operator-6cb99c5798-9fj25   2/2     Running   4 (7m36s ago)   16m
    smm-grafana-95ff96dd9-m6rx6                        3/3     Running   0               16m
    smm-health-86dc8c98d6-pv7bk                        2/2     Running   3 (7m35s ago)   16m
    smm-health-api-5df5b76bf5-lvbsp                    2/2     Running   0               16m
    smm-ingressgateway-7d59684cf7-jsj7f                1/1     Running   0               16m
    smm-ingressgateway-external-59f9874787-p55wr       1/1     Running   0               16m
    smm-kubestatemetrics-f4766d7b8-9mc9f               2/2     Running   0               16m
    smm-leo-9fc8db6db-vlzpw                            2/2     Running   0               16m
    smm-prometheus-operator-6558dbddc8-bgdh9           3/3     Running   1 (16m ago)     16m
    smm-sre-alert-exporter-6656f98dd8-8wvdx            2/2     Running   0               16m
    smm-sre-api-77b65ff6bd-spzk2                       2/2     Running   0               16m
    smm-sre-controller-59d6cdd588-7cvbk                2/2     Running   3 (7m35s ago)   16m
    smm-tracing-6c85986bfd-xjjqw                       2/2     Running   0               16m
    smm-vm-integration-cdd8d8688-sk79s                 2/2     Running   3 (7m35s ago)   16m
    smm-web-84d697fdb4-2fbkm                           3/3     Running   0               16m
    
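    You can also confirm that the Istio control plane pods started in the istio-system namespace:

    kubectl get pods -n istio-system --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}"
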
  8. Check the application on the Argo CD Web UI.

    # Exactly one of hostname or IP will be available and used for the remote URL.
    open https://$(kubectl get service -n argocd argocd-server -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}')
    

    Argo CD Web UI

At this point, you have successfully installed smm-operator and smm-controlplane on workload-cluster-1.

Deploy an application

If you want to deploy an application into the service mesh, complete the following steps. The examples use the Service Mesh Manager demo application.

  1. Create a namespace for the application by creating the demo-app-ns.yaml file.

    cat > demo-app/demo-app-ns.yaml << EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        app.kubernetes.io/instance: smm-demo
        app.kubernetes.io/name: smm-demo
        app.kubernetes.io/part-of: smm-demo
        app.kubernetes.io/version: 0.1.4
        istio.io/rev: cp-v115x.istio-system
      name: smm-demo
    EOF
    
  2. Create a manifest for the NetworkAttachmentDefinition that the Istio CNI plugin uses in the smm-demo namespace.

    cat > demo-app/smm-demo-nad.yaml << EOF
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni-cp-v${ISTIO_MINOR_VERSION/.}x-istio-system
      namespace: smm-demo
      annotations:
        argocd.argoproj.io/sync-wave: "3"
    EOF
    
  3. Create the demo-app/demo-app.yaml file with the DemoApplication CR.

    cat > demo-app/demo-app.yaml << EOF
    apiVersion: smm.cisco.com/v1alpha1
    kind: DemoApplication
    metadata:
      name: smm-demo
      namespace: smm-demo
    spec:
      autoscaling:
        enabled: true
      controlPlaneRef:
        name: smm
      deployIstioResources: true
      deploySLOResources: true
      enabled: true
      enabledComponents:
      - frontpage
      - catalog
      - bookings
      - postgresql
      - payments
      - notifications
      - movies
      - analytics
      - database
      - mysql
      istio:
        revision: cp-v115x.istio-system
      load:
        enabled: true
        maxRPS: 30
        minRPS: 10
        swingPeriod: 1380000000000
      replicas: 1
      resources:
        limits:
          cpu: "2"
          memory: 192Mi
        requests:
          cpu: 40m
          memory: 64Mi
    EOF
    
  4. Create the Argo CD Application manifest for the demo application: create the apps/demo-app/demo-app.yaml file.

    ARGOCD_CLUSTER_NAME="${WORKLOAD_CLUSTER_1_CONTEXT}" ; cat > apps/demo-app/demo-app.yaml << EOF
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: demo-app
      namespace: argocd
      finalizers:
        - resources-finalizer.argocd.argoproj.io
    spec:
      project: default
      source:
        repoURL: https://github.com/${GITHUB_ID}/${GITHUB_REPOSITORY_NAME}.git
        targetRevision: HEAD
        path: demo-app
      destination:
        name: ${ARGOCD_CLUSTER_NAME}
        namespace: smm-demo
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
        - Validate=false
        - CreateNamespace=true
        - PruneLast=true
        - Replace=true
    EOF
    
  5. Commit and push the calisti-gitops repository.

    git add apps/demo-app demo-app
    git commit -m "add demo app"
    

    Expected output:

    [main 58a236e] add demo app
    4 files changed, 83 insertions(+)
    create mode 100644 apps/demo-app/demo-app.yaml
    create mode 100644 demo-app/demo-app-ns.yaml
    create mode 100644 demo-app/demo-app.yaml
    create mode 100644 demo-app/smm-demo-nad.yaml
    
    git push
    

    Expected output:

    Enumerating objects: 10, done.
    Counting objects: 100% (10/10), done.
    Delta compression using up to 10 threads
    Compressing objects: 100% (7/7), done.
    Writing objects: 100% (8/8), 1.37 KiB | 1.37 MiB/s, done.
    Total 8 (delta 0), reused 0 (delta 0), pack-reused 0
    To github.com:<username>/calisti-gitops.git
      e16549e..58a236e  main -> main
    
  6. Deploy the application.

    kubectl apply -f apps/demo-app/demo-app.yaml
    
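    Expected output:

    application.argoproj.io/demo-app created
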
  7. Wait until all the pods in the application namespace (smm-demo) are up and running.

    kubectl get pods -n smm-demo --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}"
    

    Expected output:

    NAME                                READY   STATUS    RESTARTS   AGE
    analytics-v1-7899bd4d4-bnf24        2/2     Running   0          109s
    bombardier-6455fd74f6-jndpv         2/2     Running   0          109s
    bookings-v1-559768454c-7vhzr        2/2     Running   0          109s
    catalog-v1-99b7bb56d-fjvhl          2/2     Running   0          109s
    database-v1-5cb4b4ff67-95ttk        2/2     Running   0          109s
    frontpage-v1-5b4dcbfcb4-djr72       2/2     Running   0          108s
    movies-v1-78fcf666dc-z8c2z          2/2     Running   0          108s
    movies-v2-84d9f5658f-kc65j          2/2     Running   0          108s
    movies-v3-86bbbc9745-r84bl          2/2     Running   0          108s
    mysql-d6b6b78fd-b7dwb               2/2     Running   0          108s
    notifications-v1-794c5dd8f6-lndh4   2/2     Running   0          108s
    payments-v1-858d4b4ffc-vtxxl        2/2     Running   0          108s
    postgresql-555fd55bdb-jn5pq         2/2     Running   0          108s
    
  8. Verify that the application appears in the Argo CD admin view, and that it is Healthy and Synced.

    SMM Operator Argo CD admin

Access the Service Mesh Manager dashboard

  1. You can access the Service Mesh Manager dashboard through the external address of the smm-ingressgateway-external LoadBalancer service. Run the following command to retrieve the address:

    kubectl get services -n smm-system smm-ingressgateway-external --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}"
    

    Expected output:

    NAME                          TYPE           CLUSTER-IP   EXTERNAL-IP                PORT(S)        AGE
    smm-ingressgateway-external   LoadBalancer   10.0.0.199   external-ip-or-hostname    80:32505/TCP   2m28s
    
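    If the EXTERNAL-IP is still <pending>, the load balancer is being provisioned; you can watch the service until the address appears:

    kubectl get services -n smm-system smm-ingressgateway-external --watch --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}"
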
  2. Open the Service Mesh Manager dashboard using one of the following methods:

    • Open the http://<external-ip-or-hostname> URL in your browser.

    • Run the following command to open the dashboard with your default browser:

      # Exactly one of hostname or IP will be available and used for the remote URL.
      open http://$(kubectl get services -n smm-system smm-ingressgateway-external -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}' --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}")
      
    • If you have installed the Service Mesh Manager CLI on your machine, run the following command to open the Service Mesh Manager Dashboard in the default browser.

      smm dashboard --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}"
      

      Expected output:

      ✓ validate-kubeconfig ❯ checking cluster reachability...
      ✓ opening Service Mesh Manager at http://127.0.0.1:50500
      
  3. Check the deployments on the dashboard, for example, on the MENU > Overview, MENU > MESH, and MENU > TOPOLOGY pages.

Service Mesh Manager Overview

Service Mesh Manager Mesh

Service Mesh Manager Topology