Install SMM with the SMM Operator chart

SMM Operator is a Kubernetes operator that deploys and manages Service Mesh Manager. In this chart the CRDs are not managed by the operator; we expect your CI/CD tooling to take care of updating them.

If you have your own cluster deployed and are authorized to fetch images from the Cisco-provided repositories, you can rely on basic authentication (registry URL, username, and password) to pull the required images.

You can get a username and password by signing up for the free tier of Service Mesh Manager.

Prerequisites

Helm version 3.7 or newer.

Steps

  1. Create the cert-manager namespace:

    kubectl create ns cert-manager
    
  2. Run the following helm commands. Replace <your-username> and <your-password> with the credentials shown on your Service Mesh Manager download page.

    export HELM_EXPERIMENTAL_OCI=1
    echo <your-password> | helm registry login registry.eticloud.io -u '<your-username>' --password-stdin
    
    helm pull oci://registry.eticloud.io/smm-charts/smm-operator --version 1.12.1
    
    helm install \
      --create-namespace \
      --namespace=smm-registry-access \
      --set "global.ecr.enabled=false" \
      --set "global.basicAuth.username=<your-username>" \
      --set "global.basicAuth.password=<your-password>" \
      smm-operator \
      oci://registry.eticloud.io/smm-charts/smm-operator --version 1.12.1
    

    For multi-cluster setups, the Kubernetes API server address of one cluster must be reachable from the other clusters. On certain platforms (for example, OpenShift), the API server addresses are private and not reachable from other clusters by default. In such cases, use the --set "apiServerEndpointAddress=<PUBLIC_API_SERVER_ENDPOINT_ADDRESS>" flag to provide an address that's reachable from the other clusters. This can be a public address, or one that's routable from the other clusters.
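
    To check which address the default configuration would use, you can read the API server URL from your kubeconfig. The strip_scheme helper below is a hypothetical convenience for turning that URL into a bare host:port address, not part of Service Mesh Manager; verify against the chart's values whether the flag expects a URL or a bare address.

```shell
#!/usr/bin/env bash
# strip_scheme is a local helper (not part of SMM) that removes the
# https:// prefix from the kubeconfig server URL.
strip_scheme() { printf '%s\n' "${1#https://}"; }

# Look up the API server URL of the current kubeconfig context. The
# fallback keeps the snippet harmless when no cluster is configured.
api_url=$(kubectl config view --minify \
  -o jsonpath='{.clusters[0].cluster.server}' 2>/dev/null || true)
echo "API server endpoint: $(strip_scheme "$api_url")"
```

The resulting address can then be passed via --set "apiServerEndpointAddress=...".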

    Expected output:

      Pulled: registry.eticloud.io/smm-charts/smm-operator:1.12.1
      Digest: sha256:c67150bca937103db8831d73574d695aace034590c55569bdc60c58d400f7a5b
      NAME: smm-operator
      LAST DEPLOYED: Thu May  4 09:23:14 2023
      NAMESPACE: smm-registry-access
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
    

    (The smm-registry-access namespace is used because smm-operator should be in the same namespace as the imagepullsecrets-controller.)

    Verify the Helm installation:

    helm list -n smm-registry-access
    

    Expected output:

    NAME        	NAMESPACE          	REVISION	UPDATED                              	STATUS  	CHART                   	APP VERSION
    smm-operator	smm-registry-access	1       	2023-05-04 09:23:14.681227 +0200 CEST	deployed	smm-operator-1.12.1	v1.12.1
    

    Verify that the operator pod is up and running:

    kubectl get pods -n smm-registry-access
    

    Expected output:

    NAME             READY   STATUS    RESTARTS   AGE
    smm-operator-0   2/2     Running   0          5m5s
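
    Instead of polling kubectl get pods manually, you can block until the operator pod reports Ready. The pod name follows the output above; the timeout is an arbitrary choice, adjust as needed:

```shell
# Wait up to 5 minutes for the operator pod to become Ready.
kubectl wait --for=condition=Ready pod/smm-operator-0 \
  -n smm-registry-access --timeout=300s
```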
    
  3. Install Service Mesh Manager by creating a ControlPlane resource. We recommend that you start with the following ControlPlane resource. This CR assumes that you are using docker-registry authentication; the secret referenced in .spec.registryAccess is used to pull the smm-operator image and is synced across the other namespaces created by the smm-operator chart.

    For OpenShift 4.11 installations, set the spec.platform field to openshift.

    Replace <cluster-name> with the name of your cluster. The cluster name must comply with the RFC 1123 DNS subdomain/label format: a lowercase alphanumeric string that may contain hyphens, but no "_" or "." characters. Otherwise, you get an error message starting with: Reconciler error: cannot determine cluster name controller=controlplane, controllerGroup=smm.cisco.com, controllerKind=ControlPlane
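
    Before applying the CR, you can sanity-check the name locally. The is_rfc1123_label function below is a hypothetical helper based on a strict reading of the DNS label rules, not part of Service Mesh Manager:

```shell
#!/usr/bin/env bash
# Validate that a string is an RFC 1123 DNS label: lowercase alphanumerics
# and hyphens, starting and ending with an alphanumeric, at most 63 chars.
is_rfc1123_label() {
  [[ "$1" =~ ^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$ ]]
}

is_rfc1123_label "demo-cluster-1" && echo "valid"
is_rfc1123_label "My_Cluster" || echo "invalid: underscores and capitals are rejected"
```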

    kubectl apply -f - << EOF
    apiVersion: smm.cisco.com/v1alpha1
    kind: ControlPlane
    metadata:
      name: smm
    spec:
      clusterName: <cluster-name>
      certManager:
        enabled: true
        namespace: cert-manager
      clusterRegistry:
        enabled: true
        namespace: cluster-registry
      log: {}
      meshManager:
        enabled: true
        istio:
          enabled: true
          istioCRRef:
            name: cp-v115x
            namespace: istio-system
          operators:
            namespace: smm-system
        namespace: smm-system
      nodeExporter:
        enabled: true
        namespace: smm-system
        psp:
          enabled: false
        rbac:
          enabled: true
      oneEye: {}
      registryAccess:
        enabled: true
        imagePullSecretsController: {}
        namespace: smm-registry-access
        pullSecrets:
          - name: smm-registry.eticloud.io-pull-secret
            namespace: smm-registry-access
      repositoryOverride:
        host: registry.eticloud.io
        prefix: smm
      role: active
      smm:
        als:
          enabled: true
          log: {}
        application:
          enabled: true
          log: {}
        auth:
          mode: impersonation
        certManager:
          enabled: true
        enabled: true
        federationGateway:
          enabled: true
          name: smm
          service:
            enabled: true
            name: smm-federation-gateway
            port: 80
        federationGatewayOperator:
          enabled: true
        impersonation:
          enabled: true
        istio:
          revision: cp-v115x.istio-system
        leo:
          enabled: true
          log: {}
        log: {}
        namespace: smm-system
        prometheus:
          enabled: true
          replicas: 1
        prometheusOperator: {}
        releaseName: smm
        role: active
        sre:
          enabled: true
        useIstioResources: true
    EOF
    
  4. Verify that all pods in the smm-system namespace are up and running:

    kubectl get pods -n smm-system
    

    Expected output:

    NAME                                               READY   STATUS    RESTARTS      AGE
    istio-operator-v113x-7fd87bcd79-7g6wz              2/2     Running   0             26m
    istio-operator-v115x-657f9f58b8-mg8pw              2/2     Running   0             26m
    mesh-manager-0                                     2/2     Running   0             26m
    prometheus-node-exporter-9hrpr                     1/1     Running   0             23m
    prometheus-node-exporter-h866t                     1/1     Running   0             23m
    prometheus-node-exporter-j9ljd                     1/1     Running   0             23m
    prometheus-node-exporter-qsc2r                     1/1     Running   0             23m
    prometheus-smm-prometheus-0                        4/4     Running   0             24m
    smm-77c6d4fd6-czdbg                                2/2     Running   0             25m
    smm-77c6d4fd6-vrzqp                                2/2     Running   0             25m
    smm-als-8698db887b-vr96g                           2/2     Running   0             25m
    smm-authentication-57b44b8d94-spcbh                2/2     Running   0             25m
    smm-federation-gateway-6698684fb9-4l5q9            2/2     Running   0             24m
    smm-federation-gateway-operator-5f7868448c-59d6l   2/2     Running   0             25m
    smm-grafana-5c5bf778fb-zlg4s                       3/3     Running   0             25m
    smm-health-5994fcb477-n6f8v                        2/2     Running   0             25m
    smm-health-api-5d49fd6c84-v2jml                    2/2     Running   0             25m
    smm-ingressgateway-988b74656-cgb9x                 1/1     Running   0             25m
    smm-kubestatemetrics-58ff74d48c-mwvz5              2/2     Running   0             25m
    smm-leo-6f4dfccdbc-wm5gb                           2/2     Running   0             25m
    smm-prometheus-operator-b5dd94cc-7bg5z             3/3     Running   1 (24m ago)   25m
    smm-sre-alert-exporter-759547d77f-jffr4            2/2     Running   0             25m
    smm-sre-api-84cb7974c5-vhjcf                       2/2     Running   0             25m
    smm-sre-controller-6c999f7dfc-9szqq                2/2     Running   0             25m
    smm-tracing-6b9f9cdd74-gj9j5                       2/2     Running   0             25m
    smm-vm-integration-5b66db6c9c-xtlcv                2/2     Running   0             25m
    smm-web-75994644d8-cm996                           3/3     Running   0             25m
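
    Instead of eyeballing the list, you can block until every pod in the namespace reports Ready. The timeout value below is an arbitrary choice:

```shell
# Wait up to 10 minutes for all pods in smm-system to become Ready.
kubectl wait --for=condition=Ready pods --all \
  -n smm-system --timeout=600s
```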
    

Uninstalling the chart

To uninstall/delete the ControlPlane resource and the smm-operator release, complete the following steps.

  1. Run:

    kubectl delete controlplanes.smm.cisco.com smm
    
  2. Wait until all pods are deleted. This takes a couple of minutes. After all pods are deleted, run:

    helm uninstall --namespace=smm-registry-access smm-operator
    
  3. Delete the following namespaces:

    kubectl delete namespaces cert-manager cluster-registry istio-system smm-registry-access smm-system
    
  4. Delete the Cluster CR:

    kubectl delete clusters.clusterregistry.k8s.cisco.com <cluster-name>
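
The steps above can also be combined into one script. The polling loop is a convenience sketch, not part of the product; adjust the sleep interval to taste and replace the cluster-name placeholder:

```shell
#!/usr/bin/env bash
# Uninstall the ControlPlane, the smm-operator release, and the namespaces.
set -euo pipefail

CLUSTER_NAME="<cluster-name>"   # replace with your cluster's name

kubectl delete controlplanes.smm.cisco.com smm

# Wait until the operator has finished tearing down the smm-system pods.
while [ "$(kubectl get pods -n smm-system --no-headers 2>/dev/null | wc -l)" -gt 0 ]; do
  echo "waiting for smm-system pods to terminate..."
  sleep 10
done

helm uninstall --namespace=smm-registry-access smm-operator
kubectl delete namespaces cert-manager cluster-registry istio-system \
  smm-registry-access smm-system
kubectl delete clusters.clusterregistry.k8s.cisco.com "$CLUSTER_NAME"
```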
    

Chart configuration

The following table lists the configurable parameters of the Service Mesh Manager chart and their default values.

Parameter                                     Description                                          Default
operator.image.repository                     Operator container image repository                  registry.eticloud.io/smm/smm-operator
operator.image.tag                            Operator container image tag                         Same as chart version
operator.image.pullPolicy                     Operator container image pull policy                 IfNotPresent
operator.resources                            CPU/memory resource requests/limits (YAML)           Memory: 256Mi, CPU: 200m
prometheusMetrics.enabled                     If true, use direct access for Prometheus metrics    false
prometheusMetrics.authProxy.enabled           If true, use the auth proxy for Prometheus metrics   true
prometheusMetrics.authProxy.image.repository  Auth proxy container image repository                gcr.io/kubebuilder/kube-rbac-proxy
prometheusMetrics.authProxy.image.tag         Auth proxy container image tag                       v0.5.0
prometheusMetrics.authProxy.image.pullPolicy  Auth proxy container image pull policy               IfNotPresent
rbac.enabled                                  Create RBAC service account and roles                true
rbac.psp.enabled                              Create pod security policy and binding               false
ecr.enabled                                   Whether the SMM Operator chart handles the ECR login true
ecr.accessKeyID                               Access key ID to be used for ECR logins              Empty
ecr.secretAccessKey                           Secret access key to be used for ECR logins          Empty
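
Parameters from the table can be overridden at install time with --set flags; the values below are illustrative, not recommendations:

```shell
# Example: install the chart with a few parameters overridden.
helm install \
  --create-namespace \
  --namespace=smm-registry-access \
  --set "operator.image.pullPolicy=Always" \
  --set "prometheusMetrics.enabled=true" \
  --set "prometheusMetrics.authProxy.enabled=false" \
  smm-operator \
  oci://registry.eticloud.io/smm-charts/smm-operator --version 1.12.1
```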