Install SMM - GitOps - single cluster
This guide details how to set up a GitOps environment for Service Mesh Manager using Argo CD. The same principles can be used for other tools as well.
CAUTION:
Do not push secrets directly into the git repository, especially when it is a public repository. Argo CD provides solutions to keep secrets safe.
Architecture
The high level architecture for Argo CD with a single-cluster Service Mesh Manager consists of the following components:
- A git repository that stores the various charts and manifests,
- a management cluster that runs the Argo CD server, and
- the Service Mesh Manager cluster managed by Argo CD.
Prerequisites
To complete this procedure, you need:
- A free registration for the Service Mesh Manager download page.
- A Kubernetes or OpenShift cluster to deploy Argo CD on (called management-cluster in the examples).
- A Kubernetes or OpenShift cluster to deploy Service Mesh Manager on (called workload-cluster-1 in the examples).
CAUTION:
Supported providers and Kubernetes versions
The cluster must run a Kubernetes version that Service Mesh Manager supports: Kubernetes 1.21, 1.22, 1.23, 1.24.
Service Mesh Manager is tested and known to work on the following Kubernetes providers:
- Amazon Elastic Kubernetes Service (Amazon EKS)
- Google Kubernetes Engine (GKE)
- Azure Kubernetes Service (AKS)
- Red Hat OpenShift 4.11
- On-premises installation of stock Kubernetes with load balancer support (and optionally PVCs for persistence)
Calisti resource requirements
Make sure that your Kubernetes or OpenShift cluster has sufficient resources to install Calisti. The following table shows the number of resources needed on the cluster:
Resource | Required
---|---
CPU | 32 vCPU in total; 4 vCPU available for allocation per worker node. (If you are testing on a cluster at a cloud provider, use nodes that have at least 4 CPUs, for example, c5.xlarge on AWS.)
Memory | 64 GiB in total; 4 GiB available for allocation per worker node for the Kubernetes cluster (8 GiB in case of the OpenShift cluster).
Storage | 12 GB of ephemeral storage on the Kubernetes worker nodes (for Traces and Metrics).
These minimum requirements need to be available for allocation within your cluster, in addition to the requirements of any other loads running in your cluster (for example, DaemonSets and Kubernetes node-agents). If Kubernetes cannot allocate sufficient resources to Service Mesh Manager, some pods will remain in Pending state, and Service Mesh Manager will not function properly.
Enabling additional features, such as High Availability, increases these requirements.
The default installation, when enough headroom is available in the cluster, should be able to support at least 150 running Pods and the same number of Services. To set up Service Mesh Manager for bigger workloads, see scaling Service Mesh Manager.
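As a quick sanity check, you can compare candidate totals against the documented minimums before provisioning. The following sketch is illustrative only (the helper function is not part of Calisti; the 32 vCPU and 64 GiB thresholds come from the table above):

```shell
# Illustrative helper: returns success if the given cluster totals meet the
# documented Calisti minimums (32 vCPU and 64 GiB of memory in total).
meets_calisti_minimums() {
  local total_vcpu=$1 total_mem_gib=$2
  [ "$total_vcpu" -ge 32 ] && [ "$total_mem_gib" -ge 64 ]
}

if meets_calisti_minimums 32 64; then
  echo "cluster totals OK"
else
  echo "cluster totals insufficient"
fi
```

To see the actual allocatable capacity of a live cluster, a command such as `kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.allocatable.cpu,MEM:.status.allocatable.memory'` can be used.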
Procedure overview
The high-level steps of the procedure are:
- Install Argo CD and register the clusters
- Prepare the Git repository and the Helm charts
- Deploy Service Mesh Manager
- Optionally, deploy an application and access the Service Mesh Manager dashboard
Install Argo CD
Complete the following steps to install Argo CD on the management cluster.
Set up the environment
-
Set the KUBECONFIG location and context name for the management-cluster cluster.
MANAGEMENT_CLUSTER_KUBECONFIG=management_cluster_kubeconfig.yaml
MANAGEMENT_CLUSTER_CONTEXT=management-cluster
kubectl config --kubeconfig "${MANAGEMENT_CLUSTER_KUBECONFIG}" get-contexts "${MANAGEMENT_CLUSTER_CONTEXT}"
Expected output:
CURRENT   NAME                 CLUSTER              AUTHINFO   NAMESPACE
*         management-cluster   management-cluster
-
Set the KUBECONFIG location and context name for the workload-cluster-1 cluster.
WORKLOAD_CLUSTER_1_KUBECONFIG=workload_cluster_1_kubeconfig.yaml
WORKLOAD_CLUSTER_1_CONTEXT=workload-cluster-1
kubectl config --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" get-contexts "${WORKLOAD_CLUSTER_1_CONTEXT}"
Expected output:
CURRENT   NAME                 CLUSTER              AUTHINFO   NAMESPACE
*         workload-cluster-1   workload-cluster-1
Repeat this step for any additional workload clusters you want to use.
-
Add the cluster configurations to KUBECONFIG. Include any additional workload clusters you want to use.
KUBECONFIG=$KUBECONFIG:$MANAGEMENT_CLUSTER_KUBECONFIG:$WORKLOAD_CLUSTER_1_KUBECONFIG
-
Make sure the management-cluster Kubernetes context is the current context.
kubectl config use-context "${MANAGEMENT_CLUSTER_CONTEXT}"
Expected output:
Switched to context "management-cluster".
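Optionally, before installing anything, you can confirm that each context can actually reach its API server. This is a convenience check, not part of the official procedure:

```shell
# Optional sanity check: query the API server readiness endpoint through
# each configured context.
for ctx in "${MANAGEMENT_CLUSTER_CONTEXT}" "${WORKLOAD_CLUSTER_1_CONTEXT}"; do
  if kubectl --context "$ctx" get --raw='/readyz' >/dev/null 2>&1; then
    echo "context $ctx: reachable"
  else
    echo "context $ctx: NOT reachable"
  fi
done
```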
Install Argo CD Server
-
Create the argocd namespace.
kubectl create namespace argocd
Expected output:
namespace/argocd created
-
On OpenShift: Run the following command to grant the service accounts access to the argocd namespace.
oc adm policy add-scc-to-group privileged system:serviceaccounts:argocd
Expected output:
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "system:serviceaccounts:argocd"
-
Deploy Argo CD.
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
-
Wait until the installation is complete, then check that the Argo CD pods are up and running.
kubectl get pods -n argocd
The output should be similar to:
NAME                                                READY   STATUS    RESTARTS   AGE
pod/argocd-application-controller-0                     1/1     Running   0          7h59m
pod/argocd-applicationset-controller-78b8b554f9-pgwbl   1/1     Running   0          7h59m
pod/argocd-dex-server-6bbc85c688-8p7zf                  1/1     Running   0          16h
pod/argocd-notifications-controller-75847756c5-dbbm5    1/1     Running   0          16h
pod/argocd-redis-f4cdbff57-wcpxh                        1/1     Running   0          7h59m
pod/argocd-repo-server-d5c7f7ffb-c8962                  1/1     Running   0          7h59m
pod/argocd-server-76497676b-pnvf4                       1/1     Running   0          7h59m
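Instead of polling `kubectl get pods` manually, you can block until the Argo CD pods become ready (assuming the cluster can pull the images within the timeout):

```shell
# Wait up to 5 minutes for every pod in the argocd namespace to become Ready.
kubectl wait pods --all -n argocd --for=condition=Ready --timeout=300s
```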
-
For the Argo CD UI, set the argocd-server service type to LoadBalancer.
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
Expected output:
service/argocd-server patched
-
Patch the App of Apps health check in the Argo CD configuration to ignore diffs of controller- and operator-managed fields. For details about this patch, see the Argo CD documentation sections Resource Health and Diffing Customization.
Apply the new Argo CD health check configurations:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  # App of app health check
  resource.customizations.health.argoproj.io_Application: |
    hs = {}
    hs.status = "Progressing"
    hs.message = ""
    if obj.status ~= nil then
      if obj.status.health ~= nil then
        hs.status = obj.status.health.status
        if obj.status.health.message ~= nil then
          hs.message = obj.status.health.message
        end
      end
    end
    return hs
  # Ignoring RBAC changes made by AggregateRoles
  resource.compareoptions: |
    # disables diffing of RBAC changes made by aggregated roles
    ignoreAggregatedRoles: true
    # disables status field diffing in specified resource types
    # 'crd' - CustomResourceDefinition-s (default)
    # 'all' - all resources
    # 'none' - disabled
    ignoreResourceStatusField: all
EOF
Expected output:
configmap/argocd-cm configured
-
Get the initial password for the admin user.
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
Expected output:
argocd-admin-password
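The initial admin password is meant to be rotated. As an optional hardening step (based on the standard Argo CD workflow, not specific to Service Mesh Manager), you can change it after logging in with the CLI and then delete the bootstrap secret:

```shell
# Change the admin password interactively, then remove the initial secret
# so the bootstrap password no longer lingers in the cluster.
argocd account update-password
kubectl -n argocd delete secret argocd-initial-admin-secret
```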
-
Check the external-ip-or-hostname address of the argocd-server service.
kubectl get service -n argocd argocd-server
The output should be similar to:
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP               PORT(S)                      AGE
argocd-server   LoadBalancer   10.108.14.130   external-ip-or-hostname   80:31306/TCP,443:30063/TCP   7d13h
-
Open the https://external-ip-or-hostname URL and log in to the Argo CD server using the password received in the previous step.
# Exactly one of hostname or IP will be available and used for the remote URL.
open https://$(kubectl get service -n argocd argocd-server -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}')
Install Argo CD CLI
-
Install Argo CD CLI on your computer. For details, see the Argo CD documentation.
-
Log in with the CLI:
# Exactly one of hostname or IP will be available and used for the remote URL.
argocd login $(kubectl get service -n argocd argocd-server -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}') --insecure --username admin --password $(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
Expected output:
'admin:login' logged in successfully
For more details about Argo CD installation, see the Argo CD getting started guide.
Register clusters
-
Register the clusters that will run Service Mesh Manager in Argo CD. In this example, register workload-cluster-1 using one of the following methods.
-
Register the cluster from the command line by running:
argocd cluster add --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" "${WORKLOAD_CLUSTER_1_CONTEXT}"
Expected output:
WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `workload-cluster-1` with full cluster level privileges. Do you want to continue [y/N]? y
INFO[0005] ServiceAccount "argocd-manager" created in namespace "kube-system"
INFO[0005] ClusterRole "argocd-manager-role" created
INFO[0005] ClusterRoleBinding "argocd-manager-role-binding" created
INFO[0011] Created bearer token secret for ServiceAccount "argocd-manager"
Cluster 'https://workload-cluster-1-ip-or-hostname' added
-
Alternatively, you can register clusters declaratively as Kubernetes secrets. Modify the following command for your environment and apply it. For details, see the Argo CD documentation.
WORKLOAD_CLUSTER_1_IP="https://workload-cluster-1-IP"
ARGOCD_BEARER_TOKEN="authentication-token"
ARGOCD_CA_B64="base64 encoded certificate"
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: workload-cluster-1-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: workload-cluster-1
  server: "${WORKLOAD_CLUSTER_1_IP}"
  config: |
    {
      "bearerToken": "${ARGOCD_BEARER_TOKEN}",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "${ARGOCD_CA_B64}"
      }
    }
EOF
-
Make sure that the cluster is registered in Argo CD by running the following command:
argocd cluster list
The output should be similar to:
SERVER                                      NAME                 VERSION   STATUS    MESSAGE                                                   PROJECT
https://kubernetes.default.svc              in-cluster                     Unknown   Cluster has no applications and is not being monitored.
https://workload-cluster-1-ip-or-hostname   workload-cluster-1             Unknown   Cluster has no applications and is not being monitored.
Prepare Git repository
-
Create an empty repository called calisti-gitops on GitHub (or another provider that Argo CD supports) and initialize it with a README.md file so that you can clone the repository. Because Service Mesh Manager credentials will be stored in this repository, make it a private repository.
GITHUB_ID="github-id"
GITHUB_REPOSITORY_NAME="calisti-gitops"
-
Obtain a personal access token for the repository (on GitHub, see Creating a personal access token) that has the following permissions:
- admin:org_hook
- admin:repo_hook
- read:org
- read:public_key
- repo
-
Log in with your personal access token with git.
# Note: this environment variable needs to be exported so the `git` binary uses it automatically for authentication.
export GH_TOKEN="github-personal-access-token"
-
Clone the repository into your local workspace.
git clone "https://github.com/${GITHUB_ID}/${GITHUB_REPOSITORY_NAME}.git"
Expected output:
Cloning into 'calisti-gitops'...
remote: Enumerating objects: 144, done.
remote: Counting objects: 100% (144/144), done.
remote: Compressing objects: 100% (93/93), done.
remote: Total 144 (delta 53), reused 135 (delta 47), pack-reused 0
Receiving objects: 100% (144/144), 320.08 KiB | 746.00 KiB/s, done.
Resolving deltas: 100% (53/53), done.
-
Add the repository to Argo CD by running the following command. Alternatively, you can add it on the Argo CD Web UI.
argocd repo add "https://github.com/${GITHUB_ID}/${GITHUB_REPOSITORY_NAME}.git" --name "${GITHUB_REPOSITORY_NAME}" --username "${GITHUB_ID}" --password "${GH_TOKEN}"
Expected output:
Repository 'https://github.com/github-id/calisti-gitops.git' added
-
Verify that the repository is connected by running:
argocd repo list
In the output, Status should be Successful:
TYPE  NAME            REPO                                             INSECURE  OCI    LFS    CREDS  STATUS      MESSAGE  PROJECT
git   calisti-gitops  https://github.com/github-id/calisti-gitops.git  false     false  false  true   Successful
-
Change into the root directory of the cloned repository and create the following directories.
cd "${GITHUB_REPOSITORY_NAME}"
mkdir -p apps/demo-app apps/smm-controlplane apps/smm-operator charts demo-app manifests
The final structure of the repository will look like this:
.
├── apps
│   ├── demo-app
│   │   └── demo-app.yaml
│   ├── smm-controlplane
│   │   └── smm-controlplane.yaml
│   └── smm-operator
│       └── smm-operator.yaml
├── charts
│   └── smm-operator
│       └── ...
├── demo-app
│   ├── demo-app-ns.yaml
│   └── demo-app.yaml
└── manifests
    ├── cert-manager-namespace.yaml
    └── smm-controlplane.yaml
- The apps folder contains the Argo CD Applications of the smm-operator, the smm-controlplane, and the demo-app.
- The charts folder contains the Helm chart of the smm-operator.
- The demo-app folder contains the manifest files of the demo application that represents your business application.
- The manifests folder contains the smm-controlplane file and the cert-manager namespace file.
Prepare the Helm charts
-
You need an active Service Mesh Manager registration to download the Service Mesh Manager charts and images. You can sign up for free, or obtain Enterprise credentials on the official Cisco Service Mesh Manager page. After registration, you can obtain your username and password from the Download Center. Set them as environment variables.
CALISTI_USERNAME="<your-calisti-username>"
CALISTI_PASSWORD="<your-calisti-password>"
-
Download the smm-operator chart from registry.eticloud.io into the charts directory of your Service Mesh Manager GitOps repository and extract it. Run the following commands:
export HELM_EXPERIMENTAL_OCI=1 # Needed prior to Helm version 3.8.0
echo "${CALISTI_PASSWORD}" | helm registry login registry.eticloud.io -u "${CALISTI_USERNAME}" --password-stdin
Expected output:
Login Succeeded
helm pull oci://registry.eticloud.io/smm-charts/smm-operator --destination ./charts/ --untar --version 1.12.1
Expected output:
Pulled: registry.eticloud.io/smm-charts/smm-operator:1.12.1
Digest: sha256:someshadigest
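To confirm that the chart was extracted into the directory Argo CD will point at, you can inspect the chart metadata locally:

```shell
# Print the extracted chart's Chart.yaml metadata (name, version, and so on).
helm show chart ./charts/smm-operator
```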
Deploy Service Mesh Manager
Deploy the smm-operator application
Complete the following steps to deploy the smm-operator chart using Argo CD.
-
Create an Argo CD Application CR for smm-operator.
Before running the following command, edit it if needed:
- If you are not using a GitHub repository, set the repoURL field to your repository.
- For multi-cluster setups, the Kubernetes API server address of one cluster must be reachable from other clusters. The API server addresses are private for certain clusters (for example, OpenShift) and not reachable by default from other clusters. In such cases, use the PUBLIC_API_SERVER_ENDPOINT_ADDRESS variable to provide an address that's reachable from the other clusters. This can be a public address, or one that's routable from the other clusters.
ARGOCD_CLUSTER_NAME="${WORKLOAD_CLUSTER_1_CONTEXT}"
PUBLIC_API_SERVER_ENDPOINT_ADDRESS=""
cat > "apps/smm-operator/smm-operator-app.yaml" <<EOF
# apps/smm-operator/smm-operator-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: smm-operator
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://github.com/${GITHUB_ID}/${GITHUB_REPOSITORY_NAME}.git
    targetRevision: HEAD
    path: charts/smm-operator
    helm:
      parameters:
        - name: "global.ecr.enabled"
          value: 'false'
        - name: "global.basicAuth.username"
          value: "${CALISTI_USERNAME}"
        - name: "global.basicAuth.password"
          value: "${CALISTI_PASSWORD}"
        # The publicly accessible address of the Kubernetes API server. Some cloud
        # providers use different API server endpoints for internal and public
        # access. In that case, the public endpoint needs to be specified here.
        - name: "apiServerEndpointAddress"
          value: "${PUBLIC_API_SERVER_ENDPOINT_ADDRESS}"
  destination:
    name: ${ARGOCD_CLUSTER_NAME}
    namespace: smm-registry-access
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - Validate=false
      - PruneLast=true
      - CreateNamespace=true
      - Replace=true
EOF
-
Commit and push the calisti-gitops repository.
git add apps/smm-operator charts/smm-operator
git commit -m "add smm-operator app"
git push
Expected output:
Enumerating objects: 48, done.
Counting objects: 100% (48/48), done.
Delta compression using up to 12 threads
Compressing objects: 100% (44/44), done.
Writing objects: 100% (47/47), 282.18 KiB | 1.99 MiB/s, done.
Total 47 (delta 20), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (20/20), done.
To github.com:<username>/calisti-gitops.git
   8dd47c2..db9e7af  main -> main
-
Apply the Application manifest.
kubectl apply -f "apps/smm-operator/smm-operator-app.yaml"
Expected output:
application.argoproj.io/smm-operator created
-
Verify that the applications have been added to Argo CD and are healthy.
argocd app list
Expected output:
NAME          CLUSTER             NAMESPACE            PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                             PATH                 TARGET
smm-operator  workload-cluster-1  smm-registry-access  default  Synced  Healthy  Auto-Prune  <none>      https://github.com/github-id/calisti-gitops.git  charts/smm-operator  HEAD
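In scripts or CI, it can be more convenient to block until the application is both synced and healthy rather than parsing the output of argocd app list:

```shell
# Wait up to 5 minutes for smm-operator to reach Synced and Healthy state.
argocd app wait smm-operator --sync --health --timeout 300
```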
-
Check the smm-operator application on the Argo CD Web UI.
Deploy the smm-controlplane application
-
Create the following namespace for the Service Mesh Manager ControlPlane.
cat > manifests/cert-manager-namespace.yaml <<EOF
# manifests/cert-manager-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"
  name: cert-manager
EOF
-
Create the smm-controlplane CR for the ControlPlane. For OpenShift installations, add platform: openshift to the spec section.
ARGOCD_CLUSTER_NAME="${WORKLOAD_CLUSTER_1_CONTEXT}"
ISTIO_MINOR_VERSION="1.15"
cat > "manifests/smm-controlplane.yaml" <<EOF
# manifests/smm-controlplane.yaml
apiVersion: smm.cisco.com/v1alpha1
kind: ControlPlane
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "10"
  name: smm
spec:
  # platform: openshift # Uncomment for OpenShift installations
  certManager:
    enabled: true
    namespace: cert-manager
  clusterName: ${ARGOCD_CLUSTER_NAME}
  clusterRegistry:
    enabled: true
    namespace: cluster-registry
  log: {}
  meshManager:
    enabled: true
    istio:
      enabled: true
      istioCRRef:
        name: cp-v${ISTIO_MINOR_VERSION/.}x
        namespace: istio-system
      operators:
        namespace: smm-system
    namespace: smm-system
  nodeExporter:
    enabled: true
    namespace: smm-system
    psp:
      enabled: false
    rbac:
      enabled: true
  oneEye: {}
  registryAccess:
    enabled: true
    imagePullSecretsController: {}
    namespace: smm-registry-access
    pullSecrets:
      - name: smm-registry.eticloud.io-pull-secret
        namespace: smm-registry-access
  repositoryOverride:
    host: registry.eticloud.io
    prefix: smm
  role: active
  smm:
    exposeDashboard:
      meshGateway:
        enabled: true
    als:
      enabled: true
      log: {}
    application:
      enabled: true
      log: {}
    auth:
      forceUnsecureCookies: true
      mode: anonymous
    certManager:
      enabled: true
    enabled: true
    federationGateway:
      enabled: true
      name: smm
      service:
        enabled: true
        name: smm-federation-gateway
        port: 80
    federationGatewayOperator:
      enabled: true
    impersonation:
      enabled: true
    istio:
      revision: cp-v${ISTIO_MINOR_VERSION/.}x.istio-system
    leo:
      enabled: true
      log: {}
    log: {}
    namespace: smm-system
    prometheus:
      enabled: true
      replicas: 1
    prometheusOperator: {}
    releaseName: smm
    role: active
    sre:
      enabled: true
    useIstioResources: true
EOF
-
Create the Argo CD Application CR for the smm-controlplane.
ARGOCD_CLUSTER_NAME="${WORKLOAD_CLUSTER_1_CONTEXT}"
cat > "apps/smm-controlplane/smm-controlplane-app.yaml" <<EOF
# apps/smm-controlplane/smm-controlplane-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: smm-controlplane
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://github.com/${GITHUB_ID}/${GITHUB_REPOSITORY_NAME}.git
    targetRevision: HEAD
    path: manifests
  destination:
    name: ${ARGOCD_CLUSTER_NAME}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - Validate=false
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
      - Replace=true
EOF
-
Commit the changes and push the calisti-gitops repository.
git add apps/smm-controlplane manifests
git commit -m "add smm-controlplane app"
Expected output:
[main 25ba7e8] add smm-controlplane app
 3 files changed, 212 insertions(+)
 create mode 100644 apps/smm-controlplane/smm-controlplane-app.yaml
 create mode 100644 manifests/cert-manager-namespace.yaml
 create mode 100644 manifests/smm-controlplane.yaml
git push
Expected output:
Enumerating objects: 12, done.
Counting objects: 100% (12/12), done.
Delta compression using up to 10 threads
Compressing objects: 100% (10/10), done.
Writing objects: 100% (10/10), 2.70 KiB | 2.70 MiB/s, done.
Total 10 (delta 1), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (1/1), done.
To github.com:<username>/calisti-gitops.git
   529545a..25ba7e8  main -> main
-
Apply the Application manifest.
kubectl apply -f "apps/smm-controlplane/smm-controlplane-app.yaml"
Expected output:
application.argoproj.io/smm-controlplane created
-
Verify that the application has been added to Argo CD and is healthy.
argocd app list
Expected output:
NAME              CLUSTER             NAMESPACE            PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                             PATH                 TARGET
smm-controlplane  workload-cluster-1                       default  Synced  Healthy  Auto-Prune  <none>      https://github.com/github-id/calisti-gitops.git  manifests            HEAD
smm-operator      workload-cluster-1  smm-registry-access  default  Synced  Healthy  Auto-Prune  <none>      https://github.com/github-id/calisti-gitops.git  charts/smm-operator  HEAD
-
Check that all pods are healthy and running in the smm-system namespace of workload-cluster-1.
kubectl get pods -n smm-system --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}"
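As with the Argo CD pods, you can wait for readiness instead of polling. The 10-minute timeout here is an arbitrary choice; adjust it to your environment:

```shell
# Block until all pods in smm-system are Ready on workload-cluster-1.
kubectl wait pods --all -n smm-system --for=condition=Ready --timeout=600s \
  --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}"
```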
-
Check the application on the Argo CD Web UI.
# Exactly one of hostname or IP will be available and used for the remote URL.
open https://$(kubectl get service -n argocd argocd-server -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}')
At this point, you have successfully installed smm-operator and smm-controlplane on workload-cluster-1.
Deploy an application
If you want to deploy an application into the service mesh, complete the following steps. The examples use the Service Mesh Manager demo application.
-
Create a namespace for the application: create the demo-app-ns.yaml file.
cat > demo-app/demo-app-ns.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: smm-demo
    app.kubernetes.io/name: smm-demo
    app.kubernetes.io/part-of: smm-demo
    app.kubernetes.io/version: 0.1.4
    istio.io/rev: cp-v115x.istio-system
  name: smm-demo
EOF
-
Create a manifest for Network Attachment Definitions.
cat > demo-app/smm-demo-nad.yaml << EOF
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni-cp-v${ISTIO_MINOR_VERSION/.}x-istio-system
  namespace: smm-demo
  annotations:
    argocd.argoproj.io/sync-wave: "3"
EOF
-
Create the demo-app.yaml file.
cat > demo-app/demo-app.yaml << EOF
apiVersion: smm.cisco.com/v1alpha1
kind: DemoApplication
metadata:
  name: smm-demo
  namespace: smm-demo
spec:
  autoscaling:
    enabled: true
  controlPlaneRef:
    name: smm
  deployIstioResources: true
  deploySLOResources: true
  enabled: true
  enabledComponents:
    - frontpage
    - catalog
    - bookings
    - postgresql
    - payments
    - notifications
    - movies
    - analytics
    - database
    - mysql
  istio:
    revision: cp-v115x.istio-system
  load:
    enabled: true
    maxRPS: 30
    minRPS: 10
    swingPeriod: 1380000000000
  replicas: 1
  resources:
    limits:
      cpu: "2"
      memory: 192Mi
    requests:
      cpu: 40m
      memory: 64Mi
EOF
-
Create an Argo CD Application file for the application: create the apps/demo-app/demo-app.yaml file.
ARGOCD_CLUSTER_NAME="${WORKLOAD_CLUSTER_1_CONTEXT}"
cat > apps/demo-app/demo-app.yaml << EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://github.com/${GITHUB_ID}/${GITHUB_REPOSITORY_NAME}.git
    targetRevision: HEAD
    path: demo-app
  destination:
    name: ${ARGOCD_CLUSTER_NAME}
    namespace: smm-demo
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - Validate=false
      - CreateNamespace=true
      - PruneLast=true
      - Replace=true
EOF
-
Commit and push the calisti-gitops repository.
git add apps/demo-app demo-app
git commit -m "add demo app"
Expected output:
[main 58a236e] add demo app
 3 files changed, 74 insertions(+)
 create mode 100644 apps/demo-app/demo-app.yaml
 create mode 100644 demo-app/demo-app-ns.yaml
 create mode 100644 demo-app/demo-app.yaml
git push
Expected output:
Enumerating objects: 10, done.
Counting objects: 100% (10/10), done.
Delta compression using up to 10 threads
Compressing objects: 100% (7/7), done.
Writing objects: 100% (8/8), 1.37 KiB | 1.37 MiB/s, done.
Total 8 (delta 0), reused 0 (delta 0), pack-reused 0
To github.com:<username>/calisti-gitops.git
   e16549e..58a236e  main -> main
-
Deploy the application.
kubectl apply -f apps/demo-app/demo-app.yaml
-
Wait until all the pods in the application namespace (smm-demo) are up and running.
kubectl get pods -n smm-demo --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}"
Expected output:
NAME                                READY   STATUS    RESTARTS   AGE
analytics-v1-7899bd4d4-bnf24        2/2     Running   0          109s
bombardier-6455fd74f6-jndpv         2/2     Running   0          109s
bookings-v1-559768454c-7vhzr        2/2     Running   0          109s
catalog-v1-99b7bb56d-fjvhl          2/2     Running   0          109s
database-v1-5cb4b4ff67-95ttk        2/2     Running   0          109s
frontpage-v1-5b4dcbfcb4-djr72       2/2     Running   0          108s
movies-v1-78fcf666dc-z8c2z          2/2     Running   0          108s
movies-v2-84d9f5658f-kc65j          2/2     Running   0          108s
movies-v3-86bbbc9745-r84bl          2/2     Running   0          108s
mysql-d6b6b78fd-b7dwb               2/2     Running   0          108s
notifications-v1-794c5dd8f6-lndh4   2/2     Running   0          108s
payments-v1-858d4b4ffc-vtxxl        2/2     Running   0          108s
postgresql-555fd55bdb-jn5pq         2/2     Running   0          108s
-
Verify that the application appears on the Argo CD admin view, and that it is Healthy and Synced.
Access the Service Mesh Manager dashboard
-
You can access the Service Mesh Manager dashboard via the external-ip-or-hostname address of the smm-ingressgateway-external LoadBalancer service. Run the following command to retrieve the address:
kubectl get services -n smm-system smm-ingressgateway-external --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}"
Expected output:
NAME                          TYPE           CLUSTER-IP   EXTERNAL-IP               PORT(S)        AGE
smm-ingressgateway-external   LoadBalancer   10.0.0.199   external-ip-or-hostname   80:32505/TCP   2m28s
-
Open the Service Mesh Manager dashboard using one of the following methods:
-
Open the http://<external-ip-or-hostname> URL in your browser.
-
Run the following command to open the dashboard with your default browser:
# Exactly one of hostname or IP will be available and used for the remote URL.
open http://$(kubectl get services -n smm-system smm-ingressgateway-external -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}' --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}")
-
If you have installed the Service Mesh Manager CLI on your machine, run the following command to open the Service Mesh Manager Dashboard in the default browser.
smm dashboard --kubeconfig "${WORKLOAD_CLUSTER_1_KUBECONFIG}" --context "${WORKLOAD_CLUSTER_1_CONTEXT}"
Expected output:
✓ validate-kubeconfig ❯ checking cluster reachability...
✓ opening Service Mesh Manager at http://127.0.0.1:50500
-
Check the deployments on the dashboard, for example, on the MENU > Overview, MENU > MESH, and MENU > TOPOLOGY pages.