Before deploying Service Mesh Manager on your cluster, complete the following tasks.
Create a cluster
You need a Kubernetes cluster to run Service Mesh Manager (and optionally, Streaming Data Manager). If you don’t already have a Kubernetes cluster to work with, create one with one of the methods described in Create a test cluster.
Supported providers and Kubernetes versions
The cluster must run a Kubernetes version that Service Mesh Manager supports: Kubernetes 1.21, 1.22, 1.23, 1.24.
Service Mesh Manager is tested and known to work on the following Kubernetes providers:
- Amazon Elastic Kubernetes Service (Amazon EKS)
- Google Kubernetes Engine (GKE)
- Azure Kubernetes Service (AKS)
- Red Hat OpenShift 4.11
- On-premises installation of stock Kubernetes with load balancer support (and optionally PVCs for persistence)
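The supported-version constraint above can be expressed as a tiny shell helper for scripting pre-flight checks. This is only a sketch; the `supported` function name is ours, not part of any Service Mesh Manager tooling.

```shell
#!/bin/sh
# Sketch: succeed when a Kubernetes version string falls in the supported
# 1.21-1.24 range listed above. "supported" is a hypothetical helper name,
# not part of Service Mesh Manager.
supported() {
  minor=$(printf '%s' "$1" | cut -d. -f2)
  [ "$minor" -ge 21 ] && [ "$minor" -le 24 ]
}

supported "1.23" && echo "1.23: supported"
supported "1.25" || echo "1.25: not supported"
```

On a live cluster, you could feed the helper the server minor version reported by `kubectl version`.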
Calisti resource requirements
Make sure that your Kubernetes or OpenShift cluster has sufficient resources to install Calisti. The following table shows the resources required on the cluster:
|Resource|Requirement|
|---|---|
|CPU|32 vCPU in total; 4 vCPU available for allocation per worker node. If you are testing on a cluster at a cloud provider, use nodes that have at least 4 CPUs, for example, c5.xlarge on AWS.|
|Memory|64 GiB in total; 4 GiB available for allocation per worker node (8 GiB per worker node on OpenShift clusters).|
|Storage|12 GB of ephemeral storage on the Kubernetes worker nodes (for traces and metrics).|
These minimum requirements must be available for allocation within your cluster, in addition to the requirements of any other workloads running in your cluster (for example, DaemonSets and Kubernetes node agents). If Kubernetes cannot allocate sufficient resources to Service Mesh Manager, some pods remain in Pending state, and Service Mesh Manager does not function properly.
Enabling additional features, such as high availability, increases these requirements.
Given enough headroom in the cluster, the default installation can support at least 150 running pods and the same number of services. To set up Service Mesh Manager for bigger workloads, see Scaling Service Mesh Manager.
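To see how the totals in the table translate to nodes, the following sketch sums per-node allocatable vCPUs. The here-doc stands in for real output from a command such as `kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu`; note that a real cluster may report CPU in millicores (for example `3920m`), which this sketch does not handle.

```shell
#!/bin/sh
# Sketch: sum allocatable vCPUs across worker nodes and compare against the
# 32 vCPU total above. The here-doc stands in for real `kubectl get nodes`
# custom-columns output; node names and values are made up.
total=0
while read -r node cpu; do
  total=$((total + cpu))
done <<'EOF'
worker-1 4
worker-2 4
worker-3 4
EOF
echo "allocatable vCPUs: $total"   # this 3-node example falls short of 32
```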
Install the Service Mesh Manager tool
Install the Service Mesh Manager command-line tool. You can use the Service Mesh Manager CLI tool to install Service Mesh Manager and other components on your cluster.
Note: The Service Mesh Manager CLI supports macOS and Linux (x86_64). On Windows, install the Windows Subsystem for Linux (WSL) and use the Linux binary.
Install the Service Mesh Manager CLI for your environment.
- If you have already received access to the Service Mesh Manager binaries, see Accessing the Service Mesh Manager binaries.
- If you are new to Service Mesh Manager, you can also use the free edition of Service Mesh Manager for evaluation.
Set Kubernetes configuration and context.
The Service Mesh Manager command-line tool uses your current Kubernetes context, as set in the KUBECONFIG environment variable (`~/.kube/config` by default). Verify that the current context points to the cluster where you plan to deploy Service Mesh Manager. Run the following command:
kubectl config get-contexts
If there are multiple contexts in the kubeconfig file, specify the one you want to use with the `use-context` parameter, for example:
kubectl config use-context <context-to-use>
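If you prefer to script this check, the current context can also be read straight from the kubeconfig file without invoking kubectl. The inline file below is a minimal stand-in for `~/.kube/config`, and `demo-cluster` is a placeholder context name.

```shell
#!/bin/sh
# Sketch: read the current-context entry directly from a kubeconfig file.
# The inline file is a minimal stand-in for ~/.kube/config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: v1
kind: Config
current-context: demo-cluster
EOF
current=$(grep '^current-context:' "$cfg" | awk '{print $2}')
echo "current context: $current"
rm -f "$cfg"
```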
Deploy Service Mesh Manager
After you have completed the previous steps, you can install Service Mesh Manager on a single cluster, or you can form a multi-cluster mesh right away.
Note: The default version of Service Mesh Manager is built with the standard SSL libraries. To use a FIPS-compliant version of Istio, see Install FIPS images.
Select the installation method you want to use:
You can install Service Mesh Manager on a single cluster first, and attach additional clusters later to form a multi-cluster mesh.
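After installation, a quick way to spot the resource problem described earlier (pods stuck in Pending) is to filter pod listings by status. This is a sketch: the here-doc stands in for `kubectl get pods -A --no-headers` output, and the pod names are made up.

```shell
#!/bin/sh
# Sketch: flag pods stuck in Pending after installation, a common symptom
# of insufficient allocatable resources. The here-doc stands in for
# `kubectl get pods -A --no-headers` output; namespace and pod names are
# made up for illustration.
pending=$(
  while read -r ns name ready status rest; do
    [ "$status" = "Pending" ] && echo "$ns/$name"
  done <<'EOF'
smm-system smm-controller-0 1/1 Running 0 5m
smm-system prometheus-0 0/1 Pending 0 5m
EOF
)
echo "pending pods: $pending"
```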