Getting started with the Free Tier

This getting started guide helps you access and install the free version of Streaming Data Manager. The guide uses the CLI installation method; for other options, see the installation chapter.

Free tier limitations

  • The free tier of Calisti allows you to use Service Mesh Manager and Streaming Data Manager on a maximum of two Kubernetes clusters, with a total of at most 10 worker nodes across your clusters. For details, see Licensing options.

To buy an enterprise license, contact your Cisco sales representative, or contact Cisco Emerging Technologies and Incubation directly.

Prerequisites

Before deploying Streaming Data Manager on your cluster, complete the prerequisites.

CAUTION:

To install Streaming Data Manager on an existing Service Mesh Manager installation, the cluster must run Service Mesh Manager version 1.11 or later. If your cluster is running an earlier Service Mesh Manager version, you must upgrade it first.

CAUTION:

Supported providers and Kubernetes versions

The cluster must run a Kubernetes version that Service Mesh Manager supports: Kubernetes 1.21, 1.22, 1.23, or 1.24.
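
To check which Kubernetes version your cluster is running, you can query the API server:

    kubectl version
    # The "Server Version" line shows the version of the cluster itself;
    # it must be one of the supported versions listed above.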

Service Mesh Manager is tested and known to work on the following Kubernetes providers:

  • Amazon Elastic Kubernetes Service (Amazon EKS)
  • Google Kubernetes Engine (GKE)
  • Azure Kubernetes Service (AKS)
  • Red Hat OpenShift 4.11
  • On-premises installation of stock Kubernetes with load balancer support (and optionally PVCs for persistence)

Calisti resource requirements

Make sure that your Kubernetes or OpenShift cluster has sufficient resources to install Calisti. The cluster must provide the following resources:

  • CPU: 32 vCPU in total, with 4 vCPU available for allocation per worker node. (If you are testing on a cluster at a cloud provider, use nodes that have at least 4 CPUs, for example, c5.xlarge on AWS.)
  • Memory: 64 GiB in total, with 4 GiB available for allocation per worker node for Kubernetes clusters (8 GiB per worker node for OpenShift clusters).
  • Storage: 12 GB of ephemeral storage on the Kubernetes worker nodes (for Traces and Metrics).

These minimum requirements must be available for allocation within your cluster, in addition to the requirements of any other workloads running in your cluster (for example, DaemonSets and Kubernetes node agents). If Kubernetes cannot allocate sufficient resources to Service Mesh Manager, some pods will remain in Pending state, and Service Mesh Manager will not function properly.
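
To see how much allocatable capacity your worker nodes report before installing, you can query the node status (these are standard Kubernetes node fields):

    kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory'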

Enabling additional features, such as High Availability, increases these requirements.

If there is enough headroom in the cluster, the default installation should be able to support at least 150 running Pods and the same number of Services. To set up Service Mesh Manager for bigger workloads, see scaling Service Mesh Manager.

CAUTION:

When using Streaming Data Manager on Amazon EKS, you must install the EBS CSI driver add-on on your cluster.
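
For example, if you manage the cluster with eksctl and have already created an IAM role for the driver, you can install the add-on roughly as follows (the cluster name, account ID, and role name are placeholders):

    eksctl create addon --name aws-ebs-csi-driver \
      --cluster <your-cluster-name> \
      --service-account-role-arn arn:aws:iam::<account-id>:role/<ebs-csi-driver-role>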

Preparation

To access and install Calisti, complete the following steps.

  1. You’ll need a Cisco Customer account to download Calisti. If you don’t already have one, here’s how to sign up:

    1. Visit the Cisco Account registration page and complete the registration form.
    2. Look for an email from no-reply@mail-id.cisco.com titled Activate Account, and click the Activate Account button in it to activate your account.
  2. Download the Calisti command-line tools.

    1. Visit the Calisti download center.
    2. If you’re redirected to the home page, check the upper right-hand corner to see whether you’re signed in. If you see a login button, log in using your Cisco Customer account credentials. If you see a welcome message instead, you are already logged in.
    3. Once you have logged in, navigate to the Calisti download center again.
    4. Read and accept the End-User License Agreement (EULA).
    5. Download the Service Mesh Manager command-line tool (CLI) suitable for your system. The CLI supports macOS and Linux (x86_64). On Windows, install the Windows Subsystem for Linux (WSL) and use the Linux binary.
    6. Extract the archive. The archive contains two binaries, smm for Service Mesh Manager, and supertubes for Streaming Data Manager.
    7. Navigate to the directory where you have extracted the CLI.

    Note: For information on how to download the CLI using ORAS, see Download the CLI using ORAS.

  3. The Calisti download page shows the credentials that you can use to access the Service Mesh Manager and Streaming Data Manager Docker images.

    Open a terminal and log in to the image registries of Calisti by running the following command. Replace <your-password> and <your-username> with your access credentials to the registries.

    SMM_REGISTRY_PASSWORD=<your-password> ./smm activate \
      --host=registry.eticloud.io \
      --prefix=smm \
      --user='<your-username>'
    

Install Streaming Data Manager on a single cluster

  1. Run the following command. This will install the main Service Mesh Manager and Streaming Data Manager components.

    • On Kubernetes:

      smm install -a --install-sdm
      
    • On OpenShift (for details, see OpenShift integration):

      smm install -a --install-sdm --platform=openshift
      

    Note: If you are installing Service Mesh Manager on a managed Kubernetes solution of a public cloud provider (for example, Amazon EKS, AKS, or GKE) or kOps, the cluster name auto-discovered by Service Mesh Manager is incompatible with Kubernetes resource naming restrictions and Istio’s method of identifying clusters in a multicluster mesh.

    In earlier Service Mesh Manager versions, you had to manually use the --cluster-name parameter to set a cluster name that complies with the RFC 1123 DNS subdomain/label format (alphanumeric string without “_” or “.” characters). Starting with Service Mesh Manager version 1.11, non-compliant names are automatically converted using the following rules:

    • Replace ‘_’ characters with ‘-’
    • Replace ‘.’ characters with ‘-’
    • Replace ‘:’ characters with ‘-’
    • Truncate the name to 63 characters
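
    For example, a hypothetical auto-discovered name such as gke_my-project_us-central1-a_demo.cluster would be converted to gke-my-project-us-central1-a-demo-cluster. If you want to control the resulting name yourself, you can still set it explicitly with the --cluster-name parameter, for example:

    smm install -a --install-sdm --cluster-name my-demo-cluster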

    Calisti supports KUBECONFIG contexts having the following authentication methods:

    • certfile and keyfile
    • certdata and keydata
    • bearer token
    • exec/auth provider

    Username-password pairs are not supported.
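
    As an illustration, a supported exec-based user entry in a KUBECONFIG file looks roughly like the following (the user name, command, and cluster name are placeholders for whatever token helper your provider uses):

    users:
    - name: my-cluster-user
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          command: aws
          args:
            - eks
            - get-token
            - --cluster-name
            - my-cluster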

    If you are installing Streaming Data Manager in a test environment, you can install it without requiring authentication by running:

    smm install --install-sdm --anonymous-auth -a
    

    If you experience errors during the installation, try running the installation in verbose mode: smm install -v

    Note: The smm install -a --install-sdm command assumes that there is a default storage class available on the cluster to provision the needed volumes. If your Kubernetes environment doesn’t have a default storage class, the CRs deployed by Streaming Data Manager must be adjusted to work in your environment. In that case, request a demo and describe your use case so we can guide you through the configuration details as part of the demo.
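
    To check whether your cluster has a default storage class, and to mark an existing class as the default if needed (the class name gp2 below is only an example), you can run:

    kubectl get storageclass
    # The default class is marked with "(default)" next to its name.
    kubectl patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'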

  2. Wait until the installation is completed. This can take a few minutes.
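
    One way to follow the progress is to list the pods that are not yet running; the exact namespaces depend on the installed components, so the filter below is only a convenience:

    kubectl get pods --all-namespaces | grep -vE 'Running|Completed'
    # Apart from the header line, this list should become empty once the installation has finished.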

  3. (Optional) Install the demo application. Installing the demo application automatically creates brokers and other Kafka resources for you.

    smm demoapp install
    

    Note: You can check the manifests of the related resources (located in the kafka namespace) by running the following commands:

    List of brokers: kubectl get pods --namespace kafka -o yaml

    List of topics: kubectl get kafkatopics --namespace kafka -o yaml

    The KafkaCluster custom resource: kubectl get kafkacluster --namespace kafka -o yaml

    The configuration of the brokers (in the data: section):

    kubectl get configmaps kafka-config-0 --namespace kafka -o yaml

    kubectl get configmaps kafka-config-1 --namespace kafka -o yaml

    For a detailed list of the resources created when installing the demo application, see Install the demo application.

  4. (Optional) Deploy a broker that can receive messages from your producers. If you have installed the demo application you can skip this step, because the demo application automatically creates brokers for you. For details on deploying brokers, see Create Kafka cluster.

  5. Run the following command to open the dashboard. If you don’t already have Istio or Kafka workloads and traffic, the dashboard will be empty.

    smm dashboard
    

    The Streaming Data Manager Dashboard for your Kafka traffic

  6. (Optional) If you are installing Service Mesh Manager on a managed Kubernetes solution of a public cloud provider (for example, AWS, Azure, or Google Cloud), assign admin roles so that you can tail the logs of your containers from the Service Mesh Manager UI, use Service Level Objectives, and perform various tasks from the CLI that require custom permissions. Run the following command:

    kubectl create clusterrolebinding user-cluster-admin --clusterrole=cluster-admin --user=<gcp/aws/azure username>
    

    CAUTION:

    Assigning administrator roles grants broad access to your infrastructure, so it can be dangerous. Do this only when you are confident that you understand the consequences.

    If the above command fails, for example, with a timeout, simply re-run the command.

Produce and consume Kafka messages

  1. Verify that the Apache Kafka cluster is running:

    smm sdm cluster get -n kafka --kafka-cluster kafka -c <path-to-k8s-cluster-kubeconfig-file>
    

    Expected output:

    Namespace  Name   State           Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
    kafka      kafka  ClusterRunning  banzaicloud/kafka:2.13-2.8.1        0       CruiseControlTopicReady      0
    
  2. Create a topic with one partition and a replication factor of one. You can either use the dashboard, or run the following command:

    smm sdm cluster topic create -n kafka --kafka-cluster kafka -c <path-to-k8s-cluster-kubeconfig-file>  -f- <<EOF
    apiVersion: kafka.banzaicloud.io/v1alpha1
    kind: KafkaTopic
    metadata:
      name: test-topic
    spec:
      name: test-topic
      partitions: 1
      replicationFactor: 1
      config:
        "retention.ms": "28800000"
        "cleanup.policy": "delete"
    EOF
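
    To confirm that the topic resource was created, you can list the KafkaTopic resources in the kafka namespace, the same way as in the demo application note above:

    kubectl get kafkatopics --namespace kafka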
    
  3. Run the following commands to write messages to the new topic.

    kubectl apply -n kafka -f- <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: kcat
    spec:
      containers:
      - name: kafka-test
        image: "edenhill/kcat:1.7.1"
        # Just spin & wait forever
        command: [ "/bin/ash", "-c", "--" ]
        args: [ "while true; do sleep 3000; done;" ]
    EOF
    
    kubectl exec -n kafka -it kcat -- ash
    
    kcat -b kafka-all-broker:29092 -P -t test-topic
    

    Type some test messages, then press CTRL+D to exit kcat.

  4. Verify that you can read back the messages, and check which broker is the partition leader for test-topic. First read back messages from the topic (the -c2 flag makes kcat exit after consuming two messages):

    kcat -b kafka-all-broker:29092 -C -t test-topic -c2

    Then list the topic metadata to see the brokers and the partition leader:

    kcat -b kafka-all-broker:29092 -L -t test-topic

    Expected output:

    Metadata for test-topic (from broker -1: kafka-all-broker:29092/bootstrap):
     2 brokers:
      broker 0 at kafka-0.kafka.svc.cluster.local:29092 (controller)
      broker 1 at kafka-1.kafka.svc.cluster.local:29092
     1 topics:
      topic "test-topic" with 1 partitions:
        partition 0, leader 1, replicas: 1, isrs: 1
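
    When you have finished testing, you can remove the test resources. The commands below assume the kcat pod and the test-topic KafkaTopic custom resource were created exactly as shown above:

    # Exit the kcat pod shell first, then run from your workstation:
    kubectl delete pod kcat --namespace kafka
    kubectl delete kafkatopic test-topic --namespace kafka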