Prerequisites

Before trying to attach a virtual machine to your mesh, make sure that the following prerequisites are met.

You can attach VMs only to an active Istio cluster that is running the Service Mesh Manager control plane.

Configuration prerequisites

To attach external machines, the Service Mesh Manager dashboard needs to be exposed so that smm-agent can fetch the required configuration data. For details, see Exposing the Dashboard.

Supported operating systems

Currently, the following operating systems are verified to work when added to the mesh:

  • Ubuntu 20.04+ (64-bit)
  • RedHat Enterprise Linux 8 (64-bit)

However, any operating system that uses deb or RPM packages and systemd as its init system should be able to complete the same procedure.

Package dependencies

OS       Required packages                Example install command
Ubuntu   curl, iptables, sudo, hostname   apt-get install -y curl iptables sudo hostname
RHEL     curl, iptables, sudo, hostname   yum install -y curl hostname iptables sudo

Network prerequisites

Because of the way Istio operates, the VM can only resolve services and DNS names from the Kubernetes namespace it is attached to. Communication from the VM to services in other Kubernetes namespaces is therefore not possible.
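
For example, assuming the VM is attached to the default namespace, and that a hypothetical ratings service exists in default while a hypothetical payments service exists in the backend namespace, only the first of the following requests can succeed from the VM:

    # Service in the VM's own namespace (default): resolvable from the VM
    curl http://ratings.default.svc.cluster.local:9080/

    # Service in a different namespace (backend): not resolvable from the VM
    curl http://payments.backend.svc.cluster.local:8080/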

Cluster access to VM

The cluster must be able to access the following ports exposed from the VM:

  • TCP ports 16400, 16401
  • Every port you define for the WorkloadGroup

The Kubernetes clusters in the mesh must be able to access every port on the VM that is used to serve mesh traffic. For example, if the VM runs a web server on port 80, then port 80 must be accessible from every pod in the member clusters. (The WorkloadGroup defined for the service should indicate that the service is available on port 80.)
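
As a minimal sketch of such a setup (the resource name, namespace, labels, and service account below are illustrative assumptions, not values required by Service Mesh Manager), the WorkloadGroup for a web server listening on port 80 could be created as follows:

    cat <<EOF | kubectl apply -f -
    apiVersion: networking.istio.io/v1alpha3
    kind: WorkloadGroup
    metadata:
      name: example-vm-workloads    # illustrative name
      namespace: default            # the namespace the VM is attached to
    spec:
      metadata:
        labels:
          app: example-web          # illustrative label
      template:
        ports:
          http: 80                  # the port the VM serves mesh traffic on
        serviceAccount: default
    EOF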

Determining the VM’s IP address

From the clusters' point of view, the VM's IP address may not be the IP address that appears on the network interfaces in the VM's operating system. For example, if the VM is exposed via a load balancer instance of a cloud service provider, then the Service Mesh Manager clusters can reach the VM via the IP address (or IP addresses) of the load balancer.

While administrators integrating VMs into the service mesh are expected to be able to identify the VM's IP address from the mesh's point of view, smm-agent provides a fallback: it queries https://ifconfig.me/ip to determine the IP address that the public internet sees for the VM. If the IP address that the site returns is not the one that the clusters in the service mesh should use to reach the VM, set the VM's IP address for service mesh communication manually during the smm-agent setup.
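
To preview what this fallback would detect, you can run the same query from the VM and compare the result with the address the clusters should actually use:

    # The IP address smm-agent's fallback detection would report for this VM:
    curl -s https://ifconfig.me/ip

    # If this differs from the address the clusters should use to reach the VM
    # (for example, the IP address of the load balancer in front of the VM),
    # set the VM's IP address manually during the smm-agent setup.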

Note: This document is not a comprehensive guide on how to expose VMs via an IP address.

VM access to cluster

Istio can work in two distinct ways when it comes to network topologies:

  • If the virtual machine has no direct connection to the pods' IP addresses, it can rely on a meshexpansion gateway and use the different network approach. Unless latency is of utmost importance, we highly recommend this approach, as it allows for more flexibility when attaching VMs from multiple separate networks.
  • If the virtual machine can access the pods' IP addresses directly, then you can use the same network approach.

Different network

To configure the different network model, the WorkloadGroup’s .spec.network field must be set to a different network than the networks used by the current Istio deployment.

To check which network the existing Istio control planes are attached to, run the following command:

kubectl get istiocontrolplanes -A

The output should be similar to:

NAMESPACE      NAME       MODE     NETWORK    STATUS      MESH EXPANSION   EXPANSION GW IPS                 ERROR   AGE
istio-system   cp-v115x   ACTIVE   network1   Available   true             ["13.48.73.61","13.51.88.187"]           9d

Istio uses the network1 network name, so set the WorkloadGroup’s network setting to something different, such as vm-network-1.
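
For example, a minimal WorkloadGroup sketch for the different network model (the resource name and namespace are illustrative assumptions) could look like this:

    cat <<EOF | kubectl apply -f -
    apiVersion: networking.istio.io/v1alpha3
    kind: WorkloadGroup
    metadata:
      name: example-vm-workloads    # illustrative name
      namespace: default
    spec:
      template:
        network: vm-network-1       # any name that differs from network1
    EOF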

Firewall settings

From the networking perspective, the machines should be able to access:

  • the meshexpansion-gateways, and
  • the exposed dashboard ports.
  1. To get the IP addresses of the meshexpansion gateways, check the services in the istio-system namespace:

    kubectl get services -n istio-system istio-meshexpansion-cp-v115x
    

    The output should be similar to:

    NAME                                    TYPE           CLUSTER-IP     EXTERNAL-IP                                                               PORT(S)                                                                                           AGE
    istio-meshexpansion-cp-v115x            LoadBalancer   10.10.82.80    a4b01735600f547ceb3c03b1440dd134-690669273.eu-north-1.elb.amazonaws.com   15021:30362/TCP,15012:31435/TCP,15017:30627/TCP,15443:32209/TCP,50600:31545/TCP,59411:32614/TCP   9d
    
  2. To get the IP address of the exposed dashboard, check the services in the smm-system namespace:

    kubectl get services -n smm-system smm-ingressgateway-external
    

    The output should be similar to:

    NAME                             TYPE           CLUSTER-IP      EXTERNAL-IP                                                                                 PORT(S)
    smm-ingressgateway-external      LoadBalancer   10.10.153.139   a4dcb5db6b9384585bba6cd45c2a0959-1520071115.eu-north-1.elb.amazonaws.com                   80:31088/TCP
    
  3. Configure your firewalls, and make sure that the DNS names shown in the EXTERNAL-IP column are accessible from the VM instances.
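
    As a quick sanity check, you can probe these endpoints from the VM before attempting to attach it. The following sketch reuses the example DNS names from the outputs above; ports 15443 (mesh traffic) and 15012 (Istio control plane) are taken from the PORT(S) column of the meshexpansion gateway, and the dashboard is served on port 80:

    # TCP reachability of the meshexpansion gateway:
    curl -sv --max-time 5 telnet://a4b01735600f547ceb3c03b1440dd134-690669273.eu-north-1.elb.amazonaws.com:15443 </dev/null
    curl -sv --max-time 5 telnet://a4b01735600f547ceb3c03b1440dd134-690669273.eu-north-1.elb.amazonaws.com:15012 </dev/null

    # HTTP reachability of the exposed dashboard:
    curl -s -o /dev/null -w '%{http_code}\n' http://a4dcb5db6b9384585bba6cd45c2a0959-1520071115.eu-north-1.elb.amazonaws.com/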

Same network

To configure the same network model, the WorkloadGroup’s .spec.network field must be set to the same network as the one used by the current Istio deployment.

To check which network the existing Istio control planes are attached to, run the following command:

kubectl get istiocontrolplanes -A

The output should be similar to:

NAMESPACE      NAME       MODE     NETWORK    STATUS      MESH EXPANSION   EXPANSION GW IPS                 ERROR   AGE
istio-system   cp-v115x   ACTIVE   network1   Available   true             ["13.48.73.61","13.51.88.187"]           9d

Istio uses the network1 network name, so set the WorkloadGroup's network setting to network1 as well.
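
The corresponding minimal WorkloadGroup sketch for the same network model (again, the resource name and namespace are illustrative assumptions) would be:

    cat <<EOF | kubectl apply -f -
    apiVersion: networking.istio.io/v1alpha3
    kind: WorkloadGroup
    metadata:
      name: example-vm-workloads    # illustrative name
      namespace: default
    spec:
      template:
        network: network1           # same network name as the Istio control plane
    EOF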