Upgrading your business applications
Overview
When a Calisti upgrade includes a new minor or major Istio release, Calisti runs both versions of the Istio control plane on the upgraded cluster, so you can gradually migrate your workloads to the new Istio version.
To list the available control planes, run:
kubectl get istiocontrolplanes -n istio-system
The output should be similar to:
NAME       MODE     NETWORK    STATUS      MESH EXPANSION   EXPANSION GW IPS                   ERROR   AGE
cp-v113x   ACTIVE   network1   Available   true             ["3.122.28.53","3.122.43.249"]             87m
cp-v115x   ACTIVE   network1   Available   true             ["3.122.31.252","18.195.79.209"]           66m
Here cp-v113x is running Istio 1.13.x, while cp-v115x is running Istio 1.15.3.
A special label on the namespaces specifies which Istio control plane the proxies use in that namespace. In the following example, the smm-demo namespace is attached to the cp-v113x.istio-system control plane (where .istio-system is the namespace of the Istio control plane).
kubectl get ns smm-demo -o yaml
The output should be similar to:
apiVersion: v1
kind: Namespace
metadata:
  ...
  labels:
    istio.io/rev: cp-v113x.istio-system
  name: smm-demo
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
Both cp-v113x and cp-v115x are able to discover services in all namespaces. This means that:
- Workloads can communicate with each other regardless of which Istio control plane they are attached to.
- In case of an error, any namespace can be rolled back to the previous version of the Istio control plane by simply changing the istio.io/rev label.
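To see at a glance which control plane each namespace is attached to, you can list the namespaces together with their istio.io/rev labels (the -L flag adds the label value as an extra column):

```shell
# List all namespaces and show the istio.io/rev label as a column
kubectl get ns -L istio.io/rev
```

Namespaces without a value in the ISTIO.IO/REV column are not attached to any revisioned control plane.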
Migrate workload to a new Istio control plane
After upgrading Calisti to a new version that includes a new minor or major Istio version, you have to modify your workloads to use the new Istio control plane. Complete the following steps.
- Before starting the migration of the workloads to the new Istio control plane, check the Validation UI and fix any errors in your configuration.
- Find the name of the new Istio control plane by running the following command:
kubectl get istiocontrolplanes -n istio-system
The output should be similar to:
NAME       MODE     NETWORK    STATUS      MESH EXPANSION   EXPANSION GW IPS                   ERROR   AGE
cp-v113x   ACTIVE   network1   Available   true             ["3.122.28.53","3.122.43.249"]             87m
cp-v115x   ACTIVE   network1   Available   true             ["3.122.31.252","18.195.79.209"]           66m
In this case, the new Istio control plane is called cp-v115x, which is running Istio 1.15.3.
- Migrate a namespace to the new Istio control plane. Complete the following steps.
  - Select a namespace, preferably one with the least impact on production traffic. Edit the istio.io/rev label on the namespace by running:
    kubectl label ns <your-namespace> istio.io/rev=cp-v115x.istio-system --overwrite
    Expected output:
    namespace/<your-namespace> labeled
  - Restart all controllers (Deployments, StatefulSets, and so on) in the namespace. After the restart, the workloads in the namespace are attached to the new Istio control plane. For example, to restart the deployments in a namespace, you can run:
    kubectl rollout restart deployment -n <name-of-your-namespace>
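Deployments are usually not the only controllers running sidecar-injected pods. As a sketch, assuming the namespace only contains controller kinds that support kubectl rollout, the restart can cover several kinds in one command:

```shell
# Restart Deployments, StatefulSets, and DaemonSets in the namespace at once
kubectl rollout restart deployment,statefulset,daemonset -n <name-of-your-namespace>
```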
  - Test your application to verify that it works with the new control plane as expected. In case of any issues, see the rollback section below to roll back to the original Istio control plane.
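One way to confirm that the restarted pods picked up the new control plane is to inspect the sidecar image version on each pod. The following jsonpath query is a sketch that assumes the sidecar container is named istio-proxy (the default name used by Istio injection):

```shell
# Print each pod name with its istio-proxy image; after the migration the image
# tag should match the new control plane's Istio version (for example 1.15.3)
kubectl get pods -n <name-of-your-namespace> \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[?(@.name=="istio-proxy")].image}{"\n"}{end}'
```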
- Migrate your other namespaces.
- After all of the applications have been migrated to the new control plane and you have verified that they work as expected, you can delete the old Istio control plane.
Roll back the data plane to the old control plane in case of issues
CAUTION: Perform this step only if you have issues with your data plane pods that were working with the old Istio control plane, and you deliberately want to move your workloads back to that control plane!
- If there is a problem and you want to roll the namespace back to the old control plane, set the istio.io/rev label on the namespace to point to the old Istio control plane, then restart the workloads using the kubectl rollout restart deployment command:
kubectl label ns <name-of-your-namespace-with-issues> istio.io/rev=cp-v113x.istio-system --overwrite
kubectl rollout restart deployment -n <name-of-your-namespace-with-issues>
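After rolling back, you can verify that the label change took effect and that the restart completed. The commands below are a sketch; the jsonpath expression escapes the dots in the istio.io/rev label key:

```shell
# Confirm the namespace now points at the old control plane
kubectl get ns <name-of-your-namespace-with-issues> \
  -o jsonpath='{.metadata.labels.istio\.io/rev}'

# Wait for a restarted deployment to become ready again
kubectl rollout status deployment/<deployment-name> -n <name-of-your-namespace-with-issues>
```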