Traffic tap

The traffic tap feature of Service Mesh Manager enables you to monitor live access logs of the Istio sidecar proxies. Each sidecar proxy outputs access information for the individual HTTP requests or HTTP/gRPC streams.

The access logs contain information about the:

  • reporter proxy,
  • source and destination workloads,
  • request,
  • response, as well as the
  • timings.

Note: For workloads that are running on virtual machines, the name of the pod is the hostname of the virtual machine.

Traffic tap using the UI

Traffic tap is also available from the dashboard. To use it, complete the following steps.

  1. Select MENU > TRAFFIC TAP.

  2. Select the reporter (namespace or workload) from the REPORTING SOURCE field.

  3. Click FILTERS to set additional filters, for example, HTTP method, destination, status code, or HTTP headers.

  4. Click START STREAMING.

  5. Select an individual log to see its details.

  6. After you are done, click PAUSE STREAMING.

Traffic tap using the CLI

These examples work out of the box with the demo application packaged with Service Mesh Manager. Change the service name and namespace to match your service.

To watch the access logs for an individual namespace, workload or pod, use the smm tap command. For example, to tap the smm-demo namespace, run:

smm tap ns/smm-demo

The output should be similar to the following. Each line shows the timestamp, the traffic direction reported by the proxy (inbound or outbound), the source and destination pods, the request, the response code, and the request duration:

✓ start tapping max-rps=0
2022-04-25T10:56:47Z outbound frontpage-v1-776d76965-b7w76 catalog-v1-5864c4b7d7-j5cmf "http GET / HTTP11" 200 121.499879ms "tcp://10.10.48.169:8080"
2022-04-25T10:56:47Z outbound frontpage-v1-776d76965-b7w76 catalog-v1-5864c4b7d7-j5cmf "http GET / HTTP11" 200 123.066985ms "tcp://10.10.48.169:8080"
2022-04-25T10:56:47Z inbound bombardier-66786577f7-sgv8z frontpage-v1-776d76965-b7w76 "http GET / HTTP11" 200 145.422013ms "tcp://10.20.2.98:8080"
2022-04-25T10:56:47Z outbound frontpage-v1-776d76965-b7w76 catalog-v1-5864c4b7d7-j5cmf "http GET / HTTP11" 200 129.024302ms "tcp://10.10.48.169:8080"
2022-04-25T10:56:47Z outbound frontpage-v1-776d76965-b7w76 catalog-v1-5864c4b7d7-j5cmf "http GET / HTTP11" 200 125.462172ms "tcp://10.10.48.169:8080"
2022-04-25T10:56:47Z inbound bombardier-66786577f7-sgv8z frontpage-v1-776d76965-b7w76 "http GET / HTTP11" 200 143.590923ms "tcp://10.20.2.98:8080"
2022-04-25T10:56:47Z outbound frontpage-v1-776d76965-b7w76 catalog-v1-5864c4b7d7-j5cmf "http GET / HTTP11" 200 121.868301ms "tcp://10.10.48.169:8080"
2022-04-25T10:56:47Z inbound bombardier-66786577f7-sgv8z frontpage-v1-776d76965-b7w76 "http GET / HTTP11" 200 145.090036ms "tcp://10.20.2.98:8080"
...

Filter on workload or pod

You can tap into specific workloads and pods, for example:

  • Tap the bookings-v1 workload in the smm-demo namespace:

    smm tap --ns smm-demo workload/bookings-v1
    
  • Tap a pod of the bookings app in the smm-demo namespace:

    POD_NAME=$(kubectl get pod -n smm-demo -l app=bookings -o jsonpath="{.items[0].metadata.name}")
    smm tap --ns smm-demo pod/$POD_NAME
    
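If you are not sure which workloads are available to tap, you can list them first. The following is a minimal sketch that assumes the demo workloads run as Kubernetes Deployments in the smm-demo namespace:

# List the workloads (Deployments) in the smm-demo namespace to pick a tap target
kubectl get deployments -n smm-demo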

At high traffic volumes it can be difficult to find the relevant or problematic entries, but you can use filter flags to display only the lines you are interested in, for example:

# Show only GET requests that resulted in a server error
smm tap ns/smm-demo --method GET --response-code 500,599

The output can be similar to:

2020-02-06T14:00:13Z outbound frontpage-v1-57468c558c-8c9cb bookings:8080 " GET / HTTP11" 503 173.099µs "tcp://10.10.111.111:8080"
2020-02-06T14:00:18Z outbound frontpage-v1-57468c558c-8c9cb bookings:8080 " GET / HTTP11" 503 157.164µs "tcp://10.10.111.111:8080"
2020-02-06T14:00:19Z outbound frontpage-v1-57468c558c-4w26k bookings:8080 " GET / HTTP11" 503 172.541µs "tcp://10.10.111.111:8080"
2020-02-06T14:00:15Z outbound frontpage-v1-57468c558c-8c9cb bookings:8080 " GET / HTTP11" 503 165.05µs "tcp://10.10.111.111:8080"
2020-02-06T14:00:15Z outbound frontpage-v1-57468c558c-8c9cb bookings:8080 " GET / HTTP11" 503 125.671µs "tcp://10.10.111.111:8080"
2020-02-06T14:00:19Z outbound frontpage-v1-57468c558c-8c9cb bookings:8080 " GET / HTTP11" 503 101.701µs "tcp://10.10.111.111:8080"
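
You can also narrow the stream to a single workload. The following is a sketch that assumes the --response-code filter shown above can be combined with a workload target:

# Show only server errors reported for the frontpage-v1 workload
smm tap --ns smm-demo workload/frontpage-v1 --response-code 500,599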

You can also change the output format to JSON and use the jq command-line tool to further filter or map the log entries, for example:

# Show pods with a specific user-agent
smm tap ns/smm-demo -o json | jq 'select(.request.userAgent=="fasthttp") | .source.name'

The output can be similar to:

"payments-v1-7c955bccdd-vt2pg"
"bookings-v1-7d8d76cd6b-f96tm"
"bookings-v1-7d8d76cd6b-f96tm"
"payments-v1-7c955bccdd-vt2pg"
"bookings-v1-7d8d76cd6b-f96tm"