tech.guitarrapc.cóm

Technical updates

Taking a look at Kubernetes traffic with Kubeshark

When you find yourself thinking "this server's traffic looks off, I want to see what's inside it," Wireshark is the tool to reach for. Command-line tools like tcpdump exist, but Wireshark, which lets you inspect packet contents in a GUI, is extremely convenient. So what do you do when you want to see the traffic of Pods running in a Kubernetes environment? Installing Wireshark inside a Pod only shows you that Pod's own traffic, and installing it on the host doesn't show you inside Kubernetes's virtual network.

Kubeshark makes this possible. This post is my notes on it.

As of 2025/11/26

Right now, kubeshark.co is having access problems due to DNS trouble. Until this stabilizes, behavior is unreliable, so beware. It's a fairly bad situation; here's hoping it settles down soon.

alt text

What is Kubeshark

Kubeshark is a Wireshark-like tool for Kubernetes environments. It captures all Pod-to-Pod communication inside a Kubernetes cluster and visualizes it at the protocol level. Kubeshark monitors Kubernetes network traffic in real time, capturing all communication and payloads between containers, Pods, nodes, and clusters.

It supports a variety of protocols including TCP, UDP, HTTP, gRPC, DNS, Redis, and Kafka, and it integrates with Kubernetes metadata (Pod names, namespaces, labels, and so on) so you can filter traffic by them. Exactly the kind of tool you'd want.

A nice dashboard is provided; you access it in a browser to inspect the traffic.

alt text

Having a network map drawn from the traffic currently being captured, with traffic volume visible, is really nice. Traffic maps are common enough, but visualizations of traffic volume are surprisingly rare.

alt text

Pricing is free up to 4 nodes and paid beyond that. It's a good fit for trying out on a small cluster.

Installation

There are two installation patterns: Homebrew and Helm.

Installing with the CLI

For Homebrew, you use the kubeshark CLI tool.

brew install kubeshark
kubeshark tap

# Clean up
kubeshark clean

Installing with Helm

Install it with Helm.1

helm repo add kubeshark https://helm.kubehq.com
helm repo update
helm upgrade --install kubeshark kubeshark/kubeshark --version 52.3.92 -n default

# Clean up
helm uninstall kubeshark -n default

Verifying startup

Once started, you should see the following. For Kubeshark, installing into the default namespace feels about right.

$ k get po -n default
NAME                                READY   STATUS    RESTARTS   AGE
kubeshark-front-6cb67f87df-dsgq8    1/1     Running   0          91m
kubeshark-hub-fc65c5867-slkkh       1/1     Running   0          91m
kubeshark-worker-daemon-set-799l2   2/2     Running   0          32m
kubeshark-worker-daemon-set-c66x6   2/2     Running   0          32m
kubeshark-worker-daemon-set-l9fsl   2/2     Running   0          91m

Access

Once installation completes, access it via port-forward. Dex and the like are also supported, but simply port-forwarding is the quickest route.

$ kubectl port-forward service/kubeshark-front 8899:80
Forwarding from 127.0.0.1:8899 -> 8080
Forwarding from [::1]:8899 -> 8080
Handling connection for 8899
Handling connection for 8899
Handling connection for 8899
Handling connection for 8899
Handling connection for 8899

Open http://localhost:8899 in your browser. After a moment the screen appears.2

Preparing workloads

To see it in action, let's deploy some suitable workloads.

  1. Guestbook application: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook?hl=ja
  2. Bookinfo application3: https://github.com/digitalocean/kubernetes-sample-apps/tree/master/bookinfo-example
  3. etcd cluster: https://etcd.io/docs/v3.5/op-guide/kubernetes/

Guestbook application

A simple web application backed by Redis. It includes a Redis leader-follower setup, so a variety of intra-Kubernetes traffic is generated.

# Deploy
$ kubectl apply -f ./guestbook.yaml

# Verify
$ kubectl get po -n guestbook
NAME                              READY   STATUS    RESTARTS   AGE
frontend-6b46678c94-2lx2v         1/1     Running   0          18m
frontend-6b46678c94-xtkpw         1/1     Running   0          20m
redis-follower-66847965fb-m9vtk   1/1     Running   0          20m
redis-follower-66847965fb-w8d7h   1/1     Running   0          20m
redis-leader-665d87459f-ctzvd     1/1     Running   0          20m

# Port-forward
$ kubectl port-forward svc/frontend 8090:80 -n guestbook

# Delete
$ kubectl delete -f ./guestbook.yaml

Click to view the guestbook.yaml definition

apiVersion: v1
kind: Namespace
metadata:
  name: guestbook
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
  namespace: guestbook
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: leader
        tier: backend
    spec:
      containers:
        - name: leader
          image: "docker.io/redis:6.0.5"
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-leader
  namespace: guestbook
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis
    role: leader
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-follower
  namespace: guestbook
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: follower
        tier: backend
    spec:
      containers:
        - name: follower
          image: us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-follower
  namespace: guestbook
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  ports:
    # the port that this service should serve on
    - port: 6379
  selector:
    app: redis
    role: follower
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: guestbook
spec:
  replicas: 2
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
        - name: php-redis
          image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5
          env:
            - name: GET_HOSTS_FROM
              value: "dns"
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: guestbook
  labels:
    app: guestbook
    tier: frontend
spec:
  # type: LoadBalancer
  type: ClusterIP
  ports:
    # the port that this service should serve on
    - port: 80
  selector:
    app: guestbook
    tier: frontend

Once deployed, access http://localhost:8090 to verify it works.
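To give Kubeshark something to capture, it helps to generate a burst of requests against the frontend. A minimal sketch, assuming the port-forward above (svc/frontend on 8090) is running in another terminal:

```shell
# Generate HTTP traffic through the Guestbook frontend so it shows up
# in Kubeshark. Assumes `kubectl port-forward svc/frontend 8090:80 -n guestbook`
# is active.
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8090/
done
```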

alt text

Bookinfo application

A simple microservices application. It generates a variety of traffic, which makes it ideal for inspecting traffic.

# Deploy
$ kubectl apply -f ./bookinfo.yaml

# Verify
$ kubectl get po -n bookinfo
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-5556dbb5b-sx5zs        1/1     Running   0          18m
productpage-v1-7d8dc8b558-pzpkf   1/1     Running   0          18m
ratings-v1-66fbfdcc7b-pcznd       1/1     Running   0          18m
reviews-v1-5d4d5544f6-4qg2b       1/1     Running   0          18m
reviews-v2-7c6c945484-ft2f4       1/1     Running   0          18m
reviews-v3-8648897d5b-j9b65       1/1     Running   0          18m

# Port-forward
$ kubectl port-forward svc/productpage 8091:9080 -n bookinfo

# Delete
$ kubectl delete -f ./bookinfo.yaml

Click to view the bookinfo.yaml definition

apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo
---
# Copyright Istio Authors
#
#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.

##################################################################################################
# This file defines the services, service accounts, and deployments for the Bookinfo sample.
#
# To apply all 4 Bookinfo services, their corresponding service accounts, and deployments:
#
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
#
# Alternatively, you can deploy any resource separately:
#
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l service=reviews # reviews Service
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l account=reviews # reviews ServiceAccount
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l app=reviews,version=v3 # reviews-v3 Deployment
##################################################################################################

##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: details
  namespace: bookinfo
  labels:
    app: details
    service: details
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-details
  namespace: bookinfo
  labels:
    account: details
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1
  namespace: bookinfo
  labels:
    app: details
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      serviceAccountName: bookinfo-details
      containers:
        - name: details
          image: docker.io/istio/examples-bookinfo-details-v1:1.16.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9080
          securityContext:
            runAsUser: 1000
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  namespace: bookinfo
  labels:
    app: ratings
    service: ratings
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings
  namespace: bookinfo
  labels:
    account: ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  namespace: bookinfo
  labels:
    app: ratings
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      serviceAccountName: bookinfo-ratings
      containers:
        - name: ratings
          image: docker.io/istio/examples-bookinfo-ratings-v1:1.16.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9080
          securityContext:
            runAsUser: 1000
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  namespace: bookinfo
  labels:
    app: reviews
    service: reviews
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-reviews
  namespace: bookinfo
  labels:
    account: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
  namespace: bookinfo
  labels:
    app: reviews
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
        - name: reviews
          image: docker.io/istio/examples-bookinfo-reviews-v1:1.16.4
          imagePullPolicy: IfNotPresent
          env:
            - name: LOG_DIR
              value: "/tmp/logs"
          ports:
            - containerPort: 9080
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: wlp-output
              mountPath: /opt/ibm/wlp/output
          securityContext:
            runAsUser: 1000
      volumes:
        - name: wlp-output
          emptyDir: {}
        - name: tmp
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2
  namespace: bookinfo
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
        - name: reviews
          image: docker.io/istio/examples-bookinfo-reviews-v2:1.16.4
          imagePullPolicy: IfNotPresent
          env:
            - name: LOG_DIR
              value: "/tmp/logs"
          ports:
            - containerPort: 9080
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: wlp-output
              mountPath: /opt/ibm/wlp/output
          securityContext:
            runAsUser: 1000
      volumes:
        - name: wlp-output
          emptyDir: {}
        - name: tmp
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  namespace: bookinfo
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
        - name: reviews
          image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.4
          imagePullPolicy: IfNotPresent
          env:
            - name: LOG_DIR
              value: "/tmp/logs"
          ports:
            - containerPort: 9080
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: wlp-output
              mountPath: /opt/ibm/wlp/output
          securityContext:
            runAsUser: 1000
      volumes:
        - name: wlp-output
          emptyDir: {}
        - name: tmp
          emptyDir: {}
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  namespace: bookinfo
  labels:
    app: productpage
    service: productpage
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
  namespace: bookinfo
  labels:
    account: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  namespace: bookinfo
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
        - name: productpage
          image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9080
          volumeMounts:
            - name: tmp
              mountPath: /tmp
          securityContext:
            runAsUser: 1000
      volumes:
        - name: tmp
          emptyDir: {}

Once deployed, access http://localhost:8091 to verify it works.
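To light up the Bookinfo services in Kubeshark, loop a few requests against the product page; each request fans out across the microservices. A sketch, assuming the port-forward above (svc/productpage on 8091) is active:

```shell
# Each /productpage request causes productpage to call details and reviews
# (and reviews to call ratings), so several service-to-service hops appear.
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8091/productpage
done
```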

alt text

etcd cluster

Builds a simple etcd cluster on Kubernetes. The etcd members talk to each other in a mesh, so you can observe DNS traffic.

# Deploy
$ kubectl apply -f ./etcd.yaml

# Verify
$ kubectl get po -n etcd
NAME     READY   STATUS    RESTARTS   AGE
etcd-0   0/1     Pending   0          2m8s
etcd-1   0/1     Pending   0          2m8s
etcd-2   0/1     Pending   0          2m8s

# Delete
$ kubectl delete -f ./etcd.yaml
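Once the Pending state above resolves and the pods are Running, you can sanity-check the cluster from inside a member. ETCDCTL_ENDPOINTS is already set in the manifest's env, so etcdctl targets the local node; a sketch:

```shell
# Check health and membership from inside etcd-0. These commands also
# generate client traffic on port 2379 that Kubeshark can capture.
kubectl exec etcd-0 -n etcd -- etcdctl endpoint health
kubectl exec etcd-0 -n etcd -- etcdctl member list
```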

Click to view the etcd.yaml definition

# kubectl apply -f ./etcd.yaml
# kubectl delete -f ./etcd.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: etcd
---
apiVersion: v1
kind: Service
metadata:
  name: etcd
  namespace: etcd
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: etcd
  ##
  ## Ideally we would use SRV records to do peer discovery for initialization.
  ## Unfortunately discovery will not work without logic to wait for these to
  ## populate in the container. This problem is relatively easy to overcome by
  ## making changes to prevent the etcd process from starting until the records
  ## have populated. The documentation on statefulsets briefly talk about it.
  ##   https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
  publishNotReadyAddresses: true
  ##
  ## The naming scheme of the client and server ports match the scheme that etcd
  ## uses when doing discovery with SRV records.
  ports:
    - name: etcd-client
      port: 2379
    - name: etcd-server
      port: 2380
    - name: etcd-metrics
      port: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: etcd
  name: etcd
spec:
  ##
  ## The service name is being set to leverage the service headlessly.
  ## https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
  serviceName: etcd
  ##
  ## If you are increasing the replica count of an existing cluster, you should
  ## also update the --initial-cluster-state flag as noted further down in the
  ## container configuration.
  replicas: 3
  ##
  ## For initialization, the etcd pods must be available to eachother before
  ## they are "ready" for traffic. The "Parallel" policy makes this possible.
  podManagementPolicy: Parallel
  ##
  ## To ensure availability of the etcd cluster, the rolling update strategy
  ## is used. For availability, there must be at least 51% of the etcd nodes
  ## online at any given time.
  updateStrategy:
    type: RollingUpdate
  ##
  ## This is label query over pods that should match the replica count.
  ## It must match the pod template's labels. For more information, see the
  ## following documentation:
  ##   https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
  selector:
    matchLabels:
      app: etcd
  ##
  ## Pod configuration template.
  template:
    metadata:
      ##
      ## The labeling here is tied to the "matchLabels" of this StatefulSet and
      ## "affinity" configuration of the pod that will be created.
      ##
      ## This example's labeling scheme is fine for one etcd cluster per
      ## namespace, but should you desire multiple clusters per namespace, you
      ## will need to update the labeling schema to be unique per etcd cluster.
      labels:
        app: etcd
      annotations:
        ##
        ## This gets referenced in the etcd container's configuration as part of
        ## the DNS name. It must match the service name created for the etcd
        ## cluster. The choice to place it in an annotation instead of the env
        ## settings is because there should only be 1 service per etcd cluster.
        serviceName: etcd
    spec:
      ##
      ## Configuring the node affinity is necessary to prevent etcd servers from
      ## ending up on the same hardware together.
      ##
      ## See the scheduling documentation for more information about this:
      ##   https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
      affinity:
        ## The podAntiAffinity is a set of rules for scheduling that describe
        ## when NOT to place a pod from this StatefulSet on a node.
        podAntiAffinity:
          ##
          ## When preparing to place the pod on a node, the scheduler will check
          ## for other pods matching the rules described by the labelSelector
          ## separated by the chosen topology key.
          requiredDuringSchedulingIgnoredDuringExecution:
            ## This label selector is looking for app=etcd
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - etcd
              ## This topology key denotes a common label used on nodes in the
              ## cluster. The podAntiAffinity configuration essentially states
              ## that if another pod has a label of app=etcd on the node, the
              ## scheduler should not place another pod on the node.
              ##   https://kubernetes.io/docs/reference/labels-annotations-taints/#kubernetesiohostname
              topologyKey: "kubernetes.io/hostname"
      ##
      ## Containers in the pod
      containers:
        ## This example only has this etcd container.
        - name: etcd
          image: quay.io/coreos/etcd:v3.5.21
          imagePullPolicy: IfNotPresent
          ports:
            - name: etcd-client
              containerPort: 2379
            - name: etcd-server
              containerPort: 2380
            - name: etcd-metrics
              containerPort: 8080
          ##
          ## These probes will fail over TLS for self-signed certificates, so etcd
          ## is configured to deliver metrics over port 8080 further down.
          ##
          ## As mentioned in the "Monitoring etcd" page, /readyz and /livez were
          ## added in v3.5.12. Prior to this, monitoring required extra tooling
          ## inside the container to make these probes work.
          ##
          ## The values in this readiness probe should be further validated, it
          ## is only an example configuration.
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 30
          ## The values in this liveness probe should be further validated, it
          ## is only an example configuration.
          livenessProbe:
            httpGet:
              path: /livez
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          env:
            ##
            ## Environment variables defined here can be used by other parts of the
            ## container configuration. They are interpreted by Kubernetes, instead
            ## of in the container environment.
            ##
            ## These env vars pass along information about the pod.
            - name: K8S_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: SERVICE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.annotations['serviceName']
            ##
            ## Configuring etcdctl inside the container to connect to the etcd node
            ## in the container reduces confusion when debugging.
            - name: ETCDCTL_ENDPOINTS
              value: $(HOSTNAME).$(SERVICE_NAME):2379
            ##
            ## TLS client configuration for etcdctl in the container.
            ## These files paths are part of the "etcd-client-certs" volume mount.
            # - name: ETCDCTL_KEY
            #   value: /etc/etcd/certs/client/tls.key
            # - name: ETCDCTL_CERT
            #   value: /etc/etcd/certs/client/tls.crt
            # - name: ETCDCTL_CACERT
            #   value: /etc/etcd/certs/client/ca.crt
            ##
            ## Use this URI_SCHEME value for non-TLS clusters.
            - name: URI_SCHEME
              value: "http"
          ## TLS: Use this URI_SCHEME for TLS clusters.
          # - name: URI_SCHEME
          # value: "https"
          ##
          ## If you're using a different container, the executable may be in a
          ## different location. This example uses the full path to help remove
          ## ambiguity to you, the reader.
          ## Often you can just use "etcd" instead of "/usr/local/bin/etcd" and it
          ## will work because the $PATH includes a directory containing "etcd".
          command:
            - /usr/local/bin/etcd
          ##
          ## Arguments used with the etcd command inside the container.
          args:
            ##
            ## Configure the name of the etcd server.
            - --name=$(HOSTNAME)
            ##
            ## Configure etcd to use the persistent storage configured below.
            - --data-dir=/data
            ##
            ## In this example we're consolidating the WAL into sharing space with
            ## the data directory. This is not ideal in production environments and
            ## should be placed in it's own volume.
            - --wal-dir=/data/wal
            ##
            ## URL configurations are parameterized here and you shouldn't need to
            ## do anything with these.
            - --listen-peer-urls=$(URI_SCHEME)://0.0.0.0:2380
            - --listen-client-urls=$(URI_SCHEME)://0.0.0.0:2379
            - --advertise-client-urls=$(URI_SCHEME)://$(HOSTNAME).$(SERVICE_NAME):2379
            ##
            ## This must be set to "new" for initial cluster bootstrapping. To scale
            ## the cluster up, this should be changed to "existing" when the replica
            ## count is increased. If set incorrectly, etcd makes an attempt to
            ## start but fail safely.
            - --initial-cluster-state=new
            ##
            ## Token used for cluster initialization. The recommendation for this is
            ## to use a unique token for every cluster. This example parameterized
            ## to be unique to the namespace, but if you are deploying multiple etcd
            ## clusters in the same namespace, you should do something extra to
            ## ensure uniqueness amongst clusters.
            - --initial-cluster-token=etcd-$(K8S_NAMESPACE)
            ##
            ## The initial cluster flag needs to be updated to match the number of
            ## replicas configured. When combined, these are a little hard to read.
            ## Here is what a single parameterized peer looks like:
            ##   etcd-0=$(URI_SCHEME)://etcd-0.$(SERVICE_NAME):2380
            - --initial-cluster=etcd-0=$(URI_SCHEME)://etcd-0.$(SERVICE_NAME):2380,etcd-1=$(URI_SCHEME)://etcd-1.$(SERVICE_NAME):2380,etcd-2=$(URI_SCHEME)://etcd-2.$(SERVICE_NAME):2380
            ##
            ## The peer urls flag should be fine as-is.
            - --initial-advertise-peer-urls=$(URI_SCHEME)://$(HOSTNAME).$(SERVICE_NAME):2380
            ##
            ## This avoids probe failure if you opt to configure TLS.
            - --listen-metrics-urls=http://0.0.0.0:8080
          ##
          ## These are some configurations you may want to consider enabling, but
          ## should look into further to identify what settings are best for you.
          # - --auto-compaction-mode=periodic
          # - --auto-compaction-retention=10m
          ##
          ## TLS client configuration for etcd, reusing the etcdctl env vars.
          # - --client-cert-auth
          # - --trusted-ca-file=$(ETCDCTL_CACERT)
          # - --cert-file=$(ETCDCTL_CERT)
          # - --key-file=$(ETCDCTL_KEY)
          ##
          ## TLS server configuration for etcdctl in the container.
          ## These files paths are part of the "etcd-server-certs" volume mount.
          # - --peer-client-cert-auth
          # - --peer-trusted-ca-file=/etc/etcd/certs/server/ca.crt
          # - --peer-cert-file=/etc/etcd/certs/server/tls.crt
          # - --peer-key-file=/etc/etcd/certs/server/tls.key
          ##
          ## This is the mount configuration.
          volumeMounts:
            - name: etcd-data
              mountPath: /data
          ##
          ## TLS client configuration for etcdctl
          # - name: etcd-client-tls
          #   mountPath: "/etc/etcd/certs/client"
          #   readOnly: true
          ##
          ## TLS server configuration
          # - name: etcd-server-tls
          #   mountPath: "/etc/etcd/certs/server"
          #   readOnly: true
      volumes:
      ##
      ## TLS client configuration
      # - name: etcd-client-tls
      #   secret:
      #     secretName: etcd-client-tls
      #     optional: false
      ##
      ## TLS server configuration
      # - name: etcd-server-tls
      #   secret:
      #     secretName: etcd-server-tls
      #     optional: false
  ##
  ## This StatefulSet will uses the volumeClaimTemplate field to create a PVC in
  ## the cluster for each replica. These PVCs can not be easily resized later.
  volumeClaimTemplates:
    - metadata:
        name: etcd-data
      spec:
        accessModes: ["ReadWriteOnce"]
        ##
        ## In some clusters, it is necessary to explicitly set the storage class.
        ## This example will end up using the default storage class.
        storageClassName: "etcd-sc"
        resources:
          requests:
            storage: 1Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: etcd-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.eks.amazonaws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  encrypted: "true"

Inspecting traffic with Kubeshark

In the Kubeshark dashboard, inspect traffic under API CALLS. Left as-is, traffic just streams by unfiltered; that's fine for idly watching.

alt text

A service map of the full traffic is something you occasionally want to look at.

alt text

Noisy traffic can be filtered out with queries. The hamburger menu at the right edge of the query box gives you a rough idea of which filters are available.

alt text

Alternatively, hovering over captured traffic pops up an option to add it to the filter.

alt text

alt text

Click through the items, press Apply, and the traffic is filtered accordingly. Handy.

node.name == "i-028d42e8587e84db7" and dst.ip == "127.0.0.1" and dst.port == "8080"

alt text

Excluding health checks

A Kubernetes staple is Pod health-check traffic. It becomes noise, so let's exclude it. In most cases checks hit /health or /healthz, so exclude those. While we're at it, exclude dns and error too. This alone makes the real traffic much easier to see.

!dns and !error and request.path != "/health" and request.path != "/healthz"

alt text

Narrowing down by namespace

Guestbook was deployed to the guestbook namespace, Bookinfo to bookinfo, and etcd to etcd. Narrowing down to these makes things even easier to read.

!dns and !error and src.namespace == "bookinfo"

alt text

Example GET request

Here's an example GET request from Bookinfo. You can see a request to /details/0 with a JSON response coming back.

{"id":0,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"}
Request and response (screenshots)

The service map at this point even shows the paths, which is impressive.

alt text

Excluding DNS traffic and errors

When you want to look at application traffic, DNS traffic and errors are often noise. Exclude them. This is also the default query.

!dns and !error

Inspecting DNS traffic

To inspect DNS traffic, add dns to the query. This is hugely effective when server-to-server connections fail to resolve their destinations.

dns

alt text

Request and response (screenshots)

You can see this DNS traffic on the service map.

alt text
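If you want to generate DNS lookups on demand rather than waiting for them to occur, a throwaway pod works. A sketch (the service names assume the workloads above are deployed):

```shell
# Run nslookup from a short-lived busybox pod; the queries to the cluster
# DNS service show up under the dns filter in Kubeshark.
kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- \
  sh -c 'nslookup frontend.guestbook.svc.cluster.local; nslookup productpage.bookinfo.svc.cluster.local'
```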

Inspecting Redis traffic

To inspect Redis traffic, add redis to the query. Redis backs the Guestbook application, so this narrows things down to Guestbook's traffic.

redis and src.namespace == "guestbook"

alt text

Request and response (screenshots)
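To generate Redis traffic on demand, you can issue commands against the leader; a write also triggers leader-to-follower replication, which is pod-to-pod traffic. A sketch, assuming the Guestbook manifest above is deployed:

```shell
# SET/GET against the Redis leader. The SET is replicated to the followers,
# producing Redis-protocol traffic visible in Kubeshark.
kubectl exec deploy/redis-leader -n guestbook -- redis-cli set messages "hello,kubeshark"
kubectl exec deploy/redis-leader -n guestbook -- redis-cli get messages
```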

Filtering to status code 404

Narrowing down to traffic whose response status code is 404 is just as easy.

!dns and request.path != "/health" and request.path != "/healthz" and response.status == 404

alt text
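To produce a 404 to filter on, request a path that doesn't exist; for instance against the Bookinfo port-forward from earlier (assuming it is still running):

```shell
# An undefined route on productpage should come back as 404, which then
# matches the response.status == 404 query above.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8091/no-such-page
```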

Summary

Considering the pricing and the impact, it feels better suited to development or reproduction environments than to constant production use, at least to start with. At the very least, it's certainly a very easy way to install and get a view of the traffic flowing inside Kubernetes.

Give it a try.

References


  1. Artifact Hub says helm repo add kubeshark https://helm.kubeshark.co, but since that host is currently unreachable due to the DNS trouble, use https://helm.kubehq.com/ from the official GitHub README. Also, likely because of the DNS trouble, the latest chart fails its startup license check and kubeshark-hub cannot start; pinning --version 52.3.92 gives a stable startup.
  2. With the DNS trouble the initial screen may not render properly, but it appears after deploying workloads or waiting a bit.
  3. Based on the manifest fetched from https://raw.githubusercontent.com/istio/istio/release-1.14/samples/bookinfo/platform/kube/bookinfo.yaml