
Cilium Policy Statistics

A challenge faced by users of the Cilium ecosystem is that the Cilium network log (cilium monitor) does not provide any mechanism to collect statistics about the flows that are allowed or denied by a particular policy.

To solve this problem, Accuknox has added improvements to the Cilium network logs. In the patched version, the cilium monitor logs provide the following information about each flow.

  • PolicyName - Name of the policy that the flow matched.
  • action - Action taken (ALLOW or DENY).

Users can build tools/scripts which utilize the cilium-monitor logs to collect per-policy statistics.


The following steps show how to collect logs from Cilium that contain the above-mentioned information.

1. Setup

Refer to our Cilium installation guide and install Cilium in your kubernetes cluster.

2. Deploy Demo Application

kubectl create -f
The demo application has the following topology.


3. Apply Network Policy

Let's apply a policy that allows the tiefighter pod to access the deathstar pod's HTTP API.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L3-L4 policy to restrict deathstar access to empire ships only"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
To apply the above policy, execute the following command.
kubectl apply -f

4. Collect logs

a. Identify the Cilium Agent Pod

First, we have to identify the cilium agent pod that is running on the same node as the deathstar pod.

kubectl get pods -l class=deathstar -o wide
NAME                        READY   STATUS    RESTARTS      AGE     IP           NODE         NOMINATED NODE   READINESS GATES
deathstar                   1/1     Running   1 (17h ago)   2d13h   k8s-master   <none>           <none>

kubectl -n kube-system get pods -l k8s-app=cilium -o wide
cilium-lrg5j   1/1     Running   1 (17h ago)   35h   k8s-worker1   <none>           <none>
cilium-m89bs   1/1     Running   2 (17h ago)   35h    k8s-master    <none>           <none>

From the above output, we can identify that cilium-m89bs is running on the same node as the deathstar pod.
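As a sketch, the two lookups above can also be combined into a short snippet that resolves the node automatically; the pod name deathstar is taken from the output above, and spec.nodeName is a standard kubectl field selector for pods:

```shell
# Find the node the deathstar pod is scheduled on, then list the
# cilium agent pod running on that same node.
NODE=$(kubectl get pod deathstar -o jsonpath='{.spec.nodeName}')
kubectl -n kube-system get pods -l k8s-app=cilium \
  --field-selector spec.nodeName="$NODE" -o name
```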

b. Run the Cilium Monitor

Run the cilium monitor command inside the identified cilium pod.

kubectl -n kube-system exec -it cilium-m89bs -- cilium monitor -t policy-verdict

c. Access Deathstar's APIs

Open another shell and execute the following command.

kubectl exec -it tiefighter -- curl -m 5 -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
kubectl exec -it xwing -- curl -m 5 -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
curl: (28) Connection timed out after 5001 milliseconds
command terminated with exit code 28

d. Cilium Monitor Logs

In the cilium monitor output, you should see entries similar to the following.

Policy verdict log: flow 0x0, PolicyName rule1, local EP ID 262, remote ID 3884, proto 6, ingress, action ALLOW auditMode:false, match all, -> tcp SYN

Policy verdict log: flow 0x0, PolicyName implicit-default-deny, local EP ID 262, remote ID 30448, proto 6, ingress, action DENY auditMode:true, match all, -> tcp SYN

These logs indicate the following:

  1. tiefighter --> deathstar: Policy verdict is ALLOW because of policy rule1.

  2. xwing --> deathstar: Policy verdict is DENY because of implicit-default-deny, i.e., in Cilium, if no allow policy matches a flow, the default action is DENY.

Users can utilize the above information from the cilium monitor logs and build tools on top of it to collect per-policy statistics.
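As a minimal sketch of such a tool, the awk script below tallies flows per (PolicyName, action) pair from saved monitor output. The sample lines mirror the log format shown above; the file path /tmp/verdicts.log is just an illustrative name.

```shell
# Illustrative sample: policy-verdict lines in the format shown above,
# saved to a scratch file for offline analysis.
cat > /tmp/verdicts.log <<'EOF'
Policy verdict log: flow 0x0, PolicyName rule1, local EP ID 262, remote ID 3884, proto 6, ingress, action ALLOW auditMode:false, match all, -> tcp SYN
Policy verdict log: flow 0x0, PolicyName implicit-default-deny, local EP ID 262, remote ID 30448, proto 6, ingress, action DENY auditMode:true, match all, -> tcp SYN
Policy verdict log: flow 0x0, PolicyName rule1, local EP ID 262, remote ID 3884, proto 6, ingress, action ALLOW auditMode:false, match all, -> tcp SYN
EOF

# Count flows per (PolicyName, action) pair.
awk -F', ' '/^Policy verdict log:/ {
  policy = ""; action = ""
  for (i = 1; i <= NF; i++) {
    # Pull the policy name out of the "PolicyName <name>" field.
    if ($i ~ /^PolicyName /) { p = $i; sub(/^PolicyName /, "", p); policy = p }
    # Pull the verdict out of the "action <VERDICT> auditMode:..." field.
    if ($i ~ /^action /)     { split($i, a, " "); action = a[2] }
  }
  if (policy != "") count[policy " " action]++
}
END { for (k in count) print k, count[k] }' /tmp/verdicts.log | sort
# -> implicit-default-deny DENY 1
# -> rule1 ALLOW 2
```

In a live cluster, the same awk program can be fed from kubectl exec ... cilium monitor -t policy-verdict instead of a saved file.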
