KubeArmor and Cilium on a K3s Cluster

Overview

This user journey guides you through installing KubeArmor and Cilium on K3s and verifying their compatibility by applying policies to Kubernetes workloads.

Step 1: Install VirtualBox

sudo apt-get install virtualbox -y

Step 2: Install Vagrant

wget https://releases.hashicorp.com/vagrant/2.2.14/vagrant_2.2.14_x86_64.deb
sudo apt install ./vagrant_2.2.14_x86_64.deb
vagrant --version
vagrant plugin install vagrant-scp
vagrant plugin list

Step 3: Configure Ubuntu VMs with Vagrant and VirtualBox

nano Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

ENV['VAGRANT_NO_PARALLEL'] = 'yes'

Vagrant.configure(2) do |config|

  NodeCount = 2

  (1..NodeCount).each do |i|
    config.vm.define "ubuntuvm#{i}" do |node|
      node.vm.box               = "generic/ubuntu2004"
      node.vm.box_check_update  = false
      node.vm.box_version       = "3.3.0"
      node.vm.hostname          = "ubuntuvm#{i}.example.com"
      node.vm.network "private_network", ip: "192.168.56.4#{i}"
      node.vm.provider "virtualbox" do |v|
        v.name    = "ubuntuvm#{i}"
        v.memory  = 1024
        v.cpus    = 1
      end

    end

  end

end

vagrant up

vagrant ssh ubuntuvm1

Step 4: Install K3s on the Ubuntu VMs

vagrant ssh ubuntuvm1
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-ip=192.168.56.41 --flannel-iface=eth1" sh -s - --write-kubeconfig-mode 644

Note: node-ip=192.168.56.41 is the IP address of ubuntuvm1.

systemctl status k3s
which kubectl
kubectl get nodes
cat /var/lib/rancher/k3s/server/token

exit
vagrant scp ubuntuvm1:/etc/rancher/k3s/k3s.yaml ~/.kube/config
nano ~/.kube/config

In ~/.kube/config, change the server address from 127.0.0.1 to 192.168.56.41 (the ubuntuvm1 IP).
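
If you prefer to make the change non-interactively, a quick sketch (assuming the default kubeconfig written by k3s and the ubuntuvm1 IP used above):

sed -i 's/127.0.0.1/192.168.56.41/' ~/.kube/config
grep 'server:' ~/.kube/config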

kubectl get nodes

Log in to ubuntuvm2

vagrant ssh ubuntuvm2
ip a show
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-ip=192.168.56.42 --flannel-iface=eth1" K3S_URL="https://192.168.56.41:6443" K3S_TOKEN="K107b9ecf5076c2a0c79760aa0e545d7464f11bbd27c643b1f1b8eef34758af1b89::server:985d51052287fd7554e989bd742c7f31"  sh -

Note: The server IP and token will differ in your environment; use the values from the previous step.

exit
kubectl get nodes

Step 5: Karmor Install

Install Karmor CLI:

curl -sfL https://raw.githubusercontent.com/kubearmor/kubearmor-client/main/install.sh | sudo sh -s -- -b /usr/local/bin
karmor install  

Karmor verify:

kubectl get pods -n kube-system | grep kubearmor
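
Optionally, the karmor CLI's probe subcommand reports whether the cluster supports and is running KubeArmor:

karmor probe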

Step 6: Cilium Install

curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}

cilium install 

Cilium verify:

kubectl get pods -n kube-system | grep cilium 
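
The Cilium CLI also provides a built-in health check that waits until all components are ready:

cilium status --wait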

Cilium Hubble enable:

cilium hubble enable

Cilium Hubble verify:

kubectl get pods -n kube-system | grep hubble

Step 7: KubeArmor Policy

1. Create an nginx deployment

kubectl create deployment nginx --image nginx
kubectl get pods --show-labels
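
Optionally, wait for the deployment to finish rolling out before applying the policy:

kubectl rollout status deployment/nginx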

2. Explore the policy

nano nginx-kubearmor-policy.yaml  
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: nginx-kubearmor-policy
  # namespace: accuknox-agents # change to your namespace
spec:
  tags: ["MITRE", "T1082"]
  message: "System owner discovery command is blocked"
  selector:
    matchLabels:
      app: nginx # use your own label here
  process:
    severity: 3
    matchPaths:
      - path: /usr/bin/who
      - path: /usr/bin/w
      - path: /usr/bin/id
      - path: /usr/bin/whoami
  action: Block

3. Apply the policy

kubectl apply -f nginx-kubearmor-policy.yaml  
kubectl get ksp

Note: The policy applies only to pods with matching labels, e.g. app: nginx.
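
To confirm that the selector matches your workload, list the pods by label:

kubectl get pods -l app=nginx --show-labels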

4. Policy violation

kubectl exec -it nginx-766b69bd4b-8jttd -- bash  
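
The pod name above will differ in your cluster; use the name shown by kubectl get pods. Inside the pod shell, running any of the blocked binaries should now be denied, for example:

whoami
id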

KubeArmor SVC port forward to monitor the logs

kubectl port-forward -n kube-system svc/kubearmor --address 0.0.0.0 --address :: 32767:32767

Verifying policy violation logs

karmor log

Step 8: Cilium Policy

1. Create the tiefighter & deathstar deployments

nano tiefighter-deathstar-app.yaml
apiVersion: v1
kind: Service
metadata:
  name: deathstar
  labels:
    app.kubernetes.io/name: deathstar
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    org: empire
    class: deathstar
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deathstar
  labels:
    app.kubernetes.io/name: deathstar
spec:
  replicas: 2
  selector:
    matchLabels:
      org: empire
      class: deathstar
  template:
    metadata:
      labels:
        org: empire
        class: deathstar
        app.kubernetes.io/name: deathstar
    spec:
      containers:
      - name: deathstar
        image: docker.io/cilium/starwars
---
apiVersion: v1
kind: Pod
metadata:
  name: tiefighter
  labels:
    org: empire
    class: tiefighter
    app.kubernetes.io/name: tiefighter
spec:
  containers:
  - name: spaceship
    image: docker.io/tgraf/netperf
---
apiVersion: v1
kind: Pod
metadata:
  name: xwing
  labels:
    app.kubernetes.io/name: xwing
    org: alliance
    class: xwing
spec:
  containers:
  - name: spaceship
    image: docker.io/tgraf/netperf

kubectl apply -f tiefighter-deathstar-app.yaml
kubectl get pods --show-labels
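
Optionally, wait for the demo workloads to become ready before applying the policy:

kubectl rollout status deployment/deathstar
kubectl wait --for=condition=Ready pod/tiefighter pod/xwing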

2. Explore the policy

nano sample-cilium-ingress-policy.yaml                                
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1-ingress"
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      class: deathstar
  ingress:
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"

3. Apply the policy

kubectl apply -f sample-cilium-ingress-policy.yaml   

kubectl get cnp

4. Policy violation

kubectl get svc
kubectl exec -n default tiefighter -- curl -s -XPOST 10.100.255.199/v1/request-landing 
kubectl exec -n default tiefighter -- curl -s -XPOST 10.100.255.199/v1/bye 
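
The ClusterIP shown above is specific to this cluster; you can target the service by its DNS name instead, for example:

kubectl exec -n default tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing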

Hubble Relay port forward to monitor the logs

cilium hubble port-forward
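
Keep this port-forward running in a separate terminal; the Hubble CLI installed in the next step connects to the relay through it (localhost:4245 by default).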

Step 9: Install the Hubble CLI Client

export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-amd64.tar.gz.sha256sum

sudo tar xzvfC hubble-linux-amd64.tar.gz /usr/local/bin
rm hubble-linux-amd64.tar.gz{,.sha256sum}
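
With the relay port-forward from Step 8 still running, verify that the CLI can reach Hubble:

hubble status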

Step 10: Monitoring the Cilium violation logs

hubble observe -f --protocol http --pod tiefighter
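
To narrow the output to denied requests, you can additionally filter by verdict (a sketch; the exact verdict reported for L7 denials may vary):

hubble observe -f --protocol http --pod tiefighter --verdict DROPPED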
