KubeArmor and Cilium on EKS Cluster

Overview

This user journey guides you through installing KubeArmor and Cilium on an EKS cluster and verifying that they work together by applying policies to Kubernetes workloads.

Step 1: Create an EKS Cluster

Install the eksctl, AWS CLI, and Helm tools.
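A minimal install sketch for a Linux amd64 workstation (the download URLs and the `latest` release paths are assumptions; check each project's install documentation for your platform and for pinned versions):

```shell
# eksctl (from the eksctl GitHub releases)
curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

# AWS CLI v2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
unzip awscliv2.zip && sudo ./aws/install

# Helm 3 (official install script)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Sanity check: all three CLIs should print a version
eksctl version && aws --version && helm version
```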

cat eks-config.yaml 
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-ubuntu-cluster
  region: us-east-2

nodeGroups:
  - name: ng-1
    instanceType: c5a.xlarge
    amiFamily: "Ubuntu2004"
    desiredCapacity: 1
    volumeSize: 80
    ssh:
      allow: true
    preBootstrapCommands:
      - "sudo apt install -y linux-headers-$(uname -r)"

Official Link: Sample eks-config.yaml

Note:

EKS supported image types (eksctl amiFamily values):

  • AmazonLinux2

  • Ubuntu2004

  • Ubuntu1804

  • Bottlerocket

  • WindowsServer2019CoreContainer

  • WindowsServer2019FullContainer

  • WindowsServer2004CoreContainer

  • WindowsServer20H2CoreContainer

eksctl create cluster -f eks-config.yaml


aws eks --region us-east-2 update-kubeconfig --name eks-ubuntu-cluster


Step 2: KubeArmor Install

Install karmor CLI:

curl -sfL https://raw.githubusercontent.com/kubearmor/kubearmor-client/main/install.sh | sudo sh -s -- -b /usr/local/bin
karmor version
karmor install  


Karmor verify:

kubectl get pods -n kube-system | grep kubearmor

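As an additional check (a suggested step, not part of the original walkthrough), the karmor CLI can probe whether the node's kernel and runtime support KubeArmor's enforcement features:

```shell
karmor probe
```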

Step 3: Cilium Install

curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}
cilium install 

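Before checking individual pods, the cilium CLI itself can validate the whole deployment (a suggested check, not part of the original steps); `--wait` blocks until all Cilium components report ready:

```shell
cilium status --wait
```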

Cilium verify:

kubectl get pods -n kube-system | grep cilium 


Cilium Hubble enable:

cilium hubble enable


Cilium Hubble verify:

kubectl get pods -n kube-system | grep hubble

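Note that the `hubble observe` command used in Step 5 requires the standalone Hubble CLI on your workstation. An install sketch mirroring the cilium CLI install above (the `latest` release URL is an assumption; see the Hubble releases page for pinned versions):

```shell
curl -L --remote-name-all https://github.com/cilium/hubble/releases/latest/download/hubble-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-amd64.tar.gz /usr/local/bin
rm hubble-linux-amd64.tar.gz{,.sha256sum}
```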

Step 4: KubeArmor Policy

1. Create an nginx deployment

kubectl create deployment nginx --image nginx
kubectl get pods --show-labels


2. Explore the policy

cat nginx-kubearmor-policy.yaml 
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: nginx-kubearmor-policy
  # namespace: accuknox-agents # change to your namespace
spec:
  tags: ["MITRE", "T1082"]
  message: "System owner discovery command is blocked"
  selector:
    matchLabels:
      app: nginx # use your own label here
  process:
    severity: 3
    matchPaths:
      - path: /usr/bin/who
      - path: /usr/bin/w
      - path: /usr/bin/id
      - path: /usr/bin/whoami
  action: Block


3. Apply the policy

kubectl apply -f nginx-kubearmor-policy.yaml  

Note: The policy applies only to pods whose labels match the selector, e.g. (app: nginx).

4. Policy violation

kubectl exec -it nginx-766b69bd4b-8jttd -- bash   # substitute your pod name from "kubectl get pods"

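Inside the pod shell, running any of the binaries listed under matchPaths should now fail (the exact error text may vary by container runtime):

```shell
whoami
# expected to be denied by nginx-kubearmor-policy (action: Block)
```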

5. KubeArmor SVC port forward to monitor the logs

kubectl port-forward -n kube-system svc/kubearmor --address 0.0.0.0 --address :: 32767:32767


6. Verifying policy violation logs

karmor log

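The same policy structure also covers file access. As an illustration (a hypothetical policy, not part of the original walkthrough), this sketch uses KubeArmor's file / matchDirectories rules to make /etc/ read-only in the same nginx pods:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: nginx-block-etc-write
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    severity: 5
    matchDirectories:
      - dir: /etc/
        recursive: true
        readOnly: true   # with action Block, writes are denied while reads stay allowed
  action: Block
```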

Step 5: Cilium Policy

1. Create the tiefighter and deathstar deployments

cat tightfighter-deathstart-app.yaml
apiVersion: v1
kind: Service
metadata:
  name: deathstar
  labels:
    app.kubernetes.io/name: deathstar
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    org: empire
    class: deathstar
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deathstar
  labels:
    app.kubernetes.io/name: deathstar
spec:
  replicas: 2
  selector:
    matchLabels:
      org: empire
      class: deathstar
  template:
    metadata:
      labels:
        org: empire
        class: deathstar
        app.kubernetes.io/name: deathstar
    spec:
      containers:
      - name: deathstar
        image: docker.io/cilium/starwars
---
apiVersion: v1
kind: Pod
metadata:
  name: tiefighter
  labels:
    org: empire
    class: tiefighter
    app.kubernetes.io/name: tiefighter
spec:
  containers:
  - name: spaceship
    image: docker.io/tgraf/netperf
---
apiVersion: v1
kind: Pod
metadata:
  name: xwing
  labels:
    app.kubernetes.io/name: xwing
    org: alliance
    class: xwing
spec:
  containers:
  - name: spaceship
    image: docker.io/tgraf/netperf

kubectl apply -f tightfighter-deathstart-app.yaml 

kubectl get pods --show-labels


2. Explore the policy

cat sample-cilium-ingress-policy.yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1-ingress"
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      class: deathstar
  ingress:
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"


3. Apply the policy

kubectl apply -f sample-cilium-ingress-policy.yaml 


4. Policy violation

kubectl get svc deathstar   # note the ClusterIP; 10.100.255.199 in this example
kubectl exec -n default tiefighter -- curl -s -XPOST 10.100.255.199/v1/request-landing   # allowed by rule1-ingress
kubectl exec -n default tiefighter -- curl -s -XPOST 10.100.255.199/v1/bye               # blocked at L7: "Access denied"

5. Cilium SVC port forward to monitor the logs

cilium hubble port-forward


6. Monitoring the Cilium violation logs

hubble observe -f --protocol http --pod tiefighter

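The L7 rule above can also be combined with identity-based L3/L4 restrictions. As a sketch (modeled on the standard Cilium Star Wars demo, not applied in this walkthrough), this policy admits only pods labeled org: empire to the deathstar on port 80, so the xwing pod cannot connect at all:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1-l3l4"
spec:
  description: "Allow only empire ships to reach the deathstar on port 80"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
```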
