KubeArmor and Cilium on SUSE Linux Enterprise Server 15

Overview

This user journey guides you through installing KubeArmor and Cilium on SUSE Linux Enterprise Server 15 with kernel version 5.3, and verifying their compatibility by applying policies to the workloads.

Step 1: Install etcd in control plane VM

Create the etcd system group and user:

groupadd --system etcd

useradd --home-dir "/var/lib/etcd" \
      --system \
      --shell /bin/false \
      -g etcd \
      etcd
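
To confirm the account was created, you can optionally run:

id etcd
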
Create the necessary directories:

mkdir -p /etc/etcd
chown etcd:etcd /etc/etcd
mkdir -p /var/lib/etcd
chown etcd:etcd /var/lib/etcd

Determine your system architecture:

uname -m


Download and install the etcd tarball for x86_64/amd64:

ETCD_VER=v3.2.7
rm -rf /tmp/etcd && mkdir -p /tmp/etcd
curl -L https://github.com/coreos/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz \
      -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz \
      -C /tmp/etcd --strip-components=1
cp /tmp/etcd/etcd /usr/bin/etcd
cp /tmp/etcd/etcdctl /usr/bin/etcdctl

Or download and install the etcd tarball for arm64:

ETCD_VER=v3.2.7
rm -rf /tmp/etcd && mkdir -p /tmp/etcd
curl -L https://github.com/coreos/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-arm64.tar.gz \
      -o /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz
tar xzvf /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz \
      -C /tmp/etcd --strip-components=1
cp /tmp/etcd/etcd /usr/bin/etcd
cp /tmp/etcd/etcdctl /usr/bin/etcdctl
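
Alternatively, a minimal combined sketch that selects the tarball from the output of uname -m (same steps as above; assumes curl and tar are available):

ETCD_VER=v3.2.7
case "$(uname -m)" in
  x86_64)  ETCD_ARCH=amd64 ;;
  aarch64) ETCD_ARCH=arm64 ;;
  *) echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac
rm -rf /tmp/etcd && mkdir -p /tmp/etcd
curl -L https://github.com/coreos/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-${ETCD_ARCH}.tar.gz \
      -o /tmp/etcd-${ETCD_VER}-linux-${ETCD_ARCH}.tar.gz
tar xzvf /tmp/etcd-${ETCD_VER}-linux-${ETCD_ARCH}.tar.gz -C /tmp/etcd --strip-components=1
cp /tmp/etcd/etcd /usr/bin/etcd
cp /tmp/etcd/etcdctl /usr/bin/etcdctl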

Create and edit the etcd configuration file:

sudo vi /etc/etcd/etcd.conf.yml
name: controller
data-dir: /var/lib/etcd
initial-cluster-state: 'new'
initial-cluster-token: 'etcd-cluster-01'
initial-cluster: controller=http://0.0.0.0:2380
initial-advertise-peer-urls: http://0.0.0.0:2380
advertise-client-urls: http://0.0.0.0:2379
listen-peer-urls: http://0.0.0.0:2380
listen-client-urls: http://0.0.0.0:2379

Create and edit the systemd service file:

sudo vi /usr/lib/systemd/system/etcd.service
[Unit]
After=network.target
Description=etcd - highly-available key value store

[Service]
# Uncomment this on ARM64.
# Environment="ETCD_UNSUPPORTED_ARCH=arm64"
LimitNOFILE=65536
Restart=on-failure
Type=notify
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf.yml
User=etcd

[Install]
WantedBy=multi-user.target

Reload systemd service files with:

systemctl daemon-reload

Enable and start the etcd service:

systemctl enable etcd
systemctl start etcd
systemctl status etcd
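
Optionally, confirm that etcd is serving requests on the client URL configured above (ETCDCTL_API=3 selects the v3 API on etcd 3.2):

ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 endpoint health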


Step 2: Install KVM-Service in the control plane VM

Download the latest RPM package:

wget https://github.com/kubearmor/kvm-service/releases/download/0.1/kvmservice_0.1_linux-amd64.rpm

Install the package:

zypper install kvmservice_0.1_linux-amd64.rpm

Check the service status:

systemctl status kvmservice

Step 3: Install Karmor in the control plane VM

curl -sfL https://raw.githubusercontent.com/kubearmor/kubearmor-client/main/install.sh | sudo sh -s -- -b /usr/local/bin
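
To confirm the client is installed and on your PATH:

karmor version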

Step 4: Onboard VMs using Karmor

cat kvmpolicy1.yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorVirtualMachine
metadata:
  name: testvm1
  labels:
    name: vm1
    vm: true

Run this command to add the VM:

karmor vm add kvmpolicy1.yaml


To list the onboarded VMs:

karmor vm list


Step 5: Generate installation scripts for configured worker VMs

Generate the installation script for the configured VM by running the following command. The script is saved as <vm-name>.sh (here, testvm1.sh):

karmor vm --kvms getscript -v testvm1


Step 6: Execute the installation script in VMs

Note: Docker needs to be installed before running the script.

Install the prerequisites:

sudo zypper ref
sudo zypper in bcc-tools bcc-examples


Determine the full version of the kernel-default-devel package that matches the running kernel:

fullkver=$(zypper se -s kernel-default-devel | awk '{split($0,a,"|"); print a[4]}' | grep $(uname -r | awk '{gsub("-default", "");print}') | sed -e 's/^[ \t]*//' | tail -n 1)
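
As a quick sanity check, the selected package version should correspond to the running kernel:

echo "running kernel:      $(uname -r)"
echo "devel package match: $fullkver"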


Install the matching kernel-default-devel package:

zypper -n install -f -y kernel-default-devel="$fullkver"


Install the AppArmor utilities and profiles:

zypper in apparmor-utils
zypper in apparmor-profiles


Restart AppArmor and open the generated installation script:

systemctl restart apparmor.service
vi testvm1.sh

Comment out the following line in the script and save it:

#sudo docker run --name kubearmor $DOCKER_OPTS $KUBEARMOR_IMAGE $KUBEARMOR_OPTS
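
If you prefer to do this non-interactively, a small sketch that comments out that line in place (it assumes the docker run command starts at the beginning of a line in the generated script):

sed -i 's|^sudo docker run --name kubearmor|#&|' testvm1.sh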


Note: An upcoming release will remove the need to comment out this line.

Execute the installation script:

Copy the generated installation scripts to the appropriate worker VMs using scp or rsync, then execute the scripts to run Cilium.
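
For example, with scp (the user and host below are placeholders; substitute your worker VM's details):

scp testvm1.sh <user>@<worker-vm-ip>:~/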

The script downloads the Cilium Docker images and runs them as containers on each VM. The Cilium agent on each VM connects to the KVM-Service control plane to register itself and to receive information about the other VMs in the cluster, their labels and IPs, and the configured security policies.

Execute the script on the worker VM by running the following command:

./testvm1.sh


Note: Make sure kvm-service is running on the control plane VM. To onboard more worker VMs, repeat Steps 4, 5, and 6.

You can verify that the Cilium container is running with the following command:

sudo docker ps
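
To narrow the output to the Cilium agent container started by the script:

sudo docker ps --filter name=cilium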


Step 7: Install KubeArmor on worker VMs

Download the latest release of KubeArmor:

wget https://github.com/kubearmor/KubeArmor/releases/download/v0.3.1/kubearmor_0.3.1_linux-amd64.rpm

Install the package:

zypper install kubearmor_0.3.1_linux-amd64.rpm


Start, enable, and check the status of KubeArmor:

sudo systemctl start kubearmor
sudo systemctl enable kubearmor
sudo systemctl status kubearmor
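
If the service fails to start, the systemd journal is the first place to look:

sudo journalctl -u kubearmor -f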


Step 8: Apply and Verify KubeArmor host policy

cat khp-example-vmname.yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: khp-02
spec:
  severity: 5
  file:
    matchPaths:
    - path: /proc/cpuinfo
  action: Block

Run this command to apply the policy:

karmor vm policy add khp-example-vmname.yaml

Step 9: Policy Violation

With the above policy enforced in the VM, if a user tries to access the /proc/cpuinfo file, they will get a permission denied error, and karmor log will show an alert for the blocked file access, as shown below.

cat /proc/cpuinfo


Verifying Policy Violation Logs:

karmor log


Step 10: Apply and Verify Cilium network policy

1. Allow connectivity to the control plane (the control plane IP on port 2379):

cat vm-allow-control-plane.yaml
kind: CiliumNetworkPolicy
metadata:
  name: "vm-allow-control-plane"
spec:
  description: "Policy to allow traffic to kv-store"
  nodeSelector:
    matchLabels:
      name: vm1
  egress:
  - toCIDR:
    - 10.138.0.5/32
    toPorts:
    - ports:
      - port: "2379"
        protocol: TCP

2. For SSH connectivity, allow port 22, and allow 169.254.169.254 on port 80:

cat vm-allow-ssh.yaml
kind: CiliumNetworkPolicy
metadata:
  name: "vm-allow-ssh"
spec:
  description: "Policy to allow SSH"
  nodeSelector:
    matchLabels:
      name: vm1
  egress:
  - toPorts:
    - ports:
      - port: "22"
        protocol: TCP
  - toCIDR:
    - 169.254.169.254/32
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP

3. This policy enables DNS visibility in the VM (DNS traffic on port 53 is allowed and observed):

cat vm-dns-visibility.yaml
kind: CiliumNetworkPolicy
metadata:
  name: "vm-dns-visibility"
spec:
  description: "Policy to enable DNS visibility"
  nodeSelector:
    matchLabels:
      name: vm1
  egress:
  - toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"

4. This policy allows egress from the VM to “www.google.co.in” alone:

cat vm-allow-www-google-co-in.yaml 
kind: CiliumNetworkPolicy
metadata:
  name: "vm-allow-www.google.co.in"
spec:
  description: "Policy to allow traffic to www.google.co.in"
  nodeSelector:
    matchLabels:
      name: vm1
  egress:
  - toFQDNs:
    - matchName: www.google.co.in
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      - port: "443"
        protocol: TCP

Run these commands to apply the policies:

karmor vm --kvms policy add vm-allow-control-plane.yaml
karmor vm --kvms policy add vm-allow-ssh.yaml 
karmor vm --kvms policy add vm-dns-visibility.yaml
karmor vm --kvms policy add vm-allow-www-google-co-in.yaml


Step 11: Policy Violation on the worker node

Access to www.google.co.in is allowed by the FQDN policy:

curl http://www.google.co.in/

Access to any other destination, such as go.dev, is blocked:

curl https://go.dev/

Verifying Policy Violation Logs:

docker exec -it cilium hubble observe -f -t policy-verdict
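
Alternatively, dropped packets can be watched directly from the agent (the container name cilium matches the one used above):

docker exec -it cilium cilium monitor --type drop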

