
Running KubeArmor/Cilium on VMs Using a Non-K8s Control Plane

KubeArmor is a runtime security engine that protects the host/VM from unknown threats. Cilium is a network security engine that can be used to enforce network policies in VMs.

With KubeArmor and Cilium running on a VM, it is possible to enforce host-based security policies and secure the VM at both the system and network levels.

Why do we need a control plane?

With KubeArmor running on multiple VMs, it is difficult and time-consuming to enforce policies on each VM individually. In addition, Cilium needs a control plane to distribute information about identities, labels and IPs to the Cilium agents running on the VMs.

Hence, the solution is to manage all the VMs in the network from a single control plane.

KVM-Service

AccuKnox's KVM-Service (KubeArmor Virtual Machine Service) is an application designed to act as a control plane in a non-k8s environment and manage KubeArmor and Cilium running on multiple VMs. Using KVM-Service, users can manage their VMs, associate labels with them and enforce security policies.

Note: KVM-Service requires that all the managed VMs be within the same network.

Design of KVM-Service

[KVM-Service architecture diagram]

Components Involved and Their Use

  • Non-K8s Control Plane
    • etcd : A key-value store that holds the label information, IPs and the unique identity of each configured VM.
    • KVM-Service : Manages connections with the VMs and handles VM onboarding/offboarding, label management and policy enforcement.
    • Karmor (Support utility) : A CLI utility that interacts with KVM-Service for VM onboarding/offboarding, policy enforcement and label management.
  • VMs : The actual VMs connected to the network.

Installation Guide

The following steps describe the process of onboarding VMs and enforcing policies on them using KVM-Service:

  1. Install and run KVM-Service and its dependencies on a VM (or a standalone Linux machine). This VM acts as the non-k8s control plane.
  2. Onboard the workload VMs.
  3. Download the VM installation scripts.
  4. Run the installation scripts.
  5. Enforce policies on the VMs from the control plane.
  6. Manage labels.

Note: All the steps were tested and carried out on a Debian-based OS distribution.

Step 1: Install etcd on the control plane

  1. Install etcd using the command below.

    sudo apt-get install etcd
    
    Output:
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following NEW packages will be installed:
      etcd
    0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded.
    Need to get 2,520 B of archives.
    After this operation, 16.4 kB of additional disk space will be used.
    Get:1 http://in.archive.ubuntu.com/ubuntu focal/universe amd64 etcd all 3.2.26+dfsg-6 [2,520 B]
    Fetched 2,520 B in 0s (9,080 B/s)
    Selecting previously unselected package etcd.
    (Reading database ... 246471 files and directories currently installed.)
    Preparing to unpack .../etcd_3.2.26+dfsg-6_all.deb ...
    Unpacking etcd (3.2.26+dfsg-6) ...
    Setting up etcd (3.2.26+dfsg-6) ...
    

  2. Once etcd is installed, configure the following values in /etc/default/etcd as shown below.

    ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
    ETCD_ADVERTISE_CLIENT_URLS=http://0.0.0.0:2379
    

  3. Restart etcd

    sudo service etcd restart
    

  4. Check the status

    sudo service etcd status
    
    Output:
    ● etcd.service - etcd - highly-available key value store
         Loaded: loaded (/lib/systemd/system/etcd.service; enabled; vendor preset: enabled)
         Active: active (running) since Sun 2022-01-16 12:23:58 IST; 30min ago
           Docs: https://github.com/coreos/etcd
                 man:etcd
       Main PID: 1087 (etcd)
          Tasks: 24 (limit: 18968)
         Memory: 84.2M
         CGroup: /system.slice/etcd.service
                 └─1087 /usr/bin/etcd
    
    Jan 16 12:23:57 LEGION etcd[1087]: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
    Jan 16 12:23:58 LEGION etcd[1087]: 8e9e05c52164694d is starting a new election at term 88
    Jan 16 12:23:58 LEGION etcd[1087]: 8e9e05c52164694d became candidate at term 89
    Jan 16 12:23:58 LEGION etcd[1087]: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 89
    Jan 16 12:23:58 LEGION etcd[1087]: 8e9e05c52164694d became leader at term 89
    Jan 16 12:23:58 LEGION etcd[1087]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 89
    Jan 16 12:23:58 LEGION etcd[1087]: published {Name:LEGION ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
    Jan 16 12:23:58 LEGION etcd[1087]: ready to serve client requests
    Jan 16 12:23:58 LEGION systemd[1]: Started etcd - highly-available key value store.
    Jan 16 12:23:58 LEGION etcd[1087]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
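
Optionally, you can also confirm that etcd answers on the advertised client URL. A minimal check, assuming etcd's standard /health endpoint (the exact response format depends on the etcd version):

    curl http://127.0.0.1:2379/health
    
    Output (illustrative):
    {"health": "true"}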
    

Step 2: Install KVM-Service on the control plane

  1. Clone the KVM-Service code and check out the non-k8s branch.

    git clone https://github.com/kubearmor/kvm-service.git
    
    Output:
    Cloning into 'kvm-service'...
    remote: Enumerating objects: 1252, done.
    remote: Counting objects: 100% (215/215), done.
    remote: Compressing objects: 100% (111/111), done.
    remote: Total 1252 (delta 122), reused 132 (delta 102), pack-reused 1037
    Receiving objects: 100% (1252/1252), 139.62 MiB | 1.70 MiB/s, done.
    Resolving deltas: 100% (702/702), done.
    
    Change into the kvm-service directory:
    cd kvm-service/
    
    Run this command:
    git checkout non-k8s
    
    Output:
    Branch 'non-k8s' set up to track remote branch 'non-k8s' from 'origin'.
    Switched to a new branch 'non-k8s'

  2. Navigate to kvm-service/src/service/ and execute the following command to compile the KVM-Service code.

    make
    
    Output:
    logname: no login name
    cd /home/wazir/go/src/github.com/kubearmor/kvm-service/src/service; go mod tidy
    cd /home/wazir/go/src/github.com/kubearmor/kvm-service/src/service; go build -ldflags "-w -s -X main.BuildDate=2022-03-01T10:00:34Z -X main.GitCommit=beb3ab8 -X main.GitBranch=non-k8s -X main.GitState=dirty -X main.GitSummary=beb3ab8" -o kvmservice main.go
    

  3. Once compilation is successful, run KVM-Service using the following command.

    sudo ./kvmservice --non-k8s 2> /dev/null 
    
    Output:
    2022-01-16 13:06:16.304185      INFO    BUILD-INFO: commit:901ea26, branch: non-k8s, date: 2022-01-16T07:35:51Z, version: 
    2022-01-16 13:06:16.304278      INFO    Initializing all the KVMS daemon attributes
    2022-01-16 13:06:16.304325      INFO    Establishing connection with etcd service => http://localhost:2379
    2022-01-16 13:06:16.333682      INFO    Initialized the ETCD client!
    2022-01-16 13:06:16.333748      INFO    Initiliazing the KVMServer => podip:192.168.0.14 clusterIP:192.168.0.14 clusterPort:32770
    2022-01-16 13:06:16.333771      INFO    KVMService attributes got initialized
    2022-01-16 13:06:17.333915      INFO    Starting HTTP Server
    2022-01-16 13:06:17.334005      INFO    Starting Cilium Node Registration Observer
    2022-01-16 13:06:17.334040      INFO    Triggered the keepalive ETCD client
    2022-01-16 13:06:17.334077      INFO    Starting gRPC server
    2022-01-16 13:06:17.334149      INFO    ETCD: Getting raw values key:cilium/state/noderegister/v1
    
    2022-01-16 13:06:17.335092      INFO    Successfully KVMServer Listening on port 32770
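
As a quick sanity check, you can confirm that the gRPC port from the log above (32770) is actually listening; for example:

    # Verify kvmservice is listening on its gRPC port (32770, per the log above)
    sudo ss -tlnp | grep 32770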
    

Step 3: Install karmor on the control plane

Run the following command to install the karmor utility:

curl -sfL https://raw.githubusercontent.com/kubearmor/kubearmor-client/main/install.sh | sudo sh -s -- -b /usr/local/bin
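
To confirm the installation, you can check that the binary is available on the PATH; for example (the version subcommand is assumed to be present in your karmor build):

    karmor version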

Step 4: Onboard VMs using karmor

A few example YAMLs are provided under kvm-service/examples for VM onboarding. The same can be used for reference.

Let's use the kvmpolicy1.yaml and kvmpolicy2.yaml files to onboard two VMs.

cat kvmpolicy1.yaml 
Output:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorVirtualMachine
metadata:
  name: testvm1
  labels:
    name: vm1
    vm: true
Run this command:
karmor vm add kvmpolicy1.yaml 
Output:
Success
The above shows that the first VM is given the name testvm1 and is configured with two labels, name:vm1 and vm:true.

cat kvmpolicy2.yaml 
Output:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorVirtualMachine
metadata:
  name: testvm2
  labels:
    name: vm2
    vm: true
Run this command:
karmor vm add kvmpolicy2.yaml
Output:
Success
The above shows that the second VM is given the name testvm2 and is configured with two labels, name:vm2 and vm:true.

When a new VM is onboarded, KVM-Service assigns a new identity to it. To see the list of onboarded VMs, execute the following command.

karmor vm list
Output:
List of configured vms are : 
[ VM : testvm1, Identity : 1090 ]
[ VM : testvm2, Identity : 35268 ]
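
Behind the scenes, these identities and labels are persisted in the etcd instance configured in Step 1. If you are curious, you can peek at the stored keys with etcdctl; a hedged sketch, assuming the v3 API and the default endpoint (the exact key layout used by KVM-Service may differ):

    # List every key currently stored in etcd (KVM-Service and Cilium state)
    ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 get "" --prefix --keys-only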

Step 5: Generate installation scripts for configured VMs

Generate the VM installation scripts for the configured VMs:

karmor vm --kvms getscript -v testvm1
Output:
VM installation script copied to testvm1.sh
Run this command:
karmor vm --kvms getscript -v testvm2
Output:
VM installation script copied to testvm2.sh

Step 6: Execute the installation scripts in the VMs

Copy the generated installation scripts to the appropriate VMs and execute them to run KubeArmor and Cilium.
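
How you copy and run the scripts depends on your environment; a minimal sketch using scp and ssh (the user name and VM address below are placeholders):

    # Copy the generated script to the first workload VM and run it there
    scp testvm1.sh user@<vm1-ip>:
    ssh user@<vm1-ip> 'chmod +x testvm1.sh && sudo ./testvm1.sh'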

The script downloads the KubeArmor and Cilium Docker images and runs them as containers in each VM. KubeArmor and Cilium running in each VM connect to the KVM-Service control plane to register themselves and to receive information about the other VMs in the cluster, their labels and IPs, and the configured security policies.
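
Once a script has finished, you can verify on the VM that the containers came up; a quick check (the container names depend on the script and may differ):

    # On the workload VM: confirm the KubeArmor and Cilium containers are running
    docker ps --format '{{.Names}}\t{{.Status}}' | grep -Ei 'kubearmor|cilium'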

Step 7: Apply and verify a KubeArmor system policy

  1. A few example YAMLs are provided under kvm-service/examples for KubeArmor policy enforcement in VMs. The same can be used for reference.

cat khp-example-vmname.yaml
Output:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: khp-02
spec:
  nodeSelector:
    matchLabels:
      name: vm1
  severity: 5
  file:
    matchPaths:
    - path: /proc/cpuinfo
  action:
    Block
Run this command:
karmor vm --kvms policy add khp-example-vmname.yaml
Output:
Success
2. To verify the enforced policy, run karmor in the VM and watch for alerts.
karmor log
Output:
gRPC server: localhost:32767
Created a gRPC client (localhost:32767)
Checked the liveness of the gRPC server
Started to watch alerts

  3. With the above policy enforced in the VM, if a user tries to access the /proc/cpuinfo file, they will see a permission-denied error, and karmor log will show an alert for the blocked file access, as shown below.
    cat /proc/cpuinfo
    
    Output:
    cat: /proc/cpuinfo: Permission denied
    
    Run this command:
    karmor log
    
    Output:
    gRPC server: localhost:32767
    Created a gRPC client (localhost:32767)
    Checked the liveness of the gRPC server
    Started to watch alerts
    
    == Alert / 2022-01-16 08:24:33.153921 ==
    Cluster Name: default
    Host Name: 4511a8accc65
    Policy Name: khp-02
    Severity: 5
    Type: MatchedHostPolicy
    Source: cat
    Operation: File
    Resource: /proc/cpuinfo
    Data: syscall=SYS_OPENAT fd=-100 flags=O_RDONLY
    Action: Block
    Result: Permission denied
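
Other host rules follow the same KubeArmorHostPolicy format. As an illustrative variation (the policy name and the process path below are only examples, not part of the provided samples), a process rule that blocks executing a specific binary on VMs labelled name: vm1 could be applied like this:

    cat <<EOF > khp-block-process.yaml
    apiVersion: security.kubearmor.com/v1
    kind: KubeArmorHostPolicy
    metadata:
      name: khp-block-process
    spec:
      nodeSelector:
        matchLabels:
          name: vm1
      severity: 5
      process:
        matchPaths:
        - path: /usr/bin/apt
      action:
        Block
    EOF
    karmor vm --kvms policy add khp-block-process.yaml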
    

Step 8: Apply and verify a Cilium network policy

  1. The example policy provided in kvm-service/examples/cnp-l7.yaml describes a network policy that only allows requests to two HTTP endpoints, /hello and /bye, and blocks everything else.

cat cnp-l7.yaml
Output:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "Allow only certain HTTP endpoints"
  nodeSelector:
    matchLabels:
      name: vm1
  ingress:
  - toPorts:
    - ports:
      - port: "3000"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/hello"
        - method: GET
          path: "/bye"
Run this command:
karmor vm --kvms policy add cnp-l7.yaml
Output:
Success

  2. In kvm-service/examples/app, there is a sample HTTP server application, http-echo-server.go, that sends the URL path of the request back as the response. Copy http-echo-server.go to the VM which has the label name: vm1 and run the HTTP server.

    go run http-echo-server.go
    
    Output:
    🚀 HTTP echo server listening on port 3000
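    
    # Optional sanity check: before testing the policy from vm2, confirm locally
    # on vm1 that the echo server itself responds (should print "hello")
    curl http://127.0.0.1:3000/hello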
    

  3. Switch to the VM which has the label name: vm2 and try to access vm1's HTTP server.

    curl http://10.20.1.34:3000/hello
    
    Output:
    hello
    
    Run this command:
    curl http://10.20.1.34:3000/bye
    
    Output:
    bye
    
    Run this command:
    curl http://10.20.1.34:3000/hi
    
    Output:
    Access denied
    
    Run this command:
    curl http://10.20.1.34:3000/thank-you
    
    Output:
    Access denied
    
The above output shows that:

  • For URLs that are listed in the configured network policy, the response is received from the HTTP server.
  • For URLs that are not listed in the configured network policy, the requests are denied.

  4. To verify this, switch to the VM which has the label name: vm1 and run hubble to view the network logs.

    docker exec -it cilium hubble observe -f --protocol http
    
    Output:
    Mar  1 12:11:24.513: default/testvm2:40208 -> default/testvm1:3000 http-request FORWARDED (HTTP/1.1 GET http://10.20.1.34/hello)
    Mar  1 12:11:24.522: default/testvm2:40208 <- default/testvm1:3000 http-response FORWARDED (HTTP/1.1 200 7ms (GET http://10.20.1.34/hello))
    Mar  1 12:11:43.284: default/testvm2:40212 -> default/testvm1:3000 http-request FORWARDED (HTTP/1.1 GET http://10.20.1.34/hello)
    Mar  1 12:11:43.285: default/testvm2:40212 <- default/testvm1:3000 http-response FORWARDED (HTTP/1.1 200 1ms (GET http://10.20.1.34/hello))
    Mar  1 12:11:46.288: default/testvm2:40214 -> default/testvm1:3000 http-request FORWARDED (HTTP/1.1 GET http://10.20.1.34/bye)
    Mar  1 12:11:46.288: default/testvm2:40214 <- default/testvm1:3000 http-response FORWARDED (HTTP/1.1 200 0ms (GET http://10.20.1.34/bye))
    Mar  1 12:11:48.700: default/testvm2:40216 -> default/testvm1:3000 http-request DROPPED (HTTP/1.1 GET http://10.20.1.34/hi)
    Mar  1 12:11:51.997: default/testvm2:40218 -> default/testvm1:3000 http-request DROPPED (HTTP/1.1 GET http://10.20.1.34/thank-you)
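
If you only want to see the denied requests, hubble can filter by verdict; for example (availability of the --verdict flag depends on the hubble version bundled in the image):

    docker exec -it cilium hubble observe -f --protocol http --verdict DROPPED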
    
