# Running KubeArmor/Cilium on VMs Using a Non-K8s Control Plane
KubeArmor is a runtime security engine that protects the host/VM from unknown threats. Cilium is a network security engine that can be used to enforce network policies on VMs.
With KubeArmor and Cilium running on a VM, it is possible to enforce host-based security policies and secure the VM at both the system and the network level.
## Why do we need a control plane?
With KubeArmor running on multiple VMs, it is difficult and time-consuming to enforce policies on each VM individually. In addition, Cilium needs a control plane to distribute information about identities, labels, and IPs to the Cilium agents running on the VMs.
Hence the solution is to manage all the VMs in the network from a single control plane.
## KVM-Service
AccuKnox's KVM-Service (KubeArmor Virtual Machine Service) is an application designed to act as a control plane in a non-k8s environment and manage KubeArmor and Cilium running on multiple VMs. Using KVM-Service, users can manage their VMs, associate labels with them, and enforce security policies.

Note: KVM-Service requires that all managed VMs be within the same network.
## Design of KVM-Service
### Components Involved and Their Use
- Non-K8s Control Plane
    - etcd: used as a key-value store that holds the label information, IPs, and unique identity of each configured VM (see the inspection sketch just after this list).
    - KVM-Service: manages the connection with each VM and handles VM onboarding/offboarding, label management, and policy enforcement.
    - Karmor (support utility): a CLI utility that interacts with KVM-Service for VM onboarding/offboarding, policy enforcement, and label management.
- VMs: the actual VMs connected in the network.
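
If you want to peek at what the control plane is storing, etcd ships with the `etcdctl` client. A minimal sketch, assuming the etcd setup from Step 1 below; note that the exact key layout used by KVM-Service is an internal detail:

```sh
# List every key currently stored in etcd (v3 API) on the control plane;
# after VMs are onboarded, their identities and labels show up here.
ETCDCTL_API=3 etcdctl --endpoints=http://localhost:2379 get "" --prefix --keys-only
```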
## Installation Guide
The following steps describe the process of onboarding VMs and enforcing policies on them using KVM-Service:

1. Install and run KVM-Service and its dependencies on a VM (or standalone Linux machine). This VM acts as the non-k8s control plane.
2. Onboard the workload VMs.
3. Download the VM installation scripts.
4. Run the installation scripts.
5. Enforce policies on the VMs from the control plane.
6. Manage labels.
Note: All the steps were tested on Debian-based OS distributions.
### Step 1: Install etcd in the control plane
- Install etcd using the command below.

```sh
sudo apt-get install etcd
```

Output:

```
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  etcd
0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded.
Need to get 2,520 B of archives.
After this operation, 16.4 kB of additional disk space will be used.
Get:1 http://in.archive.ubuntu.com/ubuntu focal/universe amd64 etcd all 3.2.26+dfsg-6 [2,520 B]
Fetched 2,520 B in 0s (9,080 B/s)
Selecting previously unselected package etcd.
(Reading database ... 246471 files and directories currently installed.)
Preparing to unpack .../etcd_3.2.26+dfsg-6_all.deb ...
Unpacking etcd (3.2.26+dfsg-6) ...
Setting up etcd (3.2.26+dfsg-6) ...
```
- Once etcd is installed, configure the following values in `/etc/default/etcd` as shown below.

```sh
ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
ETCD_ADVERTISE_CLIENT_URLS=http://0.0.0.0:2379
```
- Restart etcd.

```sh
sudo service etcd restart
```
- Check the status.

```sh
sudo service etcd status
```

Output:

```
● etcd.service - etcd - highly-available key value store
   Loaded: loaded (/lib/systemd/system/etcd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2022-01-16 12:23:58 IST; 30min ago
     Docs: https://github.com/coreos/etcd
           man:etcd
 Main PID: 1087 (etcd)
    Tasks: 24 (limit: 18968)
   Memory: 84.2M
   CGroup: /system.slice/etcd.service
           └─1087 /usr/bin/etcd

Jan 16 12:23:57 LEGION etcd[1087]: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
Jan 16 12:23:58 LEGION etcd[1087]: 8e9e05c52164694d is starting a new election at term 88
Jan 16 12:23:58 LEGION etcd[1087]: 8e9e05c52164694d became candidate at term 89
Jan 16 12:23:58 LEGION etcd[1087]: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 89
Jan 16 12:23:58 LEGION etcd[1087]: 8e9e05c52164694d became leader at term 89
Jan 16 12:23:58 LEGION etcd[1087]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 89
Jan 16 12:23:58 LEGION etcd[1087]: published {Name:LEGION ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
Jan 16 12:23:58 LEGION etcd[1087]: ready to serve client requests
Jan 16 12:23:58 LEGION systemd[1]: Started etcd - highly-available key value store.
Jan 16 12:23:58 LEGION etcd[1087]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
```
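
Because etcd is now configured to listen on 0.0.0.0:2379, it should be reachable from the workload VMs. As an optional sanity check (the address below is a placeholder for your control plane's IP), query the endpoint health from any machine that has `etcdctl` installed:

```sh
# <CONTROL_PLANE_IP> is a placeholder; substitute your control plane's address.
ETCDCTL_API=3 etcdctl --endpoints=http://<CONTROL_PLANE_IP>:2379 endpoint health
```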
### Step 2: Install KVM-Service in the control plane
- Clone the KVM-Service code and check out the `non-k8s` branch.

```sh
git clone https://github.com/kubearmor/kvm-service.git
```

Output:

```
Cloning into 'kvm-service'...
remote: Enumerating objects: 1252, done.
remote: Counting objects: 100% (215/215), done.
remote: Compressing objects: 100% (111/111), done.
remote: Total 1252 (delta 122), reused 132 (delta 102), pack-reused 1037
Receiving objects: 100% (1252/1252), 139.62 MiB | 1.70 MiB/s, done.
Resolving deltas: 100% (702/702), done.
```

`cd` into the kvm-service directory and switch branches:

```sh
cd kvm-service/
git checkout non-k8s
```

Output:

```
Branch 'non-k8s' set up to track remote branch 'non-k8s' from 'origin'.
Switched to a new branch 'non-k8s'
```
- Navigate to `kvm-service/src/service/` and execute the following command to compile the KVM-Service code.

```sh
make
```

Output:

```
logname: no login name
cd /home/wazir/go/src/github.com/kubearmor/kvm-service/src/service; go mod tidy
cd /home/wazir/go/src/github.com/kubearmor/kvm-service/src/service; go build -ldflags "-w -s -X main.BuildDate=2022-03-01T10:00:34Z -X main.GitCommit=beb3ab8 -X main.GitBranch=non-k8s -X main.GitState=dirty -X main.GitSummary=beb3ab8" -o kvmservice main.go
```
- Once compilation succeeds, run KVM-Service using the following command.

```sh
sudo ./kvmservice --non-k8s 2> /dev/null
```

Output:

```
2022-01-16 13:06:16.304185 INFO BUILD-INFO: commit:901ea26, branch: non-k8s, date: 2022-01-16T07:35:51Z, version:
2022-01-16 13:06:16.304278 INFO Initializing all the KVMS daemon attributes
2022-01-16 13:06:16.304325 INFO Establishing connection with etcd service => http://localhost:2379
2022-01-16 13:06:16.333682 INFO Initialized the ETCD client!
2022-01-16 13:06:16.333748 INFO Initiliazing the KVMServer => podip:192.168.0.14 clusterIP:192.168.0.14 clusterPort:32770
2022-01-16 13:06:16.333771 INFO KVMService attributes got initialized
2022-01-16 13:06:17.333915 INFO Starting HTTP Server
2022-01-16 13:06:17.334005 INFO Starting Cilium Node Registration Observer
2022-01-16 13:06:17.334040 INFO Triggered the keepalive ETCD client
2022-01-16 13:06:17.334077 INFO Starting gRPC server
2022-01-16 13:06:17.334149 INFO ETCD: Getting raw values key:cilium/state/noderegister/v1
2022-01-16 13:06:17.335092 INFO Successfully KVMServer Listening on port 32770
```
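
As the log above indicates, KVM-Service serves gRPC on port 32770 alongside etcd on port 2379. A quick, optional way to confirm both are listening (assuming `ss` from iproute2 is installed):

```sh
# Show listening TCP sockets for etcd (2379) and the KVM-Service gRPC server (32770).
sudo ss -tlnp | grep -E ':(2379|32770)'
```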
### Step 3: Install karmor in the control plane
Run the following command to install the karmor utility.

```sh
curl -sfL https://raw.githubusercontent.com/kubearmor/kubearmor-client/main/install.sh | sudo sh -s -- -b /usr/local/bin
```
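
To confirm the installation, check that the binary is on your PATH and reports its version:

```sh
karmor version
```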
### Step 4: Onboard VMs using karmor
A few example YAMLs for VM onboarding are provided under `kvm-service/examples`; they can be used as references. Let's use the `kvmpolicy1.yaml` and `kvmpolicy2.yaml` files to onboard two VMs.
```sh
cat kvmpolicy1.yaml
```

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorVirtualMachine
metadata:
  name: testvm1
  labels:
    name: vm1
    vm: true
```

```sh
karmor vm add kvmpolicy1.yaml
```

Output:

```
Success
```
```sh
cat kvmpolicy2.yaml
```

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorVirtualMachine
metadata:
  name: testvm2
  labels:
    name: vm2
    vm: true
```

```sh
karmor vm add kvmpolicy2.yaml
```

Output:

```
Success
```
When a new VM is onboarded, KVM-Service assigns a new identity to it. To see the list of onboarded VMs, execute the following command.

```sh
karmor vm list
```

Output:

```
List of configured vms are :
[ VM : testvm1, Identity : 1090 ]
[ VM : testvm2, Identity : 35268 ]
```
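
Labels are what policies select on later, so a VM can carry as many as needed. As an illustrative sketch (this `testvm3` object is hypothetical, not one of the shipped examples), additional labels follow the same structure:

```yaml
# Hypothetical onboarding object: a third VM with an extra "env" label
# that a policy's nodeSelector/matchLabels could match on.
apiVersion: security.kubearmor.com/v1
kind: KubeArmorVirtualMachine
metadata:
  name: testvm3
  labels:
    name: vm3
    env: staging
```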
### Step 5: Generate installation scripts for the configured VMs
Generate the VM installation scripts for the configured VMs.

```sh
karmor vm --kvms getscript -v testvm1
```

Output:

```
VM installation script copied to testvm1.sh
```

```sh
karmor vm --kvms getscript -v testvm2
```

Output:

```
VM installation script copied to testvm2.sh
```
### Step 6: Execute the installation scripts in the VMs
Copy the generated installation scripts to the appropriate VMs and execute them to run KubeArmor and Cilium, as sketched below.
The script downloads the KubeArmor and Cilium Docker images and runs them as containers in each VM. The KubeArmor and Cilium instances running in each VM connect to the KVM-Service control plane to register themselves and to receive information about the other VMs in the cluster, their labels and IPs, and the configured security policies.
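
For example, assuming SSH access to the workload VM (the user and address below are placeholders):

```sh
# Copy the generated script to the workload VM and execute it there.
scp testvm1.sh user@<VM1_IP>:/tmp/
ssh user@<VM1_IP> 'sudo bash /tmp/testvm1.sh'
```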
### Step 7: Apply and verify a KubeArmor system policy
- A few example YAMLs for KubeArmor policy enforcement in VMs are provided under `kvm-service/examples`; they can be used as references.

```sh
cat khp-example-vmname.yaml
```

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: khp-02
spec:
  nodeSelector:
    matchLabels:
      name: vm1
  severity: 5
  file:
    matchPaths:
    - path: /proc/cpuinfo
  action:
    Block
```

```sh
karmor vm --kvms policy add khp-example-vmname.yaml
```

Output:

```
Success
```
- Run `karmor log` in the VM and watch for alerts.

```sh
karmor log
```

Output:

```
gRPC server: localhost:32767
Created a gRPC client (localhost:32767)
Checked the liveness of the gRPC server
Started to watch alerts
```
- With the above policy enforced in the VM, if a user tries to access the `/proc/cpuinfo` file, the user will see a `Permission denied` error, and `karmor log` will show the alert for the blocked file access as shown below.

```sh
cat /proc/cpuinfo
```

Output:

```
cat: /proc/cpuinfo: Permission denied
```

```sh
karmor log
```

Output:

```
gRPC server: localhost:32767
Created a gRPC client (localhost:32767)
Checked the liveness of the gRPC server
Started to watch alerts

== Alert / 2022-01-16 08:24:33.153921 ==
Cluster Name: default
Host Name: 4511a8accc65
Policy Name: khp-02
Severity: 5
Type: MatchedHostPolicy
Source: cat
Operation: File
Resource: /proc/cpuinfo
Data: syscall=SYS_OPENAT fd=-100 flags=O_RDONLY
Action: Block
Result: Permission denied
```
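
`Block` is not the only action KubeArmor supports; a policy can also audit access without denying it. As an illustrative variant (this exact policy is not in `kvm-service/examples`), the same structure with an `Audit` action would log, rather than block, reads of `/etc/passwd`:

```yaml
# Illustrative variant (not a shipped example): audit, rather than block,
# access to /etc/passwd on the VM labeled name: vm1.
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: khp-audit-example
spec:
  nodeSelector:
    matchLabels:
      name: vm1
  severity: 3
  file:
    matchPaths:
    - path: /etc/passwd
  action:
    Audit
```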
### Step 8: Apply and verify a Cilium network policy
- The example policy provided in `kvm-service/examples/cnp-l7.yaml` describes a network policy that allows requests only to two HTTP endpoints, `/hello` and `/bye`, and blocks everything else.

```sh
cat cnp-l7.yaml
```

```yaml
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "Allow only certain HTTP endpoints"
  nodeSelector:
    matchLabels:
      name: vm1
  ingress:
  - toPorts:
    - ports:
      - port: "3000"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/hello"
        - method: GET
          path: "/bye"
```

```sh
karmor vm --kvms policy add cnp-l7.yaml
```

Output:

```
Success
```
- In `kvm-service/examples/app`, there is a sample HTTP server application, `http-echo-server.go`, that sends the URL path of the request back as the response (a minimal sketch of such a server is shown at the end of this section). Copy `http-echo-server.go` to the VM that has the label `name: vm1` and run the HTTP server.

```sh
go run http-echo-server.go
```

Output:

```
🚀 HTTP echo server listening on port 3000
```
- Switch to the VM that has the label `name: vm2` and try to access `vm1`'s HTTP server.

```sh
curl http://10.20.1.34:3000/hello
```

Output:

```
hello
```

```sh
curl http://10.20.1.34:3000/bye
```

Output:

```
bye
```

```sh
curl http://10.20.1.34:3000/hi
```

Output:

```
Access denied
```

```sh
curl http://10.20.1.34:3000/thank-you
```

Output:

```
Access denied
```

The above output shows that:
- For URLs that are listed in the configured network policy, a response is received from the HTTP server.
- For URLs that are not listed in the configured network policy, the requests are denied.
- To verify this, switch to the VM that has the label `name: vm1` and run `hubble` to view the network logs.

```sh
docker exec -it cilium hubble observe -f --protocol http
```

Output:

```
Mar  1 12:11:24.513: default/testvm2:40208 -> default/testvm1:3000 http-request FORWARDED (HTTP/1.1 GET http://10.20.1.34/hello)
Mar  1 12:11:24.522: default/testvm2:40208 <- default/testvm1:3000 http-response FORWARDED (HTTP/1.1 200 7ms (GET http://10.20.1.34/hello))
Mar  1 12:11:43.284: default/testvm2:40212 -> default/testvm1:3000 http-request FORWARDED (HTTP/1.1 GET http://10.20.1.34/hello)
Mar  1 12:11:43.285: default/testvm2:40212 <- default/testvm1:3000 http-response FORWARDED (HTTP/1.1 200 1ms (GET http://10.20.1.34/hello))
Mar  1 12:11:46.288: default/testvm2:40214 -> default/testvm1:3000 http-request FORWARDED (HTTP/1.1 GET http://10.20.1.34/bye)
Mar  1 12:11:46.288: default/testvm2:40214 <- default/testvm1:3000 http-response FORWARDED (HTTP/1.1 200 0ms (GET http://10.20.1.34/bye))
Mar  1 12:11:48.700: default/testvm2:40216 -> default/testvm1:3000 http-request DROPPED (HTTP/1.1 GET http://10.20.1.34/hi)
Mar  1 12:11:51.997: default/testvm2:40218 -> default/testvm1:3000 http-request DROPPED (HTTP/1.1 GET http://10.20.1.34/thank-you)
```
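
For reference, here is a minimal sketch of what an echo server like `http-echo-server.go` could look like. This is an illustrative reconstruction assuming only what the walkthrough shows (it listens on port 3000 and answers each request with its URL path), not the actual file from `kvm-service/examples/app`:

```go
// Minimal HTTP echo server sketch: replies to every request with the
// request's URL path, stripped of its leading slash.
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// A request to /hello gets the response body "hello".
		fmt.Fprintln(w, strings.TrimPrefix(r.URL.Path, "/"))
	})
	log.Println("🚀 HTTP echo server listening on port 3000")
	log.Fatal(http.ListenAndServe(":3000", nil))
}
```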