
Deploying Cilium in VMs

Cilium is a network policy enforcement engine that can be used both in Kubernetes (k8s) and on VMs. By using Cilium together with Kubernetes, one can protect both k8s workloads (pods, services) and virtual machines from network-based attacks.

Deployment Steps for Cilium VM

Dependencies

1) A k8s cluster - A k8s cluster (it can be a single-node cluster) to act as the control plane and distribute information about the identities, labels, and IPs of the VMs. Users can manage the network policies of the VMs using kubectl, just as pods or services are usually managed in k8s.

2) Docker >= 20.10 should be installed on all the VMs.
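The Docker version requirement can be checked on each VM before proceeding. A minimal sketch using `sort -V` for the version comparison (the helper function and the way the version is read are illustrative, not part of the official tooling):

```shell
#!/bin/sh
# Illustrative preflight check: verify Docker >= 20.10 is installed on this VM.
REQUIRED="20.10"

version_ok() {
  # Succeeds if version $1 >= version $2, compared as dotted version strings.
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

installed="$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 0)"
if version_ok "$installed" "$REQUIRED"; then
  echo "Docker $installed OK (>= $REQUIRED)"
else
  echo "Docker $installed is too old or missing; install Docker >= $REQUIRED" >&2
fi
```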

1. Download and Install Cilium CLI

curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/download/v0.10.2/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}

2. Setup Cilium in K8s Cluster

cilium install --config tunnel=vxlan --agent-image docker.io/accuknox/cilium:latest --operator-image docker.io/accuknox/cilium-operator-generic:latest

Note: If you are using AWS or Azure managed kubernetes cluster, then change the value of --operator-image option in the above command to docker.io/accuknox/cilium-operator-aws:latest or docker.io/accuknox/cilium-operator-azure:latest respectively.

3. Check the Cilium status

cilium status
There should not be any errors.

4. Enable Cilium clustermesh

a) If you are using a self-managed k8s cluster, use the following command to enable clustermesh.

cilium clustermesh enable --apiserver-image=docker.io/accuknox/cilium-clustermesh-apiserver:latest --service-type NodePort

b) If you are using GKE, EKS or Azure, use the following command.

cilium clustermesh enable --apiserver-image=docker.io/accuknox/cilium-clustermesh-apiserver:latest --service-type LoadBalancer

5. Onboard VMs in the cluster

a) Create an entry for each VM and assign labels to them.

cilium clustermesh vm create <vm-hostname> --labels key1=value1,key2=value2,...,keyN=valueN

  • vm-hostname - the VM's hostname
  • key1=value1,key2=value2,...,keyN=valueN - labels for the VM (similar to pod labels).

b) Repeat the above command for each VM you wish to add to the cluster.
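Repeating the command for many VMs can be scripted with a loop. The sketch below only prints the commands it would run (the hostnames and labels are illustrative placeholders; remove the `echo` to actually execute them against the cluster):

```shell
#!/bin/sh
# Print one `cilium clustermesh vm create` command per VM (dry run).
# Hostnames and labels below are illustrative placeholders.
while read -r host labels; do
  echo cilium clustermesh vm create "$host" --labels "$labels"
done <<'EOF'
vm-frontend-1 app=frontend,env=prod
vm-backend-1 app=backend,env=prod
EOF
```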

c) Once all the VMs are added, verify it.

cilium clustermesh vm status

6. Generate VM installation script

a) Generate the shell script to install Cilium in VMs.

cilium clustermesh vm install <file-name> --config devices=<interfaces>,enable-host-firewall,enable-hubble=true,hubble-listen-address=:4244,hubble-disable-tls=true,external-workload

  • file-name - script name (e.g., cilium-vm-XYZ-install.sh)
  • interfaces - a comma-separated list of one or more of the VM's physical interfaces (e.g., eth0,eth1).

b) Open the generated script and edit the value of CILIUM_IMAGE to ${1:-docker.io/accuknox/cilium:latest}

c) Notes:
  • If the host interface names differ from one VM to another, generate a separate script for each VM by setting the interface parameter appropriately.
  • If the interface name is the same on all the VMs, you can generate the script once and use it across all of them.
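The edit in step 6b can also be done with `sed` instead of opening the script by hand. A sketch, assuming the generated script contains a line starting with `CILIUM_IMAGE=` (the filename below is an illustrative placeholder):

```shell
#!/bin/sh
# Patch the generated installer so CILIUM_IMAGE can be overridden by the
# script's first argument. SCRIPT is an illustrative placeholder for the
# file generated in step 6a.
SCRIPT=${SCRIPT:-cilium-vm-install.sh}

patch_image() {
  # Replace the CILIUM_IMAGE assignment; the ${1:-...} text is written
  # literally into the target script, not expanded here.
  sed -i 's|^CILIUM_IMAGE=.*|CILIUM_IMAGE=${1:-docker.io/accuknox/cilium:latest}|' "$1"
}

[ -f "$SCRIPT" ] && patch_image "$SCRIPT" && grep '^CILIUM_IMAGE=' "$SCRIPT" \
  || echo "no $SCRIPT found; generate it first (step 6a)" >&2
```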

7. Install Cilium in the VMs

a) Copy the installation script to each VM that was added to the cluster and run it in the VM's shell.

b) Once the installation succeeds, check the status by executing the following command in the VM's shell.

cilium status
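Copying and running the script across many VMs (step 7a) can be scripted with `scp` and `ssh`. The sketch below defaults to a dry run that only prints the commands; the user name, host list, and script name are illustrative placeholders (set `DRY_RUN=0` to execute for real):

```shell
#!/bin/sh
# Push the installer to each VM and run it over SSH (dry run by default).
# VM_USER, the host list, and SCRIPT are illustrative placeholders.
DRY_RUN=${DRY_RUN:-1}
SCRIPT=cilium-vm-install.sh
VM_USER=ubuntu

run() {
  # In dry-run mode, print the command instead of executing it.
  if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

for host in vm-frontend-1 vm-backend-1; do
  run scp "$SCRIPT" "$VM_USER@$host:/tmp/$SCRIPT"
  run ssh "$VM_USER@$host" "sudo sh /tmp/$SCRIPT"
done
```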

8. Enforcing network policies in VMs

Once the Cilium installation is complete on all the VMs, users can configure network policies using kubectl.

A sample Cilium network policy will look similar to the following YAML.

apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L4 policy to allow traffic only at port 80/TCP"
  nodeSelector:
    matchLabels:
      name: vm1
  ingress:
  - fromEndpoints:
    - matchLabels:
        name: vm2
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP

The above policy allows ingress traffic to the VM labeled name: vm1 from the VM labeled name: vm2, and only on port 80/TCP. All other ingress traffic to the VM with name: vm1 is blocked by this policy.

Users can apply policies in the cluster using kubectl, just as resources are usually managed in k8s.

kubectl apply -f <yaml-file/url>
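As a concrete example, the sample policy above can be saved to a file and applied. The filename is arbitrary, and the `kubectl` step below is guarded so the snippet degrades gracefully where the CLI or cluster access is unavailable:

```shell
#!/bin/sh
# Write the sample policy from this section to a file and apply it.
cat > rule1.yaml <<'EOF'
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L4 policy to allow traffic only at port 80/TCP"
  nodeSelector:
    matchLabels:
      name: vm1
  ingress:
  - fromEndpoints:
    - matchLabels:
        name: vm2
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
EOF

if command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f rule1.yaml || echo "kubectl apply failed; check cluster access" >&2
else
  echo "kubectl not found; apply rule1.yaml from a machine with cluster access" >&2
fi
```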

More examples of the network policies and their syntax are available in the official Cilium docs page.

Note: For policies that involve VMs, the kind should always be CiliumClusterwideNetworkPolicy.

9. Network Observability in VMs

Users can use the Hubble CLI tool, which comes along with the Cilium installation, to monitor network traffic and policy enforcement in the VMs.

docker exec -it cilium hubble observe -f
