K3s Cluster

Overview

This guide walks you through installing Cilium on a K3s cluster and verifying that it works by applying network policies to Kubernetes workloads.

Step 1: Install VirtualBox

sudo apt-get install virtualbox -y

Step 2: Install Vagrant

wget https://releases.hashicorp.com/vagrant/2.2.14/vagrant_2.2.14_x86_64.deb
sudo apt install ./vagrant_2.2.14_x86_64.deb
vagrant --version
vagrant plugin install vagrant-scp
vagrant plugin list

Step 3: Configure Ubuntu VMs with Vagrant and VirtualBox

nano Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

ENV['VAGRANT_NO_PARALLEL'] = 'yes'

Vagrant.configure(2) do |config|

  NodeCount = 2

  (1..NodeCount).each do |i|
    config.vm.define "ubuntuvm#{i}" do |node|
      node.vm.box               = "generic/ubuntu2004"
      node.vm.box_check_update  = false
      node.vm.box_version       = "3.3.0"
      node.vm.hostname          = "ubuntuvm#{i}.example.com"
      node.vm.network "private_network", ip: "192.168.56.4#{i}"
      node.vm.provider "virtualbox" do |v|
        v.name    = "ubuntuvm#{i}"
        v.memory  = 1024
        v.cpus    = 1
      end

    end

  end

end
vagrant up

vagrant ssh ubuntuvm1

Step 4: Install K3s on ubuntuvm1

vagrant ssh ubuntuvm1
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-ip=192.168.56.41 --flannel-iface=eth1" sh -s - --write-kubeconfig-mode 644

Note: --node-ip=192.168.56.41 is the private IP of ubuntuvm1; adjust it to match your VM.

systemctl status k3s
which kubectl
kubectl get nodes
cat /var/lib/rancher/k3s/server/token

exit
vagrant scp ubuntuvm1:/etc/rancher/k3s/k3s.yaml ~/.kube/config
nano ~/.kube/config

In ~/.kube/config, change the server address from https://127.0.0.1:6443 to https://192.168.56.41:6443 so that kubectl on the host reaches ubuntuvm1.
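If you prefer not to edit the file by hand, a sed one-liner can make the same change (a sketch; the path and addresses match the values used in this guide):

```shell
# Point the copied kubeconfig at ubuntuvm1 instead of localhost.
# Assumes ~/.kube/config was fetched from the VM as shown above.
sed -i 's#https://127.0.0.1:6443#https://192.168.56.41:6443#' ~/.kube/config
```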

kubectl get nodes

Log in to ubuntuvm2

vagrant ssh ubuntuvm2
ip a show
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-ip=192.168.56.42 --flannel-iface=eth1" K3S_URL="https://192.168.56.41:6443" K3S_TOKEN="K107b9ecf5076c2a0c79760aa0e545d7464f11bbd27c643b1f1b8eef34758af1b89::server:985d51052287fd7554e989bd742c7f31"  sh -

Note: the server IP and K3S_TOKEN above are examples; use your ubuntuvm1 address and the token printed in the previous step.

exit
kubectl get nodes

Step 5: Install Cilium

curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}
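The download above assumes an amd64 (x86_64) host. On other architectures the tarball name changes; a small sketch for picking the right suffix from uname -m (the variable name CLI_ARCH is my own choice, not from the commands above):

```shell
# Map the host architecture to the Cilium CLI release suffix.
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
echo "cilium-linux-${CLI_ARCH}.tar.gz"
```

Substitute cilium-linux-${CLI_ARCH}.tar.gz for cilium-linux-amd64.tar.gz in the download commands if needed.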

cilium install 
Verify Cilium:

kubectl get pods -n kube-system | grep cilium 

Enable Hubble:

cilium hubble enable

Verify Hubble:

kubectl get pods -n kube-system | grep hubble

Step 6: Cilium Policy

1. Create the tiefighter & deathstar deployments

nano tiefighter-deathstar-app.yaml
apiVersion: v1
kind: Service
metadata:
  name: deathstar
  labels:
    app.kubernetes.io/name: deathstar
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    org: empire
    class: deathstar
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deathstar
  labels:
    app.kubernetes.io/name: deathstar
spec:
  replicas: 2
  selector:
    matchLabels:
      org: empire
      class: deathstar
  template:
    metadata:
      labels:
        org: empire
        class: deathstar
        app.kubernetes.io/name: deathstar
    spec:
      containers:
      - name: deathstar
        image: docker.io/cilium/starwars
---
apiVersion: v1
kind: Pod
metadata:
  name: tiefighter
  labels:
    org: empire
    class: tiefighter
    app.kubernetes.io/name: tiefighter
spec:
  containers:
  - name: spaceship
    image: docker.io/tgraf/netperf
---
apiVersion: v1
kind: Pod
metadata:
  name: xwing
  labels:
    app.kubernetes.io/name: xwing
    org: alliance
    class: xwing
spec:
  containers:
  - name: spaceship
    image: docker.io/tgraf/netperf
kubectl apply -f tiefighter-deathstar-app.yaml
kubectl get pods --show-labels

2. Apply the following policy

nano sample-cilium-ingress-policy.yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1-ingress"
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      class: deathstar
  ingress:
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"
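The rule above allows only POST requests to /v1/request-landing on port 80 into pods labeled class: deathstar; any other HTTP request is rejected by the L7 proxy. A toy sketch of that allow/deny decision (illustrative only, not Cilium's implementation):

```shell
# Toy re-implementation of the L7 rule's decision, for intuition only.
allowed() {
  if [ "$1" = "POST" ] && [ "$2" = "/v1/request-landing" ]; then
    echo allow
  else
    echo deny
  fi
}
allowed POST /v1/request-landing   # prints: allow
allowed POST /v1/bye               # prints: deny
```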

3. Apply the policy

kubectl apply -f sample-cilium-ingress-policy.yaml   

kubectl get cnp

4. Violate the policy

kubectl get svc
kubectl exec -n default tiefighter -- curl -s -XPOST 10.100.255.199/v1/request-landing
kubectl exec -n default tiefighter -- curl -s -XPOST 10.100.255.199/v1/bye

Note: 10.100.255.199 is the deathstar ClusterIP reported by kubectl get svc; substitute your own. The first request matches the policy and succeeds; the second is denied at L7.

Port-forward the Hubble relay service to monitor the logs:

cilium hubble port-forward

Step 7: Install the Hubble CLI Client

export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-amd64.tar.gz.sha256sum

sudo tar xzvfC hubble-linux-amd64.tar.gz /usr/local/bin
rm hubble-linux-amd64.tar.gz{,.sha256sum}

Step 8: Monitor the Cilium policy violation logs

hubble observe -f --protocol http --pod tiefighter
