SUSE Linux Enterprise Server 15
Overview¶
This user journey guides you through installing Cilium on SUSE Linux Enterprise Server 15 (kernel 5.3) and verifying compatibility by applying network policies to the onboarded VM workloads.
Step 1: Install etcd in control plane VM¶
Create the etcd system user and directories:
groupadd --system etcd
useradd --home-dir "/var/lib/etcd" \
--system \
--shell /bin/false \
-g etcd \
etcd
mkdir -p /etc/etcd
chown etcd:etcd /etc/etcd
mkdir -p /var/lib/etcd
chown etcd:etcd /var/lib/etcd
Determine your system architecture:
uname -m
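If you prefer a single variable for the architecture suffix used in the etcd release filenames, the following sketch maps the uname output to amd64/arm64 (ARCH is a helper variable introduced here for illustration; the sections below spell out each architecture explicitly):
case "$(uname -m)" in
  x86_64)  ARCH=amd64 ;;   # Intel/AMD 64-bit
  aarch64) ARCH=arm64 ;;   # ARM 64-bit
  *)       echo "Unsupported architecture: $(uname -m)" >&2 ;;
esac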
Download and install the etcd tarball for x86_64/amd64:
ETCD_VER=v3.2.7
rm -rf /tmp/etcd && mkdir -p /tmp/etcd
curl -L https://github.com/coreos/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz \
  -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz \
-C /tmp/etcd --strip-components=1
cp /tmp/etcd/etcd /usr/bin/etcd
cp /tmp/etcd/etcdctl /usr/bin/etcdctl
Or download and install the etcd tarball for arm64:
ETCD_VER=v3.2.7
rm -rf /tmp/etcd && mkdir -p /tmp/etcd
curl -L https://github.com/coreos/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-arm64.tar.gz \
  -o /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz
tar xzvf /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz \
-C /tmp/etcd --strip-components=1
cp /tmp/etcd/etcd /usr/bin/etcd
cp /tmp/etcd/etcdctl /usr/bin/etcdctl
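As a quick sanity check, confirm the installed binary reports the expected version (3.2.7 here):
etcd --version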
Create and edit the etcd configuration file:
sudo vi /etc/etcd/etcd.conf.yaml
name: controller
data-dir: /var/lib/etcd
initial-cluster-state: 'new'
initial-cluster-token: 'etcd-cluster-01'
initial-cluster: controller=http://0.0.0.0:2380
initial-advertise-peer-urls: http://0.0.0.0:2380
advertise-client-urls: http://0.0.0.0:2379
listen-peer-urls: http://0.0.0.0:2380
listen-client-urls: http://0.0.0.0:2379
Create and edit the systemd service file:
sudo vi /usr/lib/systemd/system/etcd.service
[Unit]
After=network.target
Description=etcd - highly-available key value store
[Service]
# Uncomment this on ARM64.
# Environment="ETCD_UNSUPPORTED_ARCH=arm64"
LimitNOFILE=65536
Restart=on-failure
Type=notify
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf.yaml
User=etcd
[Install]
WantedBy=multi-user.target
Reload systemd service files with:
systemctl daemon-reload
Enable and start the etcd service, then check its status:
systemctl enable etcd
systemctl start etcd
systemctl status etcd
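To confirm etcd is actually serving requests, you can query its health with etcdctl (a minimal check; the endpoint matches the listen-client-urls configured above, and ETCDCTL_API=3 selects the v3 API on this etcd release):
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 endpoint health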
Step 2: Install KVM-Service in control plane VM¶
Download and install the KVM-Service RPM package, then check that the service is running:
wget https://github.com/kubearmor/kvm-service/releases/download/0.1/kvmservice_0.1_linux-amd64.rpm
zypper install kvmservice_0.1_linux-amd64.rpm
systemctl status kvmservice
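If the service is not active, its recent logs usually show why (standard systemd tooling, nothing KVM-Service-specific):
journalctl -u kvmservice --no-pager -n 20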
Step 3: Install Karmor in control plane VM¶
curl -sfL https://raw.githubusercontent.com/kubearmor/kubearmor-client/main/install.sh | sudo sh -s -- -b /usr/local/bin
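You can confirm the client landed on your PATH by printing its version (assuming the karmor build you installed supports the version subcommand):
karmor version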
Step 4: Onboard VMs using Karmor¶
cat kvmpolicy1.yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorVirtualMachine
metadata:
  name: testvm1
  labels:
    name: vm1
    vm: true
Run this command to add the VM:
karmor vm add kvmpolicy1.yaml
To list the onboarded VMs:
karmor vm list
Step 5: Generate Installation scripts for configured worker VMs¶
Generate the installation script for the configured VM by running the following command:
karmor vm --kvms getscript -v testvm1
Step 6: Execute the Installation script in VMs¶
Note: Docker must be installed before running the script.
vi testvm1.sh
Comment out the following line in the script and save it:
#sudo docker run --name kubearmor $DOCKER_OPTS $KUBEARMOR_IMAGE $KUBEARMOR_OPTS
Note: An upcoming release will remove the need for this manual step.
Execute the installation script:
Copy the generated installation script to the appropriate VMs using scp or rsync, then execute it to run Cilium.
The script downloads the Cilium Docker images and runs them as containers in each VM. The Cilium instance in each VM connects to the KVM-Service control plane to register itself and to receive information about the other VMs in the cluster: labels, IPs, and configured security policies.
Execute the script on the worker VM by running the following command:
./testvm1.sh
Note: Make sure KVM-Service is running on the control plane VM. To onboard more worker VMs, repeat Steps 4, 5, and 6.
You can verify that the Cilium container is running with the following command:
sudo docker ps
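To narrow the output to the Cilium container alone (assuming the container is named cilium, as used in Step 8):
sudo docker ps --filter name=cilium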
Step 7: Apply and Verify Cilium network policy¶
1. Allow connectivity with the control plane (the etcd kv-store). Replace 10.138.0.5 in the policy below with the IP address of your control plane VM:
cat vm-allow-control-plane.yaml
kind: CiliumNetworkPolicy
metadata:
  name: "vm-allow-control-plane"
spec:
  description: "Policy to allow traffic to kv-store"
  nodeSelector:
    matchLabels:
      name: vm1
  egress:
  - toCIDR:
    - 10.138.0.5/32
    toPorts:
    - ports:
      - port: "2379"
        protocol: TCP
2. For SSH connectivity, allow port 22, plus port 80 to 169.254.169.254 (the cloud metadata service):
cat vm-allow-ssh.yaml
kind: CiliumNetworkPolicy
metadata:
  name: "vm-allow-ssh"
spec:
  description: "Policy to allow SSH"
  nodeSelector:
    matchLabels:
      name: vm1
  egress:
  - toPorts:
    - ports:
      - port: "22"
        protocol: TCP
  - toCIDR:
    - 169.254.169.254/32
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
3. This policy enables DNS visibility in the VM (all DNS queries on port 53 are permitted and observed):
cat vm-dns-visibility.yaml
kind: CiliumNetworkPolicy
metadata:
  name: "vm-dns-visibility"
spec:
  description: "Policy to enable DNS visibility"
  nodeSelector:
    matchLabels:
      name: vm1
  egress:
  - toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
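Once this policy is applied, DNS flows should be visible to Hubble; a sketch of how to watch them, assuming the hubble observe build inside the container supports the --protocol filter:
docker exec -it cilium hubble observe -f --protocol dns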
4. This policy allows access to www.google.co.in only:
cat vm-allow-www-google-co-in.yaml
kind: CiliumNetworkPolicy
metadata:
  name: "vm-allow-www.google.co.in"
spec:
  description: "Policy to allow traffic to www.google.co.in"
  nodeSelector:
    matchLabels:
      name: vm1
  egress:
  - toFQDNs:
    - matchName: www.google.co.in
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      - port: "443"
        protocol: TCP
Run these commands to apply the policies:
karmor vm --kvms policy add vm-allow-control-plane.yaml
karmor vm --kvms policy add vm-allow-ssh.yaml
karmor vm --kvms policy add vm-dns-visibility.yaml
karmor vm --kvms policy add vm-allow-www-google-co-in.yaml
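With the policies in place, a quick spot-check from the worker VM (a sketch; 10.138.0.5 is the example control plane address from the first policy, and nc and dig are assumed to be installed):
nc -zv 10.138.0.5 2379
dig www.google.co.in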
Step 8: Policy Violation on worker node¶
With the policies applied, the request to www.google.co.in succeeds, while the request to go.dev is not covered by any allow rule and is dropped:
curl http://www.google.co.in/
curl https://go.dev/
Verify the policy violation logs:
docker exec -it cilium hubble observe -f -t policy-verdict