EKS Amazon Linux 2
Overview
This user journey walks through installing Cilium on EKS with Amazon Linux 2 nodes and verifying compatibility by applying network policies to Kubernetes workloads.
Step 1: Create an EKS cluster using the AWS console
Once the node group is created, install the eksctl, AWS CLI, and Helm tools, then verify access to the cluster:
eksctl get cluster
aws eks --region us-west-1 update-kubeconfig --name eks-amazon-cilium
kubectl get nodes
kubectl get svc
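The console-based cluster creation above can also be expressed as an eksctl config file. This is a sketch only: the cluster name and region follow this guide, while the nodegroup name, instance type, and size are assumptions to adjust for your account.

```yaml
# Hypothetical eksctl equivalent of the console-created cluster.
# metadata.name and region match this guide; the nodegroup details are assumed.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-amazon-cilium
  region: us-west-1
nodeGroups:
  - name: ng-al2
    amiFamily: AmazonLinux2
    instanceType: t3.medium
    desiredCapacity: 2
```

Saved as cluster.yaml, it could be applied with eksctl create cluster -f cluster.yaml.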
Step 2: Install Cilium
Install Cilium CLI:
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}
Note: When installing Cilium on EKS, you have to pass a valid cluster name. The cluster name may only contain lowercase alphanumeric characters and hyphens. For instance, if your auto-detected cluster name is arn:aws:eks:us-west-1:199488642388:cluster/eks-amazon-cilium, pass the value arn-aws-eks-us-west-1-199488642388-cluster-eks-amazon-cilium instead.
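The renaming rule in the note can be scripted. This minimal sketch lowercases the ARN and maps every character outside [a-z0-9] to a hyphen; the ARN value is the example from this guide, so substitute your own.

```shell
# Auto-detected cluster name (example value from this guide; use your own ARN).
EKS_ARN="arn:aws:eks:us-west-1:199488642388:cluster/eks-amazon-cilium"
# Lowercase, then replace every character outside [a-z0-9] with a hyphen.
CILIUM_CLUSTER_NAME=$(printf '%s' "$EKS_ARN" | tr 'A-Z' 'a-z' | tr -c 'a-z0-9' '-')
echo "$CILIUM_CLUSTER_NAME"
# → arn-aws-eks-us-west-1-199488642388-cluster-eks-amazon-cilium
```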
cilium install --cluster-name arn-aws-eks-us-west-1-199488642388-cluster-eks-amazon-cilium
Verify the Cilium installation:
kubectl get pods -n kube-system | grep cilium
Enable Hubble:
cilium hubble enable
Verify Hubble:
kubectl get pods -n kube-system | grep hubble
cilium status
Install the Hubble CLI Client:
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-amd64.tar.gz /usr/local/bin
rm hubble-linux-amd64.tar.gz{,.sha256sum}
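The download commands above assume an amd64 host. As a sketch, the architecture suffix could be derived from `uname -m` instead (the `map_cli_arch` helper name is hypothetical; Graviton-based nodes would need arm64):

```shell
# Map `uname -m` output to the architecture suffix used by the release archives.
map_cli_arch() {
  case "$1" in
    x86_64)        echo amd64 ;;
    aarch64|arm64) echo arm64 ;;
    *)             echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

CLI_ARCH=$(map_cli_arch "$(uname -m)")
echo "hubble-linux-${CLI_ARCH}.tar.gz"
```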
Step 3: Cilium Policy
1. Create the tiefighter and deathstar workloads and verify them
cat tightfighter-deathstart-app.yaml
apiVersion: v1
kind: Service
metadata:
  name: deathstar
  labels:
    app.kubernetes.io/name: deathstar
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    org: empire
    class: deathstar
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deathstar
  labels:
    app.kubernetes.io/name: deathstar
spec:
  replicas: 2
  selector:
    matchLabels:
      org: empire
      class: deathstar
  template:
    metadata:
      labels:
        org: empire
        class: deathstar
        app.kubernetes.io/name: deathstar
    spec:
      containers:
      - name: deathstar
        image: docker.io/cilium/starwars
---
apiVersion: v1
kind: Pod
metadata:
  name: tiefighter
  labels:
    org: empire
    class: tiefighter
    app.kubernetes.io/name: tiefighter
spec:
  containers:
  - name: spaceship
    image: docker.io/tgraf/netperf
---
apiVersion: v1
kind: Pod
metadata:
  name: xwing
  labels:
    app.kubernetes.io/name: xwing
    org: alliance
    class: xwing
spec:
  containers:
  - name: spaceship
    image: docker.io/tgraf/netperf
kubectl apply -f tightfighter-deathstart-app.yaml
kubectl get pods --show-labels
2. Explore the policy
cat sample-cilium-ingress-policy.yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1-ingress"
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      class: deathstar
  ingress:
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"
3. Apply the policy
kubectl apply -f sample-cilium-ingress-policy.yaml
4. Violate the policy
kubectl get svc
Use the ClusterIP reported for the deathstar service (10.100.98.131 in this example). The first call below matches the allowed method and path and succeeds; the second path is rejected by the L7 rule.
kubectl exec -n default tiefighter -- curl -s -XPOST 10.100.98.131/v1/request-landing
kubectl exec -n default tiefighter -- curl -s -XPOST 10.100.98.131/v1/test
Get alerts/telemetry from Cilium by port-forwarding the Hubble relay:
cilium hubble port-forward
5. Monitor the Cilium violation logs
hubble observe --pod tiefighter --protocol http