
How to set up an on-prem cluster


Overview

This user journey guides you through setting up an end-to-end, self-managed AccuKnox control plane in your own infrastructure, giving you complete control to manage, customize, and monitor the AccuKnox control-plane workloads.

  1. Create a Kubernetes Cluster

  2. Install Prerequisites

    2.1. Istio

    2.2. MySQL

    2.3. Kafka

    2.4. Pinot

    2.5. Temporal

  3. Install Core Components

  4. Install Accuknox UI

  5. Onboarding Steps

Step 1: Create a Kubernetes Cluster

  • Create a cluster with the hardware resources below:
| Component | MySQL | Kafka | Pinot | Core Components |
|---|---|---|---|---|
| Node pool name | mysql | kafka | pinot | microservices |
| No. of nodes | 3 | 3 | 3 | 3 |
| Machine type | E2 / EC2 / VM (VMware or cloud-based VM) | E2 / EC2 / VM (VMware or cloud-based VM) | E2 / EC2 / VM (VMware or cloud-based VM) | E2 / EC2 / VM (VMware or cloud-based VM) |
| Image type | Any Linux-based OS with container support (e.g. GCP COS-Containerd) | Any Linux-based OS with container support (e.g. GCP COS-Containerd) | Any Linux-based OS with container support (e.g. GCP COS-Containerd) | Any Linux-based OS with container support (e.g. GCP COS-Containerd) |
| CPU per node | 16 | 6 | 12 | 18 |
| Memory per node (GB) | 20 | 16 | 40 | 40 |
| Disk size per node (GB) | 50 | 50 | 50 | 50 |
| Total CPU | 48 | 18 | 36 | 54 |
| Total memory (GB) | 60 | 48 | 120 | 120 |
| Total disk size (GB) | 150 | 150 | 150 | 150 |
| Taints and labels | mysql:true | kafka:true | pinot:true | NA |
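
On GKE, for example, a tainted and labelled node pool could be created along these lines (a sketch; the cluster name and machine type are illustrative placeholders, the NoSchedule effect is an assumption, and EKS/AKS/VMware have equivalent constructs):

gcloud container node-pools create mysql \
    --cluster=<cluster-name> \
    --num-nodes=3 \
    --machine-type=<machine-type> \
    --node-labels=mysql=true \
    --node-taints=mysql=true:NoSchedule

Workloads scheduled onto a tainted pool need a matching toleration (and typically a nodeSelector on the label) in their pod spec, for example this fragment:

tolerations:
- key: "mysql"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
nodeSelector:
  mysql: "true"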

Step 2: Install Prerequisites

Wait until the cluster is up and its nodes are in the Ready state, then connect to the cluster and install the prerequisites.

2.1 Istio

  • Go to the Istio release page to download the installation file for your OS, or download and extract a release automatically (Linux or macOS); the command below pins Istio 1.10.0
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.10.0 TARGET_ARCH=x86_64 sh -
  • Move to the Istio package directory. For example, if the package is istio-1.10.0
cd istio-1.10.0
  • Create a namespace istio-system for Istio components
kubectl create namespace istio-system
  • Install the Istio base chart which contains cluster-wide resources used by the Istio control plane
helm install istio-base manifests/charts/base -n istio-system
  • Install the Istio discovery chart which deploys the istiod control plane service
helm install istiod manifests/charts/istio-control/istio-discovery -n istio-system
  • Install the Istio ingress gateway chart which contains the ingress gateway components

helm install istio-ingress manifests/charts/gateways/istio-ingress -n istio-system
Verify the Installation

Ensure all Kubernetes pods in the istio-system namespace are deployed and have a STATUS of Running

kubectl get pods -n istio-system

Install Cert Manager

  • Install cert-manager. Cert-manager will manage the certificates of gateway domains.

  • When running on GKE (Google Kubernetes Engine), you might encounter a ‘permission denied’ error when creating some of the required resources; binding your account to cluster-admin works around this

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value core/account)
  • Install Cert Manager

    kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml
    

  • All pods should be in the Running state

kubectl get pods -n cert-manager

Installing Gateway

  • Add Accuknox Repositories to install helm packages
helm repo add accuknox-onprem-prerequisites https://USERNAME:PASSWORD@<accuknox-registry-host>/repository/accuknox-onprem-prerequisites
helm repo update
helm search repo accuknox-onprem-prerequisites
helm pull accuknox-onprem-prerequisites/istio-gateway-charts --untar 
  • Move to the chart directory
cd istio-gateway-charts
  • Istio Gateway configurations for DNS

  • This gateway config file defines the base API endpoints of the microservices under DNS

  • This repository also contains the files needed to set up SSL for the DNS names (see issuer.yaml and cert.yaml) using cert-manager

Find the Gateway IP

INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo ${INGRESS_HOST}
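
On clouds where the load balancer publishes a DNS name instead of an IP (for example, an AWS ELB), read the hostname field instead:

INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')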

Create DNS using IP

  • Create A records (for example, api-onprem.accuknox.com and keycloak.example.com) pointing to the LoadBalancer IP

  • Update the DNS names in the cert.yaml, gateway-with-ssl.yaml, and virtual-service.yaml files.

  • Issuers and ClusterIssuers are Kubernetes resources that represent certificate authorities (CAs) able to generate signed certificates by honouring certificate signing requests.

kubectl apply -f issuer.yaml
# Should have Status as Ready
kubectl get ClusterIssuer -n cert-manager
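
For reference, a minimal issuer.yaml might look like the following. This is a sketch assuming a Let's Encrypt ACME issuer with an HTTP-01 solver; the issuer.yaml shipped in the chart repository is authoritative.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Use the Let's Encrypt staging endpoint while testing to avoid rate limits
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <admin-email>
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: istio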
  • A Certificate is a namespaced resource that references an Issuer or ClusterIssuer, which determines what will honour the certificate request.
kubectl apply -f cert.yaml
kubectl get Certificate -n istio-system # Should have Status as Ready
  • Create Gateway with SSL
    kubectl apply -f gateway-with-ssl.yaml
    

Apply Virtual services

  • A virtual service defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for the traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service (or subset/version of it) defined in the registry.
kubectl apply -f backend-api/virtual-service.yaml
kubectl apply -f keycloak/virtual-service.yaml
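
For illustration, a virtual service routing the API hostname through the gateway might look like this (a sketch; the gateway, service, and namespace names are placeholders, and the virtual-service.yaml files in the chart are authoritative):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend-api
  namespace: istio-system
spec:
  hosts:
  - api-onprem.accuknox.com
  gateways:
  - <gateway-name>   # the gateway created by gateway-with-ssl.yaml
  http:
  - route:
    - destination:
        host: <backend-service>.<namespace>.svc.cluster.local
        port:
          number: 80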

2.2 MySQL

helm pull accuknox-onprem-prerequisites/mysql-chart --untar
kubectl create namespace accuknox-mysql
cd mysql-chart
kubectl apply -f bundle.yaml -n accuknox-mysql
kubectl apply -f cr.yaml -n accuknox-mysql
kubectl apply -f secrets.yaml -n accuknox-mysql
kubectl apply -f ssl-secrets.yaml -n accuknox-mysql

kubectl get secret -n accuknox-mysql | grep mysql

kubectl apply -f backup-s3.yaml -n accuknox-mysql
Loading schema
helm pull accuknox-onprem-prerequisites/mysql-schema-chart --untar
helm upgrade --install accuknox-mysql-schema mysql-schema-chart -n accuknox-mysql
Verify the installation

Pods should be in Running state

kubectl get pods -n accuknox-mysql


Note:

  • To configure backup with GCS, add the HMAC keys in backup-s3.yaml, change the bucket name in cr.yaml and change the cron job entries as per the requirement in the cr.yaml.

  • HMAC Keys will vary for cloud providers. (GCP, AZURE, AWS)


  • After completing the above steps, run a sanity test with the commands below in the MySQL namespace
kubectl run -i --rm --tty percona-client --image=percona:8.0 --restart=Never -n accuknox-mysql -- bash -il
mysql -h accuknox-mysql-haproxy -uroot -proot_password
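
Once connected, a quick query such as the following (illustrative) confirms connectivity and that the AccuKnox schema was loaded:

SHOW DATABASES;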
  • Update the passwords in the secrets.yaml file and re-apply it with the command below
kubectl apply -f secrets.yaml


FQDN (Kubernetes service name) for in-cluster connectivity:


accuknox-mysql-haproxy.accuknox-mysql.svc.cluster.local

2.3 Kafka

helm pull accuknox-onprem-prerequisites/strimzi-kafka-operator --untar
kubectl create namespace accuknox-kafka
helm install accuknox-kafka strimzi-kafka-operator -n accuknox-kafka
  • Check that the pods are in Running status

    kubectl get pods -n accuknox-kafka
    

  • Get Bootstrap server endpoint

    kubectl get kafka accuknox -o jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}' -n accuknox-kafka
    

  • Get CA certificate

    kubectl get secret accuknox-cluster-ca-cert -o jsonpath='{.data.ca\.p12}' -n accuknox-kafka | base64 -d > ca.p12
    


Note:

  • Any application connecting to Kafka from outside the cluster must communicate via SSL/TLS authentication.

  • Provide the connectivity details below to the owners of applications deployed outside the cluster.


  • Get CA Password

    kubectl get secret accuknox-cluster-ca-cert -o jsonpath='{.data.ca\.password}' -n accuknox-kafka | base64 -d > ca.password
    

  • Get User Certificate

    kubectl get secret/node-event-feeder-common -n accuknox-kafka -o jsonpath='{.data.user\.p12}' | base64 -d > user.p12
    

  • Get User Password

    kubectl get secret/node-event-feeder-common -n accuknox-kafka -o jsonpath='{.data.user\.password}' | base64 -d > user.password
    

  • Convert user.p12 to base64 encoded format

    cat user.p12 | base64 > user.p12.base64
    

  • Convert ca.password to base64 encoded format

    cat ca.password | base64 > ca.password.base64
    

  • Convert ca.p12 to base64 encoded format

    cat ca.p12 | base64 > ca.p12.base64
    
    

  • Convert user.password to base64 encoded format

    cat user.password | base64 > user.password.base64
    

  • Convert p12 to pem format

    openssl pkcs12 -in ca.p12 -out ca.pem
    

  • When openssl prompts for the import password, use the password from the ca.password file
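
Alternatively, to skip the interactive prompt, the import password can be supplied straight from the file (a sketch using openssl's -passin option):

openssl pkcs12 -in ca.p12 -out ca.pem -passin file:ca.password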

  • Convert ca.pem to base64 encoded format

    cat ca.pem | base64 > ca.pem.base64
    


Note:

  • ca.p12, ca.password, user.p12 and user.password are required for Java-based applications. For Go-based applications, use ca.pem, user.p12 and user.password. For use in Kubernetes, use the base64 versions of the respective files.

  • FQDN (Kubernetes service name) for internal cluster application connectivity: accuknox-kafka-bootstrap.accuknox-kafka.svc.cluster.local:9092
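
For instance, when wiring these credentials into a workload running on the cluster, the base64 files map onto a Secret manifest along these lines (a sketch; the secret name, namespace, and key names are placeholders, and each consuming chart documents the names it expects):

apiVersion: v1
kind: Secret
metadata:
  name: kafka-client-certs        # placeholder name
  namespace: <app-namespace>
type: Opaque
data:
  ca.pem: <contents of ca.pem.base64>
  user.p12: <contents of user.p12.base64>
  user.password: <contents of user.password.base64>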


Get the certificates and store them

  • If the Kafka cluster is upgraded or reinstalled, it will generate new certificates, and you will need to update the key pairs in all Kafka clients and applications.

  • To mitigate this, store the certificates below in a safe place at the time of the first Kafka installation.

  • To restore them, apply the certificates manually with the kubectl command or place them under the templates folder, and turn off automatic certificate generation by setting (uncommenting) "clusterca" and "clientsca" to "false" in the kafka-cluster.yaml file. Once the changes are made, install/upgrade the cluster.

kubectl get secret/accuknox-clients-ca -o yaml -n accuknox-kafka > accuknox-clients-ca.yaml
kubectl get secret/accuknox-clients-ca-cert -o yaml -n accuknox-kafka > accuknox-clients-ca-cert.yaml
kubectl get secret/accuknox-cluster-ca-cert -o yaml -n accuknox-kafka > accuknox-cluster-ca-cert.yaml
kubectl get secret/accuknox-cluster-ca -o yaml -n accuknox-kafka > accuknox-cluster-ca.yaml
kubectl get secret/node-event-feeder -o yaml -n accuknox-kafka > node-event-feeder.yaml
kubectl get secret/node-event-feeder-common -o yaml -n accuknox-kafka > node-event-feeder-common.yaml

Verify Kafka cluster

kubectl get kafka -n accuknox-kafka

2.4 Pinot

helm pull accuknox-onprem-prerequisites/pinot-chart --untar
kubectl create namespace accuknox-pinot
helm install accuknox-pinot pinot-chart -n accuknox-pinot

Pods should be in running status

kubectl get pods -n accuknox-pinot

Loading Schema and Tables

  • To load the schemas, you need the Pinot dashboard. Run the command below to port-forward the controller and open http://localhost:9000/ in the browser.

kubectl port-forward svc/accuknox-pinot-controller -n accuknox-pinot 9000:9000

  • Select Swagger REST API from the side panel


Schema Creation

Download the pinot-schemas.zip file, unzip it, and load all the schemas using the steps below:

Click to Download Pinot Schemas

  • Under Schema, select POST /schemas (Add a new schema)


  • Click Try it out


  • Paste the schema file contents into the body object and click Execute


  • You should get a 200 response code; follow the same steps to create all the schemas.

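
Alternatively, the same REST endpoint can be driven from the command line while the port-forward is active (a sketch, assuming the unzipped schema files are the only JSON files in the current directory):

for f in *.json; do
  # POST each schema definition to the Pinot controller
  curl -s -X POST -H "Content-Type: application/json" \
       -d @"$f" http://localhost:9000/schemas
done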

Tables Creation

Download the pinot-table.zip file and unzip it, then load all the tables using the steps below.

Click to Download Pinot Tables

  • Under Table, select POST /tables (Add a new table).


  • Click Try it out


  • Paste the table configuration file contents into the body object and click Execute


  • You should get a 200 response code; follow the same steps to create all the tables.

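
The table definitions can likewise be posted with curl (same assumptions as the schema sketch above):

for f in *.json; do
  # POST each table definition to the Pinot controller
  curl -s -X POST -H "Content-Type: application/json" \
       -d @"$f" http://localhost:9000/tables
done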

2.5 Temporal

  • Temporal server deployment steps
kubectl create ns accuknox-temporal
git clone https://github.com/temporalio/helm-charts.git
mv helm-charts temporal-server-chart
helm dep up temporal-server-chart
helm upgrade --install accuknox-temporal-server temporal-server-chart  --set server.replicaCount=1 --set cassandra.config.cluster_size=1 --set prometheus.enabled=false --set grafana.enabled=false --set elasticsearch.enabled=false --timeout 15m -n accuknox-temporal
kubectl get all -n accuknox-temporal


  • Register the accuknox-temporal namespace from the admin-tools pod
kubectl exec -n accuknox-temporal -it pod/accuknox-temporal-server-admintools-<pod-id> -- /bin/bash
tctl --ns accuknox-temporal namespace register

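To confirm the registration, the namespace can be inspected from the same admin-tools pod (illustrative):

tctl --ns accuknox-temporal namespace describe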

  • Port-forward the temporal-web service (port 8088) to view the Temporal workflows UI
kubectl port-forward svc/accuknox-temporal-server-web 8088:8088 -n accuknox-temporal

Step 3: Install Core Components

Add Accuknox repository to install Core Components helm package:

helm repo add accuknox-onprem-services https://USERNAME:PASSWORD@<accuknox-registry-host>/repository/accuknox-onprem-services
helm repo update
helm search repo accuknox-onprem-services

Run the script below to install all the Accuknox Core Components.

Click to Download the Script

All Pods should have the Running status

kubectl get pods -A

Install Datapipeline API

Run the below command to install the data pipeline API component

helm upgrade --install accuknox-datapipeline-api \
             accuknox-onprem-services/data-pipeline-api-charts \
             --set prometheus.host=<hostname> \
             --set prometheus.user=<username> \
             --set prometheus.password=<password> \
             -n accuknox-datapipeline-api

  • prometheus.host – the Prometheus hostname

  • prometheus.user – the Prometheus username

  • prometheus.password – the Prometheus password

  • Pods should be in Running status
kubectl get pods -n accuknox-datapipeline-api


Install shared-informer-service

  • Install Nginx ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
  • The Ingress Nginx controller's external IP should be mapped to a DNS A record such as sis-api.accuknox.com

kubectl get svc -n ingress-nginx

Run the below command to install the shared-informer-service component

  • Set the host value to the DNS name created above
helm upgrade --install accuknox-shared-informer-service \
             accuknox-onprem-services/shared-informer-service-chart/ \
             -n accuknox-shared-informer-service \
             --set host=<sis-api.accuknox.com>
  • Pods should be in Running status

kubectl get pods -n accuknox-shared-informer-service

kubectl get ingress -n accuknox-shared-informer-service


Install Knoxautopolicy

Run the below command to pull the Knoxautopolicy helm chart

helm pull accuknox-onprem-services/knox-auto-policy-chart --untar
  • Copy the password from the user.password file.

  • Copy the contents of the ca.pem.base64 and user.p12.base64 files and paste them into the templates/secrets.yaml file in the knox-auto-policy-chart.


  • The cluster-management-url value is the Istio gateway host, e.g. https://api-onprem.accuknox.com

  • Set the user_password value to the contents of the Kafka user.password file

helm install knox-auto-policy-chart \
             accuknox-onprem-services/knox-auto-policy-chart \
             --set config.application.cluster.cluster-mgmt-url=<cluster-management-url> \
             --set config.feed-consumer.kafka.keystore.pword=<user_password> \
             -n accuknox-knoxautopolicy

Pods should be in Running status

kubectl get pods -n accuknox-knoxautopolicy

Configure Keycloak

Open the Keycloak console at the DNS name created earlier, e.g. keycloak.example.com


  • Click on Administration Console and log in (default username: admin, default password: admin)

  • Click the master realm drop-down and add a new realm


  • Click on the Realm settings → General section


  • Click on the login section


  • Click on the Tokens section and configure the token lifetimes as required


  • Click Clients → Create to import the client files, and upload the files from the accuknox-ui zip folder below.


  • Click on accuknox-ui


  • Click on the Credentials tab


  • Note the client secret and client ID; they are needed when installing user-management

  • Click on the Service Account Roles tab

  • Under Client Roles, select account, select all the available roles, and click Add selected.

  • Select broker, select the available roles, and click Add selected.

  • Select realm-management, select all the available roles, and click Add selected.

Step 4: Install Accuknox UI

It is mandatory to have the Istio API gateway endpoint in place before deploying the Accuknox UI, since the UI is attached to that API gateway.

  • The Ingress Nginx controller's external IP should be mapped to a DNS A record such as app-onprem.accuknox.com

    kubectl get svc -n ingress-nginx
    

  • UI Installation using helm

    helm upgrade --install accuknox-ui accuknox-onprem-services/accuknox-ui --set ingress.hostname=<ui-hostname>
    

  • ClusterIssuer should be in Ready state

    kubectl get ClusterIssuer
    

  • Everything should be in Running status

    kubectl get pod
    
    kubectl get ingress
    

Install user management

  • The client_id and client_secret values must be taken from the Keycloak configuration above

    helm upgrade --install accuknox-user-mgmt-service \
                 accuknox-onprem-services/user-management-service-chart \
                 --set config.ui.host=<app.accuknox.com> \
                 --set config.client_secret=<client_secret> \
                 --set config.client_id=<client_id> \
                 -n accuknox-user-mgmt
    

  • Pods should be in Running status

    kubectl get pods -n accuknox-user-mgmt
    


Login Screen



Step 5: Onboarding Steps

Onboard your Kubernetes cluster into the AccuKnox control plane.

To know more, refer to the cluster onboarding documentation.
