
Monitoring your cluster using ELK

Overview

This ELK setup provides a distributed monitoring solution suitable for almost any structured or unstructured data source.

Step 1: Elastic Operator Deployment

1.1 Create a namespace of your choice. Example: elastic-logging

kubectl create ns elastic-logging

Note: Use the feeder-service namespace, if required.

1.2 Clone the git repository

git clone -b dev https://github.com/accuknox/Accuknox-Logging
  • Navigate into the directory that holds the eck-operator folder.

1.3 Helm Install (Elastic)

  • Install the CRDs and deploy the operator with cluster-wide permissions to manage all namespaces.
    helm repo add elastic https://helm.elastic.co
    helm install elastic-operator eck-operator -n <namespace>
    
  • Set elasticsearch.enabled to true in values.yaml to install Elasticsearch along with the feeder.
    elasticsearch:
      enabled: true
    
    Note: If an ELK stack is already set up and running on the cluster, applying the CRDs may fail.
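
  • To confirm the operator and the Elasticsearch resource came up, you can query the pods and the ECK custom resource (HEALTH should eventually report green; resource and namespace names here follow the defaults used elsewhere in this guide):

    kubectl get pods -n <namespace>
    kubectl get elasticsearch -n <namespace>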

1.4 Helm Install (Kibana)

  • Set kibana.enabled to true in values.yaml to install Kibana along with the feeder.

    kibana:
      enabled: true
    

  • Navigate into the directory that holds the kibana folder.
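
  • The Kibana custom resource reports its health the same way; a quick check:

    kubectl get kibana -n <namespace>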

1.5 Beats Setup

  • The agent will be spun up with Filebeat running alongside it as a sidecar.

  • The Filebeat configuration file in the package can be updated to point to a specific Elastic instance, and the logs can then be viewed in Kibana.

a. Elastic Configuration Parameters

  • We will create a ConfigMap named filebeat-configmap with the contents of the filebeat.yml file.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: filebeat-configmap
    data:
      filebeat.yml: |
        filebeat.inputs:
        - type: log
    
          # Change to true to enable this input configuration.
          enabled: true
    
          # Paths that should be crawled and fetched. Glob based paths.
          paths:
          - /var/log/*.log
        output.elasticsearch:
          hosts: ${ELASTICSEARCH_HOST}
          username: ${ELASTICSEARCH_USERNAME}
          password: ${ELASTICSEARCH_PASSWORD}
          ssl.verification_mode: none
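
  • Once the manifest above is saved to a file, the ConfigMap can be created with kubectl (filebeat-configmap.yaml is an assumed file name):

    kubectl apply -f filebeat-configmap.yaml -n <namespace>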
    
  • The configuration parameters below can be updated if the default Elastic parameters need to be modified.

     - name: ELASTICSEARCH_HOST
       value: https://<svc-name>
     - name: ELASTICSEARCH_PORT
       value: "<svc-port>"
     - name: ELASTICSEARCH_USERNAME
       value: "elastic"
     - name: ELASTICSEARCH_PASSWORD
       value: "<elastic-password>"
    

  • To get the elastic user password:

    kubectl get secret elasticsearch-es-elastic-user -o go-template='{{.data.elastic | base64decode}}' -n <namespace>
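
  • For reference, a minimal sketch of how these parameters are typically wired into the Filebeat sidecar's container spec (the container name and image tag are illustrative; the secret name and key match the ECK defaults used above):

    containers:
    - name: filebeat-sidecar
      image: docker.elastic.co/beats/filebeat:8.5.1
      env:
      - name: ELASTICSEARCH_HOST
        value: https://elasticsearch-es-http
      - name: ELASTICSEARCH_USERNAME
        value: "elastic"
      - name: ELASTICSEARCH_PASSWORD
        valueFrom:
          secretKeyRef:
            name: elasticsearch-es-elastic-user
            key: elastic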
    

b. Updating Elastic Search Host (Runtime)

kubectl set env deploy/feeder -n feeder-service ELASTICSEARCH_HOST="https://elasticsearch-es-http"
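
To verify the change took effect, list the deployment's environment variables:

kubectl set env deploy/feeder -n feeder-service --list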

c. Update Log Path

  • To update the configured log path, modify the log input path under filebeat.inputs as shown below.
    filebeat.inputs:
    - type: container
      paths:
      - /log_output/cilium.log
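
  • Filebeat reads its configuration only at startup, so after editing the ConfigMap, restart the pod that runs the sidecar to pick up the change (assuming the sidecar runs in the feeder Deployment, as in section b):

    kubectl rollout restart deploy/feeder -n feeder-service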
    

1.6 Kibana Dashboard

  • Once Filebeat starts listening, an index will be created or updated on the configured Elastic instance and the pushed logs can be seen (a quick way to verify the index is shown after this list).

  • In order to create a dashboard, you will need to build visualizations. Kibana has two panels for this:

    • One called Visualize and

    • Another called Dashboard

  • To create your dashboard, first create each individual visualization in the Visualize panel and save it.
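
  • Before building visualizations, it can help to confirm that the filebeat index actually exists; one way, assuming the Elasticsearch service name from section 1.5(b) and the elastic password retrieved earlier (run the port-forward in a separate terminal):

    kubectl port-forward svc/elasticsearch-es-http 9200:9200 -n <namespace>
    curl -k -u elastic:<elastic-password> "https://localhost:9200/_cat/indices/filebeat*"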

1.7 Successful Installation

kubectl get all -n <namespace>
kubectl port-forward svc/kibana-kb-http 5601:5601

  • All the pods should be up and running.
  • The Kibana UI with the filebeat index should be visible (after the Beats installation).

Step 2: Verify

On-Prem Elastic

  • The On-Prem Elastic setup provides the ability to push agent logs to an Elastic host using Beats and the feeder agent.
  • Elasticsearch is a search and analytics engine. It is an open source, full-text search and analysis engine, based on the Apache Lucene search engine.
  • Logstash is a log aggregator that collects data from various input sources, executes different transformations and enhancements and then ships the data to various supported output destinations.
  • Kibana is a visualization layer that works on top of Elasticsearch, providing users with the ability to analyze and visualize the data.
  • Beats are lightweight agents that are installed on edge hosts to collect different types of data for forwarding into the stack.

2.1 Status of Elastic with On-Prem Feeder

  • Run the below command to check whether the agent and dependent pods are up and running.

    kubectl get all -n <namespace>
    

  • All the pods/services should be in the Running state.

2.2 Status of Beats with On-Prem Feeder

  • Once the feeder agent starts running, exec into the filebeat sidecar and run Filebeat in debug mode as below.
    kubectl exec -it <podname> -c filebeat-sidecar -n <namespace> -- sh
    filebeat -e -d "*"
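
  • Filebeat also ships a built-in connectivity check; from inside the sidecar, you can test the configured Elasticsearch output (the config path shown is the image default and may differ in this package):

    filebeat test output -c /usr/share/filebeat/filebeat.yml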
    

Step 3: Metrics

  • Once the feeder agent starts running, check the logs using the below command:
    kubectl logs -f <podname> -n <namespace>