
Elastic Operator Deployment

1. Create a namespace of your choice (example: elastic-logging).

kubectl create ns elastic-logging

Note: Use the feeder-service namespace if required.

2. Clone the git repository

git clone -b dev https://github.com/accuknox/Accuknox-Logging
  • Navigate into the directory that contains the eck-operator folder.

3. Helm Install (Elastic)

  • Install the CRDs and deploy the operator with cluster-wide permissions to manage all namespaces.
    helm repo add elastic https://helm.elastic.co
    helm install elastic-operator eck-operator -n <namespace>
    
  • Set the elasticsearch resource to true in values.yaml to install Elasticsearch along with the feeder.
    elasticsearch:
      enabled: true
    
    Note: If an ELK setup is already running on the cluster, applying the CRDs may fail.
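The chart's elasticsearch flag tells the ECK operator to create an Elasticsearch custom resource. If you prefer to manage that resource outside the chart, a minimal manifest is sketched below; the name, version, and node count are illustrative assumptions, not values taken from this chart:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch              # illustrative name
spec:
  version: 8.5.1                   # illustrative version
  nodeSets:
  - name: default
    count: 1                       # single node, suitable for a test cluster
    config:
      node.store.allow_mmap: false # avoids vm.max_map_count tuning on test nodes
```

The operator watches this resource and creates the pods, the services (such as elasticsearch-es-http), and the elasticsearch-es-elastic-user secret referenced later in this guide.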

4. Helm Install (Kibana)

  • Set the kibana resource to true in values.yaml to install Kibana along with the feeder.

    kibana:
      enabled: true
    

  • Navigate into the directory that contains the kibana folder.
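With kibana enabled, the chart asks the operator for a Kibana custom resource. If you want to create it directly instead, a minimal ECK manifest looks like the sketch below (name, version, and count are illustrative; elasticsearchRef must match the name of the Elasticsearch resource):

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana                 # illustrative; yields the kibana-kb-http service
spec:
  version: 8.5.1               # illustrative; should match the Elasticsearch version
  count: 1
  elasticsearchRef:
    name: elasticsearch        # name of the Elasticsearch custom resource
```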

5. Beats Setup

  • The agent will be spun up with Filebeat running alongside it as a sidecar.
  • The Filebeat configuration file in the package can be updated to point to specific Elastic instances, and the logs can then be viewed in Kibana.

a. Elastic Configuration Parameters:

  • We will create a ConfigMap named filebeat-configmap with the contents of the filebeat.yml file.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: filebeat-configmap
    data:
      filebeat.yml: |
        filebeat.inputs:
        - type: log
    
          # Change to true to enable this input configuration.
          enabled: true
    
          # Paths that should be crawled and fetched. Glob based paths.
          paths:
          - /var/log/*.log
        output.elasticsearch:
          hosts: ${ELASTICSEARCH_HOST}
          username: ${ELASTICSEARCH_USERNAME}
          password: ${ELASTICSEARCH_PASSWORD}
          ssl.verification_mode: none
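The Filebeat sidecar consumes this ConfigMap as a mounted file. A sketch of how the container spec could mount it is shown below; the container name, image tag, and mount path are assumptions, not taken from the chart:

```yaml
containers:
- name: filebeat                                   # hypothetical sidecar name
  image: docker.elastic.co/beats/filebeat:8.5.1    # illustrative version
  args: ["-c", "/etc/filebeat.yml", "-e"]          # point Filebeat at the mounted config
  volumeMounts:
  - name: filebeat-config
    mountPath: /etc/filebeat.yml
    subPath: filebeat.yml                          # mount only the filebeat.yml key
volumes:
- name: filebeat-config
  configMap:
    name: filebeat-configmap                       # the ConfigMap defined above
```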
    
  • The configuration parameters below can be updated for the Elastic configuration
    (if the default parameters need to be modified):

     - name: ELASTICSEARCH_HOST
       value: https://<svc-name>
     - name: ELASTICSEARCH_PORT
       value: "<svc-port>"
     - name: ELASTICSEARCH_USERNAME
       value: "elastic"
     - name: ELASTICSEARCH_PASSWORD
       value: "<elastic-password>"
    

  • To get the elastic user's password:

    kubectl get secret elasticsearch-es-elastic-user -o go-template='{{.data.elastic | base64decode}}' -n <namespace>
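The go-template in the command above base64-decodes the elastic key of the secret, since Kubernetes stores secret values base64-encoded. The same decoding can be reproduced by hand (the encoded value below is illustrative, not a real password):

```shell
# Kubernetes secret values are base64-encoded; `base64decode` in the
# go-template is equivalent to piping the raw value through base64 -d.
encoded="Y2hhbmdlbWU="            # illustrative value of .data.elastic
echo "$encoded" | base64 -d       # prints: changeme
```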
    

b. Updating Elastic Search Host (Runtime):

kubectl set env deploy/feeder -n feeder-service ELASTICSEARCH_HOST="https://elasticsearch-es-http"

c. Update Log Path:

  • To update the configured log path, modify the log input path under filebeat.inputs:
    filebeat.inputs:
    - type: container
      paths:
      - /log_output/cilium.log
    

6. Kibana Dashboard

  • Once Filebeat starts listening, an index will be created or updated on the configured Elastic instance, and the pushed logs can be seen there.
  • To create a dashboard, you first need to build visualizations. Kibana has two panels for this:
    1. Visualize and
    2. Dashboard
  • To build your dashboard, first create each individual visualization in the Visualize panel and save it.

7. Successful Installation

kubectl get all -n <namespace>
kubectl port-forward svc/kibana-kb-http 5601:5601
  • All the pods should be up and running.
  • The Kibana UI with the filebeat index should be visible (after the Beats installation).