
Elastic and Splunk Integration

Metrics and Logs

  • The On-Prem Feeder makes it possible to push the agent logs to an Elastic host using Beats and the feeder agent.
  • The On-Prem Feeder agent can also push metrics into an on-prem Prometheus.
  • Prometheus collects and stores its metrics as time series data i.e., metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.
  • Elasticsearch is an open-source, full-text search and analytics engine based on the Apache Lucene search engine.
  • Logstash is a log aggregator that collects data from various input sources, executes different transformations and enhancements and then ships the data to various supported output destinations.
  • Kibana is a visualization layer that works on top of Elasticsearch, providing users with the ability to analyze and visualize the data.
  • Beats are lightweight agents that are installed on edge hosts to collect different types of data for forwarding into the stack.
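
As a hypothetical illustration of the Prometheus label model described above, each scraped time series is an exposition line pairing a metric name and key-value labels with a sample value; the metric and label names below are made up, not taken from the feeder:

```shell
# A made-up metric in Prometheus exposition format: name{labels} value.
# The sed expression strips the label braces and value, leaving the name.
printf 'feeder_log_events_total{agent="kubearmor",cluster="onprem"} 1027\n' \
  | sed 's/{.*//'
# prints: feeder_log_events_total
```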

The below section explains the installation of the Feeder Agent, Elastic (optional), and Prometheus (optional).

1. Installation of Feeder Agent

  • Since the Elastic and Kibana resources are passed in the values.yaml of the feeder service, the Elastic/Kibana installation can be toggled along with the feeder-service as below.
    helm repo add accuknox-onprem-agents https://USERNAME:PASSWORD@<registry-host>/repository/accuknox-onprem-agents
    helm repo update
    helm search repo accuknox-onprem-agents
    
    kubectl create ns accuknox-feeder-service
    helm upgrade --install accuknox-eck-operator accuknox-onprem-agents/eck-operator   # only if ELK is required
    helm upgrade --install accuknox-feeder-service accuknox-onprem-agents/feeder-service -n accuknox-feeder-service
    

2. Installation of Elastic

  • Please set the elasticsearch resource to true to install Elastic along with the feeder.
    helm upgrade --install --set elasticsearch.enabled=true --set kibana.enabled=true accuknox-feeder-service accuknox-onprem-agents/feeder-service -n accuknox-feeder-service
    
    Note: If there is ELK set up already running on the cluster, the CRD apply may fail.
  • The Elastic master and data pods should be up and running in the same namespace.

  • Alternatively, the same can be enabled by updating values.yaml:

    elasticsearch:
      enabled: true
    

3. Installation of Kibana

  • Please set the kibana resource to true to install Kibana along with the feeder.
    helm upgrade --install --set elasticsearch.enabled=true  --set kibana.enabled=true accuknox-feeder-service accuknox-onprem-agents/feeder-service -n accuknox-feeder-service
    
  • The Kibana pods should be up and running in the same namespace.

  • Alternatively, the same can be enabled by updating values.yaml:

    kibana:
      enabled: true
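
For reference, the elasticsearch and kibana toggles from steps 2 and 3 can also be kept in a single override file; a sketch (the file name is an example, pass it with helm's -f flag):

```yaml
# values-override.yaml (sketch): enable both optional components at once.
elasticsearch:
  enabled: true
kibana:
  enabled: true
```

Applied, for example, as `helm upgrade --install -f values-override.yaml accuknox-feeder-service accuknox-onprem-agents/feeder-service -n accuknox-feeder-service`.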
    

4. View Metrics

Feeder as a SERVER

  • Please toggle the below variable to push metrics directly to an endpoint.

    - name: GRPC_SERVER_ENABLED
      value: "true"

  • Once the feeder agent starts running, the metrics should start flowing.
  • Please use the localhost:8000/metrics endpoint to check the metrics flow.

Feeder as a CLIENT

  • Please toggle the below variables to push metrics to the gRPC server in the SaaS platform.

    - name: GRPC_CLIENT_ENABLED
      value: "true"
    - name: GRPC_SERVER_URL
      value: "localhost"
    - name: GRPC_SERVER_PORT
      value: "8000"

  • Once the feeder agent starts running, the metrics will be pushed to Prometheus in SaaS and can be viewed in the AccuKnox platform UI.
    Note: All of the above can be updated at runtime as in Step 5.3.

4.1 Installation of Prometheus (Required only when Feeder acts as Server)

Please refer to the Installation of Prometheus page.

4.2 Prometheus Configuration:(Required only when Feeder acts as Server)

  • Please add the below scrape configuration to Prometheus (on-prem) to see the agent metrics in Prometheus.

      - job_name: <feeder>-chart
        honor_timestamps: true
        scrape_interval: 30s
        scrape_timeout: 10s
        metrics_path: /metrics
        scheme: http
        follow_redirects: true
        static_configs:
        - targets:
          - <localhost>:8000
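
For context, in a standalone Prometheus the job above belongs under scrape_configs in prometheus.yml; a minimal sketch of where it sits (global settings and target are placeholders):

```yaml
# prometheus.yml (sketch): the feeder scrape job in its surrounding file.
global:
  scrape_interval: 30s
scrape_configs:
- job_name: feeder-chart      # placeholder job name
  metrics_path: /metrics
  scheme: http
  static_configs:
  - targets:
    - localhost:8000          # replace with the feeder host:port
```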
    

5. View Logs in Elastic

5.1. Beats Setup

  • The Beats agent will be spun up with Filebeat running alongside as a sidecar.
  • The filebeat configuration file in the package can be updated to specific Elastic instances, and logs can be viewed in Kibana.

  • The logs are forwarded to Elastic when the below env variable is enabled.

    - name: ELASTIC_FEEDER_ENABLED
      value: "true"
    

5.2. Elastic Configuration Parameters:

  • We will create a ConfigMap named filebeat-configmap with the contents of the filebeat.yml file.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: filebeat-configmap
    data:
      filebeat.yml: |
        filebeat.inputs:
        - type: log
    
          # Change to true to enable this input configuration.
          enabled: true
    
          # Paths that should be crawled and fetched. Glob based paths.
          paths:
          - /var/log/*.log
        output.elasticsearch:
          hosts: ${ELASTICSEARCH_HOST}
          username: ${ELASTICSEARCH_USERNAME}
          password: ${ELASTICSEARCH_PASSWORD}
          ssl.verification_mode: none
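
The ConfigMap above needs to be mounted into the Filebeat sidecar so that Filebeat reads it as its configuration file; a sketch of the pod-spec fragment (the volume name and mount path are assumptions, not taken from the chart — only the container name filebeat-sidecar and ConfigMap name appear elsewhere in this page):

```yaml
# Pod-spec fragment (sketch): mount filebeat-configmap as filebeat.yml.
containers:
- name: filebeat-sidecar
  volumeMounts:
  - name: filebeat-config          # assumed volume name
    mountPath: /usr/share/filebeat/filebeat.yml
    subPath: filebeat.yml
volumes:
- name: filebeat-config
  configMap:
    name: filebeat-configmap
```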
    
  • The below Configuration parameters can be updated for elastic configuration.

    (if the default parameters need to be modified)

     - name: ELASTICSEARCH_HOST
       value: https://<svc-name>
     - name: ELASTICSEARCH_PORT
       value: "<svc-port>"
     - name: ELASTICSEARCH_USERNAME
       value: "elastic"
     - name: ELASTICSEARCH_PASSWORD
       value: "<elastic-password>"
    

  • To get the elastic user's password:

    kubectl get secret elasticsearch-es-elastic-user -o go-template='{{.data.elastic | base64decode}}' -n <namespace>
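
The go-template above decodes the base64-encoded secret value; the same decoding can be reproduced locally, shown here with a dummy value rather than a real secret:

```shell
# Kubernetes stores Secret data base64-encoded; the go-template's
# base64decode is equivalent to piping the raw value through base64 -d.
echo 'cGFzc3dvcmQxMjM=' | base64 -d
# prints: password123
```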
    

5.3. Updating the Elasticsearch Host (Runtime): (if required to switch to a different Elastic host)

kubectl set env deploy/feeder-service -n accuknox-feeder-service ELASTICSEARCH_HOST="https://elasticsearch-es-http.test-feed.svc.cluster.local:9200" ELASTICSEARCH_USERNAME=elastic ELASTICSEARCH_PASSWORD=xxxxxxxxxx
- Note: Likewise, other configuration parameters can be updated at runtime.

5.4. Validate Log Path:

  • To view logs and check the Filebeat status, please use the below commands:
      kubectl exec -it -n accuknox-feeder-service pod/<podname> -c filebeat-sidecar -- /bin/bash
      filebeat -e
    
  • To update the configured log path, please modify the below log input path under filebeat.inputs.
    filebeat.inputs:
    - type: container
      paths:
      - /log_output/cilium.log
    

6. View Logs in Splunk

  • The logs are forwarded to Splunk when the below env variable is enabled.
    - name: SPLUNK_FEEDER_ENABLED
      value: "true"
    
  • The below Configuration parameters can be updated for Splunk configuration.

    (if the default parameters need to be modified)

     - name: SPLUNK_FEEDER_URL
       value: https://<splunk-host>
     - name: SPLUNK_FEEDER_TOKEN
       value: "Token configured on HEC in Splunk App"
     - name: SPLUNK_FEEDER_SOURCE_TYPE
       value: "Source Type configured on HEC in Splunk App"
     - name: SPLUNK_FEEDER_SOURCE
       value: "Splunk Source configured on HEC in Splunk App"
     - name: SPLUNK_FEEDER_INDEX
       value: "Splunk Index configured on HEC in Splunk App"
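
For reference, Splunk's HTTP Event Collector (HEC) receives events on the /services/collector/event path under the host given in SPLUNK_FEEDER_URL; a small sketch composing that endpoint (the hec_url helper is hypothetical, not part of the feeder, and the host is an example):

```shell
# Hypothetical helper: build the HEC event endpoint from a Splunk host URL,
# trimming any single trailing slash first.
hec_url() {
  echo "${1%/}/services/collector/event"
}
hec_url "https://splunk.example.com:8088/"
# prints: https://splunk.example.com:8088/services/collector/event
```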
    

6.1. Enabling/Disabling Splunk (Runtime):

kubectl set env deploy/feeder -n feeder-service SPLUNK_FEEDER_ENABLED="true"
- By setting the flag to "true" (as above), logs will be pushed to Splunk; conversely, setting it to "false" will stop pushing logs.
- Note: Likewise, other configuration parameters can be updated at runtime.
