
Forwarding metrics to On-prem Feeder

Overview

On Prem Feeder

  • The On-Prem Feeder makes it possible to push agent logs to an Elasticsearch host using Beats and the feeder agent.
  • The feeder agent can also push metrics into an on-prem Prometheus.
  • Prometheus collects and stores its metrics as time-series data, i.e., metric values are stored with the timestamp at which they were recorded, alongside optional key-value pairs called labels.
  • Elasticsearch is an open-source, full-text search and analytics engine based on Apache Lucene.
  • Logstash is a log aggregator that collects data from various input sources, applies transformations and enrichments, and then ships the data to supported output destinations.
  • Kibana is a visualization layer that works on top of Elasticsearch, providing users with the ability to analyze and visualize the data.
  • Beats are lightweight agents installed on edge hosts to collect different types of data for forwarding into the stack.

Step 1: Installation of Feeder Service (On Prem Without ELK)

  • Since the Elastic and Kibana resources are defined in the feeder service's values.yaml, their installation can be toggled along with the feeder-service as shown below.
helm repo add accuknox-onprem-agents https://USERNAME:[email protected]/repository/accuknox-onprem-agents
helm repo update
helm search repo accuknox-onprem-agents
kubectl create ns accuknox-feeder-service
helm upgrade --install --set elastic.enabled=false --set kibana.enabled=false accuknox-feeder-service accuknox-onprem-agents/feeder-service -n accuknox-feeder-service
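The toggle flags above can also be scripted when the same install is repeated across clusters. A minimal sketch, assuming the chart and namespace names from the commands above (the `ELK_ENABLED` switch is an assumption introduced here, not a chart value):

```shell
#!/usr/bin/env bash
# Build the helm upgrade command for the feeder service, toggling the
# bundled Elastic/Kibana installation with a single switch.
ELK_ENABLED="${ELK_ENABLED:-false}"   # set to "true" to install ELK alongside

build_feeder_install_cmd() {
  local elk="$1"
  echo "helm upgrade --install" \
       "--set elastic.enabled=${elk}" \
       "--set kibana.enabled=${elk}" \
       "accuknox-feeder-service accuknox-onprem-agents/feeder-service" \
       "-n accuknox-feeder-service"
}

build_feeder_install_cmd "$ELK_ENABLED"
```

Echoing the command first makes it easy to review the rendered flags before running them.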

Step 2: Installation of Feeder Service (On Prem With ELK)

  • Set the elastic resource to true to install Elasticsearch along with the feeder.
helm upgrade --install --set elasticsearch.enabled=true --set kibana.enabled=true accuknox-feeder-service accuknox-onprem-agents/feeder-service -n accuknox-feeder-service
Alternatively, an ELK setup already running on the cluster can be used.

Note: If an ELK setup is already running on the cluster, applying the CRDs may fail.

  • The Elastic master and data pods should be up and running in the same namespace.

Step 3: View Metrics

Feeder as a SERVER

  • Toggle the below variable to push metrics directly to an endpoint.
    - name: GRPC_SERVER_ENABLED
      value: "true"
  • Once the feeder agent starts running, the metrics should start flowing.

  • Use the localhost:8000/metrics endpoint to check the metrics flow.

Feeder as a CLIENT

  • Toggle the below variables to push metrics to the gRPC server in the SaaS platform / client platform.
    - name: GRPC_CLIENT_ENABLED
      value: "true"
    - name: GRPC_SERVER_URL
      value: "<grpc_server_host>"
    - name: GRPC_SERVER_PORT
      value: "<grpc_server_port>"
  • Once the feeder agent starts running, the metrics will be pushed to Prometheus.
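In client mode, both GRPC_SERVER_URL and GRPC_SERVER_PORT must be set. A quick sketch of how the two values combine into a dial target (the variable names come from the list above; the host and port values are placeholders, and the combination shown is an assumption about typical gRPC client behavior):

```shell
#!/usr/bin/env bash
# Combine the two env variables from the list above into a host:port
# dial target, the form a gRPC client typically expects.
GRPC_SERVER_URL="${GRPC_SERVER_URL:-feeder.example.internal}"  # placeholder host
GRPC_SERVER_PORT="${GRPC_SERVER_PORT:-9090}"                   # placeholder port

GRPC_TARGET="${GRPC_SERVER_URL}:${GRPC_SERVER_PORT}"
echo "dialing ${GRPC_TARGET}"
```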

Note: All of the above can be updated at runtime, as in Step 4.3.

3.1 Installation of Prometheus

3.2 Prometheus Configuration

  • Add the below configuration to Prometheus (on-prem) to see the agent metrics in Prometheus.
      scrape_configs:
      - job_name: <feeder>-chart
        honor_timestamps: true
        scrape_interval: 30s
        scrape_timeout: 10s
        metrics_path: /metrics
        scheme: http
        follow_redirects: true
        static_configs:
        - targets:
          - <localhost>:8000

    Note: Alternatively, the target can be any gRPC server host/port (in the case of the feeder client).
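When the same scrape job is rolled out with different targets (local feeder vs. a remote gRPC server), it can be templated from shell variables. A small sketch; the variable names and default values are assumptions introduced here:

```shell
#!/usr/bin/env bash
# Render the Prometheus scrape job shown above from shell variables so
# the same snippet can target either the local feeder or a remote server.
FEEDER_JOB="${FEEDER_JOB:-feeder-chart}"
FEEDER_TARGET="${FEEDER_TARGET:-localhost:8000}"

render_scrape_config() {
  cat <<EOF
scrape_configs:
- job_name: ${FEEDER_JOB}
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  static_configs:
  - targets:
    - ${FEEDER_TARGET}
EOF
}

render_scrape_config
```

The rendered output can be appended to the Prometheus configuration file and validated before a reload.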

3.3. Metrics Labelling (Prometheus)

  • The Cilium metrics can be seen under the below label prefix:
    cilium_<metricname> (e.g., cilium_http_requests_total)

  • The KubeArmor metrics can be seen under the below label prefix:
    kubearmor_<metricname> (e.g., kubearmor_action_requests_total)

  • The VAE metrics can be seen under the below label prefix:
    vae_<metricname> (e.g., vae_Proc_Count_API_requests_total)
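Given the prefixes above, the per-agent metrics can be separated out of a /metrics dump with a simple grep. The sample lines below are fabricated for illustration; a real dump would come from the metrics endpoint mentioned in Step 3:

```shell
#!/usr/bin/env bash
# Filter a Prometheus exposition-format dump by the per-agent prefixes
# listed above. The sample lines are made up for illustration; a real
# dump would come from e.g. `curl -s localhost:8000/metrics`.
sample_metrics() {
  cat <<'EOF'
cilium_http_requests_total{method="GET"} 42
kubearmor_action_requests_total{action="Block"} 7
vae_Proc_Count_API_requests_total 3
EOF
}

# Count metrics per agent prefix.
for prefix in cilium_ kubearmor_ vae_; do
  count=$(sample_metrics | grep -c "^${prefix}")
  echo "${prefix}* metrics: ${count}"
done
```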
    

Step 4: Forwarding Logs to Elastic

4.1. Beats Setup

  • The Beats agent is spun up with Filebeat running alongside as a sidecar.
  • The Filebeat configuration file in the package can be updated to point to specific Elastic instances, and the logs can then be viewed in Kibana.

  • The logs are forwarded to Elastic when the below env variable is enabled.

    - name: ELASTIC_FEEDER_ENABLED
      value: "true"
    

4.2. Elastic Configuration Parameters:

  • We will create a ConfigMap named filebeat-configmap with the content of the filebeat.yml file.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: filebeat-configmap
    data:
      filebeat.yml: |
        filebeat.inputs:
        - type: log

          # Change to true to enable this input configuration.
          enabled: true

          # Paths that should be crawled and fetched. Glob based paths.
          paths:
          - /var/log/*.log
        output.elasticsearch:
          hosts: ["${ELASTICSEARCH_HOST}"]
          username: ${ELASTICSEARCH_USERNAME}
          password: ${ELASTICSEARCH_PASSWORD}
          ssl.verification_mode: none
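Filebeat resolves the ${ELASTICSEARCH_*} references from the container environment at startup. The effect can be previewed locally with sed as a rough stand-in for Filebeat's own expansion (the env values below are placeholders):

```shell
#!/usr/bin/env bash
# Preview what Filebeat's ${VAR} expansion will produce for the output
# section above. sed is only a stand-in here; Filebeat resolves the
# references itself at startup.
export ELASTICSEARCH_HOST="https://elasticsearch-es-http:9200"  # placeholder
export ELASTICSEARCH_USERNAME="elastic"

printf 'hosts: ["${ELASTICSEARCH_HOST}"]\nusername: ${ELASTICSEARCH_USERNAME}\n' |
  sed -e "s|\${ELASTICSEARCH_HOST}|${ELASTICSEARCH_HOST}|" \
      -e "s|\${ELASTICSEARCH_USERNAME}|${ELASTICSEARCH_USERNAME}|"
```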
    
  • The below configuration parameters can be updated for the Elastic configuration.

    (if the default params need to be modified)

     - name: ELASTICSEARCH_HOST
       value: https://<svc-name>
     - name: ELASTICSEARCH_PORT
       value: "<svc-port>"
     - name: ELASTICSEARCH_USERNAME
       value: "elastic"
     - name: ELASTICSEARCH_PASSWORD
       value: "<elastic-password>"
    

  • To get the Elastic password:

    kubectl get secret elasticsearch-es-elastic-user -o go-template='{{.data.elastic | base64decode}}' -n <namespace>
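Kubernetes Secret data is stored base64-encoded, and the `base64decode` in the go-template above reverses that encoding. The round-trip can be illustrated locally (the password value below is a placeholder):

```shell
#!/usr/bin/env bash
# Secret data is base64-encoded; the go-template's base64decode reverses
# it. Demonstrated locally with a placeholder password.
plain="s3cr3t-placeholder"
encoded=$(printf '%s' "$plain" | base64)   # what would sit in .data.elastic
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "encoded=${encoded}"
echo "decoded=${decoded}"
```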
    

4.3. Updating the Elasticsearch Host (Runtime) (if required to switch to a different Elastic host)

kubectl set env deploy/feeder-service -n accuknox-feeder-service  ELASTICSEARCH_HOST="https://elasticsearch-es-http.test-feed.svc.cluster.local:9200" ELASTICSEARCH_USERNAME=elastic ELASTICSEARCH_PASSWORD=xxxxxxxxxx
Note: Likewise, other configuration parameters can be updated at runtime.

4.4. Update Log Path:

  • To view the logs, use the below command.
kubectl exec -it -n accuknox-feeder-service pod/<podname> -c filebeat-sidecar -- /bin/bash
  • To update the configured log path, modify the below log input path under filebeat.inputs.
    filebeat.inputs:
    - type: container
      paths:
      - /log_output/cilium.log

Step 5: Forwarding Logs to Splunk

  • The logs are forwarded to Splunk when the below env variable is enabled.
    - name: SPLUNK_FEEDER_ENABLED
      value: "true"
    
  • The below configuration parameters can be updated for the Splunk configuration.

    (if the default params need to be modified)

     - name: SPLUNK_FEEDER_URL
       value: https://<splunk-host>
     - name: SPLUNK_FEEDER_TOKEN
       value: "Token configured on HEC in Splunk App"
     - name: SPLUNK_FEEDER_SOURCE_TYPE
       value: "Source Type configured on HEC in Splunk App"
     - name: SPLUNK_FEEDER_SOURCE
       value: "Splunk Source configured on HEC in Splunk App"
     - name: SPLUNK_FEEDER_INDEX
       value: "Splunk Index configured on HEC in Splunk App"
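To verify the HEC URL and token independently of the feeder, a test event can be posted to Splunk's HTTP Event Collector endpoint (`/services/collector/event`, with an `Authorization: Splunk <token>` header). The sketch below only assembles the curl command; the host, port, and token are placeholders:

```shell
#!/usr/bin/env bash
# Assemble a curl command that posts a test event to Splunk's HTTP Event
# Collector (HEC). Host, port, and token are placeholders; run the
# echoed command against a real HEC endpoint to verify the credentials.
SPLUNK_FEEDER_URL="${SPLUNK_FEEDER_URL:-https://splunk.example.internal:8088}"
SPLUNK_FEEDER_TOKEN="${SPLUNK_FEEDER_TOKEN:-00000000-0000-0000-0000-000000000000}"

hec_test_cmd() {
  echo "curl -k ${SPLUNK_FEEDER_URL}/services/collector/event" \
       "-H 'Authorization: Splunk ${SPLUNK_FEEDER_TOKEN}'" \
       "-d '{\"event\": \"feeder connectivity test\"}'"
}

hec_test_cmd
```

A successful post returns `{"text":"Success","code":0}` from Splunk; any other response points to a token, index, or network issue.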
    

5.1. Enabling/Disabling Splunk (Runtime):

kubectl set env deploy/feeder -n feeder-service SPLUNK_FEEDER_ENABLED="true"
  • By enabling the flag to "true" (as above), the logs will be pushed to Splunk. Conversely, setting it to "false" will stop pushing logs.

Note: Likewise, other configuration parameters can be updated at runtime.
