
Kubernetes Log Forwarding Configuration

This guide covers log forwarding from Kubernetes clusters to SenseOn, with examples focused on Amazon EKS. It is intended for customers who want to ingest logs from:

  • Amazon EKS (Elastic Kubernetes Service)
  • Google GKE (Google Kubernetes Engine)
  • Azure AKS (Azure Kubernetes Service)
  • Self-hosted Kubernetes

Deployment Scenarios

SenseOn supports two deployment scenarios for Kubernetes environments. Both use the same architecture (FluentBit → external collector); they differ only in how FluentBit is deployed.

Quick Decision Guide

Choose Scenario 1 if:

  • You already have FluentBit or Fluentd running
  • You want to maintain control of log collection
  • You prefer self-service configuration

Choose Scenario 2 if:

  • You have no existing log gathering infrastructure
  • You prefer fully managed log collection
  • You want SenseOn to handle configuration
  • You're comfortable granting SenseOn limited cluster access, scoped only to the relevant resources

Scenario 1: You Have Existing FluentBit

Overview

If you already have FluentBit (or Fluentd) deployed in your cluster, you just need to add a new OUTPUT block to forward logs to SenseOn.

Architecture:

Application Pods → FluentBit (already present) → Add OUTPUT → SenseOn Collector

Prerequisites

  • Existing FluentBit or Fluentd deployment
  • Cluster admin access to modify ConfigMaps
  • Outbound HTTPS (443) access to *.snson.net
  • Your applications logging JSON to stdout (or FluentBit parsers configured)
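
A quick spot-check for the last prerequisite: tail one of your application pods and confirm each line is a single JSON object (the pod and namespace names below are placeholders).

# Replace with one of your application pods
kubectl logs <your-app-pod> -n <app-namespace> --tail=5

# Each line should be one JSON object, for example:
# {"timestamp":"2025-10-21T10:00:00Z","level":"info","message":"user login"}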

Step 1: Identify Your FluentBit Configuration

Find your existing FluentBit ConfigMap:

# Find FluentBit namespace
kubectl get pods --all-namespaces | grep fluent

# Get the ConfigMap
kubectl get configmap -n <fluent-namespace>
kubectl get configmap fluent-bit-config -n <fluent-namespace> -o yaml

Step 2: Add SenseOn OUTPUT Block

Edit your FluentBit ConfigMap to add a new OUTPUT section:

kubectl edit configmap fluent-bit-config -n <fluent-namespace>

Add a new OUTPUT block to the outputs: section:

outputs: |
    # Your existing outputs (keep these)
    [OUTPUT]
        Name          forward
        Match         *
        Host          your-internal-aggregator.example.com
        Port          24224

    # NEW: Add this OUTPUT for SenseOn
    [OUTPUT]
        Name            http
        Match           *
        Host            <your-code>-collector.snson.net
        Port            443
        URI             /log_sink
        Format          json_lines
        json_date_key   false
        tls             On
        tls.verify      On
        Header          Content-Type application/json
        Retry_Limit     3
        net.keepalive   On

Key settings explained:

  • Match * - Forwards all logs (you can filter with Match kube.*app-name* for specific apps)
  • Format json_lines - Sends newline-delimited JSON (required by SenseOn)
  • json_date_key false - Let SenseOn add timestamps server-side
  • tls On - Encrypt traffic (required)
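
For example, to forward only a specific application rather than everything, a narrower OUTPUT can be used. This is a sketch: the kube.api tag assumes your INPUT tags records that way (as in the full configuration example later in this guide).

[OUTPUT]
    Name            http
    Match           kube.api
    Host            <your-code>-collector.snson.net
    Port            443
    URI             /log_sink
    Format          json_lines
    json_date_key   false
    tls             On
    tls.verify      On
    Header          Content-Type application/json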

Step 3: Restart FluentBit Pods

# Restart FluentBit to apply configuration
kubectl rollout restart daemonset/fluent-bit -n <fluent-namespace>

# Verify pods are running
kubectl get pods -n <fluent-namespace>

# Check pod logs for errors
kubectl logs -n <fluent-namespace> -l app=fluent-bit --tail=50

Step 4: Verify Logs Are Being Sent

# Check FluentBit logs for HTTP status
kubectl logs -n <fluent-namespace> -l app=fluent-bit --tail=100 | grep "HTTP status"

# Should see: HTTP status=201 (logs accepted successfully)

Example successful output:

[2025/10/21 10:00:05] [ info] [output:http:http.1] <your-code>-collector.snson.net:443, HTTP status=201

Step 5: Test with Sample Application

If you want to verify end-to-end, deploy a test pod that generates JSON logs:

kubectl run test-logger --image=busybox --restart=Never -- sh -c \
  'while true; do echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"level\":\"info\",\"message\":\"test log\",\"source\":\"test-pod\"}"; sleep 5; done'

# Check test pod logs
kubectl logs test-logger

# Verify FluentBit picked them up
kubectl logs -n <fluent-namespace> -l app=fluent-bit --tail=20 | grep test-pod
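
When you have finished testing, remove the test pod:

kubectl delete pod test-logger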

Scenario 2: No Existing FluentBit (Managed Deployment)

Overview

If you don't have FluentBit deployed, SenseOn can deploy and manage the complete log collection infrastructure for you.

Architecture:

Application Pods → FluentBit DaemonSet (deployed by SenseOn) → SenseOn Collector

Prerequisites

  • Kubernetes cluster (EKS, GKE, AKS, or self-hosted)
  • Cluster admin credentials
  • List of application/pod names you want to monitor
  • Network access details (VPN if applicable)

Step 1: Gather Required Information

Prepare this information for SenseOn:

  1. Cluster Details:

    • Cloud provider (AWS EKS / GCP GKE / Azure AKS / Self-hosted)
    • Cluster name
    • Region/zone
    • Kubernetes version
  2. Network Access:

    • Cluster endpoint URL
    • VPN details (if cluster is private)
    • Any network restrictions or firewall rules
  3. Applications to Monitor:

    • List of application names
    • Pod naming patterns (e.g., customer-api-*, nginx-* or actual pod names if feasible)
    • Namespaces where applications run
    • Expected log formats (JSON, plain text, etc.); non-JSON logs must be transformed to JSON before forwarding
  4. RBAC Requirements:

    • Any security policies in place (PSP, OPA, Kyverno)

Example application list:

applications:
  - name: nginx-ingress
    pod_pattern: ingress-nginx-controller*
    namespace: ingress-nginx
    log_format: json

  - name: api-server
    pod_pattern: api-server-*
    namespace: production
    log_format: json

  - name: database
    pod_pattern: postgres-*
    namespace: data
    log_format: text
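
A few read-only commands can help collect the details above (assuming kubectl is already pointed at the cluster):

# Kubernetes version and node details
kubectl version
kubectl get nodes -o wide

# Namespaces and pod names for the applications to be monitored
kubectl get namespaces
kubectl get pods -n <app-namespace> -o name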

Step 2: Contact SenseOn

Provide the gathered information to SenseOn (via your SenseOn account manager, technical contact, customer success representative or [email protected]).

SenseOn will provide:

  • Custom Terraform package configured for your environment
  • Instructions for deploying it
  • An overview of the next steps SenseOn will carry out

Step 3: Grant Cluster Access

SenseOn will provide a Terraform package. Run it to create necessary RBAC permissions:

Example Terraform execution:

# Extract the provided Terraform package
tar -xzf senseon-fluentbit-deployment.tar.gz
cd senseon-fluentbit-deployment

# Review the Terraform plan
terraform init
terraform plan

# Apply (creates ServiceAccount, ClusterRole, namespace)
terraform apply

What this creates:

  • Namespace for FluentBit (e.g., senseon-logging)
  • ServiceAccount with appropriate permissions
  • ClusterRole and ClusterRoleBinding for log collection
  • Network policies (if required)

This grants very limited cluster access: it allows the SenseOn technical team to deploy the forwarder, with access restricted to the relevant tagged resources.
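
To confirm the objects exist after terraform apply, a quick check (the names below assume the example senseon-logging namespace; the Terraform output lists the actual names):

kubectl get namespace senseon-logging
kubectl get serviceaccount -n senseon-logging
kubectl get clusterrole,clusterrolebinding | grep -i senseon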

Step 4: SenseOn Deploys FluentBit

After cluster access is granted, SenseOn will:

  1. Deploy a FluentBit DaemonSet (a log collection and forwarding agent on each node)
  2. Configure INPUT blocks for your specified applications
  3. Set up appropriate parsers for your log formats
  4. Configure OUTPUT to your assigned collector endpoint
  5. Verify logs are flowing correctly

You'll be notified when deployment is complete.

What SenseOn Manages

With managed deployment, SenseOn handles:

  • FluentBit version updates
  • Configuration changes (adding new apps, adjusting parsers)
  • Performance tuning (buffer sizes, flush intervals)
  • Troubleshooting and monitoring
  • Security patches

Understanding Kubernetes Log Parsing

The Problem: Nested JSON in Kubernetes

Kubernetes wraps container logs in a Docker/containerd format, which causes application JSON to be nested and escaped.

What your application outputs:

{"timestamp": "2025-10-21T10:00:00Z", "level": "info", "message": "user login", "user_id": "123"}

What Kubernetes writes to disk:

2025-10-21T10:00:00.123456789Z stdout F {"timestamp": "2025-10-21T10:00:00Z", "level": "info", "message": "user login", "user_id": "123"}

Without proper parsing, SenseOn would receive:

{
  "log": "{\"timestamp\": \"2025-10-21T10:00:00Z\", \"level\": \"info\", \"message\": \"user login\", \"user_id\": \"123\"}",
  "stream": "stdout",
  "time": "2025-10-21T10:00:00.123456789Z"
}

Your application JSON is escaped inside the log field! ❌

The Solution: Multi-Stage Parsing

FluentBit uses multi-stage parsing to unwrap Kubernetes logs:

Stage 1: Parse Container Runtime Format

[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    multiline.parser  cri
    Tag               kube.*

The cri parser handles Docker/containerd format, extracting the actual log message.

Stage 2: Parse Application JSON

[FILTER]
    Name kubernetes
    Match kube.*
    Merge_Log On
    Keep_Log Off

The kubernetes filter with Merge_Log On parses JSON from the extracted message and merges fields.

Stage 3: Clean Up and Enrich

[FILTER]
    Name modify
    Match kube.*
    Add source kubernetes
    Add cluster_name production-eks

Add metadata to identify the source.

Final result sent to SenseOn:

{
  "timestamp": "2025-10-21T10:00:00Z",
  "level": "info",
  "message": "user login",
  "user_id": "123",
  "source": "kubernetes",
  "cluster_name": "production-eks",
  "kubernetes_pod_name": "api-server-abc123",
  "kubernetes_namespace": "production"
}


Example Configuration Reference

Complete FluentBit Configuration for Kubernetes

This is a complete example configuration showing best practices for Kubernetes log collection:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
data:
  fluent-bit.conf: |
    [SERVICE]
        Daemon Off
        Flush 5
        Log_Level info
        Parsers_File /fluent-bit/etc/parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020
        Health_Check On

    # Collect logs from nginx ingress controller
    [INPUT]
        Name tail
        Path /var/log/containers/ingress-nginx-controller*.log
        multiline.parser cri
        Tag kube.nginx
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    # Collect logs from application pods
    [INPUT]
        Name tail
        Path /var/log/containers/api-server*.log
        multiline.parser cri
        Tag kube.api
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    # Enrich with Kubernetes metadata and parse JSON logs
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        Use_Kubelet On
        Kubelet_Port 10250
        Kube_Tag_Prefix kube.var.log.containers.
        K8S-Logging.Parser Off
        K8S-Logging.Exclude Off

    # Add cluster identification
    [FILTER]
        Name modify
        Match kube.*
        Add cluster_name production-eks-01
        Add environment production
        Add source kubernetes

    # Forward to SenseOn collector
    [OUTPUT]
        Name            http
        Match           kube.*
        Host            <your-code>-collector.snson.net
        Port            443
        URI             /log_sink
        Format          json_lines
        json_date_key   false
        tls             On
        tls.verify      On
        Header          Content-Type application/json
        Retry_Limit     3
        net.keepalive   On

DaemonSet Deployment Example

If deploying FluentBit yourself, here's a complete DaemonSet example:

---
apiVersion: v1
kind: Namespace
metadata:
  name: logging

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: logging

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit
rules:
- apiGroups: [""]
  resources: ["pods", "namespaces"]
  verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit
subjects:
- kind: ServiceAccount
  name: fluent-bit
  namespace: logging

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    app: fluent-bit
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      serviceAccountName: fluent-bit
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:2.2.3
        resources:
          requests:
            memory: "128Mi"
            cpu: "150m"
          limits:
            memory: "128Mi"
            cpu: "150m"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config

To deploy:

# Save the ConfigMap and DaemonSet manifests to a single file (fluentbit-deployment.yaml), then apply
kubectl apply -f fluentbit-deployment.yaml

# Verify deployment
kubectl get pods -n logging
kubectl logs -n logging -l app=fluent-bit --tail=50

Verification and Testing

Check FluentBit is Running

# List FluentBit pods (one per node)
kubectl get pods -n logging -l app=fluent-bit

# Check pod status
kubectl describe pod -n logging <fluent-bit-pod-name>

# View logs
kubectl logs -n logging <fluent-bit-pod-name> --tail=100

Verify Log Collection

# Check FluentBit is reading container logs
kubectl logs -n logging -l app=fluent-bit --tail=100 | grep "inotify_fs_add"

# Should see lines like:
# [ info] [input:tail:tail.0] inotify_fs_add(): inode=12345 name=/var/log/containers/api-server-xyz.log

Verify Forwarding to SenseOn

# Check for successful HTTP POSTs
kubectl logs -n logging -l app=fluent-bit --tail=200 | grep "HTTP status"

# Should see:
# [ info] [output:http:http.0] <your-code>-collector.snson.net:443, HTTP status=201

Generate Test Logs

Deploy a test application that generates JSON logs:

# Create test pod
kubectl run log-generator --image=busybox --restart=Never -- sh -c \
  'while true; do
    echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"level\":\"info\",\"message\":\"test event\",\"request_id\":\"$(date +%s)\"}";
    sleep 10;
  done'

# Check test pod logs
kubectl logs log-generator --tail=10

# Verify FluentBit collected them
kubectl logs -n logging -l app=fluent-bit --tail=50 | grep "test event"

Troubleshooting

FluentBit Pods Not Starting

Check pod events:

kubectl describe pod -n logging <fluent-bit-pod-name>

Common issues:

  1. RBAC permissions missing

    • Ensure ServiceAccount, ClusterRole, ClusterRoleBinding exist
    • Verify ServiceAccount is referenced in DaemonSet
  2. ConfigMap not mounted

    • Check ConfigMap exists: kubectl get configmap -n logging
    • Verify volume mount in DaemonSet spec
  3. Resource constraints

    • Check node resources: kubectl describe node
    • Adjust memory/CPU limits in DaemonSet
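
Recent cluster events often point to which of these applies:

kubectl get events -n logging --sort-by=.lastTimestamp | tail -20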

No Logs Being Collected

Check INPUT configuration:

kubectl logs -n logging -l app=fluent-bit --tail=100 | grep -i "input"

Verify log paths exist:

# Exec into a FluentBit pod
kubectl exec -it -n logging <fluent-bit-pod-name> -- sh

# Check log directory
ls -la /var/log/containers/

# Should see *.log files

Common issues:

  1. Wrong log path in INPUT
    • Path must match actual container log locations
    • Usually /var/log/containers/*.log
  2. Permissions issues
    • FluentBit needs read access to /var/log
    • Check hostPath volume mounts

Logs Not Reaching SenseOn

Check OUTPUT configuration:

kubectl logs -n logging -l app=fluent-bit --tail=200 | grep -i "output"

Test collector endpoint:

# From within cluster
kubectl run curl-test --image=curlimages/curl --rm -it --restart=Never -- \
  curl -v -X POST https://<your-code>-collector.snson.net/log_sink \
  -H "Content-Type: application/json" \
  -d '{"test":"connection"}'

# Should see HTTP 201 or 200

Common issues:

  1. Network policy blocking egress
    • Check network policies: kubectl get networkpolicy --all-namespaces
    • Ensure HTTPS (443) egress is allowed
  2. Wrong collector URL
    • Verify URL with SenseOn support
    • Check for typos (snson.net NOT senseon.net)
  3. TLS certificate issues
    • Check FluentBit logs for TLS errors
    • Ensure tls.verify On is set
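
To inspect the TLS handshake from inside the cluster, the curl image used above can be run with verbose output (a sketch; the verbose output should show the certificate chain and a successful handshake):

kubectl run tls-check --image=curlimages/curl --rm -it --restart=Never -- \
  curl -v https://<your-code>-collector.snson.net/log_sink -o /dev/null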

Logs Are Nested/Escaped JSON

This means parsing isn't working correctly.

Check for these in your configuration:

  1. INPUT must use multiline.parser cri:

    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        multiline.parser cri    # ← This is required
    

  2. Kubernetes filter must have Merge_Log On:

    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On    # ← This parses and merges JSON
        Keep_Log Off     # ← This removes the raw log field
    

  3. If logs are still nested, add explicit JSON parser:

    [FILTER]
        Name parser
        Match kube.*
        Key_Name log
        Parser json
        Reserve_Data True
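
If it is still unclear where the nesting happens, a temporary stdout OUTPUT (removed again afterwards) prints exactly what FluentBit produces into its own pod logs, which you can read back with kubectl logs -n logging -l app=fluent-bit:

[OUTPUT]
    Name   stdout
    Match  kube.*
    Format json_lines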
    

High Memory Usage

Check FluentBit resource usage:

kubectl top pods -n logging

Reduce memory usage:

  1. Lower buffer limits in INPUT:

    [INPUT]
        Mem_Buf_Limit 5MB    # Cap memory buffered by this input
    

  2. Flush more frequently, so buffered data is written out sooner:

    [SERVICE]
        Flush 1    # Flush every second (the example configuration above uses 5)
    

  3. Filter out verbose logs:

    [FILTER]
        Name grep
        Match kube.*
        Exclude log debug|trace
    

Performance Tuning

For high-volume environments:

[SERVICE]
    Flush 1              # Flush every second
    Grace 5              # Wait 5 seconds on shutdown
    Log_Level warning    # Reduce FluentBit's own logging

[INPUT]
    Mem_Buf_Limit 32MB   # Increase buffer size
    Skip_Long_Lines On   # Skip lines > 32KB

[OUTPUT]
    Workers 2            # Parallel HTTP connections
    Retry_Limit 3        # Max retry attempts
    net.keepalive On     # Reuse connections
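
To observe the effect of tuning, FluentBit's built-in HTTP server (HTTP_Server On in the example ConfigMap) exposes runtime metrics. Assuming the logging namespace from the examples above:

# Forward the FluentBit monitoring port
kubectl port-forward -n logging <fluent-bit-pod-name> 2020:2020

# In another terminal, query the metrics endpoint
curl -s http://127.0.0.1:2020/api/v1/metrics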

Cloud-Specific Notes

Amazon EKS

Cluster access:

# Configure kubectl
aws eks update-kubeconfig --region <region> --name <cluster-name>

# Verify access
kubectl get nodes

EKS-specific considerations:

  • Ensure IAM roles allow pod-to-pod communication
  • Check VPC security groups allow outbound HTTPS
  • Verify nodes have internet access (direct or via NAT gateway)
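
For example, egress rules on a node security group can be reviewed with the AWS CLI (assuming the CLI is configured and you know the security group ID):

aws ec2 describe-security-groups --group-ids <node-sg-id> \
  --query 'SecurityGroups[].IpPermissionsEgress'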

Google GKE

Cluster access:

# Configure kubectl
gcloud container clusters get-credentials <cluster-name> --region=<region>

# Verify access
kubectl get nodes

GKE-specific considerations:

  • Workload Identity may be required for private clusters
  • Check firewall rules allow egress to *.snson.net
  • Binary Authorization may block FluentBit image (add exception)
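
Egress firewall rules can be listed with gcloud to confirm nothing blocks HTTPS to *.snson.net:

gcloud compute firewall-rules list --filter="direction=EGRESS"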

Azure AKS

Cluster access:

# Configure kubectl
az aks get-credentials --resource-group <rg> --name <cluster-name>

# Verify access
kubectl get nodes

AKS-specific considerations:

  • Check Network Security Groups allow outbound HTTPS
  • Azure Policy may require specific pod security standards
  • Verify DNS resolution for *.snson.net
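
A quick in-cluster DNS check (busybox's nslookup is sufficient for this):

kubectl run dns-check --image=busybox --rm -it --restart=Never -- \
  nslookup <your-code>-collector.snson.net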

Need help?

Contact SenseOn support at [email protected]

When contacting support, include:

  • Your organisation name
  • Collector endpoint URL
  • Example log entries (sanitised)
  • Relevant error messages (if applicable)
  • Log source type
  • Configuration files (if applicable)