Open source logging
Modern applications generate a massive volume of logs and other telemetry data, which requires an efficient log management solution. Loki, an open-source log aggregation system from Grafana Labs, is a popular choice because it allows teams to store, search, and analyze huge volumes of log data quickly and easily. Loki is part of Grafana's open-source observability stack called LGTM, which stands for Loki for logs, Grafana for visualization, Tempo for traces, and Mimir for metrics. In this blog, we will focus on Loki.
Although Grafana offers its own collector agent called Promtail for sending logs to Loki, we’ll demonstrate how to use Fluent Bit, a leading open-source solution for collecting, processing, and routing large volumes of telemetry data.
Why use Fluent Bit for sending logs to Loki?
So, if Promtail is specifically engineered for sending logs to Loki, why would we use Fluent Bit instead? In short, precisely because Promtail is engineered only for sending logs to Loki. Fluent Bit is a more versatile, flexible, and powerful option.
Fluent Bit is a vendor-neutral solution
The vendor neutrality of Fluent Bit provides a significant benefit in terms of flexibility and interoperability. Since it is not tied to any specific vendor or product, it can be easily integrated into any existing technology stack, making it a highly adaptable and versatile solution for log forwarding and processing. This allows organizations to avoid vendor lock-in and choose the best tools for their specific needs rather than being limited to a single vendor’s products.
It can send data to all of the major backends, such as Chronosphere Observability Platform, Elasticsearch, Splunk, Datadog, and more. This helps you to avoid costly vendor lock-in. Transitioning to a new backend is a simple configuration change — no new vendor-specific agent to install across your entire infrastructure. A single Fluent Bit agent can be configured to send data to Loki, Chronosphere, Elasticsearch, Splunk, and Datadog all at the same time.
Fluent Bit is lightweight and fast
Fluent Bit is a lightweight and fast solution for sending logs to Loki. It requires fewer system resources and runs faster than most other log collection agents. As a result, it can process a massive amount of log data with minimal impact on system performance. Its footprint is only about 450 KB, but it certainly punches above its weight class when it comes to processing millions of records daily.
Fluent Bit supports multiple platforms
Fluent Bit supports multiple platforms, including Windows, Linux, macOS, and Kubernetes, making it an ideal solution for companies with diverse IT environments.
Fluent Bit is open source
Fluent Bit is fully open source and is a graduated Cloud Native Computing Foundation (CNCF) project under the Fluentd umbrella.
Fluent Bit supports a wide range of input and output plugins
Fluent Bit offers a wide range of input and output plugins allowing it to collect and send logs from various sources to various destinations. These plugins include file, syslog, TCP, HTTP, and more.
Fluent Bit is easy to configure
Fluent Bit is easy to configure and deploy, even for users with limited technical expertise. Its configuration files are written in a simple and easy-to-understand syntax, and it offers extensive documentation and community support.
Fluent Bit is battle-tested and trusted
Fluent Bit has been downloaded and deployed billions of times. In fact, it is included with major Kubernetes distributions, including Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS).
Now that we know the benefits and why organizations prefer to use Fluent Bit, let’s go over some basic concepts before we get started on the demo.
How does Fluent Bit work?
At a high level, Fluent Bit works by taking data from various sources (Inputs), parsing and transforming that data (Parsers), and then sending it to various destinations (Outputs).
Here’s a brief explanation of each component in Fluent Bit:
Inputs: These are data sources that Fluent Bit reads from. Examples include log files, Docker containers, system metrics, and many more. Fluent Bit supports a wide range of Inputs out of the box.
Parsers: Once Fluent Bit has read the data, it can be transformed using Parsers. These are responsible for recognizing the format of the incoming data, extracting the relevant fields, and mapping them to a structured format. Parsers help make the data more manageable and can even allow for real-time alerting.
Outputs: The last step in Fluent Bit’s data collection process is sending the transformed data to various destinations. Outputs include Elasticsearch, Kafka, Amazon S3, and many more. Fluent Bit allows for the routing and filtering of data before it is sent to the desired destination, making it a powerful tool for managing large amounts of data.
You can read more about Fluent Bit in detail here.
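To make those three stages concrete, here is a minimal sketch of a pipeline (not the configuration we build later in this demo): it tails a log file, applies Fluent Bit's bundled `json` parser so each line becomes structured fields, and prints the results to standard output. The file path is illustrative.

[SERVICE]
    flush        1
    parsers_file parsers.conf

[INPUT]
    name         tail
    path         /var/log/app/*.log
    # parse each line with the bundled JSON parser
    parser       json

[OUTPUT]
    name         stdout
    match        *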
How does Loki benefit from the use of Fluent Bit?
Fluent Bit acts as a bridge between your logs and Loki, which is a horizontally-scalable, highly-available, and multi-tenant log aggregation system. Fluent Bit can parse and structure logs into the format Loki requires, making it easier to search and analyze log data. It can also compress and batch logs, reducing network bandwidth and improving performance.
By using Fluent Bit to send logs to Loki, you can take advantage of Loki's advanced features, such as its query language and alerting, to gain insights into your applications and infrastructure. Fluent Bit can also attach labels to your log streams, enabling better organization and filtering of log data. Overall, Fluent Bit can help make log collection and analysis more efficient and effective, particularly in large-scale environments.
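For example, once logs arrive in Loki carrying the `job=fluentbit` label that we configure later in this demo, a LogQL query in Grafana can narrow that stream down to matching lines; the search term here is purely illustrative:

{job="fluentbit"} |= "error"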
Prerequisites
For this demo, you will need Docker and Docker Compose installed. If you don't have them already, you can follow the official Docker Compose installation documentation, which has well-articulated steps. Lastly, you need a Grafana Cloud account; a trial account will work for this demo.
Once you're done with the installation, let's look at the configuration for Fluent Bit.
Configure Fluent Bit
Fluent Bit is configured using a configuration file. The file is written in a simple syntax and allows for easy management of complex pipelines. Environment variables can also be referenced from within the configuration, which provides a simple way to pass in values such as hosts or credentials without hard-coding them. Once the configuration is set up, Fluent Bit can run as a standalone process or as a sidecar in containerized environments.
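As a quick illustration of the environment variable approach, Fluent Bit's classic configuration format supports `${VARIABLE}` substitution, so values such as the Loki credentials we use later could be injected at runtime rather than hard-coded. The variable names below are made up for this sketch:

[OUTPUT]
    name         loki
    match        *
    # these environment variable names are hypothetical
    host         ${LOKI_HOST}
    http_user    ${LOKI_USER}
    http_passwd  ${LOKI_API_KEY}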
For this demo, we will keep all values directly in the configuration file.
[SERVICE]
    flush        1
    log_level    info
Currently, the file only contains information about the service, which defines the global behavior of the Fluent Bit engine. We still need to define inputs and outputs.
Input configuration
Fluent Bit accepts data from a variety of sources using input plugins. The `tail` input plugin allows you to read from a text log file as though you were running the `tail -f` command.

Add the following to your `fluent-bit.conf` file:
[SERVICE]
    flush        1
    log_level    info

[INPUT]
    name         tail
    path         /etc/data/data.log
    tag          log_generator
The `path` parameter in Fluent Bit's configuration may need to be adjusted based on the source of your logs. The plugin `name`, which identifies which plugin Fluent Bit should load, cannot be customized by the user. The `tag` parameter is optional but can be used for routing and filtering your data, as discussed in more detail below.
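As a quick sketch of tag-based routing (not part of the demo configuration), an output can match the `log_generator` tag instead of a wildcard so that only records from that input reach it:

[OUTPUT]
    name   stdout
    # only records tagged log_generator are routed to this output
    match  log_generator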
Output Configuration
As with inputs, Fluent Bit uses output plugins to send the gathered data to the desired destinations.
To set up your configuration, you will need to gather some information from your Grafana Cloud account (see the image below for where to find it on the Grafana Cloud page):
- HOST_NAME – URL of your cloud Loki instance
- USER_NAME – username of your cloud Loki instance
- API_KEY – API key for your cloud Loki instance
Once you have gathered the required information, add the following to your `fluent-bit.conf` file below the Input section.
[SERVICE]
    flush        1
    log_level    info

[INPUT]
    name         tail
    path         /etc/data/data.log
    tag          log_generator

[OUTPUT]
    name         stdout
    match        *

[OUTPUT]
    # for sending logs to the local Loki instance
    name         loki
    match        *
    host         loki
    port         3100
    labels       job=fluentbit

[OUTPUT]
    # for sending logs to the cloud Loki instance
    name         loki
    match        *
    host         HOST_NAME
    port         443
    tls          on
    tls.verify   on
    http_user    USER_NAME
    http_passwd  API_KEY
    line_format  json
    labels       job=fluentbit
**Tip:** *If you want more details on each of the Loki output plugin's parameters, you can check them out [here](https://docs.fluentbit.io/manual/pipeline/outputs/loki).*
The `match *` parameter indicates that all of the data gathered by Fluent Bit will be forwarded to the Loki instances. We could also match on the tag defined in the input plugin. Setting `tls` to `on` ensures that the connection between Fluent Bit and the cloud Loki instance is secure. Note the ports as well: our local Loki instance listens on Loki's standard port, 3100, while Grafana Cloud Loki is reached over HTTPS, so the cloud output uses port 443 with TLS enabled.
Note: We have also defined a secondary output that sends all the data to stdout. This is not required for the Loki configuration but can be incredibly helpful if we need to debug our configuration.
Start sending your logs!
For ease of setup, I've written a Docker Compose file that spins up everything we need locally: Grafana, Loki, a log generator, and a Fluent Bit instance.
version: "3"

networks:
  loki:

volumes:
  log-data:
    driver: local

services:
  flog-log:
    image: mingrammer/flog
    command: "-f json -t log -l -w -d 5s -o /etc/data/data.log"
    volumes:
      - log-data:/etc/data

  fluent-bit:
    image: fluent/fluent-bit
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
      - log-data:/etc/data
    depends_on:
      - loki
    networks:
      - loki

  loki:
    image: grafana/loki:2.7.0
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - loki

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    networks:
      - loki
    environment:
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
    entrypoint:
      - sh
      - -euc
      - |
        mkdir -p /etc/grafana/provisioning/datasources
        cat <<EOF > /etc/grafana/provisioning/datasources/ds.yaml
        apiVersion: 1
        datasources:
          - name: Loki
            type: loki
            access: proxy
            orgId: 1
            url: http://loki:3100
            basicAuth: false
            isDefault: true
            version: 1
            editable: true
        EOF
        /run.sh
Put the fluent-bit.conf and docker-compose.yaml files in the same directory, then run the command below from that directory to get everything up and running.
➜ demo-fluent-bit docker-compose up
[+] Running 4/0
⠿ Container demo-fluent-bit-flog-log-1 Created 0.0s
⠿ Container demo-fluent-bit-loki-1 Created 0.0s
⠿ Container demo-fluent-bit-grafana-1 Created 0.0s
⠿ Container demo-fluent-bit-fluent-bit-1 Created 0.0s
Attaching to demo-fluent-bit-flog-log-1, demo-fluent-bit-fluent-bit-1, demo-fluent-bit-grafana-1, demo-fluent-bit-loki-1
demo-fluent-bit-loki-1 | level=info ts=2023-02-20T09:45:30.981870161Z caller=main.go:103 msg="Starting Loki" version="(version=2.7.0, branch=HEAD, revision=1b627d880)"
demo-fluent-bit-loki-1 | level=info ts=2023-02-20T09:45:30.982532796Z caller=server.go:323 http=[::]:3100 grpc=[::]:9095 msg="server listening on addresses"
demo-fluent-bit-loki-1 | level=warn ts=2023-02-20T09:45:30.986650564Z caller=cache.go:114 msg="fifocache config is deprecated. use embedded-cache instead"
demo-fluent-bit-loki-1 | level=warn ts=2023-02-20T09:45:30.986689968Z caller=experimental.go:20 msg="experimental feature in use" feature="In-memory (FIFO) cache - chunksembedded-cache"
demo-fluent-bit-loki-1 | level=info ts=2023-02-20T09:45:30.987051548Z caller=table_manager.go:404 msg="loading local table index_19408"
Verify the pipeline
Once all the services defined in the docker-compose file are up, you can head over to localhost:3000, the port we published for our Grafana instance, which has Loki preconfigured as a data source.
Check out the screenshot below, which shows logs arriving in the cloud Loki instance.
At the same time, the logs are also arriving in our local Loki instance; since we've already added it as a data source, we can explore them in Grafana now.
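In the Explore view, selecting the Loki data source and querying on the label we set in the Fluent Bit output configuration should show the JSON lines generated by flog:

{job="fluentbit"}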
That's it: you have successfully built your own log pipeline.
Learn more about Fluent Bit
We’ve just seen a basic configuration for getting log data from Fluent Bit into Loki in Grafana Cloud. The Fluent Bit Loki output plugin supports many additional parameters that enable you to fine-tune your Fluent Bit to the Grafana Loki pipeline. Check out the Fluent Bit documentation for more.
To learn even more about Fluent Bit, check out Fluent Bit Academy, your destination for best practices and how-to’s on advanced processing, routing, and all things Fluent Bit. Here’s a sample of what you can find there:
- Getting Started with Fluent Bit and OpenSearch
- Getting Started with Fluent Bit and OpenTelemetry
- Fluent Bit for Windows
About Fluent Bit and Chronosphere
With Chronosphere’s acquisition of Calyptia in 2024, Chronosphere became the primary corporate sponsor of Fluent Bit. Eduardo Silva — the original creator of Fluent Bit and co-founder of Calyptia — leads a team of Chronosphere engineers dedicated full-time to the project, ensuring its continuous development and improvement.
Fluent Bit is a graduated project of the Cloud Native Computing Foundation (CNCF) under the umbrella of Fluentd, alongside other foundational technologies such as Kubernetes and Prometheus. Chronosphere is also a silver-level sponsor of the CNCF.