High Performance Telemetry Agent for Logs, Metrics and Traces
Fluent Bit is a fast and lightweight telemetry agent for logs, metrics, and traces for Linux, macOS, Windows, and BSD family operating systems. Fluent Bit has been made with a strong focus on performance to allow the collection and processing of telemetry data from different sources without complexity.
High performance: High throughput with low resource consumption
Data parsing
Convert your unstructured messages using Fluent Bit parsers: JSON, Regex, LTSV, and Logfmt
Metrics support: Prometheus and OpenTelemetry compatible
Reliability and data integrity
Backpressure handling
Data buffering in memory and file system
Networking
Security: Built-in TLS/SSL support
Asynchronous I/O
Pluggable architecture: Inputs, Filters, and Outputs:
Connect nearly any source to nearly any destination using preexisting plugins
Extensibility:
Write input, filter, or output plugins in the C language
Monitoring: Expose internal metrics over HTTP in JSON and Prometheus format
Stream processing: Perform data selection and transformation using basic SQL queries
Create new streams of data using query results
Aggregation windows
Data analysis and prediction: Time series forecasting
Portable: Runs on Linux, macOS, Windows and BSD systems
For more details about changes in each release, refer to the release notes.
Fluent Bit is a graduated sub-project under the umbrella of Fluentd.
Fluent Bit was originally created by Eduardo Silva. As a CNCF-hosted project, it's a fully vendor-neutral and community-driven project.
Fluent Bit, including its core, plugins, and tools, is distributed under the terms of the Apache License v2.0.
Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
Fluent Bit is an open source telemetry agent specifically designed to efficiently handle the challenges of collecting and processing telemetry data across a wide range of environments, from constrained systems to complex cloud infrastructures. Managing telemetry data from various sources and formats can be a constant challenge, particularly when performance is a critical factor.
Rather than serving as a drop-in replacement, Fluent Bit enhances the observability strategy for your infrastructure by adapting and optimizing your existing logging layer, and adding metrics and traces processing. Fluent Bit supports a vendor-neutral approach, seamlessly integrating with other ecosystems such as Prometheus and OpenTelemetry. Trusted by major cloud providers, banks, and companies in need of a ready-to-use telemetry agent solution, Fluent Bit effectively manages diverse data sources and formats while maintaining optimal performance and keeping resource consumption low.
Fluent Bit can be deployed as an edge agent for localized telemetry data handling or utilized as a central aggregator/collector for managing telemetry data across multiple sources and environments.
In 2014, the team at Treasure Data was forecasting the need for a lightweight log processor for constrained environments like embedded Linux and gateways. To meet this need, Eduardo Silva created Fluent Bit, a new open source solution within the Fluentd ecosystem.
After the project matured, it gained traction on standard Linux systems. With the rise of the containerized world, the cloud native community asked to extend the project scope to support more sources, filters, and destinations. Not long after, Fluent Bit became one of the preferred solutions for solving logging challenges in cloud environments.
You can download the most recent stable or development source code.
For production systems, it's strongly suggested that you get the latest stable release of the source code in either zip file or tarball file format from GitHub using the following link pattern:
https://github.com/fluent/fluent-bit/archive/refs/tags/v<release version>.tar.gz
https://github.com/fluent/fluent-bit/archive/refs/tags/v<release version>.zip
For example, for version 1.8.12 the link is: https://github.com/fluent/fluent-bit/archive/refs/tags/v1.8.12.tar.gz
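As a quick sketch (the version shown is an example; substitute the release you need), downloading and unpacking a stable release might look like:

```sh
# Download and unpack a tagged stable release
curl -LO https://github.com/fluent/fluent-bit/archive/refs/tags/v1.8.12.tar.gz
tar -xzf v1.8.12.tar.gz
cd fluent-bit-1.8.12
```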
If you want to contribute to Fluent Bit, you should use the most recent code. You can get the development version from the Git repository:
git clone https://github.com/fluent/fluent-bit
The master branch is where the development of Fluent Bit happens. Development version users should expect issues when compiling or at run time.
Fluent Bit users are encouraged to help test every development version to ensure a stable release.
After downloading Fluent Bit, install it using one of the following methods:
An overview of free public labs for learning how to successfully use Fluent Bit.
Fluent Bit in normal operation mode is configurable through text files or specific arguments on the command line. Although this is the ideal deployment case, there are scenarios where a more restricted configuration is required. Static configuration mode restricts this configuration ability.
Static configuration mode includes a built-in configuration in the final binary of Fluent Bit, disabling the usage of external files or flags at runtime.
The production grade telemetry ecosystem
Telemetry data processing can be complex, especially at scale. That's why was created. Fluentd is more than a basic tool. It's grown into a full-scale ecosystem that contains SDKs for different languages and sub-projects, like .
The Fluentd and Fluent Bit projects are both:
Licensed under the terms of Apache License v2.0.
Graduated hosted projects by the .
Fluent Bit is distributed as the fluent-bit package and is available for the latest stable Debian system.
The following architectures are supported:
x86_64
aarch64
arm64v8
Fluent Bit is distributed as the fluent-bit package and is available for long-term support releases of Ubuntu. The latest officially supported version is Noble Numbat (24.04).
The recommended secure deployment approach is to use the following instructions.
Add the Fluent Bit server GPG key to your keyring to ensure you can get the correct signed packages.
Follow the official Ubuntu wiki guidance.
The recommended secure deployment approach is to use the following instructions:
The first step is to add the Fluent Bit server GPG key to your keyring to ensure you can get the correct signed packages.
Follow the official Debian wiki guidance.
For Debian, you must add the Fluent Bit APT server entry to your sources list. Ensure codename is set to your specific Debian release name (for example, bookworm for Debian 12).
Update your sources list:
Update your system's apt database:
Ensure your GPG key is up to date.
Use the following apt-get command to install the latest Fluent Bit:
Instruct systemd to enable the service:
If you do a status check, you should see output similar to the following:
The default Fluent Bit configuration collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.
On Ubuntu, you need to add the Fluent Bit APT server entry to your sources list. Ensure codename is set to your specific Ubuntu release name (for example, focal for Ubuntu 20.04).
Update your sources list:
Update the apt database on your system:
Ensure your GPG key is up to date.
Use the following apt-get command to install the latest Fluent Bit:
Instruct systemd to enable the service:
If you do a status check, you should see output similar to the following:
The default fluent-bit configuration collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/syslog file.
sudo sh -c 'curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg'
codename=$(grep -oP '(?<=VERSION_CODENAME=).*' /etc/os-release 2>/dev/null || lsb_release -cs 2>/dev/null)
echo "deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/debian/$codename $codename main" | sudo tee /etc/apt/sources.list.d/fluent-bit.list
sudo apt-get update
sudo apt-get install fluent-bit
sudo systemctl start fluent-bit

$ sudo service fluent-bit status
● fluent-bit.service - Fluent Bit
Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
Main PID: 6739 (fluent-bit)
Tasks: 1
Memory: 656.0K
CPU: 1.393s
CGroup: /system.slice/fluent-bit.service
└─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...

sudo sh -c 'curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg'
codename=$(grep -oP '(?<=VERSION_CODENAME=).*' /etc/os-release 2>/dev/null || lsb_release -cs 2>/dev/null)
echo "deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/ubuntu/$codename $codename main" | sudo tee /etc/apt/sources.list.d/fluent-bit.list
sudo apt-get update
sudo apt-get install fluent-bit
sudo systemctl start fluent-bit

$ systemctl status fluent-bit
● fluent-bit.service - Fluent Bit
Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
Main PID: 6739 (fluent-bit)
Tasks: 1
Memory: 656.0K
CPU: 1.393s
CGroup: /system.slice/fluent-bit.service
└─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...

Wasm: Wasm Filter Plugins or Wasm Input Plugins
Write Filters in Lua or Output plugins in Golang

You can also view the source files for these workshops on GitLab.
This workshop by Amazon goes through common Kubernetes logging patterns and routing data to OpenSearch and visualizing with OpenSearch dashboards.
The following steps assume you are familiar with configuring Fluent Bit using text files and you have experience building it from scratch as described in Build and Install.
In your file system, prepare a specific directory to be used as an entry point for the build system to look up and parse the configuration files. This directory must contain a minimum of one configuration file, called fluent-bit.conf, that contains the required SERVICE, INPUT, and OUTPUT sections.
As an example, create a new fluent-bit.yaml file or fluent-bit.conf file:
service:
  flush: 1
  daemon: off
  log_level: info

pipeline:
  inputs:
    - name: cpu
  outputs:
    - name: stdout
      match: '*'
[SERVICE]
    Flush     1
    Daemon    off
    Log_Level info

[INPUT]
    Name cpu

[OUTPUT]
    Name  stdout
    Match *

This configuration calculates CPU metrics from the running system and prints them to the standard output interface.
Go to the Fluent Bit source code build directory:
Run CMake, appending the FLB_STATIC_CONF option pointing to the configuration directory recently created:
Build Fluent Bit:
The generated fluent-bit binary is ready to run without additional configuration:
Vendor neutral and community driven.
Widely adopted by the industry, being trusted by major companies like Amazon, Microsoft, Google, and hundreds of others.
The projects have many similarities: Fluent Bit is designed and built on top of the best ideas of Fluentd's architecture and general design. Which one to choose depends on your end users' needs.
The following table compares the two projects across several areas:

|  | Fluentd | Fluent Bit |
| --- | --- | --- |
| Scope | Containers / Servers | Embedded Linux / Containers / Servers |
| Language | C and Ruby | C |
| Memory | Greater than 60 MB | Approximately 1 MB |
| Performance | Medium performance | High performance |
Both Fluentd and Fluent Bit can work as Aggregators or Forwarders, and can complement each other or be used as standalone solutions.
In recent years, cloud providers have switched from Fluentd to Fluent Bit for performance and compatibility reasons. Fluent Bit is now considered the next-generation solution.
Fluent Bit is distributed as the fluent-bit package and is available for the latest versions of Rocky or Alma Linux now that CentOS Stream is tracking more recent dependencies.
Fluent Bit supports the following architectures:
x86_64
aarch64
arm64v8
From CentOS 9 Stream onward, CentOS dependencies update more often than downstream usage. This might mean that incompatible (more recent) versions of certain dependencies (for example, OpenSSL) are provided. For OSS, there are RockyLinux and AlmaLinux repositories. This might be required for RHEL 9 as well, which no longer tracks equivalent CentOS 9 Stream dependencies. No RHEL 9 build is provided; you're expected to use one of the OSS variants listed.
The fluent-bit package is provided through a Yum repository. To add the repository reference to your system:
In /etc/yum.repos.d/, add a new file called fluent-bit.repo.
Add the following content to the file, replacing almalinux with rockylinux if required:
As a best practice, enable gpgcheck and repo_gpgcheck.
After your repository is configured, run the following command to install it:
Instruct Systemd to enable the service:
If you do a status check, you should see output similar to the following:
The default Fluent Bit configuration collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.
Classic mode is a custom configuration model for Fluent Bit. It's more limited than the YAML configuration mode, and doesn't have the more extensive feature support the YAML configuration has. Classic mode basic design only supports grouping sections with key-value pairs and lacks the ability to handle sub-sections or complex data structures like lists.
Learn more about classic mode:
Input plugins gather information from different sources. Some plugins collect data from log files, and others gather metrics information from the operating system. There are many different plugins, and they let you handle many different needs.
When an input plugin loads, an internal instance is created. Each instance has its own independent configuration. Configuration keys are often called properties.
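As an illustrative sketch (the tags and interval value are arbitrary), two independent instances of the same input plugin, each with its own properties:

```yaml
pipeline:
  inputs:
    # First instance of the cpu input, with its own tag
    - name: cpu
      tag: cpu.primary
    # A second, independent instance with a different tag and polling interval
    - name: cpu
      tag: cpu.secondary
      interval_sec: 5
```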
The Fluent Bit data pipeline incorporates several specific concepts. Data processing flows through the pipeline following these concepts in order.
Input plugins gather information from different sources. Some plugins collect data from log files, and others gather metrics information from the operating system. There are many plugins to suit different needs.
The Fluent Bit source code provides BitBake recipes to configure, build, and package the software for a Yocto-based image. Specific steps for using these recipes in your Yocto environment (Poky) are out of the scope of this documentation.
Fluent Bit distributes two main recipes, one for testing/dev purposes and one with the latest stable release.
In Fluent Bit v3.2 and later, YAML configuration files support all of the settings and features that classic configuration files support, plus additional features that classic configuration files don't support, like processors.
YAML configuration files support the following top-level sections:
env: Configures environment variables.
includes: Specifies additional YAML configuration files to merge into the current configuration.
Fluent Bit comes with a variety of built-in plugins, and also supports loading external plugins at runtime. Use this feature for loading Go or WebAssembly (Wasm) plugins that are built as shared object files (.so). Fluent Bit YAML configuration provides the following ways to load these external plugins:
You can specify external plugins directly within your main YAML configuration file using the plugins section. Here's an example:
The upstream_servers section defines a group of endpoints, referred to as nodes. Nodes are used by output plugins to distribute data in a round-robin fashion. Use this section for plugins that require load balancing when sending data. Plugins such as the forward output support this capability.
The upstream_servers section requires a name for the group and a list of nodes. The following example defines two upstream server groups, forward-balancing and forward-balancing-2:
Each node in the upstream_servers group must specify a name, host, and port. Nodes can also set additional properties such as tls, tls_verify, and shared_key, as shown in the following example.
Fluent Bit supports the usage of environment variables in any value associated to a key when using a configuration file.
The variables are case sensitive and can be used in the following format:
When Fluent Bit starts, the configuration reader will detect any request for ${MY_VARIABLE} and will try to resolve its value.
When Fluent Bit is running under systemd (using the official packages), environment variables can be set in the following files:
/etc/default/fluent-bit (Debian based system)
You might need to estimate how much memory Fluent Bit could use in scenarios like containerized environments where memory limits are essential.
To make an estimate, in-use input plugins must set the Mem_Buf_Limit option. Learn more about it in Backpressure.
Input plugins append data independently. To make an estimation, impose a limit with the Mem_Buf_Limit option. If the limit is set to 10MB, estimate that in the worst case the output plugin could use up to double that amount (20MB).
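A minimal sketch of imposing such a limit (the path is a placeholder):

```yaml
pipeline:
  inputs:
    - name: tail
      path: '/var/log/containers/*.log'
      # Cap this input's in-memory buffer at 10MB
      mem_buf_limit: 10MB
```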
cd fluent-bit/build/
cmake -DFLB_STATIC_CONF=/path/to/my/confdir/ ../
make

$ bin/fluent-bit
...
[0] cpu.local: [1539984752.000347547, {"cpu_p"=>0.750000, "user_p"=>0.500000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

|  | Fluentd | Fluent Bit |
| --- | --- | --- |
| Dependencies | Built as a Ruby Gem, depends on other gems. | Zero dependencies, unless required by a plugin. |
| Plugins | Over 1,000 external plugins available. | Over 100 built-in plugins available. |
| License | Apache License v2.0 | Apache License v2.0 |
It's strongly recommended to always use the stable release of the Fluent Bit recipe and not the one from Git master for production deployments.
Fluent Bit 1.1.x and later fully supports x86_64, x86, arm32v7, and arm64v8.
| Recipe | Description |
| --- | --- |
| devel | Build Fluent Bit from Git master. Use for development and testing purposes only. |
| v1.8.11 | Build the latest stable version of Fluent Bit. |
While the upstream_servers section can be defined globally, some output plugins might require the configuration to be specified in a separate YAML file. Consult the documentation for each specific output plugin to understand its requirements.
upstream_servers:
  - name: forward-balancing
    nodes:
      - name: node-1
        host: 127.0.0.1
        port: 43000
      - name: node-2
        host: 127.0.0.1
        port: 44000
      - name: node-3
        host: 127.0.0.1
        port: 45000
        tls: true
        tls_verify: false
        shared_key: secret

  - name: forward-balancing-2
    nodes:
      - name: node-A
        host: 192.168.1.10
        port: 50000
      - name: node-B
        host: 192.168.1.11
        port: 51000

The default configuration file is written to /etc/fluent-bit/fluent-bit.conf.
Fluent Bit is started by the S99fluent-bit script.
All configurations with a toolchain that supports threads and dynamic library linking are supported. Enable the package with BR2_PACKAGE_FLUENT_BIT=y.
plugins_file option
You can load external plugins from a separate YAML file by specifying the plugins_file option in the service section for better modularity.
To configure this:
In this setup, the extra_plugins.yaml file might contain the following plugins section:
plugins:
  - /path/to/out_gstdout.so

service:
  log_level: info

pipeline:
  inputs:
    - name: random
  outputs:
    - name: gstdout
      match: '*'

service:
  log_level: info
  plugins_file: extra_plugins.yaml

pipeline:
  inputs:
    - name: random
  outputs:
    - name: gstdout
      match: '*'

[fluent-bit]
name = Fluent Bit
baseurl = https://packages.fluentbit.io/almalinux/$releasever/
gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
repo_gpgcheck=1
enabled=1

sudo yum install fluent-bit
sudo systemctl start fluent-bit

$ systemctl status fluent-bit
● fluent-bit.service - Fluent Bit
Loaded: loaded (/usr/lib/systemd/system/fluent-bit.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
Main PID: 3820 (fluent-bit)
CGroup: /system.slice/fluent-bit.service
└─3820 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...

plugins:
  - /other/path/to/out_gstdout.so

Parsers convert unstructured data to structured data. Use a parser to set a structure on the incoming data as it's collected by input plugins.
Filters let you alter the collected data before delivering it to a destination. In production environments you need full control of the data you're collecting. Using filters lets you control data before processing.
The buffering phase in the pipeline aims to provide a unified and persistent mechanism to store your data, using the primary in-memory model or the file system-based mode.
Routing is a core feature that lets you route your data through filters, and then to one or multiple destinations. The router relies on the concept of tags and matching rules.
Output plugins let you define destinations for your data. Common destinations are remote services, local file systems, or other standard interfaces.
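To tie these concepts together, the following is a minimal sketch of a complete pipeline (plugin choices and values are illustrative): an input collects and tags data, a filter alters it, and the router delivers matching records to an output:

```yaml
pipeline:
  inputs:
    # Collect CPU metrics and tag every record
    - name: cpu
      tag: metrics.cpu
  filters:
    # Alter matching records by adding a static key/value
    - name: modify
      match: 'metrics.*'
      add: hostname web-01
  outputs:
    # Route all records whose tag matches the rule to standard output
    - name: stdout
      match: 'metrics.*'
```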
service: Configures global properties of the Fluent Bit service.
pipeline: Configures active inputs, filters, and outputs.
parsers: Defines custom parsers.
multiline_parsers: Defines custom multiline parsers.
plugins: Defines paths for custom plugins.
upstream_servers: Defines nodes for output plugins.
To define custom parsers in the parsers section of a YAML configuration file, use the following syntax:

parsers:
  - name: custom_parser1
    format: json
    time_key: time
    time_format: '%Y-%m-%dT%H:%M:%S.%L'
    time_keep: on
For information about supported configuration options for custom parsers, see configuring parsers.
In addition to defining parsers in the parsers section of YAML configuration files, you can store parser definitions in standalone files. These standalone files require the same syntax as parsers defined in a standard YAML configuration file.
To add a standalone parsers file to Fluent Bit, use the parsers_file parameter in the service section of your YAML configuration file.
To add a standalone parsers file to Fluent Bit, follow these steps.
Define custom parsers in a standalone YAML file. For example, my-parsers.yaml defines two custom parsers:
Update the parsers_file parameter in the service section of your YAML configuration file:
To define custom multiline parsers in the multiline_parsers section of a YAML configuration file, use the following syntax:

multiline_parsers:
  - name: multiline-regex-test
    type: regex
    flush_timeout: 1000
    rules:
      - state: start_state
        regex: '/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/'
        next_state: cont
      - state: cont
        regex: '/^\s+at.*/'
        next_state: cont
This example defines a multiline parser named multiline-regex-test that uses regular expressions to handle multi-event logs. The parser contains two rules: the first rule transitions from start_state to cont when a matching log entry is detected, and the second rule continues to match subsequent lines.
For information about supported configuration options for custom multiline parsers, see configuring multiline parsers.
/etc/sysconfig/fluent-bit (Others)
These files are ignored if they don't exist.
Create the following configuration file (fluent-bit.conf):
Open a terminal and set the environment variable:
The previous command sets the stdout value to the variable MY_OUTPUT.
Run Fluent Bit with the recently created configuration file:
${MY_VARIABLE}

[SERVICE]
    Flush     1
    Daemon    Off
    Log_Level info

[INPUT]
    Name cpu
    Tag  cpu.local

[OUTPUT]
    Name  ${MY_OUTPUT}
    Match *

Fluent Bit has an internal binary representation for the data being processed. When this data reaches an output plugin, the plugin can create its own representation in a new memory buffer for processing. The best examples are the InfluxDB and Elasticsearch output plugins, which need to convert the binary representation to their respective custom JSON formats before sending data to the backend servers.
When imposing a limit of 10MB for the input plugins, and a worst case scenario of the output plugin consuming 20MB, you need to allocate a minimum (30MB x 1.2) = 36MB.
In intensive environments where memory allocations happen at high rates, the default memory allocator provided by glibc can lead to high fragmentation, causing the service to report high memory usage.
It's strongly suggested that in any production environment, Fluent Bit should be built with jemalloc enabled (-DFLB_JEMALLOC=On). The jemalloc implementation of malloc is an alternative memory allocator that can reduce fragmentation, resulting in better performance.
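A minimal sketch, reusing the CMake build pattern shown elsewhere in this documentation:

```sh
# Build Fluent Bit with the jemalloc allocator enabled
cd fluent-bit/build
cmake -DFLB_JEMALLOC=On ../
make
```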
Use the following command to determine if Fluent Bit has been built with jemalloc:
The output should look like:
If the FLB_HAVE_JEMALLOC option is listed in Build Flags, jemalloc is enabled.
Learn these key concepts to understand how Fluent Bit operates.
Before diving into Fluent Bit you might want to get acquainted with some of the key concepts of the service. This document provides an introduction to those concepts and common Fluent Bit terminology. Reading this document will help you gain a more general understanding of the following topics:
Event or Record
Filtering
Tag
Timestamp
Match
Structured Message
Every incoming piece of data that belongs to a log or a metric that's retrieved by Fluent Bit is considered an Event or a Record.
As an example, consider the following content of a Syslog file:
It contains four lines that represent four independent Events.
An Event is comprised of:
timestamp
key/value metadata (v2.1.0 and greater)
payload
The Fluent Bit wire protocol represents an Event as a two-element array with a nested array as the first element:
where
TIMESTAMP is a timestamp in seconds as an integer or floating point value (not a string).
METADATA is an object containing event metadata, and might be empty.
MESSAGE is an object containing the event body.
Fluent Bit versions prior to v2.1.0 used:
to represent events. This format is still supported for reading input event streams.
You might need to perform modifications on an event's content. The process to alter, append to, or drop Events is called filtering.
Use filtering to:
Append specific information to the Event like an IP address or metadata.
Select a specific piece of the Event content.
Drop Events that match a certain pattern.
Every Event ingested by Fluent Bit is assigned a Tag. This tag is an internal string used in a later stage by the Router to decide which Filter or phase it must go through.
Most tags are assigned manually in the configuration. If a tag isn't specified, Fluent Bit assigns the name of the input plugin instance where that Event was generated.
A tagged record must always have a matching rule. To learn more about Tags and Matches, see Routing.
The timestamp represents the time an Event was created. Every Event contains an associated timestamp, set by the input plugin or discovered through a data parsing process.
The timestamp is a numeric fractional integer in the format:
where:
SECONDS is the number of seconds that have elapsed since the Unix epoch.
NANOSECONDS is a fractional second or one thousand-millionth of a second.
Fluent Bit lets you route your collected and processed Events to one or multiple destinations. A Match represents a rule to select Events where a Tag matches a defined rule.
To learn more about Tags and Matches, see Routing.
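As an illustrative sketch (the tag and pattern are arbitrary), an explicit Tag on an input and a Match rule on an output:

```yaml
pipeline:
  inputs:
    - name: cpu
      # Every event from this input carries this tag
      tag: my_metrics
  outputs:
    - name: stdout
      # Only events whose tag matches this pattern are delivered here
      match: 'my_*'
```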
Source events can have a structure. A structure defines a set of keys and values inside the Event message to implement faster operations on data modifications. Fluent Bit treats every Event message as a structured message.
Consider the following two messages:
No structured message
With a structured message
For performance reasons, Fluent Bit uses a binary serialization data format called MessagePack.
Fluent Bit is compatible with most x86-based, x86_64-based, arm32v7-based, and arm64v8-based systems.
You can build and install Fluent Bit from its source code. There are also platform-specific guides for building Fluent Bit from source on macOS and Windows.
To install Fluent Bit from one of the available packages, use the installation method for your chosen platform.
Fluent Bit is available for the following container deployments:
Fluent Bit is available on Linux, including the following distributions:
Fluent Bit is available on macOS.
Fluent Bit is available on Windows.
Official support is based on community demand. Fluent Bit might run on older operating systems, but must be built from source or using custom packages.
Fluent Bit can run on Berkeley Software Distribution (BSD) systems and IBM Z Linux (s390x) systems with restrictions. Not all plugins and filters are supported.
Fluent Bit packages are also provided by enterprise providers for older end-of-life versions, Unix systems, or for additional support and features, including aspects such as CVE backporting.
Fluent Bit is available for a variety of Linux distributions and embedded Linux systems.
The most secure option is to create the repositories according to the instructions for your specific OS.
An installation script is provided for use with most Linux targets. This will by default install the most recent version released.
curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh

This is a helper and should always be validated prior to use.
For the 1.9.0 and 1.8.15 releases and later, the GPG key has been updated. Ensure the new key is added.
The GPG Key fingerprint of the new key is:
The previous key is still available and might be required to install previous versions.
The GPG Key fingerprint of the old key is:
Refer to the supported platforms documentation to see which platforms are supported in each release.
For version 1.9 and later, td-agent-bit is a deprecated package and is removed after 1.9.9. The correct package name to use now is fluent-bit.
AWS maintains a distribution of Fluent Bit that combines the latest official release with a set of Go Plugins for sending logs to AWS services. AWS and Fluent Bit are working together to rewrite their plugins for inclusion in the official Fluent Bit distribution.
The AWS for Fluent Bit image contains Go Plugins for:
Amazon CloudWatch as cloudwatch_logs.
Amazon Kinesis Data Firehose as kinesis_firehose.
Amazon Kinesis Data Streams as kinesis_streams.
The rewritten C plugins are higher performance than the Go plugins.
Also, Fluent Bit includes an S3 output plugin named s3.
AWS vends their container image using Docker Hub and a set of highly available regional Amazon ECR repositories.
The AWS for Fluent Bit image uses a custom versioning scheme because it contains multiple projects. To see what each release contains, see the release notes.
AWS vends SSM public parameters with the regional repository link for each image. These parameters can be queried by any AWS account.
To see a list of available version tags in a given region, run the following command:
To see the ECR repository URI for a given image tag in a given region, run the following:
You can use these SSM public parameters as parameters in your CloudFormation templates:
The env section lets you define environment variables directly within the configuration file. These variables can then be used to dynamically replace values throughout your configuration using the ${VARIABLE_NAME} syntax.
Variables set in this section can't be overridden by system environment variables.
Values set in the env section are case-sensitive. However, as a best practice, Fluent Bit recommends using uppercase names for environment variables. The following example defines two variables, FLUSH_INTERVAL and STDOUT_FMT, which can be accessed in the configuration using ${FLUSH_INTERVAL} and ${STDOUT_FMT}:
env:
  FLUSH_INTERVAL: 1
  STDOUT_FMT: 'json_lines'

service:
  flush: ${FLUSH_INTERVAL}
  log_level: info

pipeline:
  inputs:
    - name: random
  outputs:
    - name: stdout
      match: '*'
      format: ${STDOUT_FMT}

Fluent Bit provides a set of predefined environment variables that can be used in your configuration:
In addition to variables defined in the configuration file or the predefined ones, Fluent Bit can access system environment variables set in the user space. These external variables can be referenced in the configuration using the same ${VARIABLE_NAME} pattern.
Variables set in the env section can't be overridden by system environment variables.
For example, to set the FLUSH_INTERVAL system environment variable to 2 and use it in your configuration:
In the configuration file, you can then access this value as follows:
This approach lets you manage and override configuration values using environment variables, providing flexibility in various deployment environments.
The includes section lets you specify additional YAML configuration files to be merged into the current configuration. These files are identified as a list of filenames and can include relative or absolute paths. If no absolute path is provided, the file is assumed to be located in a directory relative to the file that references it.
Use this section to organize complex configurations into smaller, manageable files and include them as needed.
The following example demonstrates how to include additional YAML files using relative path references. This is the file system path structure:
├── fluent-bit.yaml
├── inclusion-1.yaml
└── subdir
    └── inclusion-2.yaml

The content of fluent-bit.yaml:
Ensure that the included files are formatted correctly and contain valid YAML configurations for seamless integration.
If a path isn't specified as absolute, it will be treated as relative to the file that includes it.
Fluent Bit can optionally use a configuration file to define how the service behaves.
The schema is defined by three concepts:
Sections
Entries: key/value
Indented Configuration Mode
An example of a configuration file is as follows:
A section is defined by a name or title inside brackets. Using the previous example, a Service section has been set using [SERVICE] definition. The following rules apply:
All section content must be indented (four spaces ideally).
Multiple sections can exist on the same file.
A section can't be empty; it must contain at least one entry.
Any commented line under a section must be indented too.
Entries: key/value
A section can contain entries. An entry is defined by a line of text that contains a Key and a Value. Using the previous example, the [SERVICE] section contains two entries: one is the key Daemon with value off, and the other is the key Log_Level with the value debug. The following rules apply:
An entry is defined by a key and a value.
A key must be indented.
A key must contain a value which ends in a line break.
Multiple keys with the same name can exist.
Commented lines are set prefixing the # character. Commented lines aren't processed but they must be indented.
Fluent Bit configuration files are based on a strict indented mode. Each configuration file must follow the same pattern of alignment from left to right when writing text. By default, an indentation level of four spaces from left to right is suggested. Example:
This example shows two sections with multiple entries and comments. Empty lines are allowed.
Plugins that interact with AWS services fetch credentials from the following providers in order. Only the first provider that provides credentials is used.
All AWS plugins additionally support a role_arn (or AWS_ROLE_ARN environment variable) configuration parameter. If specified, the fetched credentials are used to assume the given role.
Plugins use the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and optionally AWS_SESSION_TOKEN) environment variables if set.
Plugins read the shared config file at $AWS_CONFIG_FILE (or $HOME/.aws/config) and the shared credentials file at $AWS_SHARED_CREDENTIALS_FILE (or $HOME/.aws/credentials) to fetch the credentials for the profile named $AWS_PROFILE or $AWS_DEFAULT_PROFILE (or "default").
The shared settings evaluate in the following order:
No other settings are supported.
Credentials are fetched using a signed web identity token for a Kubernetes service account.
Credentials are fetched for the ECS task's role.
Credentials are fetched using a pod identity endpoint.
Credentials are fetched for the EC2 instance profile's role. As of Fluent Bit version 1.8.8, IMDSv2 is used by default and IMDSv1 might be disabled. Prior versions of Fluent Bit require enabling IMDSv1 on EC2.
Enable hot reload through SIGHUP signal or an HTTP endpoint
Fluent Bit supports hot reloading when it's enabled in the configuration file or on the command line with the -Y or --enable-hot-reload option.
Hot reloading is supported on Linux, macOS, and Windows operating systems.
To get started with reloading over HTTP, enable the HTTP Server in the configuration file:
service:
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  hot_reload: on

After updating the configuration, use one of the following methods to perform a hot reload:
Use the following HTTP endpoints to perform a hot reload:
PUT /api/v2/reload
POST /api/v2/reload
To reload Fluent Bit using curl, specify an empty request body, as follows:
Obtain a count of hot reloads using the HTTP endpoint:
GET /api/v2/reload
The endpoint returns hot_reload_count as follows:
The default value of the counter is 0.
Hot reloading can be used with SIGHUP.
SIGHUP signal isn't supported on Windows.
Use one of the following methods to confirm the reload occurred.
Learn how to run Fluent Bit in multiple threads for improved scalability.
Fluent Bit has one event loop to handle critical operations, like managing timers, receiving internal messages, scheduling flushes, and handling retries. This event loop runs in the main Fluent Bit thread.
To free up resources in the main thread, you can configure inputs and outputs to run in their own self-contained threads. However, inputs and outputs implement multithreading in distinct ways: inputs can run in threaded mode, and outputs can use one or more workers.
Threading also affects certain processes related to inputs and outputs. For example, filters always run in the main thread, but processors run in the self-contained threads of their respective inputs or outputs, if applicable.
When inputs collect telemetry data, they can either perform this process inside the main Fluent Bit thread or inside a separate dedicated thread. You can configure this behavior by enabling or disabling the threaded setting.
All inputs are capable of running in threaded mode, but certain inputs always run in threaded mode regardless of configuration. These always-threaded inputs are:
Inputs aren't internally aware of multithreading. If an input runs in threaded mode, Fluent Bit manages the logistics of that input's thread.
When outputs flush data, they can either perform this operation inside the main Fluent Bit thread or inside a separate dedicated thread called a worker. Each output can have one or more workers running in parallel, and each worker can handle multiple concurrent flushes. You can configure this behavior by changing the value of the workers setting.
All outputs are capable of running in multiple workers, and each output has a default value of 0, 1, or 2 workers. However, even if an output uses workers by default, you can safely reduce the number of workers below the default or disable workers entirely.
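For example, a sketch of an output configured with two workers (the value is illustrative):

```yaml
pipeline:
  outputs:
    - name: stdout
      match: '*'
      # Flushes for this output are handled by two parallel worker threads
      workers: 2
```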
Fluent Bit is designed for high performance and minimal resource usage. Depending on your use case, you can optimize further using specific configuration options to achieve faster performance or reduce resource consumption.
tail
The Tail input plugin is used to read data from files on the filesystem. By default, it uses a small memory buffer of 32KB per monitored file. While this is sufficient for most generic use cases and helps keep memory usage low when monitoring many files, there are scenarios where you might want to increase performance by using more memory.
If your files are typically larger than 32KB, consider increasing the buffer size to speed up file reading. For example, you can experiment with a buffer size of 128KB:
By increasing the buffer size, Fluent Bit will make fewer system calls (read(2)) to read the data, reducing CPU usage and improving performance.
The release of Fluent Bit v4.1.0 introduced performance improvements for JSON encoding using Single Instruction, Multiple Data (SIMD). Plugins that convert logs from the Fluent Bit internal binary representation to JSON can now do so up to 2.5 times faster.
Ensure that your Fluent Bit binary is built with SIMD support. This feature is available for architectures such as x86_64, amd64, aarch64, and arm64. Currently, SIMD is only enabled by default in Fluent Bit container images.
You can check if SIMD is enabled by looking for the following log entry when Fluent Bit starts:
Look for the simd entry, which will indicate the SIMD support type, such as SSE2, NEON, or none.
If your Fluent Bit binary wasn't built with SIMD enabled, and you are using a supported platform, you can build Fluent Bit from source using the CMake option -DFLB_SIMD=On.
By default, most input plugins run in the same system thread as the main event loop. However, through configuration you can instruct them to run in a separate thread, which lets you take advantage of other CPU cores in your system.
To run an input plugin in threaded mode, add threaded: true as in the following example:
You can test logging pipelines locally to observe how they handle log messages. This guide explains how to use Docker Compose to run Fluent Bit and Elasticsearch locally, but you can apply the same principles to test other plugins.
Start by creating one of the corresponding Fluent Bit configuration files to start testing.
pipeline:
  inputs:
    - name: dummy
      dummy: '{"top": {".dotted": "value"}}'
  outputs:
    - name: es
      host: elasticsearch
      replace_dots: on

[INPUT]
    Name  dummy
    Dummy {"top": {".dotted": "value"}}

[OUTPUT]
    Name         es
    Host         elasticsearch
    Replace_Dots On

Use Docker Compose to run Fluent Bit (with the configuration file mounted) and Elasticsearch.
To view indexed logs, run the following command:
To reset your index, run the following command:
The Disk input plugin gathers information about the disk throughput of the running system at a configurable interval and reports it.
The Disk I/O metrics plugin creates metrics that are log-based, such as a JSON payload. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.
The plugin supports the following configuration parameters:
To get disk usage from your system, you can run the plugin from the command line or through the configuration file:
You can run the plugin from the command line:
Which returns information like the following:
In your main configuration file append the following:
Total interval (sec) = interval_sec + (interval_nsec / 1000000000)
For example: 1.5s = 1s + 500000000ns
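Expressed as configuration, that 1.5 second interval would look like:

```yaml
pipeline:
  inputs:
    - name: disk
      # Total interval = 1s + (500000000ns / 1000000000) = 1.5s
      interval_sec: 1
      interval_nsec: 500000000
```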
The Docker events input plugin uses the Docker API to capture server events. A complete list of possible events returned by this plugin can be found in the Docker documentation.
This plugin supports the following configuration parameters:
To capture Docker events, you can run the plugin from the command line or through the configuration file.
From the command line you can run the plugin with the following options:
In your main configuration file, append the following:
The Docker input plugin lets you collect Docker container metrics, including memory usage and CPU consumption.
The plugin supports the following configuration parameters:
If you set neither include nor exclude, the plugin tries to get metrics from all running containers.
The following example configuration collects metrics from two Docker containers (6bab19c3a0f9 and 14159be4ca2c).
This configuration will produce records like the following:
Fluent Bit is distributed as the fluent-bit package and is available for Raspbian. The following versions are supported:
Raspbian Bookworm (12)
Raspbian Bullseye (11)
Raspbian Buster (10)
Configuration files must be flexible enough for any deployment need, but they must keep a clean and readable format.
Fluent Bit commands extend a configuration file with specific built-in features. The following commands are available:
Enable traffic through a proxy server using the HTTP_PROXY environment variable.
Fluent Bit supports configuring an HTTP proxy for all egress HTTP/HTTPS traffic using the HTTP_PROXY or http_proxy environment variable.
The format for the HTTP proxy environment variable is http://USER:PASS@HOST:PORT, where:
USER is the username when using basic authentication.
parsers:
  - name: custom_parser1
    format: json
    time_key: time
    time_format: '%Y-%m-%dT%H:%M:%S.%L'
    time_keep: on
  - name: custom_parser2
    format: regex
    regex: '^\<(?<pri>[0-9]{1,5})\>1 (?<time>[^ ]+) (?<host>[^ ]+) (?<ident>[^ ]+) (?<pid>[-0-9]+) (?<msgid>[^ ]+) (?<extradata>(\[(.*)\]|-)) (?<message>.+)$'
    time_key: time
    time_format: '%Y-%m-%dT%H:%M:%S.%L'
    time_keep: on
    types: pid:integer

service:
  parsers_file: my-parsers.yaml

export MY_OUTPUT=stdout

$ bin/fluent-bit -c fluent-bit.conf
...
[0] cpu.local: [1491243925, {"cpu_p"=>1.750000, "user_p"=>1.750000, "system_p"=>0.000000, "cpu0.p_cpu"=>3.000000, "cpu0.p_user"=>2.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>4.000000, "cpu2.p_user"=>4.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}]

fluent-bit -h | grep JEMALLOC

Build Flags = JSMN_PARENT_LINKS JSMN_STRICT FLB_HAVE_TLS FLB_HAVE_SQLDB
FLB_HAVE_TRACE FLB_HAVE_FLUSH_LIBCO FLB_HAVE_VALGRIND FLB_HAVE_FORK
FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_REGEX
FLB_HAVE_C_TLS FLB_HAVE_SETJMP FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY

[SERVICE]
    # This is a commented line
    Daemon    off
    log_level debug

| Setting | File | Description |
| --- | --- | --- |
| credential_process | config | Linux only. See Sourcing credentials with an external process in the AWS CLI documentation. |
| aws_access_key_id, aws_secret_access_key, aws_session_token | credentials | Access key ID and secret key to use to authenticate. The session token must be set for temporary credentials. |
C3C0 A285 34B9 293E AF51 FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key) <[email protected]>

${HOSTNAME}: The system's hostname.
export FLUSH_INTERVAL=2

includes:
  - inclusion-1.yaml
  - subdir/inclusion-2.yaml

End-of-line comments aren't supported, only full-line comments.
[SERVICE]
    HTTP_Server On
    HTTP_Listen 0.0.0.0
    HTTP_PORT   2020
    Hot_Reload  On

| Key | Description | Default |
| --- | --- | --- |
| dev_name | Device name to limit the target (for example, sda). If not set, in_disk gathers information from all disks and partitions. | all disks |
| interval_nsec | Polling interval in nanoseconds. | 0 |
| interval_sec | Polling interval in seconds. | 1 |
| threaded | Indicates whether to run this input in its own thread. | false |
| Key | Description | Default |
| --- | --- | --- |
| buffer_size | The size of the buffer used to read Docker events, in bytes. | 8192 |
| key | When a message is unstructured (no parser applied), it's appended as a string under the key name message. | message |
| parser | Specify the name of a parser to interpret the entry as a structured message. | none |
| reconnect.retry_interval | The retry interval in seconds. | 1 |
| reconnect.retry_limits | The maximum number of retries allowed. The plugin tries to reconnect with the Docker socket when EOF is detected. | 5 |
| threaded | Indicates whether to run this input in its own thread. | false |
| unix_path | The Docker socket Unix path. | /var/run/docker.sock |
| Key | Description | Default |
| --- | --- | --- |
| exclude | A space-separated list of containers to exclude. | none |
| include | A space-separated list of containers to include. | none |
| interval_nsec | Polling interval in nanoseconds. | 0 |
| interval_sec | Polling interval in seconds. | 1 |
| path.containers | Container directory path, for custom Docker data-root configurations. | /var/lib/docker/containers |
| path.sysfs | Sysfs cgroup mount point. | /sys/fs/cgroup |
| threaded | Indicates whether to run this input in its own thread. | false |
[INPUT]
    Name    docker
    Include 6bab19c3a0f9 14159be4ca2c

[OUTPUT]
    Name  stdout
    Match *

pipeline:
  inputs:
    - name: docker
      include: 6bab19c3a0f9 14159be4ca2c
  outputs:
    - name: stdout
      match: '*'

To install Fluent Bit and related AWS output plugins on Amazon Linux 2 on EC2 using AWS Systems Manager, follow this AWS guide.
To install Fluent Bit on any Amazon Linux instance, follow these steps.
Fluent Bit is provided through a Yum repository. To add the repository reference to your system, add a new file called fluent-bit.repo in /etc/yum.repos.d/ with the following content:
[fluent-bit]
name = Fluent Bit
baseurl = https://packages.fluentbit.io/amazonlinux/2/
gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
enabled=1

[fluent-bit]
name = Fluent Bit
baseurl = https://packages.fluentbit.io/amazonlinux/2023/
gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
enabled=1

Ensure your GPG key is up to date.
After your repository is configured, run the following command to install it:
Instruct systemd to enable the service:
If you do a status check, you should see output similar to the following:
The default Fluent Bit configuration collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your /var/log/messages file.
The first step is to add the Fluent Bit server GPG key to your keyring so you can get Fluent Bit signed packages:
On Debian and derivative systems such as Raspbian, you need to add the Fluent Bit APT server entry to your sources lists.
Add the following content at the bottom of your /etc/apt/sources.list file.
Update your system's apt database:
Ensure your GPG key is up to date.
Use the following apt-get command to install the latest Fluent Bit:
Instruct systemd to enable the service:
If you do a status check, you should see output similar to the following:
The default configuration of Fluent Bit collects metrics for CPU usage and sends the records to the standard output. You can see the outgoing data in your /var/log/syslog file.
Configuring a logging pipeline might lead to an extensive configuration file. To keep the configuration human-readable, split it across multiple files.
The @INCLUDE command allows the configuration reader to include an external configuration file:
This example defines the main service configuration file and also includes two files to continue the configuration.
Fluent Bit respects the following order when including files:
Service
Inputs
Filters
Outputs
The following is an example of an inputs.conf file, like the one called in the previous example.
The following is an example of an outputs.conf file, like the one called in the previous example.
Fluent Bit supports configuration variables. One way to expose these variables to Fluent Bit is by setting a shell environment variable; the other is through the @SET command.
The @SET command can only be used at the root level of the configuration file. It can't be used inside a section:
| Command | Description |
| --- | --- |
| @INCLUDE FILE | Include a configuration file. |
| @SET KEY=VAL | Set a configuration variable. |
PASS is the password when using basic authentication.
HOST is the HTTP proxy hostname or IP address.
PORT is the port the HTTP proxy is listening on.
To use an HTTP proxy with basic authentication, provide the username and password:
When no authentication is required, omit the username and password:
The HTTP_PROXY environment variable is a standard way of setting an HTTP proxy in a containerized environment, and it's also natively supported by any application written in Go. Fluent Bit implements the same convention. The http_proxy environment variable is also supported. When both HTTP_PROXY and http_proxy are provided, HTTP_PROXY is preferred.
Use the NO_PROXY environment variable when traffic shouldn't flow through the HTTP proxy. The no_proxy environment variable is also supported. When both NO_PROXY and no_proxy environment variables are provided, NO_PROXY takes precedence.
The format for the no_proxy environment variable is a comma-separated list of host names or IP addresses.
A domain name matches itself and all of its subdomains (for example, example.com matches both example.com and test.example.com):
A domain with a leading dot (.) matches only its subdomains (for example, .example.com matches test.example.com but not example.com):
As an example, you might use NO_PROXY when running Fluent Bit in a Kubernetes environment where you want:
All real egress traffic to flow through an HTTP proxy.
All local Kubernetes traffic to not flow through the HTTP proxy.
In this case, set:
Jan 18 12:52:16 flb systemd[2222]: Starting GNOME Terminal Server
Jan 18 12:52:16 flb dbus-daemon[2243]: [session uid=1000 pid=2243] Successfully activated service 'org.gnome.Terminal'
Jan 18 12:52:16 flb systemd[2222]: Started GNOME Terminal Server.
Jan 18 12:52:16 flb gsd-media-keys[2640]: # watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)

[[TIMESTAMP, METADATA], MESSAGE]

[TIMESTAMP, MESSAGE]

SECONDS.NANOSECONDS

"Project Fluent Bit created on 1398289291"

{"project": "Fluent Bit", "created": 1398289291}

F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A

aws ssm get-parameters-by-path --region eu-central-1 --path /aws/service/aws-for-fluent-bit/ --query 'Parameters[*].Name'

aws ssm get-parameter --region ap-northeast-1 --name /aws/service/aws-for-fluent-bit/2.0.0

Parameters:
FireLensImage:
Description: Fluent Bit image for the FireLens Container
Type: AWS::SSM::Parameter::Value<String>
Default: /aws/service/aws-for-fluent-bit/latest

service:
  flush: ${FLUSH_INTERVAL}
  log_level: info

pipeline:
  inputs:
    - name: random
  outputs:
    - name: stdout
      match: '*'
      format: json_lines

[FIRST_SECTION]
    # This is a commented line
    Key1 some value
    Key2 another value
    # more comments

[SECOND_SECTION]
    KeyN 3.14

curl -X POST -d '{}' localhost:2020/api/v2/reload

{"hot_reload_count":3}

pipeline:
  inputs:
    - name: tail
      path: '/var/log/containers/*.log'
      buffer_chunk_size: 128kb
      buffer_max_size: 128kb

[2024/11/10 22:25:53] [ info] [fluent bit] version=3.2.0, commit=12cb22e0e9, pid=74359
[2024/11/10 22:25:53] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2024/11/10 22:25:53] [ info] [simd ] SSE2
[2024/11/10 22:25:53] [ info] [cmetrics] version=0.9.8
[2024/11/10 22:25:53] [ info] [ctraces ] version=0.5.7
[2024/11/10 22:25:53] [ info] [sp] stream processor started

pipeline:
  inputs:
    - name: tail
      path: '/var/log/containers/*.log'
      threaded: true

version: "3.7"
services:
  fluent-bit:
    image: fluent/fluent-bit
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.17.6
    ports:
      - "9200:9200"
    environment:
      - discovery.type=single-node

curl "localhost:9200/_search?pretty" \
-H 'Content-Type: application/json' \
-d'{ "query": { "match_all": {} }}'curl -X DELETE "localhost:9200/fluent-bit?pretty"fluent-bit -i disk -o stdout...
[0] disk.0: [1485590297, {"read_size"=>0, "write_size"=>0}]
[1] disk.0: [1485590298, {"read_size"=>0, "write_size"=>0}]
[2] disk.0: [1485590299, {"read_size"=>0, "write_size"=>0}]
[3] disk.0: [1485590300, {"read_size"=>0, "write_size"=>11997184}]
...

pipeline:
  inputs:
    - name: disk
      tag: disk
      interval_sec: 1
      interval_nsec: 0
  outputs:
    - name: stdout
      match: '*'

[INPUT]
    Name          disk
    Tag           disk
    Interval_Sec  1
    Interval_NSec 0

[OUTPUT]
    Name  stdout
    Match *

fluent-bit -i docker_events -o stdout

pipeline:
  inputs:
    - name: docker_events
  outputs:
    - name: stdout
      match: '*'

[INPUT]
    Name docker_events

[OUTPUT]
    Name  stdout
    Match *

[1] docker.0: [1571994772.00555745, {"id"=>"6bab19c3a0f9", "name"=>"postgresql", "cpu_used"=>172102435, "mem_used"=>5693400, "mem_limit"=>4294963200}]

sudo yum install fluent-bit
sudo systemctl start fluent-bit

$ systemctl status fluent-bit
● fluent-bit.service - Fluent Bit
Loaded: loaded (/usr/lib/systemd/system/fluent-bit.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
Main PID: 3820 (fluent-bit)
CGroup: /system.slice/fluent-bit.service
└─3820 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...

sudo sh -c 'curl https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -'
echo "deb https://packages.fluentbit.io/raspbian/bookworm bookworm main" | sudo tee /etc/apt/sources.list.d/fluent-bit.list
echo "deb https://packages.fluentbit.io/raspbian/bullseye bullseye main" | sudo tee /etc/apt/sources.list.d/fluent-bit.list
echo "deb https://packages.fluentbit.io/raspbian/buster buster main" | sudo tee /etc/apt/sources.list.d/fluent-bit.list
sudo apt-get update
sudo apt-get install fluent-bit
sudo service fluent-bit start

$ sudo service fluent-bit status
● fluent-bit.service - Fluent Bit
Loaded: loaded (/lib/systemd/system/fluent-bit.service; disabled; vendor preset: enabled)
Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min ago
Main PID: 6739 (fluent-bit)
Tasks: 1
Memory: 656.0K
CPU: 1.393s
CGroup: /system.slice/fluent-bit.service
└─6739 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...
[SERVICE]
Flush 1
@INCLUDE inputs.conf
    @INCLUDE outputs.conf
[INPUT]
Name cpu
Tag mycpu
[INPUT]
Name tail
Path /var/log/*.log
    Tag varlog.*
[OUTPUT]
Name stdout
Match mycpu
[OUTPUT]
Name es
Match varlog.*
Host 127.0.0.1
Port 9200
    Logstash_Format On
// DO NOT USE
@SET my_input=cpu
@SET my_output=stdout
[SERVICE]
Flush 1
[INPUT]
Name ${my_input}
[OUTPUT]
    Name ${my_output}
HTTP_PROXY='http://example_user:[email protected]:8080'
HTTP_PROXY='http://proxy.example.com:8080'
NO_PROXY='foo.com,127.0.0.1,localhost'
NO_PROXY='.example.com,127.0.0.1,localhost'
NO_PROXY='127.0.0.1,localhost,kubernetes.default.svc'
listen
Set the address to listen to.
0.0.0.0
port
Set the port to listen to.
25826
threaded
Indicates whether to run this input in its own thread.
false
typesdb
Set the data specification file. You can specify multiple files separated by commas. Later entries take precedence over earlier ones.
/usr/share/collectd/types.db
To receive collectd datagrams, you can run the plugin from the command line or through the configuration file.
From the command line you can let Fluent Bit listen for collectd datagrams with the following options:
By default, the service listens on all interfaces (0.0.0.0) using UDP port 25826. You can change this directly:
In this example, collectd datagrams will only arrive through the network interface at 192.168.3.2 address and UDP port 9090.
In your main configuration file append the following:
With this configuration, Fluent Bit listens to 0.0.0.0:25826, and outputs incoming datagram packets to stdout.
You must set the same types.db files that your collectd server uses. Otherwise, Fluent Bit might not be able to interpret the payload properly.
The typesdb parameter supports multiple files separated by commas. When multiple files are specified, later entries take precedence over earlier ones if there are duplicate type definitions. This lets you override default types with custom definitions.
For example:
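A minimal sketch using the collectd input (matching the full example later on this page):
pipeline:
  inputs:
    - name: collectd
      listen: 0.0.0.0
      port: 25826
      typesdb: '/usr/share/collectd/types.db,/etc/collectd/custom.db'
  outputs:
    - name: stdout
      match: '*'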
In this configuration, custom type definitions in /etc/collectd/custom.db override any matching definitions from /usr/share/collectd/types.db.
The plugin supports the following configuration parameters:
prio_level
The log level to filter. The kernel log is dropped if its priority is more than prio_level. Allowed values are 0-8. 8 means all logs are saved.
8
threaded
Indicates whether to run this input in its own thread.
false
To start getting the Linux Kernel messages, you can run the plugin from the command line or through the configuration file:
Which returns output similar to:
As described previously, the plugin processes all messages that the Linux kernel reports. The output has been truncated for clarity.
In your main configuration file append the following:
A full feature set to access content of your records.
Fluent Bit works internally with structured records, which can be composed of an unlimited number of keys and values. Values can be anything: a number, a string, an array, or a map.
Having a way to select a specific part of the record is critical for certain core functionalities and plugins. This feature is called Record Accessor.
Consider record accessor a basic grammar for specifying record content and other miscellaneous values.
A record accessor rule starts with the character $. Using the structured content shown in the example as a reference, the following table describes how to access a record:
The following table describes some accessing rules and the expected returned value:
If the accessor key doesn't exist in the record like the last example $labels['undefined'], the operation is omitted, and no exception will occur.
The feature is enabled on a per-plugin basis, and not all plugins enable it. As an example, consider a configuration that uses the grep filter to match only records whose labels have the color blue:
The file content to process in test.log is the following:
When running Fluent Bit with the previous configuration, the output is:
record_accessor templating
The Fluent Bit record_accessor library limits which characters can separate template variables: only dots and commas (. and ,) can come after a template variable. This is because the templating library must parse the template and determine where a variable ends.
The following templates are invalid because the template variables aren't separated by commas or dots:
$TaskID-$ECSContainerName
$TaskID/$ECSContainerName
$TaskID_$ECSContainerName
However, the following are valid:
$TaskID.$ECSContainerName
$TaskID.ecs_resource.$ECSContainerName
$TaskID.fooo.$ECSContainerName
And the following are valid since they only contain one template variable with nothing after it:
fooo$TaskID
fooo____$TaskID
fooo/bar$TaskID
Fluent Bit output plugins aim to connect to external services to deliver logs over the network. Being able to connect to one node (host) is normal and enough for most use cases, but in other scenarios, balancing across different nodes is required. The Upstream feature provides this capability.
An Upstream defines a set of nodes that will be targeted by an output plugin. By the nature of the implementation, an output plugin must support the Upstream feature. The following plugin has Upstream support:
The current balancing mode implemented is round-robin.
To define an Upstream, you must create a dedicated configuration file that contains an UPSTREAM and one or multiple NODE sections. The following table describes the properties associated with each section. All properties are mandatory:
A Node might contain additional configuration keys required by the plugin, providing flexibility for the output plugin. A common use case is the Forward output, which requires a shared key when TLS is enabled.
In addition to the properties defined in the configuration table, the network operations against a defined node can optionally use TLS for encryption and certificate handling.
The TLS options available are described in the Transport Security section and can be added to any Node section.
The following example defines an Upstream called forward-balancing, intended for use by the Forward output plugin. It registers three nodes:
node-1: connects to 127.0.0.1:43000
node-2: connects to 127.0.0.1:44000
node-3: connects to 127.0.0.1:45000 using TLS without verification. It also defines a specific configuration option required by Forward output called shared_key.
Every Upstream definition must exist in its own configuration file in the file system. Defining multiple Upstreams in the same configuration file isn't allowed.
The CPU input plugin measures the CPU usage of a process or, by default, the whole system (with per-core detail). It reports values as percentages for every configured interval. This plugin is available only on Linux.
The following tables describe the information generated by the plugin. The following keys represent data for the overall system, and all values associated with the keys are percentages (0 to 100%):
The CPU metrics plugin creates log-based metrics, such as JSON payloads. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.
cpu_p
CPU usage of the overall system. This value is the sum of time spent in user and kernel space. The result takes into consideration the number of CPU cores in the system.
system_p
In addition to the keys reported in the previous table, similar content is created per CPU core. The cores are listed from 0 to N as the kernel reports them:
The plugin supports the following configuration parameters:
To get the statistics of the CPU usage of your system, you can run the plugin from the command line or through the configuration file:
You can run this input plugin from the command line using a command like the following:
The command returns results similar to the following:
As described previously, the CPU input plugin gathers the overall usage every second and flushes the information to the output every five seconds. This example uses the stdout plugin to demonstrate the output records. In a real use case, you might want to flush this information to a central aggregator.
In your main configuration file append the following:
A plugin to collect Fluent Bit metrics
Fluent Bit exposes metrics to let you monitor the internals of your pipeline. The collected metrics can be processed similarly to those from the Prometheus Node Exporter input plugin. They can be sent to output plugins including Prometheus Exporter, Prometheus Remote Write or OpenTelemetry.
You can run the plugin from the command line or through the configuration file:
Run the plugin from the command line using the following command:
which returns results like the following:
In the following configuration file, the input plugin fluentbit_metrics collects metrics every 2 seconds and exposes them through the Prometheus Exporter output plugin on HTTP/TCP port 2021.
You can test the exposed metrics by using curl:
The Health input plugin lets you check how healthy a TCP server is. It checks by issuing a TCP connection at regular intervals.
The plugin supports the following configuration parameters:
To start performing the checks, you can run the plugin from the command line or through the configuration file:
From the command line you can let Fluent Bit generate the checks with the following options:
In your main configuration file append the following:
Once Fluent Bit is running, you will see health check results in the output interface similar to this:
The Memory (mem) input plugin gathers memory and swap usage on Linux at a fixed interval and reports totals and free space. The plugin emits log-based metrics (for Prometheus-format metrics see the Node Exporter metrics input plugin).
The plugin supports the following configuration parameters:
To collect memory and swap usage from your system, run the plugin from the command line or through the configuration file.
Run the following command from the command line:
The output is similar to:
In your main configuration file append the following:
Example output when pid is set:
Fluent Bit is distributed as the fluent-bit package and is available for the latest stable CentOS system.
Fluent Bit supports the following architectures:
x86_64
aarch64
The following article covers the relevant compatibility changes for users upgrading from previous Fluent Bit versions.
For more details about changes in each release, refer to the release notes.
Release notes will be prepared in advance of a Git tag for a release. An official release should provide both a tag and a release note together to allow users to verify and understand the release contents.
The tag drives the binary release process. Release binaries (containers and packages) will appear after a tag and its associated release note. This lets users anticipate the new release binaries and allow, deny, or update them as appropriate in their infrastructure.
The td-agent-bit
It's possible for logs or data to be ingested or created faster than they can be flushed to their destinations. A common scenario is reading from big log files, especially with a large backlog, and dispatching the logs to a backend over the network, which takes time to respond. This generates backpressure, leading to high memory consumption in the service.
To avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data an input plugin can ingest. Restriction is done through the configuration parameters Mem_Buf_Limit and storage.Max_Chunks_Up.
As described in Buffering and storage, Fluent Bit offers two modes for data handling: in-memory only (default) and in-memory and filesystem (optional).
The default storage.type memory buffer can be restricted with Mem_Buf_Limit. If memory reaches this limit and you reach a backpressure scenario, you won't be able to ingest more data until the data chunks that are in memory can be flushed. The input pauses, and Fluent Bit emits a warning log message.
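A minimal sketch of restricting an input's memory buffer (the path and limit are illustrative):
pipeline:
  inputs:
    - name: tail
      path: /var/log/app/*.log
      mem_buf_limit: 5MB
  outputs:
    - name: stdout
      match: '*'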
Fluent Bit has an engine that helps coordinate data ingestion from input plugins. The engine calls the scheduler to decide when it's time to flush the data through one or multiple output plugins. The scheduler flushes new data at a fixed interval of seconds, and retries when asked.
When an output plugin gets called to flush some data, after processing that data it can notify the engine using these possible return statuses:
OK: Data successfully processed and flushed.
Retry: The engine asks the scheduler to retry flushing the data.
The Network I/O metrics (netif) input plugin gathers network traffic information of the running system at regular intervals and reports it. This plugin is available only on Linux.
The Network I/O metrics plugin creates log-based metrics, such as JSON payloads. For Prometheus-based metrics, see the Node Exporter Metrics input plugin.
The following table describes the metrics generated by the plugin. Metric names are prefixed with the interface name (for example, eth0):
Fluent Bit uses configuration files to store information about your specified inputs, filters, outputs, and more. You can write these configuration files in one of these formats:
YAML configuration files are the standard configuration format as of Fluent Bit v3.2. They use the .yaml file extension.
Classic configuration files will be deprecated at the end of 2026. They use the .conf file extension.
pipeline:
inputs:
- name: collectd
listen: 0.0.0.0
port: 25826
typesdb: '/usr/share/collectd/types.db,/etc/collectd/custom.db'
outputs:
- name: stdout
match: '*'[INPUT]
Name collectd
Listen 0.0.0.0
Port 25826
TypesDB /usr/share/collectd/types.db,/etc/collectd/custom.db
[OUTPUT]
Name stdout
    Match *
fluent-bit -i collectd -o stdout
fluent-bit -i collectd -p listen=192.168.3.2 -p port=9090 -o stdout
typesdb: '/usr/share/collectd/types.db,/etc/collectd/custom.db'
pipeline:
inputs:
- name: kmsg
tag: kernel
outputs:
- name: stdout
      match: '*'
[INPUT]
Name kmsg
Tag kernel
[OUTPUT]
Name stdout
    Match *
fluent-bit -i kmsg -t kernel -o stdout -m '*'
...
[0] kernel: [1463421823, {"priority"=>3, "sequence"=>1814, "sec"=>11706, "usec"=>732233, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[1] kernel: [1463421823, {"priority"=>3, "sequence"=>1815, "sec"=>11706, "usec"=>732300, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[2] kernel: [1463421829, {"priority"=>3, "sequence"=>1816, "sec"=>11712, "usec"=>729728, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[3] kernel: [1463421829, {"priority"=>3, "sequence"=>1817, "sec"=>11712, "usec"=>729802, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec
...
none
db.sync
Set a database sync method. Accepted values: extra, full, normal, off.
normal
interval_sec
Set the reconnect interval (seconds).
0
interval_nsec
Set the reconnect interval (sub seconds: nanoseconds).
500000000
kube_url
API Server endpoint.
https://kubernetes.default.svc
kube_ca_file
Kubernetes TLS CA file.
/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kube_ca_path
Kubernetes TLS ca path.
none
kube_token_file
Kubernetes authorization token file.
/var/run/secrets/kubernetes.io/serviceaccount/token
kube_token_ttl
Kubernetes token time to live, until it's read again from the token file.
10m
kube_request_limit
Kubernetes limit parameter for events query, no limit applied when set to 0.
0
kube_retention_time
Kubernetes retention time for events.
1h
kube_namespace
Kubernetes namespace to query events from.
all
tls.debug
Debug level between 0 (nothing) and 4 (every detail).
0
tls.verify
Enable or disable verification of TLS peer certificate.
On
tls.vhost
Set optional TLS virtual host.
none
In Fluent Bit 3.1 or later, this plugin uses a Kubernetes watch stream instead of polling. In versions earlier than 3.1, the interval parameters are used for reconnecting the Kubernetes watch stream.
This input always runs in its own thread.
The Kubernetes service account used by Fluent Bit must have get, list, and watch permissions to namespaces and pods for the namespaces watched in the kube_namespace configuration parameter. If you're using the Helm chart to configure Fluent Bit, this role is included.
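A minimal RBAC sketch for those permissions (the role name is illustrative; the Helm chart ships an equivalent role):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-events-reader
rules:
  - apiGroups: [""]
    resources: ["namespaces", "pods"]
    verbs: ["get", "list", "watch"]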
In the following configuration file, the Kubernetes events plugin collects events every 5 seconds (default for interval_nsec) and exposes them through the standard output plugin on the console:
Event timestamps are created from the first existing field, based on the following order of precedence:
lastTimestamp
firstTimestamp
metadata.creationTimestamp
db
Set a database file to keep track of recorded Kubernetes events.
$TaskIDfooo$ECSContainerName
$log
some message
$labels['color']
blue
$labels['project']['env']
production
$labels['unset']
null
$labels['undefined']
UPSTREAM
name
Defines a name for the Upstream in question.
NODE
name
Defines a name for the Node in question.
host
IP address or hostname of the target host.
port
TCP port of the target service.
CPU usage in kernel mode (that is, CPU usage by the kernel). The result takes into consideration the number of CPU cores in the system.
user_p
CPU usage in user mode (that is, CPU usage by user-space programs). The result takes into consideration the number of CPU cores in the system.
cpuN.p_cpu
Represents the total CPU usage by core N.
cpuN.p_system
Total CPU spent in system or kernel mode associated to this core.
cpuN.p_user
Total CPU spent in user mode or user space programs associated to this core.
interval_nsec
Polling interval in nanoseconds.
0
interval_sec
Polling interval in seconds.
1
pid
Specify the process ID (PID) of a running process in the system. By default, the plugin monitors the whole system but if this option is set, it will only monitor the given process ID.
none
threaded
Indicates whether to run this input in its own thread.
false
scrape_interval
The rate at which Fluent Bit internal metrics are collected.
2 seconds
scrape_on_start
Scrape metrics at startup to avoid waiting one scrape_interval for the first round of metrics.
false
threaded
Indicates whether to run this input in its own thread.
false
add_host
If enabled, hostname is appended to each record.
false
add_port
If enabled, port number is appended to each record.
false
alert
If enabled, it generates messages only when the target TCP service is down.
false
host
Name of the target host or IP address.
none
interval_nsec
Specify a nanoseconds interval for service checks. Works with the interval_sec configuration key.
0
interval_sec
Interval in seconds between the service checks.
1
port
TCP port where to perform the connection request.
none
threaded
Indicates whether to run this input in its own thread.
false
Mem.free
Free or available memory reported by the kernel.
Kilobytes
Mem.total
Total system memory.
Kilobytes
Mem.used
Memory in use (Mem.total - Mem.free).
Kilobytes
Swap.free
Free swap space.
Kilobytes
Swap.total
Total system swap.
Kilobytes
Swap.used
Swap space in use (Swap.total - Swap.free).
Kilobytes
proc_bytes
Optional. Resident set size for the configured process (pid).
Bytes
proc_hr
Optional. Human-readable value of proc_bytes (for example, 12.00M).
Formatted
interval_nsec
Polling interval in nanoseconds.
0
interval_sec
Polling interval in seconds.
1
pid
Process ID to measure. When set, the plugin also reports proc_bytes and proc_hr.
none
threaded
Run this input in its own thread.
false
arm64v8
For CentOS 9 and later, Fluent Bit uses CentOS Stream as the canonical base system.
The recommended secure deployment approach is to use the following instructions:
CentOS 8 is now end-of-life, so the default Yum repositories are unavailable.
Ensure you've configured an appropriate mirror. For example:
An alternative is to use Rocky or Alma Linux, which should be equivalent.
From CentOS 9 Stream onwards, CentOS dependencies update more often than downstream distributions. This might mean incompatible (more recent) versions of certain dependencies are provided (for example, OpenSSL). Fluent Bit also provides Rocky Linux and AlmaLinux repositories.
Replace the centos string in Yum configuration with almalinux or rockylinux to use those repositories instead. This might be required for RHEL 9 as well, which no longer tracks equivalent CentOS 9 Stream dependencies. No RHEL 9 build is provided, as it's expected you're using one of the OSS variants listed.
The fluent-bit package is provided through a Yum repository. To add the repository reference to your system:
In /etc/yum.repos.d/, add a new file called fluent-bit.repo.
Add the following content to the file:
As a best practice, enable gpgcheck and repo_gpgcheck for security reasons. Fluent Bit signs its repository metadata and all Fluent Bit packages.
Ensure your GPG key is up to date.
After your repository is configured, run the following command to install it:
Instruct Systemd to enable the service:
If you do a status check, you should see a similar output like this:
The default Fluent Bit configuration collects CPU usage metrics and sends the records to standard output. You can see the outgoing data in your /var/log/messages file.
The fluent-bit.repo file for the latest installations of Fluent Bit uses a $releasever variable to determine the correct version of the package to install to your system:
Depending on your Red Hat distribution version, this variable can return a value other than the OS major release version (for example, RHEL7 Server distributions return 7Server instead of 7). The Fluent Bit package URL uses the major OS release version, so any other value here will cause a 404.
To resolve this issue, replace the $releasever variable with your system's OS major release version. For example:
CentOS 9 and later will no longer be compatible with RHEL 9 as it might track more recent dependencies. Alternative AlmaLinux and RockyLinux repositories are available.
See the previous guidance.
fluent-bit
If you are migrating from a previous version of Fluent Bit, review the following important changes:
By default, the tail input plugin follows a file from the end after the service starts, instead of reading it from the beginning. Every file found when the plugin starts is followed from its last position. New files discovered at runtime or when files rotate are read from the beginning.
To keep the old behavior, set the option read_from_head to true.
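A minimal sketch of restoring the old behavior (the path is illustrative):
pipeline:
  inputs:
    - name: tail
      path: /var/log/*.log
      read_from_head: true
  outputs:
    - name: stdout
      match: '*'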
The project_id of the resource in a LogEntry sent to Google Cloud Logging is now set to the project ID rather than the project number. To learn the difference between project IDs and project numbers, see Creating and managing projects.
If you have existing queries based on the resource's project_id, update your query accordingly.
The migration from v1.4 to v1.5 is straightforward.
The keepalive configuration mode has been renamed to net.keepalive. All Network I/O keepalive is now enabled by default. To learn more about this and other associated configuration properties, read the Networking Administration section.
If you use the Elasticsearch output plugin, the default value of type changed from flb_type to _doc. Many versions of Elasticsearch tolerate this, but Elasticsearch v5.6 through v6.1 require a type without a leading underscore. See the Elasticsearch output plugin documentation FAQ entry for more details.
If you are migrating from Fluent Bit v1.3, there are no breaking changes.
If you are migrating from Fluent Bit v1.2 to v1.3, there are no breaking changes. If you are upgrading from an older version, review the following incremental changes:
Fluent Bit v1.2 fixed many issues associated with JSON encoding and decoding.
For example, when parsing Docker logs, it's no longer necessary to use decoders. The new Docker parser looks like this:
Fluent Bit made improvements to the Kubernetes filter's handling of stringified log messages. If the Merge_Log option is enabled, the filter tries to handle the log content as a JSON map and, if successful, adds its keys to the root map.
In addition, fixes and improvements were made to the Merge_Log_Key option. If a log merge succeeds, all new keys are packaged under the key specified by this option. A suggested configuration is as follows:
As an example, if the original log content is the following map:
the final record will be composed as follows:
If you are upgrading from Fluent Bit 1.0.x or earlier, review the following relevant changes when switching to Fluent Bit v1.1 or later series:
Fluent Bit introduced a new configuration property called Kube_Tag_Prefix to help Tag prefix resolution and address an unexpected behavior in previous versions.
During the 1.0.x release cycle, a commit in the Tail input plugin changed how the Tag was composed by default when using the wildcard for expansion, breaking compatibility with other services. Consider the following configuration example:
The expected behavior is that Tag will be expanded to:
kube.var.log.containers.apache.log
The change introduced in the 1.0 series switched from absolute path to the base filename only:
kube.apache.log
The Fluent Bit v1.1 release restored the default behavior; the Tag is now composed using the absolute path of the monitored file.
Having the absolute path in the Tag is relevant for routing and flexible configuration, and also helps keep compatibility with Fluentd behavior.
This behavior switch in the Tail input plugin affects how the Kubernetes filter operates. When the filter is used, it performs a local metadata lookup based on the file names when using Tail as a source. With the new Kube_Tag_Prefix option, you can specify the prefix used in the Tail input plugin. For the previous configuration example, the new configuration looks like:
The proper value for Kube_Tag_Prefix is composed of the Tag prefix set in the Tail input plugin plus the monitored directory path with slashes converted to dots.
Some configuration settings in Fluent Bit use standardized unit sizes to define data and storage limits. For example, the buffer_chunk_size and buffer_max_size parameters for the Tail input plugin use unit sizes.
The following table describes the unit sizes you can use and what they mean.
none
Bytes: If you specify an integer without a unit size, Fluent Bit interprets that value as a bytes representation.
32000 means 32,000 bytes.
k, kb, K, KB
Kilobytes: A unit of memory equal to 1,000 bytes.
32k means 32,000 bytes.
m, mb, M, MB
Megabytes: A unit of memory equal to 1,000,000 bytes.
32m means 32,000,000 bytes.
g, gb, G, GB
Gigabytes: A unit of memory equal to 1,000,000,000 bytes.
32g means 32,000,000,000 bytes.
Fluent Bit exposes most of its configuration features through the command line interface. Use the -h or --help flag to see a list of available options.
service:
flush: 1
log_level: info
pipeline:
inputs:
- name: kubernetes_events
tag: k8s_events
kube_url: https://kubernetes.default.svc
outputs:
- name: stdout
      match: '*'
[SERVICE]
flush 1
log_level info
[INPUT]
name kubernetes_events
tag k8s_events
kube_url https://kubernetes.default.svc
[OUTPUT]
name stdout
    match *
{
"log": "some message",
"stream": "stdout",
"labels": {
"color": "blue",
"unset": null,
"project": {
"env": "production"
}
}
}
[SERVICE]
flush 1
log_level info
parsers_file parsers.conf
[INPUT]
name tail
path test.log
parser json
[FILTER]
name grep
match *
regex $labels['color'] ^blue$
[OUTPUT]
name stdout
match *
format json_lines{"log": "message 1", "labels": {"color": "blue"}}
{"log": "message 2", "labels": {"color": "red"}}
{"log": "message 3", "labels": {"color": "green"}}
{"log": "message 4", "labels": {"color": "blue"}}$ bin/fluent-bit -c fluent-bit.conf
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io
[2020/09/11 16:11:07] [ info] [engine] started (pid=1094177)
[2020/09/11 16:11:07] [ info] [storage] version=1.0.5, initializing...
[2020/09/11 16:11:07] [ info] [storage] in-memory
[2020/09/11 16:11:07] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2020/09/11 16:11:07] [ info] [sp] stream processor started
[2020/09/11 16:11:07] [ info] inotify_fs_add(): inode=55716713 watch_fd=1 name=test.log
{"date":1599862267.483684,"log":"message 1","labels":{"color":"blue"}}
{"date":1599862267.483692,"log":"message 4","labels":{"color":"blue"}}[UPSTREAM]
name forward-balancing
[NODE]
name node-1
host 127.0.0.1
port 43000
[NODE]
name node-2
host 127.0.0.1
port 44000
[NODE]
name node-3
host 127.0.0.1
port 45000
tls on
tls.verify off
    shared_key secret
build/bin/fluent-bit -i cpu -t my_cpu -o stdout -m '*'
...
[0] [1452185189, {"cpu_p"=>7.00, "user_p"=>5.00, "system_p"=>2.00, "cpu0.p_cpu"=>10.00, "cpu0.p_user"=>8.00, "cpu0.p_system"=>2.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>4.00, "cpu1.p_system"=>2.00}]
[1] [1452185190, {"cpu_p"=>6.50, "user_p"=>5.00, "system_p"=>1.50, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>7.00, "cpu1.p_user"=>5.00, "cpu1.p_system"=>2.00}]
[2] [1452185191, {"cpu_p"=>7.50, "user_p"=>5.00, "system_p"=>2.50, "cpu0.p_cpu"=>7.00, "cpu0.p_user"=>3.00, "cpu0.p_system"=>4.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>6.00, "cpu1.p_system"=>0.00}]
[3] [1452185192, {"cpu_p"=>4.50, "user_p"=>3.50, "system_p"=>1.00, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>5.00, "cpu1.p_user"=>3.00, "cpu1.p_system"=>2.00}]
...
pipeline:
inputs:
- name: cpu
tag: my_cpu
outputs:
- name: stdout
      match: '*'
[INPUT]
Name cpu
Tag my_cpu
[OUTPUT]
Name stdout
    Match *
fluent-bit -i fluentbit_metrics -o stdout
...
[2025/12/02 08:33:54.689265000] [ info] [input:fluentbit_metrics:fluentbit_metrics.0] initializing
[2025/12/02 08:33:54.689272000] [ info] [input:fluentbit_metrics:fluentbit_metrics.0] storage_strategy='memory' (memory only)
[2025/12/02 08:33:54.689917000] [ info] [output:stdout:stdout.0] worker #0 started
[2025/12/02 08:33:54.690115000] [ info] [sp] stream processor started
[2025/12/02 08:33:54.690204000] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
2025-12-02T07:33:56.692855536Z fluentbit_uptime{hostname="XXXXX.local"} = 2
2025-12-02T07:33:54.687838528Z fluentbit_logger_logs_total{message_type="error"} = 0
2025-12-02T07:33:54.687838528Z fluentbit_logger_logs_total{message_type="warn"} = 0
2025-12-02T07:33:54.690212675Z fluentbit_logger_logs_total{message_type="info"} = 10
2025-12-02T07:33:54.687838528Z fluentbit_logger_logs_total{message_type="debug"} = 0
2025-12-02T07:33:54.687838528Z fluentbit_logger_logs_total{message_type="trace"} = 0
2025-12-02T07:33:54.689222850Z fluentbit_input_bytes_total{name="fluentbit_metrics.0"} = 0
2025-12-02T07:33:54.689222850Z fluentbit_input_records_total{name="fluentbit_metrics.0"} = 0
2025-12-02T07:33:54.689222850Z fluentbit_input_ring_buffer_writes_total{name="fluentbit_metrics.0"} = 0
2025-12-02T07:33:54.689222850Z fluentbit_input_ring_buffer_retries_total{name="fluentbit_metrics.0"} = 0
2025-12-02T07:33:54.689222850Z fluentbit_input_ring_buffer_retry_failures_total{name="fluentbit_metrics.0"} = 0
2025-12-02T07:33:56.692846827Z fluentbit_input_metrics_scrapes_total{name="fluentbit_metrics.0"} = 1
2025-12-02T07:33:54.689563930Z fluentbit_output_proc_records_total{name="stdout.0"} = 0
...
service:
flush: 1
log_level: info
pipeline:
inputs:
- name: fluentbit_metrics
tag: internal_metrics
scrape_interval: 2
outputs:
- name: prometheus_exporter
match: internal_metrics
host: 0.0.0.0
      port: 2021
# Fluent Bit Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collects Fluent Bit metrics and exposes
# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
flush 1
log_level info
[INPUT]
name fluentbit_metrics
tag internal_metrics
scrape_interval 2
[OUTPUT]
name prometheus_exporter
match internal_metrics
host 0.0.0.0
    port 2021
curl http://127.0.0.1:2021/metrics
fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout
pipeline:
inputs:
- name: health
host: 127.0.0.1
port: 80
interval_sec: 1
interval_nsec: 0
outputs:
- name: stdout
      match: '*'
[INPUT]
Name health
Host 127.0.0.1
Port 80
Interval_Sec 1
Interval_NSec 0
[OUTPUT]
Name stdout
    Match *
$ fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout
...
[0] health.0: [1624145988.305640385, {"alive"=>true}]
[1] health.0: [1624145989.305575360, {"alive"=>true}]
[2] health.0: [1624145990.306498573, {"alive"=>true}]
[3] health.0: [1624145991.305595498, {"alive"=>true}]
...
fluent-bit -i mem -t memory -o stdout -m '*'
...
[0] memory: [[1751381087.225589224, {}], {"Mem.total"=>3986708, "Mem.used"=>560708, "Mem.free"=>3426000, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
[0] memory: [[1751381088.228411537, {}], {"Mem.total"=>3986708, "Mem.used"=>560708, "Mem.free"=>3426000, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
[0] memory: [[1751381089.225600084, {}], {"Mem.total"=>3986708, "Mem.used"=>561480, "Mem.free"=>3425228, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
[0] memory: [[1751381090.228345064, {}], {"Mem.total"=>3986708, "Mem.used"=>561480, "Mem.free"=>3425228, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
...
pipeline:
inputs:
- name: mem
tag: memory
interval_sec: 5
pid: 1234
outputs:
- name: stdout
      match: '*'
[INPUT]
Name mem
Tag memory
Interval_Sec 5
Interval_NSec 0
PID 1234
[OUTPUT]
Name stdout
    Match *
...
[0] memory: [[1751381087.225589224, {}], {"Mem.total"=>3986708, "Mem.used"=>560708, "Mem.free"=>3426000, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0, "proc_bytes"=>12349440, "proc_hr"=>"11.78M"}]
...
[fluent-bit]
name = Fluent Bit
baseurl = https://packages.fluentbit.io/centos/$releasever/
gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
repo_gpgcheck=1
enabled=1
sudo yum install fluent-bit
sudo systemctl start fluent-bit
$ sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-* && \
  sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
$ systemctl status fluent-bit
● fluent-bit.service - Fluent Bit
Loaded: loaded (/usr/lib/systemd/system/fluent-bit.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2016-07-07 02:08:01 BST; 9s ago
Main PID: 3820 (fluent-bit)
CGroup: /system.slice/fluent-bit.service
           └─3820 /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
...
[fluent-bit]
name = Fluent Bit
    baseurl = https://packages.fluentbit.io/centos/$releasever/$basearch/
[fluent-bit]
name = Fluent Bit
baseurl = https://packages.fluentbit.io/centos/7/$basearch/
gpgcheck=1
gpgkey=https://packages.fluentbit.io/fluentbit.key
repo_gpgcheck=1
enabled=1
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep On
[FILTER]
Name Kubernetes
Match kube.*
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
    Merge_Log_Key log_processed
{"key1": "val1", "key2": "val2"}
{"log": "{\"key1\": \"val1\", \"key2\": \"val2\"}", "log_processed": { "key1": "val1", "key2": "val2" } }
[INPUT]
Name tail
Path /var/log/containers/*.log
    Tag kube.*
[INPUT]
Name tail
Path /var/log/containers/*.log
Tag kube.*
[FILTER]
Name kubernetes
Match *
    Kube_Tag_Prefix kube.var.log.containers.
# Podman container tooling.
podman run --rm -ti fluent/fluent-bit --help
# Docker container tooling.
docker run --rm -it fluent/fluent-bit --help
You must have Homebrew installed on your system. If it isn't present, install it with the following command:
The Fluent Bit package on Homebrew isn't officially supported, but should work for basic use cases and testing. It can be installed using:
Run the following brew command in your terminal to retrieve the dependencies:
Download a copy of the Fluent Bit source code (upstream):
If you want to use a specific version, check out the corresponding tag. For example, to use v4.0.4, run:
To prepare the build system, you must expose certain environment variables so Fluent Bit CMake build rules can pick the right libraries:
Change to the build/ directory inside the Fluent Bit sources:
Build Fluent Bit. This example indicates to the build system the location where the final binaries and configuration files should be installed:
Install Fluent Bit to the previously specified directory. Writing to this directory requires root privileges.
The binaries and configuration examples can be located at /opt/fluent-bit/.
Clone the Fluent Bit source code (upstream):
If you want to use a specific version, check out the corresponding tag. For example, to use v4.0.4, run:
To prepare the build system, you must expose certain environment variables so Fluent Bit CMake build rules can pick the right libraries:
Create the specific macOS SDK target. For example, to specify macOS Big Sur (11.3) SDK environment:
Change to the build/ directory inside the Fluent Bit sources:
Build the Fluent Bit macOS installer:
The macOS installer will be generated as:
Finally, the fluent-bit-<fluent-bit version>-(intel or apple).pkg will be generated.
The created installer will put binaries at /opt/fluent-bit/.
To make the Fluent Bit binary easier to access, extend the PATH variable:
To test Fluent Bit, generate a test message using the Dummy input plugin, which prints to the standard output interface every second:
You will see an output similar to this:
To halt the process, press ctrl-c in the terminal.
When an input pauses, Fluent Bit emits a [warn] [input] {input name or alias} paused (mem buf overlimit) log message. Depending on the input plugin in use, this might cause incoming data to be discarded (for example, with the TCP input plugin). The tail plugin can handle pauses without data loss, storing its current file offset and resuming reading later. When buffer memory is available again, the input resumes accepting logs, and Fluent Bit emits an [info] [input] {input name or alias} resume (mem buf overlimit) message.
Mitigate the risk of data loss by configuring secondary storage on the filesystem using the storage.type of filesystem (as described in Buffering and storage). Initially, logs will be buffered to both memory and the filesystem. When the storage.max_chunks_up limit is reached, all new data will be stored in the filesystem. Fluent Bit stops queueing new data in memory and buffers only to the filesystem. When storage.type filesystem is set, the Mem_Buf_Limit setting no longer has any effect. Instead, the [SERVICE] level storage.max_chunks_up setting controls the size of the memory buffer.
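A minimal sketch of that setup (the paths and limits are illustrative):
service:
  storage.path: /var/log/flb-storage/
  storage.max_chunks_up: 128
pipeline:
  inputs:
    - name: tail
      path: /var/log/app/*.log
      storage.type: filesystem
  outputs:
    - name: stdout
      match: '*'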
Mem_Buf_Limit applies only with the default storage.type memory. This option is disabled by default and can be applied to all input plugins.
As an example situation:
Mem_Buf_Limit is set to 1MB.
The input plugin tries to append 700 KB.
The engine routes the data to an output plugin.
The output plugin backend (HTTP Server) is down.
Engine scheduler retries the flush after 10 seconds.
The input plugin tries to append 500 KB.
In this situation, the engine allows appending those 500 KB of data into the memory, with a total of 1.2 MB of data buffered. The limit is permissive and will allow a single write past the limit. When the limit is exceeded, the following actions are taken:
Block local buffers for the input plugin (can't append more data).
Notify the input plugin, invoking a pause callback.
The engine protects itself and won't append more data coming from the input plugin in question. It's the responsibility of the plugin to keep state and decide what to do in a paused state.
After a few seconds, if the scheduler was able to flush the initial 700 KB of data, or gave up after retrying, that memory is released and the following actions occur:
Upon data buffer release (700 KB), the internal counters get updated.
Counters now are set at 500 KB.
Because 500 KB is less than 1 MB, it checks the input plugin state.
If the plugin is paused, it invokes a resume callback.
The input plugin can continue appending more data.
The [SERVICE] level storage.max_chunks_up setting controls the size of the memory buffer. When storage.type filesystem is set, the Mem_Buf_Limit setting no longer has an effect.
The setting behaves similarly to the Mem_Buf_Limit scenario when the non-default storage.pause_on_chunks_overlimit is enabled.
When storage.pause_on_chunks_overlimit is disabled (the default), the input won't pause when the memory limit is reached. Instead, it switches to buffering logs only in the filesystem. Limit the disk space used for filesystem buffering with storage.total_limit_size.
See Buffering and Storage docs for more information.
Each plugin is independent and not all of them implement pause and resume callbacks. These callbacks are a notification mechanism for the plugin.
One example of a plugin that implements these callbacks and keeps state correctly is the Tail Input plugin. When the pause callback triggers, it pauses its collectors and stops appending data. Upon resume, it resumes the collectors and continues ingesting data. Tail tracks the current file offset when it pauses, and resumes at the same position. If the file hasn't been deleted or moved, it can still be read.
With the default storage.type memory and Mem_Buf_Limit, the following log messages emit for pause and resume:
With storage.type filesystem and storage.max_chunks_up, the following log messages emit for pause and resume:
Error: An unrecoverable error occurred and the engine shouldn't try to flush that data again.
The scheduler provides two configuration options, called scheduler.cap and scheduler.base, which can be set in the Service section. These determine the waiting time before a retry happens.
scheduler.cap
Set a maximum retry time in seconds. Supported in v1.8.7 or later.
2000
scheduler.base
Set a base of exponential backoff. Supported in v1.8.7 or later.
5
The scheduler.base determines the lower bound of time and the scheduler.cap determines the upper bound for each retry.
Fluent Bit uses an exponential backoff and jitter algorithm to determine the waiting time before a retry. The waiting time is a random number between a configurable upper and lower bound. For a detailed explanation of the exponential backoff and jitter algorithm, see Exponential Backoff And Jitter.
For the Nth retry, the lower bound of the random number will be:
base
The upper bound will be:
min(base * 2^N, cap)
For example:
When base is set to 3 and cap is set to 30:
First retry: The lower bound will be 3. The upper bound will be 3 * 2 = 6. The waiting time will be a random number between (3, 6).
Second retry: The lower bound will be 3. The upper bound will be 3 * (2 * 2) = 12. The waiting time will be a random number between (3, 12).
Third retry: The lower bound will be 3. The upper bound will be 3 * (2 * 2 * 2) = 24. The waiting time will be a random number between (3, 24).
Fourth retry: The lower bound will be 3, because 3 * (2 * 2 * 2 * 2) = 48 > 30. The upper bound will be 30. The waiting time will be a random number between (3, 30).
The following example configures the scheduler.base as 3 seconds and scheduler.cap as 30 seconds.
The waiting time will be:
Retry 1: a random number in (3, 6)
Retry 2: a random number in (3, 12)
Retry 3: a random number in (3, 24)
Retry 4: a random number in (3, 30)
The scheduler provides a configuration option called Retry_Limit, which can be set independently for each output section. This option lets you disable retries or impose a limit to try N times and then discard the data after reaching that limit:
Retry_Limit
N
Integer value to set the maximum number of retries allowed. N must be >= 1 (default: 1)
Retry_Limit
no_limits or False
When set, there's no limit to the number of retries the scheduler can attempt.
Retry_Limit
no_retries
When set, retries are disabled and the scheduler doesn't try to send data to the destination again if the first attempt failed.
The following example configures two outputs, where the HTTP plugin has an unlimited number of retries, and the Elasticsearch plugin has a limit of 5 retries:
copies
Number of messages to generate each time messages are generated.
1
dummy
Dummy JSON record.
{"message":"dummy"}
fixed_timestamp
If enabled, use a fixed timestamp. This allows the message to be pre-generated once.
false
flush_on_startup
If set to true, the first dummy event is generated at startup.
false
interval_nsec
Set time interval, in nanoseconds, at which every message is generated. If set, rate configuration is ignored.
0
interval_sec
Set time interval, in seconds, at which every message is generated. If set, rate configuration is ignored.
0
metadata
Dummy JSON metadata.
{}
rate
Rate at which messages are generated, expressed in how many times per second.
1
samples
Limit the number of events generated. For example, if samples=3, the plugin generates only three events and stops. 0 means no limit.
0
start_time_nsec
Set a dummy base timestamp, in nanoseconds. If set to -1, the current time is used.
-1
start_time_sec
Set a dummy base timestamp, in seconds. If set to -1, the current time is used.
-1
threaded
Indicates whether to run this input in its own thread.
false
You can run the plugin from the command line or through the configuration file:
Run the plugin from the command line using the following command:
which returns results like the following:
In your main configuration file append the following:
buffer_chunk_size
Set the buffer chunk size.
512K
buffer_max_size
Set the maximum size of buffer.
4M
hostname
Specify hostname or fully qualified domain name. This parameter can be used for "sniffing" (auto-discovery of) cluster node information.
localhost
http2
Enable HTTP/2 support.
true
listen
The address to listen on.
0.0.0.0
meta_key
Specify a key name for meta information.
@meta
port
The port for Fluent Bit to listen on.
9200
tag_key
Specify a key name for extracting as a tag.
NULL
threaded
Indicates whether to run this input in its own thread.
false
version
Specify the Elasticsearch version that Fluent Bit reports to clients during sniffing and API requests.
8.0.0
The Elasticsearch input plugin supports TLS/SSL for receiving data from Beats agents or other clients over encrypted connections. For more details about the properties available and general configuration, refer to Transport Security.
When configuring TLS for Elasticsearch ingestion, common options include:
tls.verify: Enable or disable certificate validation for incoming connections.
tls.ca_file: Specify a CA certificate to validate client certificates when using mutual TLS (mTLS).
tls.crt_file and tls.key_file: Provide the server certificate and private key.
Elasticsearch clients use a process called "sniffing" to automatically discover cluster nodes. When a client connects, it can query the cluster to retrieve a list of available nodes and their addresses. This allows the client to distribute requests across the cluster and adapt when nodes join or leave.
The hostname parameter specifies the hostname or fully qualified domain name that Fluent Bit returns during sniffing requests. Clients use this information to build their connection list. Set this value to match how clients should reach this Fluent Bit instance (for example, an external IP or load balancer address rather than localhost in production environments).
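For example, a minimal sketch that advertises an external address during sniffing (the hostname is illustrative):
pipeline:
  inputs:
    - name: elasticsearch
      listen: 0.0.0.0
      port: 9200
      hostname: ingest.example.com
  outputs:
    - name: stdout
      match: '*'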
To start handling Bulk API requests, you can run the plugin from the command line or through the configuration file:
From the command line you can configure Fluent Bit to handle Bulk API requests with the following options:
In your configuration file append the following:
As described previously, the plugin will handle ingested Bulk API requests. For large bulk ingestion, you might have to increase buffer size using the buffer_max_size and buffer_chunk_size parameters:
Ingesting from Beats-series agents is also supported. For example, Filebeat, Metricbeat, and Winlogbeat can ship their collected data through this plugin.
The Fluent Bit node information is returned as Elasticsearch 8.0.0.
Users must specify the following settings in their Beats configurations:
For large log volumes, users might have to configure rate limiting in those Beats agents when Fluent Bit indicates that the application is exceeding the size limit for HTTP requests:
buffer_size
Maximum payload size (in bytes) for a single MQTT message.
2048
listen
Listener network interface.
0.0.0.0
payload_key
Field name where the MQTT message payload will be stored in the output record.
none
port
TCP port where listening for connections.
1883
threaded
Indicates whether to run this input in its own thread.
false
Notes:
buffer_size defaults to 2048 bytes; messages larger than this limit are dropped.
Defaults for listen and port are 0.0.0.0 and 1883, so you can omit them if you want the standard MQTT listener.
Payloads are expected to be JSON maps; non-JSON payloads will fail to parse.
The MQTT input plugin supports TLS/SSL. For the available options and guidance, see Transport Security.
To listen for MQTT messages, you can run the plugin from the command line or through the configuration file.
The MQTT input plugin lets Fluent Bit behave as a server. Dispatch some messages using an MQTT client. The following example uses the mosquitto_pub tool:
Running the following command:
Returns a response like the following:
The following command line will send a message to the MQTT input plugin:
In your main configuration file append the following:
{interface}.rx.bytes
Number of bytes received on the interface.
{interface}.rx.packets
Number of packets received on the interface.
{interface}.rx.errors
Number of receive errors on the interface.
{interface}.tx.bytes
Number of bytes transmitted on the interface.
{interface}.tx.packets
Number of packets transmitted on the interface.
{interface}.tx.errors
Number of transmit errors on the interface.
The plugin supports the following configuration parameters:
interface
Specify the network interface to monitor. For example, eth0.
none
interval_nsec
Polling interval in nanoseconds.
0
interval_sec
Polling interval in seconds.
1
test_at_init
If true, test if the network interface is valid at initialization.
false
To monitor network traffic from your system, you can run the plugin from the command line or through the configuration file.
Run Fluent Bit using a command similar to the following:
Which returns output similar to the following:
In your main configuration file append the following:
Total interval (sec) = interval_sec + (interval_nsec / 1000000000)
For example: 1.5s = 1s + 500000000ns
Samples
Specifies the number of samples to generate. The default value of -1 generates unlimited samples.
-1
Interval_Sec
Specifies the interval between generated samples, in seconds.
1
Interval_Nsec
Specifies the interval between generated samples, in nanoseconds. This works in conjunction with Interval_Sec.
0
Threaded
Specifies whether to run this input in its own thread.
false
To start generating random samples, you can either run the plugin from the command line or through a configuration file.
Use the following command line options to generate samples.
The following examples are sample configuration files for this input plugin:
pipeline:
inputs:
- name: random
samples: -1
interval_sec: 1
interval_nsec: 0
outputs:
[INPUT]
Name random
Samples -1
Interval_Sec 1
Interval_NSec 0
[OUTPUT]
Name stdout
    Match *
After Fluent Bit starts running, it generates reports in the output interface:
The Exec WASI input plugin lets you execute Wasm programs built as WASI targets, like external programs, and collect event logs from them.
The plugin supports the following configuration parameters:
Here is a configuration example.
in_exec_wasi can handle parsers. To retrieve structured data from a Wasm program, you must create a parsers.conf:
The time_format should match the timestamp format you're using in your program.
This example assumes the Wasm program writes JSON style strings to stdout.
Then, you can specify the parsers.conf in the main Fluent Bit configuration:
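A minimal sketch, assuming a Wasm program at /path/to/program.wasm that writes JSON strings to stdout (the parser name wasi, the tag, and the paths are illustrative). First, the parsers.conf:
[PARSER]
    Name        wasi
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L %z
Then reference it from the main configuration:
[SERVICE]
    Parsers_File parsers.conf
[INPUT]
    Name      exec_wasi
    Tag       exec.wasi.local
    WASI_Path /path/to/program.wasm
    Parser    wasi
[OUTPUT]
    Name  stdout
    Match *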
The Process metrics input plugin lets you check how healthy a process is. It does so by performing service checks at specified intervals.
This plugin creates metrics that are log-based, such as JSON payloads. For Prometheus-based metrics, see the Node exporter metrics input plugin.
The plugin supports the following configuration parameters:
To start performing the checks, you can run the plugin from the command line or through the configuration file:
The following example checks the health of the crond process.
In your main configuration file, append the following sections:
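A minimal sketch of that check (proc_name and interval_sec are the assumed option names; values are illustrative):
pipeline:
  inputs:
    - name: proc
      proc_name: crond
      interval_sec: 1
      alert: false
  outputs:
    - name: stdout
      match: '*'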
After Fluent Bit starts running, it outputs the health of the process:
The Splunk input plugin handles Splunk HTTP HEC requests.
This plugin uses the following configuration parameters:
To start performing the checks, you can run the plugin from the command line or through the configuration file.
By default, the tag for the Splunk input plugin is set from the end of the request URL. This tag is then used to route the event through the system. The default behavior of the Splunk input sets the tags for the following endpoints:
/services/collector
/services/collector/event
/services/collector/raw
The requests for these endpoints are interpreted as services_collector, services_collector_event, and services_collector_raw.
When instantiating multiple Splunk input plugins, you must specify the tag property on each plugin configuration to prevent data pipeline collisions.
From the command line you can configure Fluent Bit to handle HTTP HEC requests with the following options:
In your main configuration file append the following sections:
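A minimal sketch (port 8088 is the conventional HEC port; the tag is illustrative):
pipeline:
  inputs:
    - name: splunk
      listen: 0.0.0.0
      port: 8088
      tag: splunk_hec
  outputs:
    - name: stdout
      match: '*'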
The pipeline section defines the flow of how data is collected, processed, and sent to its final destination. This section contains the following subsections:
Fluent Bit supports multiple sources and formats. In addition, it provides filters that you can use to perform custom modifications. As your pipeline grows, it's important to validate your data and structure.
Fluent Bit users are encouraged to integrate data validation in their continuous integration (CI) systems.
In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the Expect filter, which you can use to validate keys and values from your records and take action when an exception is found, as shown in the sketch below.
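A minimal sketch of such validation using the Expect filter (key names and values are illustrative):
pipeline:
  inputs:
    - name: dummy
      dummy: '{"message": "ok", "color": "blue"}'
  filters:
    - name: expect
      match: '*'
      key_exists: color
      action: warn
  outputs:
    - name: stdout
      match: '*'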
A simplified view of the data processing pipeline is as follows:
The Podman metrics input plugin lets Fluent Bit gather Podman container metrics. The procedure for collecting the container list and gathering associated data is based on filesystem data.
The metrics can be exposed later as, for example, Prometheus counters and gauges.
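A minimal sketch (scrape_interval and scrape_on_start are assumed option names here; values are illustrative):
pipeline:
  inputs:
    - name: podman_metrics
      scrape_interval: 30
      scrape_on_start: true
  outputs:
    - name: stdout
      match: '*'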
$ git clone https://github.com/fluent/fluent-bit
$ cd fluent-bit
git checkout v4.0.4
$ export OPENSSL_ROOT_DIR=`brew --prefix openssl`
$ export PATH=`brew --prefix bison`/bin:$PATH
cd build/
$ git clone https://github.com/fluent/fluent-bit
$ cd fluent-bit
git checkout v4.0.4
$ export OPENSSL_ROOT_DIR=`brew --prefix openssl`
$ export PATH=`brew --prefix bison`/bin:$PATH
export MACOSX_DEPLOYMENT_TARGET=11.3
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install fluent-bit
brew install git cmake openssl bison libyaml
CPack: Create package using productbuild
CPack: Install projects
CPack: - Run preinstall target for: fluent-bit
CPack: - Install project: fluent-bit []
CPack: - Install component: binary
CPack: - Install component: library
CPack: - Install component: headers
CPack: - Install component: headers-extra
CPack: Create package
CPack: - Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-binary.pkg
CPack: - Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-headers.pkg
CPack: - Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-headers-extra.pkg
CPack: - Building component package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/_CPack_Packages/Darwin/productbuild//Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple/Contents/Packages/fluent-bit-1.9.2-apple-library.pkg
CPack: - package: /Users/fluent-bit-builder/GitHub/fluent-bit/build/fluent-bit-1.9.2-apple.pkg generated.
export PATH=/opt/fluent-bit/bin:$PATH
fluent-bit -i dummy -o stdout -f 1
...
[0] dummy.0: [1644362033.676766000, {"message"=>"dummy"}]
[0] dummy.0: [1644362034.676914000, {"message"=>"dummy"}]
[warn] [input] {input name or alias} paused (mem buf overlimit)
[info] [input] {input name or alias} resume (mem buf overlimit)
[input] {input name or alias} paused (storage buf overlimit)
[input] {input name or alias} resume (storage buf overlimit)
service:
flush: 5
daemon: off
log_level: debug
scheduler.base: 3
scheduler.cap: 30
[SERVICE]
Flush 5
Daemon off
Log_Level debug
scheduler.base 3
scheduler.cap 30
pipeline:
outputs:
- name: http
host: 192.168.5.6
port: 8080
retry_limit: false
- name: es
host: 192.168.5.20
port: 9200
logstash_format: on
retry_limit: 5
[OUTPUT]
Name http
Host 192.168.5.6
Port 8080
Retry_Limit False
[OUTPUT]
Name es
Host 192.168.5.20
Port 9200
Logstash_Format On
Retry_Limit 5
pipeline:
inputs:
- name: dummy
dummy: '{"message": "custom dummy"}'
outputs:
- name: stdout
match: '*'
[INPUT]
Name dummy
Dummy {"message": "custom dummy"}
[OUTPUT]
Name stdout
Match *
fluent-bit -i dummy -o stdout
...
[0] dummy.0: [[1686451466.659962491, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1686451467.659679509, {}], {"message"=>"dummy"}]
...
pipeline:
inputs:
- name: elasticsearch
listen: 0.0.0.0
port: 9200
outputs:
- name: stdout
match: '*'
[INPUT]
name elasticsearch
listen 0.0.0.0
port 9200
[OUTPUT]
name stdout
match *
pipeline:
inputs:
- name: elasticsearch
listen: 0.0.0.0
port: 9200
buffer_max_size: 20M
buffer_chunk_size: 5M
outputs:
- name: stdout
match: '*'
[INPUT]
name elasticsearch
listen 0.0.0.0
port 9200
buffer_max_size 20M
buffer_chunk_size 5M
[OUTPUT]
name stdout
match *
fluent-bit -i elasticsearch -p port=9200 -o stdout
output.elasticsearch:
allow_older_versions: true
ilm: false
processors:
- rate_limit:
limit: "200/s"pipeline:
inputs:
- name: mqtt
tag: data
listen: 0.0.0.0
port: 1883
payload_key: payload
outputs:
- name: stdout
match: '*'
[INPUT]
Name mqtt
Tag data
Listen 0.0.0.0
Port 1883
Payload_Key payload
[OUTPUT]
Name stdout
Match *
fluent-bit -i mqtt -t data -o stdout -m '*'
...
[0] data: [1463775773, {"topic"=>"some/topic", "key1"=>123, "key2"=>456}]
...
mosquitto_pub -m '{"key1": 123, "key2": 456}' -t some/topic
pipeline:
inputs:
- name: netif
tag: netif
interface: eth0
interval_sec: 1
interval_nsec: 0
verbose: false
test_at_init: false
outputs:
- name: stdout
match: '*'
[INPUT]
Name netif
Tag netif
Interface eth0
Interval_Sec 1
Interval_NSec 0
Verbose false
Test_At_Init false
[OUTPUT]
Name stdout
Match *
fluent-bit -i netif -p interface=eth0 -o stdout
...
[0] netif.0: [1499524459.001698260, {"eth0.rx.bytes"=>89769869, "eth0.rx.packets"=>73357, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>4256474, "eth0.tx.packets"=>24293, "eth0.tx.errors"=>0}]
[1] netif.0: [1499524460.002541885, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
[2] netif.0: [1499524461.001142161, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
[3] netif.0: [1499524462.002612971, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
...
fluent-bit -i random -o stdout
$ fluent-bit -i random -o stdout
...
[0] random.0: [1475893654, {"rand_value"=>1863375102915681408}]
[1] random.0: [1475893655, {"rand_value"=>425675645790600970}]
[2] random.0: [1475893656, {"rand_value"=>7580417447354808203}]
[3] random.0: [1475893657, {"rand_value"=>1501010137543905482}]
[4] random.0: [1475893658, {"rand_value"=>16238242822364375212}]
...
threaded
Indicates whether to run this input in its own thread.
false
verbose
If true, gather metrics precisely.
false
accessible_paths
Specify the allowed list of paths to be able to access paths from Wasm programs.
.
buf_size
Size of the buffer. Review unit sizes for allowed values.
4096
interval_nsec
Polling interval (nanosecond).
0
interval_sec
Polling interval (seconds).
1
oneshot
Execute the command only once at startup.
false
parser
Specify the name of a parser to interpret the entry as a structured message.
threaded
Indicates whether to run this input in its own thread.
false
wasi_path
The location of a Wasm program file.
wasm_heap_size
Size of the heap size of Wasm execution. Review unit sizes for allowed values.
8192
wasm_stack_size
Size of the stack size of Wasm execution. Review unit sizes for allowed values.
8192
listen
The address to listen on.
0.0.0.0
port
The port for Fluent Bit to listen on.
9880
tag_key
Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key.
none
buffer_max_size
Specify the maximum buffer size in KB to receive a JSON message.
4M
buffer_chunk_size
This sets the chunk size for incoming JSON messages. These chunks are then stored and managed in the space available by buffer_max_size.
512K
successful_response_code
Set the successful response code. Allowed values: 200, 201, and 204
201
splunk_token
Specify a Splunk token for HTTP HEC authentication. If multiple tokens are specified (with commas and no spaces), usage will be divided across each of the tokens.
none
store_token_in_metadata
Store Splunk HEC tokens in the Fluent Bit metadata. If set to false, tokens will be stored as normal key-value pairs in the record data.
true
splunk_token_key
Use the specified key for storing the Splunk token for HTTP HEC. Use only when store_token_in_metadata is false.
@splunk_token
Threaded
Indicates whether to run this input in its own thread.
false
Unlike filters, processors and parsers aren't defined within a unified section of YAML configuration files and don't use tag matching. Instead, each input or output defined in the configuration file can have a parsers key and processors key to configure the parsers and processors for that specific plugin.
A pipeline section will define a complete pipeline configuration, including inputs, filters, and outputs subsections. You can define multiple pipeline sections, but they won't operate independently. Instead, all components will be merged into a single pipeline internally.
Each of the inputs, filters, and outputs subsections is an array of maps containing the parameters for each plugin. Most properties are either strings or numbers and can be defined directly.
For example:
This pipeline consists of two inputs: a tail plugin and an HTTP server plugin. Each plugin has its own map in the array of inputs, consisting of basic properties. To use more advanced properties composed of multiple values, define the property itself as an array, such as the record and allowlist_key properties for the record_modifier filter:
Where a single list entry carries two values, separate them with a space, as in the record property for the record_modifier filter.
An input section defines a source (related to an input plugin). Each section has a base configuration. Each input plugin can add its own configuration keys:
Name
Name of the input plugin. Defined as subsection of the inputs section.
Tag
Tag name associated to all records coming from this plugin.
Log_Level
Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level.
Name is mandatory and tells Fluent Bit which input plugin to load. Tag is mandatory for all plugins except the forward input plugin, which provides dynamic tags.
The following is an example of an input section for the cpu plugin.
A filter section defines a filter (related to a filter plugin). Each section has a base configuration and each filter plugin can add its own configuration keys:
Name
Name of the filter plugin. Defined as a subsection of the filters section.
Match
A pattern to match against the tags of incoming records. It's case-sensitive and supports the star (*) character as a wildcard.
Match_Regex
A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax.
Log_Level
Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level.
Name is mandatory and lets Fluent Bit know which filter plugin should be loaded. The Match or Match_Regex is mandatory for all plugins. If both are specified, Match_Regex takes precedence.
The following is an example of a filter section for the grep plugin:
The outputs section specifies a destination that certain records should follow after a Tag match. Fluent Bit can route up to 256 OUTPUT plugins. The configuration supports the following keys:
Name
Name of the output plugin. Defined as a subsection of the outputs section.
Match
A pattern to match against the tags of incoming records. It's case-sensitive and supports the star (*) character as a wildcard.
Match_Regex
A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax.
Log_Level
Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. The output log level defaults to the SERVICE section's Log_Level.
The following is an example of an output section:
Here's an example of a pipeline configuration:
inputs
Specifies the name of the plugin responsible for collecting or receiving data. This component serves as the data source in the pipeline. Examples of input plugins include tail, http, and random.
filters
Filters are used to transform, enrich, or discard events based on specific criteria. They allow matching tags using strings or regular expressions, providing a more flexible way to manipulate data. Filters run as part of the main event loop and can be applied across multiple inputs and filters. Examples of filters include modify, grep, and nest.
outputs
Defines the destination for processed data. Outputs specify where the data will be sent, such as to a remote server, a file, or another service. Each output plugin is configured with matching rules to determine which events are sent to that destination. Common output plugins include stdout, elasticsearch, and kafka.
$ cmake -DFLB_DEV=on -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
$ make -j 16
$ sudo make install
$ cd build/
$ cmake -DCPACK_GENERATOR=productbuild -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
$ make -j 16
$ cpack -G productbuild
parsers:
- name: wasi
format: json
time_key: time
time_format: '%Y-%m-%dT%H:%M:%S.%L %z'
[PARSER]
Name wasi
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L %z
service:
flush: 1
daemon: off
parsers_file: parsers.yaml
log_level: info
http_server: off
http_listen: 0.0.0.0
http_port: 2020
pipeline:
inputs:
- name: exec_wasi
tag: exec.wasi.local
wasi_path: /path/to/wasi/program.wasm
# Note: run from the 'wasi_path' location.
accessible_paths: /path/to/accessible
parser: wasi
outputs:
- name: stdout
match: '*'
[SERVICE]
Flush 1
Daemon Off
Parsers_File parsers.conf
Log_Level info
HTTP_Server Off
HTTP_Listen 0.0.0.0
HTTP_Port 2020
[INPUT]
Name exec_wasi
Tag exec.wasi.local
WASI_Path /path/to/wasi/program.wasm
Accessible_Paths .,/path/to/accessible
Parser wasi
[OUTPUT]
Name stdout
Match *
fluent-bit -i splunk -p port=8088 -o stdout
pipeline:
inputs:
- name: splunk
listen: 0.0.0.0
port: 8088
outputs:
- name: stdout
match: '*'
[INPUT]
name splunk
listen 0.0.0.0
port 8088
[OUTPUT]
name stdout
match *
pipeline:
inputs:
- name: tail
path: /var/log/example.log
parser: json
processors:
logs:
- name: record_modifier
filters:
- name: grep
match: '*'
regex: key pattern
outputs:
- name: stdout
match: '*'
pipeline:
inputs:
...
filters:
...
outputs:
...
pipeline:
inputs:
- name: tail
tag: syslog
path: /var/log/syslog
- name: http
tag: http_server
port: 8080
pipeline:
inputs:
- name: tail
tag: syslog
path: /var/log/syslog
filters:
- name: record_modifier
match: syslog
record:
- powered_by calyptia
- name: record_modifier
match: syslog
allowlist_key:
- powered_by
- message
pipeline:
inputs:
- name: cpu
tag: my_cpu
pipeline:
filters:
- name: grep
match: '*'
regex: log aa
pipeline:
outputs:
- name: stdout
match: 'my*cpu'
true
Threaded
Specifies whether to run this input in its own thread.
false
Proc_Name
The name of the target process to check.
none
Interval_Sec
Specifies the interval between service checks, in seconds.
1
Interval_Nsec
Specifies the interval between service checks, in nanoseconds. This works in conjunction with Interval_Sec.
0
Alert
If enabled, the plugin will only generate messages if the target process is down.
false
Fd
If enabled, the number of file descriptors (fd) is appended to each record.
true
Mem
If enabled, memory usage of the process is appended to each record.
Grep to exclude certain records.
Record Modifier to alter records' content by adding and removing specific keys.
Add data validation between each step to ensure your data structure is correct.
This example uses the Expect filter.
Expect filters set rules aiming to validate criteria like:
Does the record contain key A?
Does the record not contain key A?
Does the key A value equal NULL?
Is the key A value not NULL?
Does the key A value equal B?
Every Expect filter configuration exposes rules to validate the content of your records using configuration parameters.
Consider a JSON file data.log with the following content:
The following files configure a pipeline to consume the log, while applying an Expect filter to validate that the keys color and label exist.
The following is the Fluent Bit YAML configuration file:
The following is the Fluent Bit YAML parsers file:
The following is the Fluent Bit classic configuration file:
The following is the Fluent Bit classic parsers file:
If the JSON parser fails or is missing in the Tail input (parser json), the Expect filter triggers the exit action.
To extend the pipeline, add a grep filter to match records whose label map contains a key called name with the value abc, then add an Expect filter to re-validate that condition:
The following is the Fluent Bit YAML configuration file:
When deploying in production, consider removing any Expect filters from your configuration file. These filters are unnecessary unless you need 100% coverage of checks at runtime.
add_path
If enabled, the path is appended to each record.
false
buf_size
Buffer size to read the file.
256
file
Absolute path to the target file. For example: /proc/uptime.
none
interval_nsec
Polling interval (nanoseconds).
0
interval_sec
Polling interval (seconds).
1
key
Rename a key.
head
To read the head of a file, you can run the plugin from the command line or through the configuration file.
The following example will read events from the /proc/uptime file, tag the records with the uptime name and flush them back to the stdout plugin:
The output will look similar to:
In your main configuration file append the following:
The interval is calculated like this:
Total interval (sec) = interval_sec + (interval_nsec / 1000000000).
For example: 1.5s = 1s + 500000000ns.
Use this mode to get a specific line. The following example gets CPU frequency from /proc/cpuinfo.
/proc/cpuinfo is a special file to get CPU information.
The CPU frequency is cpu MHz : 2791.009. The following configuration file gets the needed line:
If you run the following command:
The output looks similar to:
scrape_interval
Interval between each scrape of Podman data (in seconds).
30
scrape_on_start
Sets whether this plugin scrapes Podman data on startup.
false
path.config
Custom path to the Podman containers configuration file.
/var/lib/containers/storage/overlay-containers/containers.json
path.sysfs
Custom path to the sysfs subsystem directory.
/sys/fs/cgroup
path.procfs
Custom path to the proc subsystem directory.
/proc
threaded
Indicates whether to run this input in its own thread.
false
This plugin doesn't execute podman commands or send HTTP requests to Podman API. It reads a Podman configuration file and metrics exposed by the /sys and /proc filesystems.
This plugin supports and automatically detects both cgroups v1 and v2.
You can run the following curl command:
Which returns information like:
Currently supported counters are:
container_memory_usage_bytes
container_memory_max_usage_bytes
container_memory_rss
container_spec_memory_limit_bytes
container_cpu_user_seconds_total
container_cpu_usage_seconds_total
container_network_receive_bytes_total
container_network_receive_errors_total
container_network_transmit_bytes_total
container_network_transmit_errors_total
This plugin mimics the naming convention of Docker metrics exposed by cadvisor.
listen
The address to listen on.
0.0.0.0
port
The port to listen on.
8080
buffer_max_size
Specifies the maximum buffer size in KB to receive a JSON message.
4M
buffer_chunk_size
Sets the chunk size for incoming JSON messages. These chunks are then stored and managed in the space specified by buffer_max_size.
512K
successful_response_code
Specifies the success response code. Supported values are 200, 201, and 204.
201
tag_from_uri
If true, a tag will be created from the uri parameter (for example, api_prom_push from /api/prom/push), and any tag specified in the configuration will be ignored. If false, you must provide a tag in the configuration for this plugin.
true
uri
Specifies an optional HTTP URI for the target web server listening for Prometheus remote write payloads (for example, /api/prom/push).
none
threaded
Specifies whether to run this input in its own thread.
false
The following examples are sample configuration files for this input plugin:
pipeline:
inputs:
- name: prometheus_remote_write
listen: 127.0.0.1
port: 8080
uri: /api/prom/push
outputs:
- name: stdout
match: '*'
[INPUT]
name prometheus_remote_write
listen 127.0.0.1
port 8080
uri /api/prom/push
[OUTPUT]
name stdout
match *
These sample configurations configure Fluent Bit to listen for data on port 8080. You can send payloads in Prometheus remote-write format to the endpoint /api/prom/push.
The Prometheus remote write input plugin supports TLS and SSL. For more details about the properties available and general configuration, refer to the Transport security documentation.
To communicate with TLS, you must use these TLS-related parameters:
Now, you should be able to send data over TLS to the remote-write input.
File
Absolute path to the device entry. For example, /dev/ttyS0.
none
Bitrate
The bit rate for the communication. For example: 9600, 38400, 115200.
none
Min_Bytes
The serial interface expects at least Min_Bytes to be available before processing the message.
1
Separator
Specify a separator string that's used to determine when a message ends.
none
Format
Specify the format of the incoming data stream. Format and Separator can't be used at the same time.
json (no other options available)
Threaded
Indicates whether to run this input in its own thread.
false
To retrieve messages by using the Serial interface, you can run the plugin from the command line or through the configuration file:
The following example loads the serial input plugin, sets a bitrate of 9600, listens on the /dev/tnt0 interface, and uses the custom tag data to route the message.
The interface (/dev/tnt0) is an emulation of a serial interface. Later examples write messages to the other end of the interface (for example, /dev/tnt1).
In Fluent Bit you can run the command:
Which should produce output like:
Using the Separator configuration, you can send multiple messages at once.
Run this command after starting Fluent Bit:
Then, run Fluent Bit:
This should produce results similar to the following:
In your main configuration file append the following sections:
You can emulate a serial interface on your Linux system and test the serial input plugin locally when you don't have an interface in your computer. The following procedure has been tested on Ubuntu 15.04 running Linux Kernel 4.0.
Download the sources:
Unpack and compile:
Copy the new kernel module into the kernel modules directory:
Load the module:
You should see new serial ports in /dev (ls /dev/tnt*).
Give appropriate permissions to the new serial ports:
When the module is loaded, it will interconnect the following virtual interfaces:
host
The host of the Prometheus metric endpoint to scrape.
none
port
The port of the Prometheus metric endpoint to scrape.
none
scrape_interval
The interval to scrape metrics.
10s
metrics_path
The metrics URI endpoint, which must start with a forward slash (/). Parameters can be added to the path by using the question mark (?) separator.
/metrics
threaded
Indicates whether to run this input in its own thread.
false
If an endpoint exposes Prometheus Metrics you can specify the configuration to scrape and then output the metrics. The following example retrieves metrics from the HashiCorp Vault application.
pipeline:
inputs:
- name: prometheus_scrape
host: 0.0.0.0
port: 8201
tag: vault
metrics_path: /v1/sys/metrics?format=prometheus
scrape_interval: 10s
outputs:
- name: stdout
match: '*'
[INPUT]
name prometheus_scrape
host 0.0.0.0
port 8201
tag vault
metrics_path /v1/sys/metrics?format=prometheus
scrape_interval 10s
[OUTPUT]
name stdout
match *
This returns output similar to:
Fluent Bit implements a unified networking interface that's exposed to components like plugins. This interface abstracts the complexity of general I/O and is fully configurable.
A common use case is when a component or plugin needs to connect with a service to send and receive data. There are many challenges to handle like unresponsive services, networking latency, or any kind of connectivity error. The networking interface aims to abstract and simplify the network I/O handling, minimize risks, and optimize performance.
Fluent Bit uses the following networking concepts:
Typically, creating a new TCP connection to a remote server is straightforward and takes a few milliseconds. However, there are cases where DNS resolving, a slow network, or incomplete TLS handshakes might create long delays, or incomplete connection statuses.
net.connect_timeout lets you configure the maximum time to wait for a connection to be established. This value already considers the TLS handshake process.
net.connect_timeout_log_error indicates if an error should be logged in case of connect timeout. If disabled, the timeout is logged as a debug level message.
On environments with multiple network interfaces, you can choose which interface to use for Fluent Bit data that will flow through the network.
Use net.source_address to specify which network address to use for a TCP connection and data flow.
A connection keepalive refers to the ability of a client to keep the TCP connection open in a persistent way. This feature offers many benefits in terms of performance because communication channels are always established beforehand.
Any component that uses TCP channels, like HTTP or TLS, can take advantage of this feature. For configuration purposes, use the net.keepalive property.
If a connection keepalive is enabled, there might be scenarios where the connection can be unused for long periods of time. Unused connections can be removed. To control how long a keepalive connection can be idle, Fluent Bit uses a configuration property called net.keepalive_idle_timeout.
The global dns.mode value issues DNS requests using the specified protocol, either TCP or UDP. If a transport layer protocol is specified, plugins that configure the net.dns.mode setting override the global setting.
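As an illustrative sketch, a global dns.mode can be set in the service section and overridden per plugin (the host and port values here are placeholders):
service:
  dns.mode: UDP
pipeline:
  outputs:
    - name: http
      match: '*'
      host: 192.168.5.6
      port: 8080
      # The plugin-level setting overrides the global dns.mode for this output only.
      net.dns.mode: TCP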
For optimal performance, Fluent Bit tries to deliver data quickly and create TCP connections on-demand and in keepalive mode. In highly scalable environments, you might limit how many connections are created in parallel.
Use the net.max_worker_connections property in the output plugin section to set the maximum number of allowed connections. This property acts at the worker level. For example, if you have five workers and net.max_worker_connections is set to 10, a maximum of 50 connections is allowed. If the limit is reached, the output plugin issues a retry.
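For example, a sketch of an output with five workers where each worker may open at most 10 connections, allowing 50 in total (host, port, and values are illustrative):
pipeline:
  outputs:
    - name: http
      match: '*'
      host: 192.168.5.6
      port: 8080
      workers: 5
      # 5 workers x 10 connections = at most 50 parallel connections.
      net.max_worker_connections: 10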
When Fluent Bit listens for incoming connections (for example, in input plugins like HTTP, TCP, OpenTelemetry, Forward, and Syslog), the operating system maintains a queue of pending connections. The net.backlog option controls the maximum number of pending connections that can be queued before new connection attempts are refused. Increasing this value can help Fluent Bit handle bursts of incoming connections more gracefully. The default value is 128.
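A sketch, assuming net.backlog can be set directly on a listening input such as http (the value is illustrative):
pipeline:
  inputs:
    - name: http
      listen: 0.0.0.0
      port: 8888
      # Queue up to 1024 pending connections before refusing new attempts.
      net.backlog: 1024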
The following table describes the network configuration properties available and their usage in optimizing performance or adjusting configuration needs for plugins that rely on networking I/O:
This example sends five random messages through a TCP output connection. The remote side uses the nc (netcat) utility to see the data.
Use the following configuration snippet of your choice in a corresponding file named fluent-bit.yaml or fluent-bit.conf:
In another terminal, start nc and make it listen for messages on TCP port 9090:
Start Fluent Bit with the configuration file you defined previously to see data flowing to netcat:
If the net.keepalive option isn't enabled, Fluent Bit closes the TCP connection and netcat quits.
After the five records arrive, the connection idles. After 10 seconds, the connection closes due to net.keepalive_idle_timeout.
The in_ebpf input plugin uses eBPF (extended Berkeley Packet Filter) to capture low-level system events. This plugin lets Fluent Bit monitor kernel-level activities such as process executions, file accesses, memory allocations, network connections, and signal handling. It provides valuable insights into system behavior for debugging, monitoring, and security analysis.
The in_ebpf plugin leverages eBPF to trace kernel events in real-time. By specifying trace points, users can collect targeted system-level metrics and events, giving visibility into operating system interactions and performance characteristics.
The plugin supports the following configuration parameters:
To enable in_ebpf, ensure the following dependencies are installed on your system:
Kernel version: 4.18 or greater, with eBPF support enabled.
Required packages:
bpftool: Used to manage and debug eBPF programs.
in_ebpf
To enable the in_ebpf plugin, follow these steps to build Fluent Bit from source:
Clone the Fluent Bit repository:
Configure the build with in_ebpf:
Create a build directory and run cmake with the -DFLB_IN_EBPF=On flag to enable the in_ebpf plugin:
Compile the source:
Here's a basic example of how to configure the plugin:
The configuration enables tracing for:
Signal handling events (trace_signal)
Memory allocation events (trace_malloc)
Network bind operations (trace_bind)
You can enable multiple traces by adding multiple Trace directives in your configuration. The full list of available traces is maintained in the Fluent Bit source repository.
Each trace produces records with common fields and trace-specific fields.
All traces include the following fields:
The trace_signal trace includes these additional fields:
The trace_malloc trace includes these additional fields:
The trace_bind trace includes these additional fields:
The Exec input plugin lets you execute external programs and collects event logs.
This plugin invokes commands using a shell. Its inputs are subject to shell metacharacter substitution. Careless use of untrusted input in command arguments could lead to malicious command execution.
This plugin needs a functional /bin/sh and won't function in distroless production images.
The debug images use the same binaries, so even though they include a shell, this plugin isn't supported because it's compiled out.
The plugin supports the following configuration parameters:
You can run the plugin from the command line or through the configuration file:
The following example will read events from the output of ls.
which should return something like the following:
In your main configuration file append the following:
To use Fluent Bit with the exec plugin to wrap another command, use the exit_after_oneshot and propagate_exit_code options:
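A minimal sketch (the command, tag, and exit code here are illustrative):
pipeline:
  inputs:
    - name: exec
      tag: exec_oneshot_demo
      # Run once, emit one line of output, then fail with exit code 1.
      command: 'echo "wrapped command output"; exit 1'
      oneshot: true
      exit_after_oneshot: true
      propagate_exit_code: true
  outputs:
    - name: stdout
      match: '*'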
Fluent Bit emits the command's output as records, then exits with exit code 1.
Translation of command exit codes to the Fluent Bit exit code follows the usual shell conventions. As with a shell, there is no way to differentiate between the command exiting on a signal and the shell exiting on a signal. Similarly, there is no way to differentiate between normal exits with codes greater than 125 and abnormal or signal exits reported by Fluent Bit or the shell. Wrapped commands should use exit codes between 0 and 125 inclusive to allow reliable identification of a normal exit. If the command is a pipeline, the exit code will be the exit code of the last command in the pipeline unless overridden by shell options.
By default, the exec plugin emits one message per command output line, with a single field exec containing the full message. Use the parser option to specify the name of a parser configuration to use to process the command input.
Take great care with shell quoting and escaping when wrapping commands.
A script that re-evaluates its arguments can ruin your day if someone passes it the argument $(rm -rf /my/important/files; echo "deleted your stuff!").
The script is safer when the argument is treated strictly as data, as the sketch below shows.
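The following illustration (not the original snippet, which isn't reproduced here) contrasts the two approaches:
#!/bin/sh
# Unsafe: eval re-parses the argument, so command substitution inside it executes.
eval echo "Message: $1"
# Safer: printf passes the argument as data; it's never re-evaluated by the shell.
printf 'Message: %s\n' "$1"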
It's generally best to avoid dynamically generating the command or handling untrusted arguments.
The NGINX exporter metrics input plugin scrapes metrics from the NGINX stub status handler.
The plugin supports the following configuration parameters:
The NGINX exporter metrics input plugin supports TLS/SSL. For more details about the properties available and general configuration, refer to the Transport security documentation.
NGINX must be configured with a location that invokes the stub status handler. The following is an example configuration with such a location:
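For reference, a minimal sketch of such a server block (the port and location path are illustrative and must match the plugin's status_url):
server {
    listen 80;
    # Expose the stub status handler that the plugin scrapes.
    location /status {
        stub_status;
    }
}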
Another metrics API is available with NGINX Plus. You must first configure a path in NGINX Plus.
From the command line you can let Fluent Bit generate the checks with the following options:
To gather metrics from the command line with the NGINX Plus REST API, turn on the nginx_plus property:
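Assuming the plugin is registered as nginx_metrics and NGINX is reachable on localhost, the commands might look like the following (the NGINX Plus port and API path are assumptions):
fluent-bit -i nginx_metrics -p host=127.0.0.1 -p port=80 -p status_url=/status -o stdout
fluent-bit -i nginx_metrics -p host=127.0.0.1 -p port=8080 -p nginx_plus=on -p status_url=/api -o stdout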
In your main configuration file append the following:
And for NGINX Plus API:
You can test against the NGINX server running on localhost by invoking it directly from the command line:
This returns output similar to the following:
For a list of available metrics, refer to the plugin documentation on GitHub.
The HTTP input plugin lets Fluent Bit open an HTTP port that you can then route data to in a dynamic way.
The HTTP input plugin supports TLS/SSL. For more details about the properties available and general configuration, refer to the Transport security documentation.
The HTTP input plugin will accept and automatically handle gzipped content in version 2.2.1 or later if the header Content-Encoding: gzip is set on the received data.
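For example, a gzip-compressed payload can be posted like this (the file name is illustrative; the endpoint matches the examples below):
gzip -k app.log
curl --data-binary @app.log.gz -XPOST -H "Content-Encoding: gzip" -H "Content-Type: application/json" http://localhost:8888/app.log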
This plugin supports dynamic tags which let you send data with different tags through the same input. See the following for an example:
The tag for the HTTP input plugin is set by adding the tag to the end of the request URL. This tag is then used to route the event through the system.
For example, in the following curl message the tag set is app.log because the end path is /app.log:
http.0 example
If you don't set the tag, http.0 is automatically used. If you have multiple HTTP inputs then they will follow a pattern of http.N, where N is an integer representing the input.
tag_key
The tag_key configuration option lets you specify the key name that will be used to overwrite a tag. The tag's value will be replaced with the value associated with the specified key. For example, if you set tag_key to custom_tag and the log event contains a JSON field with the key custom_tag, Fluent Bit will use the value of that field as the new tag for routing the event through the system.
tag_key example
The success_header parameter lets you set multiple HTTP headers on success. The format is:
A plugin based on Process Exporter to collect process-level metrics from the system.
Prometheus Node exporter is a popular way to collect system level metrics from operating systems such as CPU, disk, network, and process statistics.
Fluent Bit 2.2 and later includes a process exporter plugin that builds off the Prometheus design to collect process level metrics without having to manage two separate processes or agents.
The Process Exporter Metrics plugin implements collection of the various metrics available from the third-party implementation of Prometheus Process Exporter, and these will be expanded over time as needed.
This input always runs in its own thread.
In the following configuration file, the input plugin process_exporter_metrics collects metrics every 2 seconds and exposes them through the output plugin on HTTP/TCP port 2021.
You can see the metrics by using curl:
When deploying Fluent Bit in a container you will need to specify additional settings to ensure that Fluent Bit has access to the process details.
The following docker command deploys Fluent Bit with a specific mount path for procfs and settings enabled to ensure that Fluent Bit can collect from the host. These are then exposed over port 2021.
Development prioritizes a subset of the collectors available in the Prometheus Process Exporter. To request others, open a GitHub issue by using the following template:
Kubernetes Production Grade Log Processor
Fluent Bit is a lightweight and extensible log processor with full support for Kubernetes:
Process Kubernetes containers logs from the file system or Systemd/Journald.
Enrich logs with Kubernetes Metadata.
Centralize your logs in third party storage services like Elasticsearch, InfluxDB, HTTP, and so on.
The service section defines global properties of the service. The available configuration keys are:
The gpu_metrics input plugin collects graphics processing unit (GPU) performance metrics from graphics cards on Linux systems. It provides real-time monitoring of GPU utilization, memory usage (VRAM), clock frequencies, power consumption, temperature, and fan speeds.
The plugin reads metrics directly from the Linux sysfs filesystem (/sys/class/drm/) without requiring external tools or libraries. Only AMD GPUs are supported through the amdgpu kernel driver. NVIDIA and Intel GPUs aren't supported.
fluent-bit -i proc -p proc_name=crond -o stdout
pipeline:
inputs:
- name: proc
proc_name: crond
interval_sec: 1
interval_nsec: 0
fd: true
mem: true
outputs:
- name: stdout
match: '*'
[INPUT]
Name proc
Proc_Name crond
Interval_Sec 1
Interval_NSec 0
Fd true
Mem true
[OUTPUT]
Name stdout
Match *
$ fluent-bit -i proc -p proc_name=fluent-bit -o stdout
...
[0] proc.0: [1485780297, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1120000, "mem.VmRSS"=>1120000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[1] proc.0: [1485780298, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1148000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[2] proc.0: [1485780299, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[3] proc.0: [1485780300, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
...
service:
flush: 1
log_level: info
parsers_file: parsers.yaml
pipeline:
inputs:
- name: tail
path: data.log
parser: json
exit_on_eof: on
# First 'expect' filter to validate that our data was structured properly
filters:
- name: expect
match: '*'
key_exists:
- color
- $label['name']
action: exit
outputs:
- name: stdout
match: '*'
parsers:
- name: json
format: json
[SERVICE]
flush 1
log_level info
parsers_file parsers.conf
[INPUT]
name tail
path ./data.log
parser json
exit_on_eof on
# First 'expect' filter to validate that our data was structured properly
[FILTER]
name expect
match *
key_exists color
key_exists $label['name']
action exit
[OUTPUT]
name stdout
match *
[PARSER]
Name json
Format json
service:
flush: 1
log_level: info
parsers_file: parsers.yaml
pipeline:
inputs:
- name: tail
path: data.log
parser: json
exit_on_eof: on
# First 'expect' filter to validate that our data was structured properly
filters:
- name: expect
match: '*'
key_exists:
- color
- $label['name']
action: exit
# Match records that only contains map 'label' with key 'name' = 'abc'
- name: grep
match: '*'
regex: "$label['name'] ^abc$"
# Check that every record contains 'label' with a non-null value
- name: expect
match: '*'
key_val_eq: $label['name'] abc
action: exit
# Append a new key to the record using an environment variable
- name: record_modifier
match: '*'
record: hostname ${HOSTNAME}
# Check that every record contains 'hostname' key
- name: expect
match: '*'
key_exists: hostname
action: exit
outputs:
- name: stdout
match: '*'
[SERVICE]
flush 1
log_level info
parsers_file parsers.conf
[INPUT]
name tail
path ./data.log
parser json
exit_on_eof on
# First 'expect' filter to validate that our data was structured properly
[FILTER]
name expect
match *
key_exists color
key_exists label
action exit
# Match records that only contains map 'label' with key 'name' = 'abc'
[FILTER]
name grep
match *
regex $label['name'] ^abc$
# Check that every record contains 'label' with a non-null value
[FILTER]
name expect
match *
key_val_eq $label['name'] abc
action exit
# Append a new key to the record using an environment variable
[FILTER]
name record_modifier
match *
record hostname ${HOSTNAME}
# Check that every record contains 'hostname' key
[FILTER]
name expect
match *
key_exists hostname
action exit
[OUTPUT]
name stdout
match *{"color": "blue", "label": {"name": null}}
{"color": "red", "label": {"name": "abc"}, "meta": "data"}
{"color": "green", "label": {"name": "abc"}, "meta": null}pipeline:
inputs:
- name: head
tag: uptime
file: /proc/uptime
buf_size: 256
interval_sec: 1
interval_nsec: 0
outputs:
- name: stdout
match: '*'
[INPUT]
Name head
Tag uptime
File /proc/uptime
Buf_Size 256
Interval_Sec 1
Interval_Nsec 0
[OUTPUT]
Name stdout
Match *
pipeline:
inputs:
- name: head
tag: head.cpu
file: /proc/cpuinfo
lines: 8
split_line: true
filters:
- name: record_modifier
match: '*'
whitelist_key: line7
outputs:
- name: stdout
match: '*'
[INPUT]
Name head
Tag head.cpu
File /proc/cpuinfo
Lines 8
Split_Line true
# {"line0":"processor : 0", "line1":"vendor_id : GenuineIntel" ...}
[FILTER]
Name record_modifier
Match *
Whitelist_key line7
[OUTPUT]
Name stdout
Match *
fluent-bit -i head -t uptime -p File=/proc/uptime -o stdout -m '*'
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io
[2016/05/17 21:53:54] [ info] starting engine
[0] uptime: [1463543634, {"head"=>"133517.70 194870.97"}]
[1] uptime: [1463543635, {"head"=>"133518.70 194872.85"}]
[2] uptime: [1463543636, {"head"=>"133519.70 194876.63"}]
[3] uptime: [1463543637, {"head"=>"133520.70 194879.72"}]
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
stepping : 7
microcode : 41
cpu MHz : 2791.009
cache size : 4096 KB
physical id : 0
siblings : 1
fluent-bit -c head.conf
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io
[2017/06/26 22:38:24] [ info] [engine] started
[0] head.cpu: [1498484305.000279805, {"line7"=>"cpu MHz : 2791.009"}]
[1] head.cpu: [1498484306.011680137, {"line7"=>"cpu MHz : 2791.009"}]
[2] head.cpu: [1498484307.010042482, {"line7"=>"cpu MHz : 2791.009"}]
[3] head.cpu: [1498484308.008447978, {"line7"=>"cpu MHz : 2791.009"}]
pipeline:
inputs:
- name: podman_metrics
scrape_interval: 10
scrape_on_start: true
outputs:
- name: prometheus_exporter
[INPUT]
name podman_metrics
scrape_interval 10
scrape_on_start true
[OUTPUT]
name prometheus_exporter
curl 0.0.0.0:2021/metrics
# HELP fluentbit_input_bytes_total Number of input bytes.
# TYPE fluentbit_input_bytes_total counter
fluentbit_input_bytes_total{name="podman_metrics.0"} 0
# HELP fluentbit_input_records_total Number of input records.
# TYPE fluentbit_input_records_total counter
fluentbit_input_records_total{name="podman_metrics.0"} 0
# HELP container_memory_usage_bytes Container memory usage in bytes
# TYPE container_memory_usage_bytes counter
container_memory_usage_bytes{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest"} 884736
# HELP container_cpu_user_seconds_total Container cpu usage in seconds in user mode
# TYPE container_cpu_user_seconds_total counter
container_cpu_user_seconds_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest"} 0
# HELP container_cpu_usage_seconds_total Container cpu usage in seconds
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest"} 0
# HELP container_network_receive_bytes_total Network received bytes
# TYPE container_network_receive_bytes_total counter
container_network_receive_bytes_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 8515
# HELP container_network_receive_errors_total Network received errors
# TYPE container_network_receive_errors_total counter
container_network_receive_errors_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 0
# HELP container_network_transmit_bytes_total Network transmitted bytes
# TYPE container_network_transmit_bytes_total counter
container_network_transmit_bytes_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 962
# HELP container_network_transmit_errors_total Network transmitted errors
# TYPE container_network_transmit_errors_total counter
container_network_transmit_errors_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 0
# HELP fluentbit_input_storage_overlimit Is the input memory usage overlimit ?.
# TYPE fluentbit_input_storage_overlimit gauge
fluentbit_input_storage_overlimit{name="podman_metrics.0"} 0
# HELP fluentbit_input_storage_memory_bytes Memory bytes used by the chunks.
# TYPE fluentbit_input_storage_memory_bytes gauge
fluentbit_input_storage_memory_bytes{name="podman_metrics.0"} 0
# HELP fluentbit_input_storage_chunks Total number of chunks.
# TYPE fluentbit_input_storage_chunks gauge
fluentbit_input_storage_chunks{name="podman_metrics.0"} 0
# HELP fluentbit_input_storage_chunks_up Total number of chunks up in memory.
# TYPE fluentbit_input_storage_chunks_up gauge
fluentbit_input_storage_chunks_up{name="podman_metrics.0"} 0
# HELP fluentbit_input_storage_chunks_down Total number of chunks down.
# TYPE fluentbit_input_storage_chunks_down gauge
fluentbit_input_storage_chunks_down{name="podman_metrics.0"} 0
# HELP fluentbit_input_storage_chunks_busy Total number of chunks in a busy state.
# TYPE fluentbit_input_storage_chunks_busy gauge
fluentbit_input_storage_chunks_busy{name="podman_metrics.0"} 0
# HELP fluentbit_input_storage_chunks_busy_bytes Total number of bytes used by chunks in a busy state.
# TYPE fluentbit_input_storage_chunks_busy_bytes gauge
fluentbit_input_storage_chunks_busy_bytes{name="podman_metrics.0"} 0
fluent-bit -i podman_metrics -o prometheus_exporter
pipeline:
inputs:
- name: prometheus_remote_write
listen: 127.0.0.1
port: 8080
uri: /api/prom/push
tls: on
tls.crt_file: /path/to/certificate.crt
tls.key_file: /path/to/certificate.key
[INPUT]
Name prometheus_remote_write
Listen 127.0.0.1
Port 8080
Uri /api/prom/push
Tls On
tls.crt_file /path/to/certificate.crt
tls.key_file /path/to/certificate.key
pipeline:
inputs:
- name: serial
tag: data
file: /dev/tnt0
bitrate: 9600
separator: X
outputs:
- name: stdout
match: '*'
[INPUT]
Name serial
Tag data
File /dev/tnt0
BitRate 9600
Separator X
[OUTPUT]
Name stdout
Match *
git clone https://github.com/freemed/tty0tty
cd tty0tty/module
make
sudo cp tty0tty.ko /lib/modules/$(uname -r)/kernel/drivers/misc/
sudo depmod
sudo modprobe tty0tty
fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
echo 'this is some message' > /dev/tnt1
fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
...
[0] data: [1463780680, {"msg"=>"this is some message"}]
...
echo 'aaXbbXccXddXee' > /dev/tnt1
fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -p Separator=X -o stdout -m '*'
...
[0] data: [1463781902, {"msg"=>"aa"}]
[1] data: [1463781902, {"msg"=>"bb"}]
[2] data: [1463781902, {"msg"=>"cc"}]
[3] data: [1463781902, {"msg"=>"dd"}]
...
/dev/tnt0 <=> /dev/tnt1
/dev/tnt2 <=> /dev/tnt3
/dev/tnt4 <=> /dev/tnt5
/dev/tnt6 <=> /dev/tnt7
...
2022-03-26T23:01:29.836663788Z go_memstats_alloc_bytes_total = 31891336
2022-03-26T23:01:29.836663788Z go_memstats_frees_total = 313264
2022-03-26T23:01:29.836663788Z go_memstats_lookups_total = 0
2022-03-26T23:01:29.836663788Z go_memstats_mallocs_total = 378992
2022-03-26T23:01:29.836663788Z process_cpu_seconds_total = 1.6200000000000001
2022-03-26T23:01:29.836663788Z go_goroutines = 19
2022-03-26T23:01:29.836663788Z go_info{version="go1.17.7"} = 1
2022-03-26T23:01:29.836663788Z go_memstats_alloc_bytes = 12547800
2022-03-26T23:01:29.836663788Z go_memstats_buck_hash_sys_bytes = 1468900
2022-03-26T23:01:29.836663788Z go_memstats_gc_cpu_fraction = 8.1509688352783453e-06
2022-03-26T23:01:29.836663788Z go_memstats_gc_sys_bytes = 5875576
2022-03-26T23:01:29.836663788Z go_memstats_heap_alloc_bytes = 12547800
2022-03-26T23:01:29.836663788Z go_memstats_heap_idle_bytes = 2220032
2022-03-26T23:01:29.836663788Z go_memstats_heap_inuse_bytes = 14000128
2022-03-26T23:01:29.836663788Z go_memstats_heap_objects = 65728
2022-03-26T23:01:29.836663788Z go_memstats_heap_released_bytes = 2187264
2022-03-26T23:01:29.836663788Z go_memstats_heap_sys_bytes = 16220160
2022-03-26T23:01:29.836663788Z go_memstats_last_gc_time_seconds = 1648335593.2483871
2022-03-26T23:01:29.836663788Z go_memstats_mcache_inuse_bytes = 2400
2022-03-26T23:01:29.836663788Z go_memstats_mcache_sys_bytes = 16384
2022-03-26T23:01:29.836663788Z go_memstats_mspan_inuse_bytes = 150280
2022-03-26T23:01:29.836663788Z go_memstats_mspan_sys_bytes = 163840
2022-03-26T23:01:29.836663788Z go_memstats_next_gc_bytes = 16586496
2022-03-26T23:01:29.836663788Z go_memstats_other_sys_bytes = 422572
2022-03-26T23:01:29.836663788Z go_memstats_stack_inuse_bytes = 557056
2022-03-26T23:01:29.836663788Z go_memstats_stack_sys_bytes = 557056
2022-03-26T23:01:29.836663788Z go_memstats_sys_bytes = 24724488
2022-03-26T23:01:29.836663788Z go_threads = 8
2022-03-26T23:01:29.836663788Z process_max_fds = 65536
2022-03-26T23:01:29.836663788Z process_open_fds = 12
2022-03-26T23:01:29.836663788Z process_resident_memory_bytes = 200638464
2022-03-26T23:01:29.836663788Z process_start_time_seconds = 1648333791.45
2022-03-26T23:01:29.836663788Z process_virtual_memory_bytes = 865849344
2022-03-26T23:01:29.836663788Z process_virtual_memory_max_bytes = 1.8446744073709552e+19
2022-03-26T23:01:29.836663788Z vault_runtime_alloc_bytes = 12482136
2022-03-26T23:01:29.836663788Z vault_runtime_free_count = 313256
2022-03-26T23:01:29.836663788Z vault_runtime_heap_objects = 65465
2022-03-26T23:01:29.836663788Z vault_runtime_malloc_count = 378721
2022-03-26T23:01:29.836663788Z vault_runtime_num_goroutines = 12
2022-03-26T23:01:29.836663788Z vault_runtime_sys_bytes = 24724488
2022-03-26T23:01:29.836663788Z vault_runtime_total_gc_pause_ns = 1917611
2022-03-26T23:01:29.836663788Z vault_runtime_total_gc_runs = 19
...
key
Rename a key.
head
lines
Line number to read. If the number N is set, in_head reads the first N lines, like head(1) -n.
0
split_line
If enabled, in_head generates a key-value pair per line.
false
threaded
Indicates whether to run this input in its own thread.
false
net.keepalive_idle_timeout
Set maximum time expressed in seconds for an idle keepalive connection.
30s
net.dns.mode
Select the primary DNS connection type (TCP or UDP).
none
net.dns.prefer_ipv4
Prioritize IPv4 DNS results when trying to establish a connection.
false
net.dns.prefer_ipv6
Prioritize IPv6 DNS results when trying to establish a connection.
false
net.dns.resolver
Select the primary DNS resolver type (LEGACY or ASYNC).
none
net.keepalive_max_recycle
Set maximum number of times a keepalive connection can be used before it's retired.
2000
net.max_worker_connections
Set maximum number of TCP connections that can be established per worker.
0
net.proxy_env_ignore
Ignore the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY when set.
false
net.tcp_keepalive
Enable or disable Keepalive support.
off
net.tcp_keepalive_time
Interval between the last data packet sent and the first TCP keepalive probe.
-1
net.tcp_keepalive_interval
Interval between TCP keepalive probes when no response is received on a keepidle probe.
-1
net.tcp_keepalive_probes
Number of unacknowledged probes to consider a connection dead.
-1
net.source_address
Specify network address to bind for data traffic.
none
net.connect_timeout
Set maximum time allowed to establish a connection, this time includes the TLS handshake.
10s
net.connect_timeout_log_error
On connection timeout, specify if it should log an error. When disabled, the timeout is logged as a debug message.
true
net.io_timeout
Set maximum time a connection can stay idle while assigned.
0s
net.keepalive
Enable or disable connection keepalive support.
true
buffer_chunk_size
This sets the chunk size for incoming JSON messages. These chunks are then stored and managed in the space available by buffer_max_size.
512K
buffer_max_size
Specify the maximum buffer size to receive a JSON message.
4M
http2
Enable HTTP/2 support.
true
listen
The address to listen on.
0.0.0.0
port
The port for Fluent Bit to listen on.
9880
success_header
Add an HTTP header key/value pair on success. Multiple headers can be set. For example, X-Custom custom-answer.
none
successful_response_code
Allows setting successful response code. Supported values: 200, 201, and 204.
201
tag_key
Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key.
none
threaded
Indicates whether to run this input in its own thread.
false
metrics
Specify which process-level metrics are collected from the host operating system. Actual metric values are read from /proc when needed. The cpu, io, memory, state, context_switches, fd, start_time, thread_wchan, and thread metrics depend on procfs.
cpu,io,memory,state,context_switches,fd,start_time,thread_wchan,thread
start_time
Exposes start_time statistics from /proc.
thread_wchan
Exposes thread_wchan from /proc.
thread
Exposes thread statistics from /proc.
scrape_interval
The rate, in seconds, at which metrics are collected.
5
path.procfs
The mount point used to collect process information and metrics. Read-only permissions are enough.
/proc/
process_include_pattern
Regular expression to determine which process names are included in the metrics produced by this plugin. It applies to all processes unless explicitly set.
.+
process_exclude_pattern
Regular expression to determine which process names are excluded from the metrics produced by this plugin. It's not applied unless explicitly set.
NULL
cpu
Exposes CPU statistics from /proc.
io
Exposes I/O statistics from /proc.
memory
Exposes memory statistics from /proc.
state
Exposes process state statistics from /proc.
context_switches
Exposes context_switches statistics from /proc.
fd
Exposes file descriptors statistics from /proc.
libbpf-dev
libbpf
CMake 3.13 or higher: Required for building the plugin.
Run Fluent Bit:
Run Fluent Bit with elevated permissions (for example, sudo). Loading eBPF programs requires root access or appropriate privileges.
poll_ms
Set the polling interval in milliseconds for collecting events from the ring buffer.
1000
ringbuf_map_name
Set the name of the eBPF ring buffer map to read events from.
events
trace
Set the eBPF trace to enable (for example, trace_bind, trace_malloc, trace_signal). This parameter can be set multiple times to enable multiple traces.
none
event_type
Type of event (signal, malloc, or bind).
pid
Process ID that generated the event.
tid
Thread ID that generated the event.
comm
Command name (process name) that generated the event.
signal
Signal number that was sent.
tpid
Target process ID that received the signal.
operation
Memory operation type (for example, 0 = malloc, 1 = free, 2 = calloc, 3 = realloc).
address
Memory address of the operation.
size
Size of the memory operation in bytes.
uid
User ID of the process.
gid
Group ID of the process.
port
Port number the socket is binding to.
bound_dev_if
Network device interface the socket is bound to.
error_raw
Error code for the bind operation (0 indicates success).
sudo chmod 666 /dev/tnt*
service:
flush: 1
log_level: info
pipeline:
inputs:
- name: random
samples: 5
outputs:
- name: tcp
match: '*'
host: 127.0.0.1
port: 9090
format: json_lines
# Networking Setup
net.dns.mode: TCP
net.connect_timeout: 5s
net.source_address: 127.0.0.1
net.keepalive: true
net.keepalive_idle_timeout: 10s
[SERVICE]
flush 1
log_level info
[INPUT]
name random
samples 5
[OUTPUT]
name tcp
match *
host 127.0.0.1
port 9090
format json_lines
# Networking Setup
net.dns.mode TCP
net.connect_timeout 5s
net.source_address 127.0.0.1
net.keepalive true
net.keepalive_idle_timeout 10s
nc -l 9090
$ nc -l 9090
{"date":1587769732.572266,"rand_value":9704012962543047466}
{"date":1587769733.572354,"rand_value":7609018546050096989}
{"date":1587769734.572388,"rand_value":17035865539257638950}
{"date":1587769735.572419,"rand_value":17086151440182975160}
{"date":1587769736.572277,"rand_value":527581343064950185}curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:8888/app.logpipeline:
inputs:
- name: http
listen: 0.0.0.0
port: 8888
outputs:
- name: stdout
match: app.log
[INPUT]
Name http
Listen 0.0.0.0
Port 8888
[OUTPUT]
Name stdout
Match app.log
curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:8888
pipeline:
inputs:
- name: http
listen: 0.0.0.0
port: 8888
outputs:
- name: stdout
match: http.0
[INPUT]
Name http
Listen 0.0.0.0
Port 8888
[OUTPUT]
Name stdout
Match http.0
curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application/json" http://localhost:8888/app.log
pipeline:
inputs:
- name: http
listen: 0.0.0.0
port: 8888
tag_key: key1
outputs:
- name: stdout
match: value1
[INPUT]
Name http
Listen 0.0.0.0
Port 8888
Tag_Key key1
[OUTPUT]
Name stdout
Match value1
pipeline:
inputs:
- name: http
success_header:
- X-Custom custom-answer
- X-Another another-answer
[INPUT]
Name http
Success_Header X-Custom custom-answer
Success_Header X-Another another-answer
curl -d @app.log -XPOST -H "content-type: application/json" http://localhost:8888/app.log
pipeline:
inputs:
- name: http
listen: 0.0.0.0
port: 8888
outputs:
- name: stdout
match: '*'
[INPUT]
Name http
Listen 0.0.0.0
Port 8888
[OUTPUT]
Name stdout
Match *
fluent-bit -i http -p port=8888 -o stdout
# Process Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collects host metrics on Linux and exposes
# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
service:
flush: 1
log_level: info
pipeline:
inputs:
- name: process_exporter_metrics
tag: process_metrics
scrape_interval: 2
outputs:
- name: prometheus_exporter
match: process_metrics
host: 0.0.0.0
port: 2021
# Process Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collects host metrics on Linux and exposes
# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
flush 1
log_level info
[INPUT]
name process_exporter_metrics
tag process_metrics
scrape_interval 2
[OUTPUT]
name prometheus_exporter
match process_metrics
host 0.0.0.0
port 2021
curl http://127.0.0.1:2021/metrics
docker run -ti -v /proc:/host/proc:ro \
-p 2021:2021 \
fluent/fluent-bit:2.2 \
/fluent-bit/bin/fluent-bit \
-i process_exporter_metrics \
-p path.procfs=/host/proc \
-o prometheus_exporter \
-f 1
# For YAML configuration.
sudo fluent-bit --config fluent-bit.yaml
# For classic configuration.
sudo fluent-bit --config fluent-bit.conf
sudo apt update
sudo apt install libbpf-dev linux-tools-common cmake
git clone https://github.com/fluent/fluent-bit.git
cd fluent-bit
mkdir build
cd build
cmake .. -DFLB_IN_EBPF=On
pipeline:
inputs:
- name: ebpf
poll_ms: 500
trace:
- trace_signal
- trace_malloc
- trace_bind
[INPUT]
Name ebpf
Poll_Ms 500
Trace trace_signal
Trace trace_malloc
Trace trace_bind
make
interval_sec
Polling interval (seconds).
1
oneshot
Only run once at startup. This allows collection of data that was generated before Fluent Bit started.
false
parser
Specify the name of a parser to interpret the entry as a structured message.
none
propagate_exit_code
Cause Fluent Bit to exit with the exit code of the command run by this plugin. Requires exit_after_oneshot=true.
false
threaded
Indicates whether to run this input in its own thread.
false
buf_size
Size of the buffer. See unit sizes for allowed values.
4096
command
The command to execute, passed to popen without any additional escaping or processing. Can include pipelines, redirection, command-substitution, or other information.
none
exit_after_oneshot
Exit as soon as the one-shot command exits. This allows the exec plugin to be used as a wrapper for another command, sending the target command's output to any Fluent Bit sink and then exiting. When enabled, oneshot is automatically set to true.
false
interval_nsec
Polling interval (nanoseconds).
0
status_url
The URL of the stub status handler.
/status
threaded
Indicates whether to run this input in its own thread.
false
host
Name of the target host or IP address.
localhost
nginx_plus
Turn on NGINX Plus mode.
true
port
Port of the target NGINX service to connect to.
80
scrape_interval
The interval to scrape metrics from the NGINX service.
5s
Before getting started, it's important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes, and the Fluent Bit log agent needs to run on every node to collect logs from every pod. For this reason, Fluent Bit is deployed as a DaemonSet, which ensures a copy of the pod runs on every node of the cluster.
When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the Kubernetes filter plugin.
The Kubernetes filter plugin talks to the Kubernetes API Server to retrieve relevant information such as the pod_id, labels, and annotations. Other fields, such as pod_name, container_id, and container_name, are retrieved locally from the log file names. All of this is handled automatically, and no intervention is required from a configuration aspect.
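For illustration, an enriched log record looks roughly like the following. This is a sketch: the pod name, IDs, labels, and annotations are hypothetical, and the exact fields depend on your cluster and filter settings.

```json
{
  "log": "Hello from my app\n",
  "stream": "stdout",
  "kubernetes": {
    "pod_name": "myapp-7f9c6b8d5-x2k4q",
    "namespace_name": "default",
    "pod_id": "b7f6e3a1-1111-2222-3333-444455556666",
    "labels": {
      "app": "myapp"
    },
    "container_name": "myapp",
    "container_id": "0123456789abcdef"
  }
}
```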
Fluent Bit should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.
The recommended way to deploy Fluent Bit for Kubernetes is with the official Helm Chart.
If you are using Red Hat OpenShift you must set up Security Context Constraints (SCC) using the relevant option in the helm chart.
Helm is a package manager for Kubernetes and lets you deploy application packages into your running cluster. Fluent Bit is distributed using a Helm chart found in the Fluent Helm Charts repository.
Use the following command to add the Fluent Helm charts repository
To validate that the repository was added, run helm search repo fluent. The default chart can then be installed by running the following command:
The default chart values include configuration to read container logs with Docker parsing, collect systemd logs, apply Kubernetes metadata enrichment, and output to an Elasticsearch cluster. You can modify the included values file to specify additional outputs, health checks, monitoring endpoints, or other configuration options.
The default configuration of Fluent Bit ensures the following:
Consume all container logs from the running node and parse them with either the docker or cri multiline parser.
Persist its position in each file it tails, so that if a pod is restarted it picks up where it left off.
The Kubernetes filter adds Kubernetes metadata, specifically labels and annotations. The filter only contacts the API Server when it can't find the cached information, otherwise it uses the cache.
The default backend in the configuration is Elasticsearch. It uses the Logstash format to ingest the logs. If you need a different index and type, refer to the plugin options and update as needed.
There is an option called Retry_Limit, which is set to False. If Fluent Bit can't flush the records to Elasticsearch, it will retry indefinitely until it succeeds.
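As a sketch, the equivalent output settings look like the following in YAML. This assumes the default Elasticsearch output from the chart; the host name is a placeholder, and retry_limit: false is what disables the retry cap.

```yaml
pipeline:
  outputs:
    - name: es
      match: '*'
      host: elasticsearch-master   # placeholder: your Elasticsearch endpoint
      port: 9200
      logstash_format: on          # ingest using the Logstash index format
      retry_limit: false           # retry indefinitely until delivery succeeds
```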
Fluent Bit v1.5.0 and later supports deployment to Windows pods.
When deploying Fluent Bit to Kubernetes, there are three log files that you need to pay attention to.
C:\k\kubelet.err.log
This is the error log file from the kubelet daemon running on the host. Retain this file for future troubleshooting, including debugging deployment failures.
C:\var\log\containers\<pod>_<namespace>_<container>-<docker>.log
This is the main log file you need to watch. Configure Fluent Bit to follow this file. It's a symlink to the Docker log file in C:\ProgramData\, with some additional metadata in the file's name.
C:\ProgramData\Docker\containers\<docker>\<docker>.log
This is the log file produced by Docker. Normally you don't directly read from this file, but you need to make sure that this file is visible from Fluent Bit.
Typically, your deployment YAML contains the following volume configuration.
Assuming the basic volume configuration described previously, you can apply one of the following configurations to start logging:
Windows pods often lack working DNS immediately after boot (#78479). To mitigate this issue, filter_kubernetes provides a built-in mechanism to wait until the network starts up:
DNS_Retries: Retries N times until the network starts working (default: 6)
DNS_Wait_Time: Lookup interval between network status checks, in seconds (default: 30)
By default, Fluent Bit waits for three minutes (30 seconds x 6 times). If it's not enough for you, update the configuration as follows:
off
dns.mode
Sets the primary transport layer protocol used by the asynchronous DNS resolver. Can be overridden on a per-plugin basis.
UDP
log_file
Absolute path for an optional log file. By default, all logs are redirected to the standard error interface (stderr).
none
log_level
Sets the logging verbosity level. Possible values: off, error, warn, info, debug, and trace. Values are cumulative. For example, if debug is set, it will include error, warning, info, and debug. The trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.
info
parsers_file
Path for a parsers configuration file. You can include one or more files.
none
plugins_file
Path for a plugins configuration file. This file specifies the paths to custom plugins (.so files) that Fluent Bit can load at runtime. Plugins can be declared directly in the plugins section of YAML configuration files.
none
streams_file
Path for the stream processor configuration file. This file defines the rules and operations for stream processing in Fluent Bit. Stream processor configurations can also be defined directly in the streams section of YAML configuration files.
none
http_server
Enables the built-in HTTP server.
off
http_listen
Sets the listening interface for the HTTP Server when it's enabled.
0.0.0.0
http_port
Sets the TCP port for the HTTP server.
2020
hot_reload
Enables hot reloading of configuration with SIGHUP.
on
coro_stack_size
Sets the coroutines stack size in bytes. The value must be greater than the page size of the running system. Setting the value too small (for example, 4096) can cause coroutine threads to overrun the stack buffer. For best results, don't change this parameter from its default value.
24576
scheduler.cap
Sets a maximum retry time in seconds.
2000
scheduler.base
Sets the base of exponential backoff.
5
json.convert_nan_to_null
If enabled, NaN is converted to null when Fluent Bit converts msgpack to JSON.
false
json.escape_unicode
Controls how Fluent Bit serializes non‑ASCII / multi‑byte Unicode characters in JSON strings. When enabled, Unicode characters are escaped as \uXXXX sequences (characters outside BMP become surrogate pairs). When disabled, Fluent Bit emits raw UTF‑8 bytes.
true
sp.convert_from_str_to_num
If enabled, the stream processor converts strings that represent numbers to a numeric type.
true
windows.maxstdio
If specified, adjusts the limit of stdio. Only provided for Windows. Values from 512 to 2048 are allowed.
512
The following storage-related keys can be set as children to the storage key:
storage.path
Set a location in the file system to store streams and chunks of data. Required for filesystem buffering.
none
storage.sync
Configure the synchronization mode used to store data in the file system. Accepted values: normal or full.
normal
storage.checksum
Enable data integrity check when writing and reading data from the filesystem. Accepted values: off or on.
off
storage.max_chunks_up
Set the maximum number of chunks that can be up in memory when using filesystem storage.
For scheduler and retry details, see scheduling and retries.
For storage and buffering details, see buffering and storage.
The following configuration example defines a service section with hot reloading enabled and a pipeline with a random input and stdout output:
flush
Sets the flush time in seconds.nanoseconds. The engine loop uses a flush timeout to define when to flush the records ingested by input plugins through the defined output plugins.
1
grace
Sets the grace time in seconds as an integer value. The engine loop uses a grace timeout to define the wait time on exit.
5
daemon
Specifies whether Fluent Bit should run as a daemon (background process). Possible values: yes, no, on, and off. Don't enable when using a Systemd-based unit, such as the one provided in Fluent Bit packages.
off
buffer_chunk_size
By default, the buffer that stores incoming Forward messages doesn't allocate the maximum memory allowed up front; instead, it allocates memory as required. The allocation increments are set by buffer_chunk_size. The value must conform to the unit size specification.
1024000
buffer_max_size
Specify the maximum buffer memory size used to receive a Forward message. The value must conform to the unit size specification.
6144000
empty_shared_key
Enable secure forward protocol with a zero-length shared key. Use this to enable user authentication without requiring a shared key, or to connect to Fluentd with a zero-length shared key.
false
listen
Listener network interface.
0.0.0.0
port
TCP port to listen for incoming connections.
24224
security.users
Specify the username and password pairs for secure forward authentication. Requires shared_key or empty_shared_key to be set.
self_hostname
Hostname for secure forward authentication.
localhost
shared_key
Shared key for secure forward authentication.
none
tag
Override the tag of the forwarded events with the defined value.
none
tag_prefix
Prefix incoming tag with the defined value.
none
threaded
Indicates whether to run this input in its own thread.
false
unix_path
Specify the path to Unix socket to receive a Forward message. If set, listen and port are ignored.
none
unix_perm
Set the permission of the Unix socket file. If unix_path isn't set, this parameter is ignored.
none
The Forward input plugin supports TLS/SSL. For more details about the properties available and general configuration, refer to Transport Security.
To receive Forward messages, you can run the plugin from the command line or through the configuration file as shown in the following examples.
From the command line you can let Fluent Bit listen for Forward messages with the following options:
By default, the service listens on all interfaces (0.0.0.0) through TCP port 24224. You can change this by passing parameters to the command:
In the example, Forward messages arrive only through the network interface with address 192.168.3.2 and TCP port 9090.
In your main configuration file append the following:
In Fluent Bit v3 or later, in_forward can handle the secure forward protocol.
When using security.users for user-password authentication, you must also configure either shared_key or set empty_shared_key to true. The Forward input plugin will reject a configuration that has security.users set without one of these options.
For shared key authentication, specify shared_key in both forward output and forward input. For user-password authentication, specify security.users with at least one user-password pair along with a shared key. To use user authentication without requiring clients to know a shared key, set empty_shared_key to true.
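For reference, here is a minimal sketch of the client side: a forward output whose shared_key matches the one configured on the forward input shown later in this section. The host address and hostnames are placeholders.

```yaml
pipeline:
  outputs:
    - name: forward
      match: '*'
      host: 127.0.0.1              # placeholder: the forward input's address
      port: 24224
      shared_key: secret           # must match the input's shared_key
      self_hostname: flb.client.local
```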
The self_hostname value can't be the same between Fluent Bit servers and clients.
To use username and password authentication without requiring clients to know a shared key, set empty_shared_key to true:
After Fluent Bit is running, you can send some messages using the fluent-cat tool, provided by Fluentd:
When you run the plugin with the following command:
In Fluent Bit you should see the following output:
gpu_utilization_percent
GPU core utilization as a percentage (0 to 100). Indicates how busy the GPU is when processing workloads.
gpu_memory_used_bytes
Amount of video RAM (VRAM) currently in use, measured in bytes.
gpu_memory_total_bytes
Total video RAM (VRAM) capacity available on the GPU, measured in bytes.
gpu_clock_mhz
Current GPU clock frequency in MHz. This metric has multiple instances with different type labels (see the clock domains below).
gpu_power_watts
Current power consumption in watts. Can be disabled with enable_power set to false.
gpu_temperature_celsius
GPU die temperature in degrees Celsius. Can be disabled with enable_temperature set to false.
The gpu_clock_mhz metric is reported separately for three clock domains:
graphics
GPU core/shader clock frequency.
memory
VRAM clock frequency.
soc
System-on-chip clock frequency.
The plugin supports the following configuration parameters:
cards_exclude
Pattern specifying which GPU cards to exclude from monitoring. Uses the same syntax as cards_include.
none
cards_include
Pattern specifying which GPU cards to monitor. Supports wildcards (*), ranges (0-3), and comma-separated lists (0,2,4).
*
enable_power
Enable collection of power consumption metrics (gpu_power_watts).
true
enable_temperature
Enable collection of temperature metrics (gpu_temperature_celsius).
true
The GPU metrics plugin scans for any supported AMD GPU using the amdgpu kernel driver. Any GPU using legacy drivers is ignored.
To check if your AMD GPU will be detected run:
Example output:
In systems with multiple GPUs, the GPU metrics plugin will detect all AMD cards by default. You can control which GPUs you want to monitor with the cards_include and cards_exclude parameters.
To list the GPUs running in your system run the following command:
Example output:
To get GPU metrics from your system, you can run the plugin from either the command line or through the configuration file:
Run the following command from the command line:
Example output:
In your main configuration file append the following:
The Kafka input plugin enables Fluent Bit to consume messages directly from one or more Apache Kafka topics. By subscribing to specified topics, this plugin efficiently collects and forwards Kafka messages for further processing within your Fluent Bit pipeline.
Starting with version 4.0.4, the Kafka input plugin supports authentication with AWS MSK IAM, enabling integration with Amazon MSK (Managed Streaming for Apache Kafka) clusters that require IAM-based access.
This plugin uses the official librdkafka C library as a built-in dependency.
To subscribe to or collect messages from Apache Kafka, run the plugin from the command line or through the configuration file as shown in the following examples.
The Kafka plugin can read parameters through the -p argument (property):
In your main configuration file append the following:
The Fluent Bit source repository contains a full example of using Fluent Bit to process Kafka records:
The previous example will connect to the broker listening on kafka-broker:9092 and subscribe to the fb-source topic, polling for new messages every 100 milliseconds.
Since the payload will be in JSON format, the plugin is configured to parse the payload with format json.
Every message received is then processed with kafka.lua and sent back to the fb-sink topic of the same broker.
The example can be executed locally with make start in the examples/kafka_filter directory (Docker Compose is used).
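The repository example defines the callback in kafka.lua. As a minimal sketch, assuming the standard Lua filter contract (the function receives the tag, timestamp, and decoded record, and returns a status code, timestamp, and record), it could look like this; the added field is purely illustrative:

```lua
-- kafka.lua (sketch): called once per record by the lua filter.
function modify_kafka_message(tag, timestamp, record)
    record["processed_by"] = "fluent-bit"  -- illustrative field only
    return 1, timestamp, record            -- 1 = record was modified
end
```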
Fluent Bit v4.0.4 and later supports authentication to Amazon MSK (Managed Streaming for Apache Kafka) clusters using AWS IAM. This lets you securely connect to MSK brokers with AWS credentials, leveraging IAM roles and policies for access control.
If you are compiling Fluent Bit from source, ensure the following requirements are met to enable AWS MSK IAM support:
The packages libsasl2 and libsasl2-dev must be installed on your build environment.
Network Access: Fluent Bit must be able to reach your MSK broker endpoints (AWS VPC setup).
AWS Credentials: Provide AWS credentials using any supported AWS method. These credentials are discovered by default when the aws_msk_iam flag is enabled:
IAM roles (recommended for EC2, ECS, or EKS)
The AWS credentials used by Fluent Bit must have permission to connect to your MSK cluster. Here is a minimal example policy:
The Prometheus text file input plugin allows Fluent Bit to read metrics from Prometheus text format files (.prom files) on the local filesystem. Use this plugin to collect custom metrics that are written to files by external applications or scripts, similar to the Prometheus Node Exporter text file collector.
The following configuration will monitor /var/lib/prometheus/textfile directory for .prom files every 15 seconds:
The plugin expects files to be in the standard Prometheus text exposition format. Here's an example of a valid .prom file:
Applications can write custom metrics to .prom files, and this plugin will collect them:
Cron jobs or batch processes can write completion metrics:
External monitoring tools can write metrics that Fluent Bit will collect and forward.
One of the ways to configure Fluent Bit is using a main configuration file. Fluent Bit allows the use of one configuration file that works at a global scope and uses the defined format and schema.
The main configuration file supports four sections:
Service
Input
The Standard input plugin supports retrieving a message stream from the standard input interface (stdin) of the Fluent Bit process. To use it, specify the plugin name as the input. For example:
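A minimal invocation, in the same form used in the examples later in this section:

```sh
fluent-bit -i stdin -o stdout
```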
If the stdin stream is closed (end-of-file), the plugin instructs Fluent Bit to exit with success (0) after flushing any pending output.
fluent-bit -i exec -p 'command=ls /var/log' -o stdout
...
[0] exec.0: [1521622010.013470159, {"exec"=>"ConsoleKit"}]
[1] exec.0: [1521622010.013490313, {"exec"=>"Xorg.0.log"}]
[2] exec.0: [1521622010.013492079, {"exec"=>"Xorg.0.log.old"}]
[3] exec.0: [1521622010.013493443, {"exec"=>"anaconda.ifcfg.log"}]
[4] exec.0: [1521622010.013494707, {"exec"=>"anaconda.log"}]
[5] exec.0: [1521622010.013496016, {"exec"=>"anaconda.program.log"}]
[6] exec.0: [1521622010.013497225, {"exec"=>"anaconda.storage.log"}]
...
pipeline:
inputs:
- name: exec
tag: exec_ls
command: ls /var/log
interval_sec: 1
interval_nsec: 0
buf_size: 8mb
oneshot: false
outputs:
- name: stdout
match: '*'
[INPUT]
Name exec
Tag exec_ls
Command ls /var/log
Interval_Sec 1
Interval_NSec 0
Buf_Size 8mb
Oneshot false
[OUTPUT]
Name stdout
Match *
pipeline:
inputs:
- name: exec
tag: exec_oneshot_demo
command: 'for s in $(seq 1 10); do echo "count: $s"; sleep 1; done; exit 1'
oneshot: true
exit_after_oneshot: true
propagate_exit_code: true
outputs:
- name: stdout
match: '*'
[INPUT]
Name exec
Tag exec_oneshot_demo
Command for s in $(seq 1 10); do echo "count: $s"; sleep 1; done; exit 1
Oneshot true
Exit_After_Oneshot true
Propagate_Exit_Code true
[OUTPUT]
Name stdout
Match *
...
[0] exec_oneshot_demo: [[1681702172.950574027, {}], {"exec"=>"count: 1"}]
[1] exec_oneshot_demo: [[1681702173.951663666, {}], {"exec"=>"count: 2"}]
[2] exec_oneshot_demo: [[1681702174.953873724, {}], {"exec"=>"count: 3"}]
[3] exec_oneshot_demo: [[1681702175.955760865, {}], {"exec"=>"count: 4"}]
[4] exec_oneshot_demo: [[1681702176.956840282, {}], {"exec"=>"count: 5"}]
[5] exec_oneshot_demo: [[1681702177.958292246, {}], {"exec"=>"count: 6"}]
[6] exec_oneshot_demo: [[1681702178.959508200, {}], {"exec"=>"count: 7"}]
[7] exec_oneshot_demo: [[1681702179.961715745, {}], {"exec"=>"count: 8"}]
[8] exec_oneshot_demo: [[1681702180.963924140, {}], {"exec"=>"count: 9"}]
[9] exec_oneshot_demo: [[1681702181.965852990, {}], {"exec"=>"count: 10"}]
...
#!/bin/bash
# This is a DANGEROUS example of what NOT to do, NEVER DO THIS
exec fluent-bit \
-o stdout \
-i exec \
-p exit_after_oneshot=true \
-p propagate_exit_code=true \
-p command='myscript $*' -p command='echo '"$(printf '%q' "$@")"
server {
listen 80;
listen [::]:80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
# Configure the stub status handler.
location /status {
stub_status;
}
}
server {
listen 80;
listen [::]:80;
server_name localhost;
# Enable /api/ location with appropriate access control in order
# to make use of NGINX Plus API.
location /api/ {
api write=on;
# Configure to allow requests from the server running Fluent Bit.
allow 192.168.1.*;
deny all;
}
}
fluent-bit -i nginx_metrics -p host=127.0.0.1 -p port=80 -p status_url=/status -p nginx_plus=off -o stdout
fluent-bit -i nginx_metrics -p host=127.0.0.1 -p port=80 -p nginx_plus=on -p status_url=/api -o stdout
pipeline:
inputs:
- name: nginx_metrics
host: 127.0.0.1
port: 80
status_url: /status
nginx_plus: off
scrape_interval: 5s
outputs:
- name: stdout
match: '*'
[INPUT]
Name nginx_metrics
Host 127.0.0.1
Port 80
Status_URL /status
Nginx_Plus off
Scrape_Interval 5s
[OUTPUT]
Name stdout
Match *
pipeline:
inputs:
- name: nginx_metrics
host: 127.0.0.1
port: 80
status_url: /api
nginx_plus: on
scrape_interval: 5s
outputs:
- name: stdout
match: '*'
[INPUT]
Name nginx_metrics
Host 127.0.0.1
Port 80
Status_URL /api
Nginx_Plus on
Scrape_Interval 5s
[OUTPUT]
Name stdout
Match *
fluent-bit -i nginx_metrics -p host=127.0.0.1 -p nginx_plus=off -o stdout -p match=* -f 1
...
2021-10-14T19:37:37.228691854Z nginx_connections_accepted = 788253884
2021-10-14T19:37:37.228691854Z nginx_connections_handled = 788253884
2021-10-14T19:37:37.228691854Z nginx_http_requests_total = 42045501
2021-10-14T19:37:37.228691854Z nginx_connections_active = 2009
2021-10-14T19:37:37.228691854Z nginx_connections_reading = 0
2021-10-14T19:37:37.228691854Z nginx_connections_writing = 1
2021-10-14T19:37:37.228691854Z nginx_connections_waiting = 2008
2021-10-14T19:37:35.229919621Z nginx_up = 1
...
parsers:
- name: docker
format: json
time_key: time
time_format: '%Y-%m-%dT%H:%M:%S.%L'
time_keep: true
pipeline:
inputs:
- name: tail
tag: kube.*
path: 'C:\\var\\log\\containers\\*.log'
parser: docker
db: 'C:\\fluent-bit\\tail_docker.db'
mem_buf_limit: 7MB
refresh_interval: 10
- name: tail
tag: kube.error
path: 'C:\\k\\kubelet.err.log'
db: 'C:\\fluent-bit\\tail_kubelet.db'
filters:
- name: kubernetes
match: kube.*
kube_url: 'https://kubernetes.default.svc.cluster.local:443'
outputs:
- name: stdout
match: '*'
fluent-bit.conf: |
[SERVICE]
Parsers_File C:\\fluent-bit\\parsers.conf
[INPUT]
Name tail
Tag kube.*
Path C:\\var\\log\\containers\\*.log
Parser docker
DB C:\\fluent-bit\\tail_docker.db
Mem_Buf_Limit 7MB
Refresh_Interval 10
[INPUT]
Name tail
Tag kubelet.err
Path C:\\k\\kubelet.err.log
DB C:\\fluent-bit\\tail_kubelet.db
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc.cluster.local:443
[OUTPUT]
Name stdout
Match *
parsers.conf: |
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
Time_Keep On
filters:
- name: kubernetes
...
dns_retries: 10
dns_wait_time: 30
[FILTER]
Name kubernetes
...
DNS_Retries 10
DNS_Wait_Time 30
helm repo add fluent https://fluent.github.io/helm-charts
helm upgrade --install fluent-bit fluent/fluent-bit
spec:
containers:
- name: fluent-bit
image: my-repo/fluent-bit:1.8.4
volumeMounts:
- mountPath: C:\k
name: k
- mountPath: C:\var\log
name: varlog
- mountPath: C:\ProgramData
name: progdata
volumes:
- name: k
hostPath:
path: C:\k
- name: varlog
hostPath:
path: C:\var\log
- name: progdata
hostPath:
path: C:\ProgramData
service:
flush: 1
log_level: info
http_server: true
http_listen: 0.0.0.0
http_port: 2020
hot_reload: on
pipeline:
inputs:
- name: random
outputs:
- name: stdout
match: '*'
pipeline:
inputs:
- name: forward
listen: 0.0.0.0
port: 24224
buffer_chunk_size: 1M
buffer_max_size: 6M
outputs:
- name: stdout
match: '*'
[INPUT]
Name forward
Listen 0.0.0.0
Port 24224
Buffer_Chunk_Size 1M
Buffer_Max_Size 6M
[OUTPUT]
Name stdout
Match *
pipeline:
inputs:
- name: forward
listen: 0.0.0.0
port: 24224
buffer_chunk_size: 1M
buffer_max_size: 6M
security.users: fluentbit changeme
shared_key: secret
self_hostname: flb.server.local
outputs:
- name: stdout
match: '*'
[INPUT]
Name forward
Listen 0.0.0.0
Port 24224
Buffer_Chunk_Size 1M
Buffer_Max_Size 6M
Security.Users fluentbit changeme
Shared_Key secret
Self_Hostname flb.server.local
[OUTPUT]
Name stdout
Match *
pipeline:
inputs:
- name: forward
listen: 0.0.0.0
port: 24224
buffer_chunk_size: 1M
buffer_max_size: 6M
security.users: fluentbit changeme
empty_shared_key: true
self_hostname: flb.server.local
outputs:
- name: stdout
match: '*'
[INPUT]
Name forward
Listen 0.0.0.0
Port 24224
Buffer_Chunk_Size 1M
Buffer_Max_Size 6M
Security.Users fluentbit changeme
Empty_Shared_Key true
Self_Hostname flb.server.local
[OUTPUT]
Name stdout
Match *
fluent-bit -i forward -o stdout
fluent-bit -i forward -p listen="192.168.3.2" -p port=9090 -o stdout
echo '{"key 1": 123456789, "key 2": "abcdefg"}' | fluent-cat my_tag
fluent-bit -i forward -o stdout
...
[0] my_tag: [1475898594, {"key 1"=>123456789, "key 2"=>"abcdefg"}]
...
pipeline:
inputs:
- name: gpu_metrics
cards_exclude: "0"
cards_include: "1"
enable_power: true
enable_temperature: true
path_sysfs: /sys
scrape_interval: 2
outputs:
- name: stdout
match: '*'
[INPUT]
Name gpu_metrics
Cards_Exclude 0
Cards_Include 1
Enable_Power true
Enable_Temperature true
Path_Sysfs /sys
Scrape_Interval 2
[OUTPUT]
Name stdout
Match *
lspci | grep -i vga | grep -i amd
03:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 31 [Radeon RX 7900 XT/7900 XTX/7900 GRE/7900M] (rev ce)
73:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Granite Ridge [Radeon Graphics] (rev c5)
ls /sys/class/drm/card*/device/vendor
/sys/class/drm/card0/device/vendor
/sys/class/drm/card1/device/vendor
fluent-bit -i gpu_metrics -o stdout
2025-10-25T20:36:55.236905093Z gpu_utilization_percent{card="1",vendor="amd"} = 2
2025-10-25T20:36:55.237853918Z gpu_utilization_percent{card="0",vendor="amd"} = 0
2025-10-25T20:36:55.236905093Z gpu_memory_used_bytes{card="1",vendor="amd"} = 1580118016
2025-10-25T20:36:55.237853918Z gpu_memory_used_bytes{card="0",vendor="amd"} = 26083328
2025-10-25T20:36:55.236905093Z gpu_memory_total_bytes{card="1",vendor="amd"} = 17163091968
2025-10-25T20:36:55.237853918Z gpu_memory_total_bytes{card="0",vendor="amd"} = 2147483648
2025-10-25T20:36:55.236905093Z gpu_clock_mhz{card="1",vendor="amd",type="graphics"} = 45
2025-10-25T20:36:55.236905093Z gpu_clock_mhz{card="1",vendor="amd",type="memory"} = 96
2025-10-25T20:36:55.236905093Z gpu_clock_mhz{card="1",vendor="amd",type="soc"} = 500
2025-10-25T20:36:55.237853918Z gpu_clock_mhz{card="0",vendor="amd",type="graphics"} = 600
2025-10-25T20:36:55.237853918Z gpu_clock_mhz{card="0",vendor="amd",type="memory"} = 2800
2025-10-25T20:36:55.237853918Z gpu_clock_mhz{card="0",vendor="amd",type="soc"} = 1200
2025-10-25T20:36:55.236905093Z gpu_power_watts{card="1",vendor="amd"} = 28
2025-10-25T20:36:55.236905093Z gpu_temperature_celsius{card="1",vendor="amd"} = 28
2025-10-25T20:36:55.237853918Z gpu_temperature_celsius{card="0",vendor="amd"} = 39
2025-10-25T20:36:55.236905093Z gpu_fan_speed_rpm{card="1",vendor="amd"} = 0
2025-10-25T20:36:55.236905093Z gpu_fan_pwm_percent{card="1",vendor="amd"} = 0128
storage.backlog.mem_limit
Set the memory limit for backlog data chunks.
5M
storage.backlog.flush_on_shutdown
Attempt to flush all backlog chunks during shutdown. Accepted values: off or on.
off
storage.metrics
Enable storage layer metrics on the HTTP endpoint. Accepted values: off or on.
off
storage.delete_irrecoverable_chunks
Delete irrecoverable chunks during runtime and at startup. Accepted values: off or on.
off
storage.keep.rejected
Enable the Dead Letter Queue (DLQ) to preserve chunks that fail to be delivered. Accepted values: off or on.
off
storage.rejected.path
Subdirectory name under storage.path for storing rejected chunks.
rejected
gpu_fan_speed_rpm
Fan rotation speed in Revolutions per Minute (RPM).
gpu_fan_pwm_percent
Fan PWM duty cycle as a percentage (0-100). Indicates fan intensity.
path_sysfs
Path to the sysfs root directory. Typically used for testing or non-standard systems.
/sys
scrape_interval
Interval in seconds between metric collection cycles.
5
true
scrape_interval
Interval in seconds between file scans.
10s
storage.pause_on_chunks_overlimit
Enable pausing of an input when it reaches its chunk limit.
none
storage.type
Sets the storage type for this input, one of: filesystem, memory or memrb.
memory
tag
Set a tag for the events generated by this input plugin.
none
threaded
Enable threading on an input.
false
thread.ring_buffer.capacity
Set custom ring buffer capacity when the input runs in threaded mode.
1024
thread.ring_buffer.window
Set custom ring buffer window percentage for threaded inputs.
5
alias
Sets an alias. Use for multiple instances of the same input plugin. If no alias is specified, a default name is assigned using the plugin name followed by a dot and a sequence number.
none
log_level
Specifies the log level for this input plugin. If not set here, the plugin uses the global log level defined in the service section.
info
log_suppress_interval
Suppresses log messages from this input plugin that appear similar within the specified time interval. Set to 0 to disable suppression.
0
mem_buf_limit
Set a memory buffer limit for the input plugin. If the limit is reached, the plugin will pause until the buffer is drained. The value is in bytes. If set to 0, the buffer limit is disabled.
0
path
File or directory path pattern. Supports glob patterns with * wildcard (for example, /var/lib/prometheus/*.prom).
none
routable
If set to true, the data generated by the plugin will be routable, meaning that it can be forwarded to other plugins or outputs. If set to false, the data will be discarded.
pipeline:
inputs:
- name: prometheus_textfile
tag: custom_metrics
path: '/var/lib/prometheus/textfile/*.prom'
scrape_interval: 15
outputs:
- name: prometheus_exporter
match: custom_metrics
host: 192.168.100.61
port: 2021
# HELP custom_counter_total A custom counter metric
# TYPE custom_counter_total counter
custom_counter_total{instance="server1",job="myapp"} 42
# HELP custom_gauge A custom gauge metric
# TYPE custom_gauge gauge
custom_gauge{environment="production"} 1.23
# HELP custom_histogram_bucket A custom histogram
# TYPE custom_histogram_bucket histogram
custom_histogram_bucket{le="0.1"} 10
custom_histogram_bucket{le="0.5"} 25
custom_histogram_bucket{le="1.0"} 40
custom_histogram_bucket{le="+Inf"} 50
custom_histogram_sum 125.5
custom_histogram_count 50
# Script writes metrics to file
echo "# HELP app_requests_total Total HTTP requests" > /var/lib/prometheus/textfile/app.prom
echo "# TYPE app_requests_total counter" >> /var/lib/prometheus/textfile/app.prom
echo "app_requests_total{status=\"200\"} 1500" >> /var/lib/prometheus/textfile/app.prom
echo "app_requests_total{status=\"404\"} 23" >> /var/lib/prometheus/textfile/app.prom#!/bin/bash
# Backup script writes completion metrics
BACKUP_START=$(date +%s)
# ... perform backup ...
BACKUP_END=$(date +%s)
DURATION=$((BACKUP_END - BACKUP_START))
cat > /var/lib/prometheus/textfile/backup.prom << EOF
# HELP backup_duration_seconds Time taken to complete backup
# TYPE backup_duration_seconds gauge
backup_duration_seconds ${DURATION}
# HELP backup_last_success_timestamp_seconds Last successful backup timestamp
# TYPE backup_last_success_timestamp_seconds gauge
backup_last_success_timestamp_seconds ${BACKUP_END}
EOF
pipeline:
inputs:
- name: prometheus_textfile
tag: textfile_metrics
path: /var/lib/prometheus/textfile
- name: node_exporter_metrics
tag: system_metrics
scrape_interval: 15
outputs:
- name: opentelemetry
match: '*'
host: 192.168.56.4
port: 2021
poll_ms
Kafka brokers polling interval in milliseconds.
500
poll_timeout_ms
Timeout in milliseconds for Kafka consumer poll operations. Only effective when threaded is enabled.
1
rdkafka.{property}
{property} can be any .
none
threaded
Indicates whether to run this input in its own thread.
false
topics
Single entry or comma-separated list of topics that Fluent Bit will subscribe to.
none
Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
AWS credentials file (~/.aws/credentials)
Instance metadata service (IMDS)
IAM Permissions: The credentials must allow access to the target MSK cluster, as shown in the following example policy.
brokers
Single entry or comma-separated list of Kafka brokers. For example: 192.168.1.3:9092, 192.168.1.4:9092.
none
buffer_max_size
Specify the maximum size of the buffer per cycle to poll Kafka messages from subscribed topics. To increase throughput, specify a larger size.
4M
client_id
Client id passed to librdkafka.
none
enable_auto_commit
Rely on Kafka auto-commit and commit messages in batches.
false
format
Serialization format of the messages. If set to json, the payload will be parsed as JSON.
none
group_id
Group id passed to librdkafka.
fluent-bit
aws_msk_iam
If true, enables AWS MSK IAM authentication. Possible values: true, false.
false
aws_msk_iam_cluster_arn
Full ARN of the MSK cluster for region extraction. This value is required if aws_msk_iam is true.
none
Filter
Output
It's also possible to split the main configuration file into multiple files using the Include File feature to include external files.
The Service section defines global properties of the service. The following keys are available:
flush
Set the flush time in seconds.nanoseconds. The engine loop uses a Flush timeout to define when it's required to flush the records ingested by input plugins through the defined output plugins.
1
grace
Set the grace time in seconds as an integer value. The engine loop uses a grace timeout to define wait time on exit.
5
daemon
Boolean. Determines whether Fluent Bit should run as a Daemon (background). Allowed values are: yes, no, on, and off. Don't enable when using a Systemd based unit, such as the one provided in Fluent Bit packages.
Off
dns.mode
Set the primary transport layer protocol used by the asynchronous DNS resolver. Can be overridden on a per-plugin basis.
UDP
The following is an example of a SERVICE section:
For scheduler and retry details, see scheduling and retries.
The INPUT section defines a source (related to an input plugin). Each input plugin can add its own configuration keys:
Name
Name of the input plugin.
Tag
Tag name associated to all records coming from this plugin.
Log_Level
Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level.
Name is mandatory and tells Fluent Bit which input plugin to load. Tag is mandatory for all plugins except for the input forward plugin, which provides dynamic tags.
The following is an example of an INPUT section:
The FILTER section defines a filter (related to a filter plugin). Each filter plugin can add its own configuration keys. The base configuration for each FILTER section contains:
Name
Name of the filter plugin.
Match
A pattern to match against the tags of incoming records. Case sensitive, supports asterisk (*) as a wildcard.
Match_Regex
A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax.
Log_Level
Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level.
Name is mandatory and lets Fluent Bit know which filter plugin should be loaded. Match or Match_Regex is mandatory for all plugins. If both are specified, Match_Regex takes precedence.
The following is an example of a FILTER section:
The OUTPUT section specifies a destination that certain records should go to after a Tag match. Fluent Bit can route up to 256 OUTPUT plugins. The configuration supports the following keys:
Name
Name of the output plugin.
Match
A pattern to match against the tags of incoming records. Case sensitive and supports the asterisk (*) character as a wildcard.
Match_Regex
A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax.
Log_Level
Set the plugin's logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Defaults to the SERVICE section's Log_Level.
The following is an example of an OUTPUT section:
The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output:
To avoid complicated, long configuration files, it's better to split specific parts into different files and call them (include them) from one main file. The @INCLUDE command can be used in the following way:
The configuration reader will try to open the path somefile.conf. If not found, the reader assumes the file is on a relative path based on the path of the base configuration file:
Main configuration path: /tmp/main.conf
Included file: somefile.conf
Fluent Bit will try to open somefile.conf; if that fails, it will try /tmp/somefile.conf.
The @INCLUDE command only works at the top level of the configuration and can't be used inside sections.
Wildcard character (*) supports including multiple files. For example:
Files matching the wildcard character are included unsorted. If plugin ordering between files needs to be preserved, the files should be included explicitly.
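For example, to guarantee that inputs are loaded before outputs, list the files explicitly instead of relying on a wildcard (the file names here are hypothetical):

```
@INCLUDE input_tail.conf
@INCLUDE input_forward.conf
@INCLUDE outputs.conf
```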
Buffer_Size
Set the buffer size to read data. This value is used to increase the buffer size and must conform to the unit size specification.
16k
Parser
The name of the parser to invoke instead of the default JSON input parser.
none
Threaded
Indicates whether to run this input in its own thread.
false
If no parser is configured for the stdin plugin, it expects valid JSON input data in one of the following formats:
A JSON object with one or more key-value pairs: { "key": "value", "key2": "value2" }
A 2-element JSON array in Fluent Bit Event format, which can be:
[TIMESTAMP, { "key": "value" }] where TIMESTAMP is a floating point value representing a timestamp in seconds.
From Fluent Bit v2.1.0, [[TIMESTAMP, METADATA], { "key": "value" }], where TIMESTAMP has the same meaning as above and METADATA is a JSON object.
Multi-line input JSON is supported.
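As a quick check of the two-element event format, you can pipe a single event in by hand; the timestamp and key here are arbitrary:

```sh
echo '[1700000000.0, {"key": "value"}]' | fluent-bit -q -i stdin -o stdout
```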
Any input data which isn't in one of the supported formats will cause the plugin to log errors like:
To handle inputs in other formats, a parser must be explicitly specified in the configuration for the stdin plugin. See parser input example for sample configuration.
The Fluent Bit event timestamp will be set from the input record if the two-element event input is used or a custom parser configuration supplies a timestamp. Otherwise, the event timestamp will be set to the timestamp at which the record is read by the stdin plugin.
To demonstrate how the plugin works, you can use a bash script that generates messages and writes them to Fluent Bit.
Write the following content in a file named test.sh:
Start the script and Fluent Bit:
The command should return output like the following:
An input event timestamp can also be supplied. Replace test.sh with:
Re-run the sample command. Timestamps output by Fluent Bit are now one day old because Fluent Bit used the input message timestamp.
Which returns the following:
Additional metadata is supported in Fluent Bit v2.1.0 and later by replacing the timestamp with a two-element object. For example:
Run test using the command:
Which returns results like the following:
On older Fluent Bit versions records in this format will be discarded. If the log level permits, Fluent Bit will log:
To capture inputs in other formats, specify a parser configuration for the stdin plugin.
For example, if you want to read raw messages line by line and forward them, you could use a separate parsers file that captures the whole message line:
You can then use the parsers file in a stdin plugin in the main Fluent Bit configuration file as follows:
Fluent Bit will now read each line and emit a single message for each input line, using the following command:
Which returns output similar to:
In production deployments it's best to use a parser that splits messages into real fields and adds appropriate tags.
Fluent Bit is distributed as the fluent-bit package for Windows and as a Windows container image. Fluent Bit provides two Windows installers: a ZIP archive and an EXE installer.
Not all plugins are supported on Windows. The Windows builds include a default set of supported plugins.
Provide a valid Windows configuration with the installation.
The Blob input plugin monitors a directory and processes binary (blob) files. It scans the specified path at regular intervals, reads binary files, and forwards them as records through the Fluent Bit pipeline. This plugin processes binary log files, artifacts, or any binary data that needs to be collected and forwarded to outputs.
The plugin supports the following configuration parameters:
fluent-bit -i kafka -o stdout -p brokers=192.168.1.3:9092 -p topics=some-topic
pipeline:
inputs:
- name: kafka
brokers: 192.168.1.3:9092
topics: some-topic
poll_ms: 100
outputs:
- name: stdout
match: '*'
[INPUT]
Name kafka
Brokers 192.168.1.3:9092
Topics some-topic
Poll_ms 100
[OUTPUT]
Name stdout
Match *
pipeline:
inputs:
- name: kafka
brokers: kafka-broker:9092
topics: fb-source
poll_ms: 100
format: json
filters:
- name: lua
match: '*'
script: kafka.lua
call: modify_kafka_message
outputs:
- name: kafka
brokers: kafka-broker:9092
topics: fb-sink
[INPUT]
Name kafka
Brokers kafka-broker:9092
Topics fb-source
Poll_ms 100
Format json
[FILTER]
Name lua
Match *
Script kafka.lua
Call modify_kafka_message
[OUTPUT]
Name kafka
Brokers kafka-broker:9092
Topics fb-sink
pipeline:
inputs:
- name: kafka
brokers: my-cluster.abcdef.c1.kafka.us-east-1.amazonaws.com:9098
topics: my-topic
aws_msk_iam: true
aws_msk_iam_cluster_arn: arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/abcdef-1234-5678-9012-abcdefghijkl-s3
outputs:
- name: stdout
match: '*'
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kafka-cluster:*",
"kafka-cluster:DescribeCluster",
"kafka-cluster:ReadData",
"kafka-cluster:DescribeTopic",
"kafka-cluster:Connect"
],
"Resource": "*"
}
]
}
[SERVICE]
Flush 5
Daemon off
Log_Level debug
[INPUT]
Name cpu
Tag my_cpu
[FILTER]
Name grep
Match *
Regex log aa
[OUTPUT]
Name stdout
Match my*cpu
[SERVICE]
Flush 5
Daemon off
Log_Level debug
[INPUT]
Name cpu
Tag my_cpu
[OUTPUT]
Name stdout
Match my*cpu
@INCLUDE somefile.conf
@INCLUDE input_*.conf
#!/bin/sh
for ((i=0; i<=5; i++)); do
echo -n "{\"key\": \"some value\"}"
sleep 1
done
bash test.sh | fluent-bit -q -i stdin -o stdout
[0] stdin.0: [[1684196745.942883835, {}], {"key"=>"some value"}]
[0] stdin.0: [[1684196746.938949056, {}], {"key"=>"some value"}]
[0] stdin.0: [[1684196747.940162493, {}], {"key"=>"some value"}]
[0] stdin.0: [[1684196748.941392297, {}], {"key"=>"some value"}]
[0] stdin.0: [[1684196749.942644238, {}], {"key"=>"some value"}]
[0] stdin.0: [[1684196750.943721442, {}], {"key"=>"some value"}]
#!/bin/sh
for ((i=0; i<=5; i++)); do
echo -n "
[
$(date '+%s.%N' -d '1 day ago'),
{
\"realtimestamp\": $(date '+%s.%N')
}
]
"
sleep 1
done
bash test.sh | fluent-bit -q -i stdin -o stdout
[0] stdin.0: [[1684110480.028171300, {}], {"realtimestamp"=>1684196880.030070}]
[0] stdin.0: [[1684110481.033753395, {}], {"realtimestamp"=>1684196881.034741}]
[0] stdin.0: [[1684110482.036730051, {}], {"realtimestamp"=>1684196882.037704}]
[0] stdin.0: [[1684110483.039903879, {}], {"realtimestamp"=>1684196883.041081}]
[0] stdin.0: [[1684110484.044719457, {}], {"realtimestamp"=>1684196884.046404}]
[0] stdin.0: [[1684110485.048710107, {}], {"realtimestamp"=>1684196885.049651}]
#!/bin/sh
for ((i=0; i<=5; i++)); do
echo -n "
[
[
$(date '+%s.%N' -d '1 day ago'),
{\"metakey\": \"metavalue\"}
],
{
\"realtimestamp\": $(date '+%s.%N')
}
]
"
sleep 1
done
bash ./test.sh | fluent-bit -q -i stdin -o stdout
[0] stdin.0: [[1684110513.060139417, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196913.061017}]
[0] stdin.0: [[1684110514.063085317, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196914.064145}]
[0] stdin.0: [[1684110515.066210508, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196915.067155}]
[0] stdin.0: [[1684110516.069149971, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196916.070132}]
[0] stdin.0: [[1684110517.072484016, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196917.073636}]
[0] stdin.0: [[1684110518.075428724, {"metakey"=>"metavalue"}], {"realtimestamp"=>1684196918.076292}]
[ warn] unknown time format 6
parsers:
- name: stringify_message
format: regex
key_name: message
regex: '^(?<message>.*)'
[PARSER]
name stringify_message
format regex
Key_Name message
regex ^(?<message>.*)
service:
parsers_file: parsers.yaml
pipeline:
inputs:
- name: stdin
tag: stdin
parser: stringify_message
outputs:
- name: stdout
match: '*'
[SERVICE]
parsers_file parsers.conf
[INPUT]
Name stdin
Tag stdin
Parser stringify_message
[OUTPUT]
Name stdout
Match *
fluent-bit -i stdin -o stdout
[debug] [input:stdin:stdin.0] invalid JSON message, skipping
[error] [input:stdin:stdin.0] invalid record found, it's not a JSON map or array
# For YAML configuration.
seq 1 5 | ./fluent-bit --config fluent-bit.yaml
# For classic configuration.
seq 1 5 | ./fluent-bit --config fluent-bit.conf
...
[0] stdin: [[1751545974.960182000, {}], {"message"=>"1"}]
[1] stdin: [[1751545974.960246000, {}], {"message"=>"2"}]
[2] stdin: [[1751545974.960255000, {}], {"message"=>"3"}]
[3] stdin: [[1751545974.960262000, {}], {"message"=>"4"}]
[4] stdin: [[1751545974.960268000, {}], {"message"=>"5"}]
...
service:
flush: 5
daemon: off
log_level: info
parsers_file: parsers.yaml
plugins_file: plugins.yaml
http_server
[SERVICE]
# Flush
# =====
# set an interval of seconds before to flush records to a destination
flush 5
# Daemon
# ======
# instruct Fluent Bit to run in foreground or background mode.
daemon Off
# Log_Level
# =========
# Set the verbosity level of the service, values can be:
#
# - error
# - warning
# - info
# - debug
# - trace
For version 1.9 and later, td-agent-bit is a deprecated package and was removed after 1.9.9. The correct package name to use now is fluent-bit.
The latest stable version is 4.2.0.
These installers are built with GitHub Actions. Legacy AppVeyor builds (AMD 32/64 only) are still available at releases.fluentbit.io but are deprecated.
MSI installers are also available:
To check the integrity, use the Get-FileHash cmdlet for PowerShell.
Download a ZIP archive. Choose the suitable installer for your 32-bit or 64-bit environment.
Expand the ZIP archive. You can do this by clicking Extract All in Explorer or Expand-Archive in PowerShell.
The ZIP package contains the following set of files.
Launch cmd.exe or PowerShell on your machine, and execute fluent-bit.exe:
The following output indicates Fluent Bit is running:
To halt the process, press Control+C in the terminal.
Download an EXE installer for the appropriate 32-bit or 64-bit build.
Double-click the EXE installer you've downloaded. The installation wizard starts.
Click Next and finish the installation. By default, Fluent Bit is installed in C:\Program Files\fluent-bit\.
The Windows installer is built by CPack using NSIS and supports the default NSIS options for silent installation and install directory.
To silently install to C:\fluent-bit directory here is an example:
The uninstaller also supports a silent uninstall using the same /S flag. This can be used for provisioning with automation like Ansible, Puppet, and so on.
Windows services are equivalent to daemons in Unix (long-running background processes). For v1.5.0 and later, Fluent Bit has native support for Windows services.
For example, you have the following installation layout:
To register Fluent Bit as a Windows service, execute the following command at a command prompt. A single space is required after binpath=.
Fluent Bit can be started and managed as a normal Windows service.
To halt the Fluent Bit service, use the stop command.
To start Fluent Bit automatically on boot, execute the following:
Instead of sc.exe, PowerShell can be used to manage Windows services.
Create a Fluent Bit service:
Start the service:
Query the service status:
Stop the service:
Remove the service (requires PowerShell 6.0 or later):
If you need to create a custom executable, use the following procedure to compile Fluent Bit by yourself.
Install Microsoft Visual C++ to compile Fluent Bit. You can install the minimum toolkit using the following command:
Choose C++ Build Tools and C++ CMake tools for Windows and wait until the process finishes.
Install flex and bison. One way to install them on Windows is to use winflexbison.
Add the path C:\WinFlexBison to your system's Path environment variable.
Install OpenSSL binaries, at least the library files and headers.
Install Git to pull the source code from the repository.
Open the Start menu on Windows and type "Command Prompt for VS". From the result list, select the one that corresponds to your target system (x86 or x64).
Verify the installed OpenSSL library files match the selected target. You can examine the library files by using the dumpbin command with the /headers option.
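For example, to confirm that a static library targets x64 (the library path is an assumption; adjust it to your OpenSSL install location):

```powershell
# Prints the COFF header; look for a line like "8664 machine (x64)".
dumpbin /headers C:\OpenSSL-Win64\lib\libssl.lib | findstr machine
```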
Clone the source code of Fluent Bit.
Compile the source code.
Now you should be able to run Fluent Bit:
To create a ZIP package, call cpack as follows:
alias
Sets an alias for multiple instances of the same input plugin. This helps when you need to run multiple blob input instances with different configurations.
none
database_file
Specify a database file to keep track of processed files and their state. This enables the plugin to resume processing from the last known position if Fluent Bit is restarted.
none
exclude_pattern
Set one or multiple shell patterns separated by commas to exclude files matching certain criteria. For example, exclude_pattern *.tmp,*.bak will exclude temporary and backup files from processing.
none
log_level
Specifies the log level for this input plugin. If not set here, the plugin uses the global log level specified in the service section. Valid values: off, error, warn, info, debug, trace.
info
log_suppress_interval
Suppresses log messages from this input plugin that appear similar within a specified time interval. Set to 0 to disable suppression. The value must be specified in seconds. This helps reduce log noise when the same error or warning occurs repeatedly.
0
mem_buf_limit
The Blob input plugin periodically scans the specified directory path for binary files. When a new or modified file is detected, the plugin reads the file content and creates records that are forwarded through the Fluent Bit pipeline. The plugin can track processed files using a database file, allowing it to resume from the last known position after a restart.
Binary file content is typically included in the output records, and the exact format depends on the output plugin configuration. The plugin generates one or more records per file, depending on the file size and configuration.
The database file enables the plugin to track which files have been processed and maintain state across Fluent Bit restarts. This is similar to how the Tail input plugin uses a database file.
When a database file is specified:
The plugin stores information about processed files, including file paths and processing status
On restart, the plugin can skip files that were already processed
The database is backed by SQLite3 and will create additional files (.db-shm and .db-wal) when using write-ahead logging mode
It's recommended to use a unique database file for each blob input instance to avoid conflicts. For example:
The Blob input plugin common use cases are:
Binary log files: Processing binary-formatted log files that can't be read as text
Artifact collection: Collecting binary artifacts or build outputs for analysis or archival
File monitoring: Monitoring directories for new binary files and forwarding them to storage or analysis systems
Data pipeline integration: Integrating binary data sources into your Fluent Bit data pipeline
You can run the plugin from the command line or through a configuration file.
Run the plugin from the command line using the following command:
which returns results like the following:
In your main configuration file append the following:
This example shows how to configure the blob plugin with a database file to track processed files:
This example excludes certain file patterns and uses filesystem storage for better reliability:
This example renames files after successful upload and handles failures:
UDP
log_file
Absolute path for an optional log file. By default all logs are redirected to the standard error interface (stderr).
none
log_level
Set the logging verbosity level. Allowed values are: off, error, warn, info, debug, and trace. Values are cumulative. If debug is set, it will include error, warning, info, and debug. Trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.
info
parsers_file
Path for a parsers configuration file. Multiple Parsers_File entries can be defined within the section.
none
plugins_file
Path for a plugins configuration file. A plugins configuration file defines paths for external plugins. See an example.
none
streams_file
Path for the Stream Processor configuration file. Learn more about Stream Processing configuration.
none
http_server
Enable the built-in HTTP Server.
Off
http_listen
Set listening interface for HTTP Server when it's enabled.
0.0.0.0
http_port
Set TCP Port for the HTTP Server.
2020
coro_stack_size
Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Setting the value too small (4096) can cause coroutine threads to overrun the stack buffer. The default value of this parameter shouldn't be changed.
24576
scheduler.cap
Set a maximum retry time in seconds. Supported in v1.8.7 and greater.
2000
scheduler.base
Set a base of exponential backoff. Supported in v1.8.7 and greater.
5
json.convert_nan_to_null
If enabled, NaN converts to null when Fluent Bit converts msgpack to json.
false
json.escape_unicode
Controls how Fluent Bit serializes non‑ASCII / multi‑byte Unicode characters in JSON strings. When enabled, Unicode characters are escaped as \uXXXX sequences (characters outside BMP become surrogate pairs). When disabled, Fluent Bit emits raw UTF‑8 bytes.
true
sp.convert_from_str_to_num
If enabled, Stream processor converts from number string to number type.
true
windows.maxstdio
If specified, the limit of stdio is adjusted. Only provided for Windows. From 512 to 2048 is allowed.
512
Expand-Archive fluent-bit-4.2.0-win64.zip
fluent-bit
├── bin
│ ├── fluent-bit.dll
│ └── fluent-bit.exe
│ └── fluent-bit.pdb
├── conf
│ ├── fluent-bit.conf
│ ├── parsers.conf
│ └── plugins.conf
└── include
│ ├── flb_api.h
│ ├── ...
│ └── flb_worker.h
└── fluent-bit.h
wget -o winflexbison.zip https://github.com/lexxmark/winflexbison/releases/download/v2.5.22/win_flex_bison-2.5.22.zip
Expand-Archive winflexbison.zip -Destination C:\WinFlexBison
cp -Path C:\WinFlexBison\win_bison.exe C:\WinFlexBison\bison.exe
cp -Path C:\WinFlexBison\win_flex.exe C:\WinFlexBison\flex.exe
Get-FileHash fluent-bit-4.2.0-win32.exe
fluent-bit.exe -i dummy -o stdout
...
[0] dummy.0: [1561684385.443823800, {"message"=>"dummy"}]
[1] dummy.0: [1561684386.428399000, {"message"=>"dummy"}]
[2] dummy.0: [1561684387.443641900, {"message"=>"dummy"}]
[3] dummy.0: [1561684388.441405800, {"message"=>"dummy"}]
& "C:\Program Files\fluent-bit\bin\fluent-bit.exe" -i dummy -o stdout
<installer exe> /S /D=C:\fluent-bit
C:\fluent-bit\
├── conf
│ ├── fluent-bit.conf
│ └── parsers.conf
│ └── plugins.conf
└── bin
├── fluent-bit.dll
└── fluent-bit.exe
└── fluent-bit.pdb
sc.exe create fluent-bit binpath= "\fluent-bit\bin\fluent-bit.exe -c \fluent-bit\conf\fluent-bit.conf"
sc.exe start fluent-bit
sc.exe query fluent-bit
SERVICE_NAME: fluent-bit
TYPE : 10 WIN32_OWN_PROCESS
STATE : 4 Running
...
sc.exe stop fluent-bit
sc.exe config fluent-bit start= auto
New-Service fluent-bit -BinaryPathName "`"C:\Program Files\fluent-bit\bin\fluent-bit.exe`" -c `"C:\Program Files\fluent-bit\conf\fluent-bit.conf`"" -StartupType Automatic -Description "This service runs Fluent Bit, a log collector that enables real-time processing and delivery of log data to centralized logging systems."
Start-Service fluent-bit
Get-Service fluent-bit | Format-List
Name : fluent-bit
DisplayName : fluent-bit
Status : Running
DependentServices : {}
ServicesDependedOn : {}
CanPauseAndContinue : False
CanShutdown : False
CanStop : True
ServiceType : Win32OwnProcess
Stop-Service fluent-bit
Remove-Service fluent-bit
wget -o vs.exe https://aka.ms/vs/16/release/vs_buildtools.exe
start vs.exe
fluent-bit.exe -i dummy -o stdout
cpack -G ZIP
pipeline:
inputs:
- name: blob
path: '/path/to/binary/files/*.bin'
outputs:
- name: stdout
match: '*'
[INPUT]
Name blob
Path /path/to/binary/files/*.bin
[OUTPUT]
Name stdout
Match *
pipeline:
inputs:
- name: blob
path: /var/log/binaries/*.bin
database_file: /var/lib/fluent-bit/blob.db
scan_refresh_interval: 10s
tag: blob.files
outputs:
- name: stdout
match: '*'
[INPUT]
Name blob
Path /var/log/binaries/*.bin
Database_File /var/lib/fluent-bit/blob.db
Scan_Refresh_Interval 10s
Tag blob.files
[OUTPUT]
Name stdout
Match *
pipeline:
inputs:
- name: blob
path: /data/artifacts/**/*
exclude_pattern: '*.tmp,*.bak,*.old'
storage.type: filesystem
storage.pause_on_chunks_overlimit: true
mem_buf_limit: 50M
tag: artifacts
outputs:
- name: stdout
match: '*'
[INPUT]
Name blob
Path /data/artifacts/**/*
Exclude_Pattern *.tmp,*.bak,*.old
Storage.Type filesystem
Storage.Pause_On_Chunks_Overlimit true
Mem_Buf_Limit 50M
Tag artifacts
[OUTPUT]
Name stdout
Match *
pipeline:
inputs:
- name: blob
path: /var/log/binaries/*.bin
database_file: /var/lib/fluent-bit/blob.db
upload_success_action: add_suffix
upload_success_suffix: .processed
upload_failure_action: add_suffix
upload_failure_suffix: .failed
tag: blob.data
outputs:
- name: stdout
match: '*'
[INPUT]
Name blob
Path /var/log/binaries/*.bin
Database_File /var/lib/fluent-bit/blob.db
Upload_Success_Action add_suffix
Upload_Success_Suffix .processed
Upload_Failure_Action add_suffix
Upload_Failure_Suffix .failed
Tag blob.data
[OUTPUT]
Name stdout
Match *
pipeline:
inputs:
- name: blob
path: /var/log/binaries/*.bin
database_file: /var/lib/fluent-bit/blob.db
fluent-bit -i blob --prop "path=[SOME_PATH_TO_BINARY_FILES]" -o stdout
...
[2025/11/05 17:39:32.818356000] [ info] [input:blob:blob.0] initializing
[2025/11/05 17:39:32.818362000] [ info] [input:blob:blob.0] storage_strategy='memory' (memory only)
...
mem_buf_limit
Set a memory buffer limit for the input plugin instance in bytes. If the limit is reached, the plugin pauses until the buffer is drained. If set to 0, the buffer limit is disabled. If the plugin has filesystem buffering enabled, this limit doesn't apply. The value must be set according to the Unit Size specification.
0
path
Path to scan for blob (binary) files. Supports wildcards and glob patterns. For example, /var/log/binaries/*.bin or /data/artifacts/**/*.dat. This is a required parameter.
none
routable
If true, the data generated by the plugin can be forwarded to other plugins or outputs. If false, the data will be discarded. This is used for testing or when you want to process data but not forward it.
true
scan_refresh_interval
Set the interval time to scan for new files. The plugin periodically scans the specified path for new or modified files. Specify the value using time units (for example, 2s, 30m, 1h).
2s
storage.pause_on_chunks_overlimit
Enable pausing on an input when it reaches its chunks limit. When enabled, the plugin will pause processing if the number of chunks exceeds the limit, preventing memory issues during backpressure scenarios.
false
storage.type
Sets the storage type for this input. Options: filesystem (persists data to disk), memory (stores data in memory only), or memrb (memory ring buffer). For production environments with high data volumes, consider using filesystem to prevent data loss during restarts.
memory
tag
Set a tag for the events generated by this input plugin. Tags are used for routing records to specific outputs. Supports tag expansion with wildcards.
none
threaded
Indicates whether to run this input in its own thread. When enabled, the plugin runs in a separate thread, which can improve performance for I/O-bound operations.
false
threaded.ring_buffer.capacity
Set custom ring buffer capacity when the input runs in threaded mode. This determines how many records can be buffered in the ring buffer before blocking.
1024
threaded.ring_buffer.window
Set custom ring buffer window percentage for threaded inputs. This controls when the ring buffer is considered "full" and triggers backpressure handling.
5
upload_failure_action
Action to perform on the file after upload failure. Supported values: delete (delete the file), add_suffix (rename file by appending a suffix), emit_log (emit a log record with a custom message). When set to add_suffix, use upload_failure_suffix to specify the suffix. When set to emit_log, use upload_failure_message to specify the message.
none
upload_failure_message
Message to emit as a log record after upload failure. Only used when upload_failure_action is set to emit_log. This can be used for debugging or monitoring purposes.
none
upload_failure_suffix
Suffix to append to the filename after upload failure. Only used when upload_failure_action is set to add_suffix. For example, if set to .failed, a file named data.bin will be renamed to data.bin.failed.
none
upload_success_action
Action to perform on the file after successful upload. Supported values: delete (delete the file), add_suffix (rename file by appending a suffix), emit_log (emit a log record with a custom message). When set to add_suffix, use upload_success_suffix to specify the suffix. When set to emit_log, use upload_success_message to specify the message.
none
upload_success_message
Message to emit as a log record after successful upload. Only used when upload_success_action is set to emit_log. This can be used for debugging or monitoring purposes.
none
upload_success_suffix
Suffix to append to the filename after successful upload. Only used when upload_success_action is set to add_suffix. For example, if set to .processed, a file named data.bin will be renamed to data.bin.processed.
none
fluent-bit.exe -i dummy -o stdout
wget -o git.exe https://github.com/git-for-windows/git/releases/download/v2.28.0.windows.1/Git-2.28.0-64-bit.exe
start git.exe
git clone https://github.com/fluent/fluent-bit
cd fluent-bit/build
cmake .. -G "NMake Makefiles"
cmake --build .
Fluent Bit collects, parses, filters, and ships logs to a central place. A critical piece of this workflow is buffering: a mechanism to place processed data in a temporary location until it's ready to be shipped.
By default, when Fluent Bit processes data it uses memory as the primary and temporary place to store records. There are scenarios where it's ideal to have a persistent buffering mechanism based on the filesystem to provide aggregation and data safety capabilities.
Choosing the right configuration is critical, and the behavior of the service can be conditioned by the backpressure settings. Before jumping into the configuration, it helps to understand the relationship between chunks, memory, filesystem, and backpressure.
Understanding chunks, buffering, and backpressure is critical for a proper configuration.
See for a full explanation.
When an input plugin source emits records, the engine groups them into a chunk. A chunk's size is usually around 2 MB. The engine decides where to place the chunk based on configuration; by default, all chunks are created only in memory.
There are two scenarios where Fluent Bit marks chunks as irrecoverable:
When Fluent Bit encounters a bad layout in a chunk. A bad layout is a chunk that doesn't conform to the expected format.
When Fluent Bit encounters an incorrect or invalid chunk header size.
In both scenarios Fluent Bit logs an error message and then discards the irrecoverable chunks.
As mentioned previously, chunks generated by the engine are placed in memory by default, but this is configurable.
If memory is the only mechanism set for the input plugin, it will store as much data as possible in memory. This is the fastest mechanism with the least system overhead. However, if the service isn't able to deliver the records fast enough, Fluent Bit memory usage increases as it accumulates more data than it can deliver.
In a high load environment with backpressure, having high memory usage risks getting killed by the kernel's OOM Killer. To work around this backpressure scenario, limit the amount of memory in records that an input plugin can register using the mem_buf_limit property. If a plugin has queued more than the mem_buf_limit, it won't be able to ingest more until that data can be delivered or flushed properly. In this scenario the input plugin in question is paused. When the input is paused, records won't be ingested until the plugin resumes. For some inputs, such as TCP and tail, pausing the input will almost certainly lead to log loss. For the tail input, Fluent Bit can save its current offset in the current file it's reading, and pick back up when the input resumes.
Look for messages in the Fluent Bit log output like:
Using mem_buf_limit is good for certain scenarios and environments. It helps to control the memory usage of the service. However, if a file rotates while the plugin is paused, data can be lost since it won't be able to register new records. This can happen with any input source plugin. The goal of mem_buf_limit is memory control and survival of the service.
For a full data safety guarantee, use filesystem buffering.
Choose your preferred format for an example input definition:
If this input uses more than 50 MB memory to buffer logs, you will get a warning like this in the Fluent Bit logs:
Filesystem buffering helps with backpressure and overall memory control. Enable it using storage.type filesystem.
Memory and filesystem buffering mechanisms aren't mutually exclusive. Enabling filesystem buffering for your input plugin source can improve both performance and data safety.
Enabling filesystem buffering changes the behavior of the engine. Upon chunk creation, the engine stores the content in memory and also maps a copy on disk through memory mapping. The newly created chunk is active in memory, backed up on disk, and said to be up, which means the chunk content is in memory.
Fluent Bit controls the number of chunks that are up in memory by using the filesystem buffering mechanism to deal with high memory usage and backpressure.
By default, the engine allows a total of 128 chunks up in memory across all chunks. This value is controlled by the service property storage.max_chunks_up. The active chunks that are up are either ready for delivery (marked busy and locked) or still receiving records. Any other chunk is in a down state, which means it exists only in the filesystem and won't be brought up in memory unless it's ready to be delivered. Chunks are never much larger than 2 MB, so with the default storage.max_chunks_up value of 128, each input is limited to roughly 256 MB of memory.
If the input plugin has enabled storage.type as filesystem, when reaching the storage.max_chunks_up threshold, instead of the plugin being paused, all new data will go to chunks that are down in the filesystem. This lets you control memory usage by the service and also provides a guarantee that the service won't lose any data. By default, the enforcement of the storage.max_chunks_up limit is best-effort. Fluent Bit can only append new data to chunks that are up. When the limit is reached chunks will be temporarily brought up in memory to ingest new data, and then put to a down state afterwards. In general, Fluent Bit works to keep the total number of up chunks at or under storage.max_chunks_up.
If storage.pause_on_chunks_overlimit is enabled (default is off), the input plugin pauses upon exceeding storage.max_chunks_up. With this option, storage.max_chunks_up becomes a hard limit for the input. When the input is paused, records won't be ingested until the plugin resumes. For some inputs, such as TCP and tail, pausing the input will almost certainly lead to log loss. For the tail input, Fluent Bit can save its current offset in the current file it's reading, and pick back up when the input is resumed.
Look for messages in the Fluent Bit log output like:
Limiting filesystem space for chunks
Fluent Bit implements the concept of logical queues. Based on its tag, a chunk can be routed to multiple destinations. Fluent Bit keeps an internal reference from where a chunk was created and where it needs to go.
It's common to find cases where multiple destinations with different response times exist for a chunk, or one of the destinations is generating backpressure.
To limit the amount of filesystem chunks logically queueing, Fluent Bit v1.6 and later includes the storage.total_limit_size configuration property for output. This property limits the total size in bytes of chunks that can exist in the filesystem for a certain logical output destination. If one of the destinations reaches the configured storage.total_limit_size, the oldest chunk from its queue for that logical output destination will be discarded to make room for new data.
The storage layer configuration takes place in three sections:
Service
Input
Output
The Service section configures a global environment for the storage layer, the Input sections define which buffering mechanism to use, and the Output sections define limits for the logical filesystem queues.
The Service section refers to the section defined in the main configuration file:
The Dead Letter Queue (DLQ) feature preserves chunks that fail to be delivered to output destinations. Instead of losing this data, Fluent Bit copies the rejected chunks to a dedicated storage location for later analysis and troubleshooting.
Chunks are copied to the DLQ in the following failure scenarios:
Permanent errors: When an output plugin returns an unrecoverable error (FLB_ERROR).
Retry limit reached: When a chunk exhausts all configured retry attempts.
Retries disabled: When retry_limit is set to no_retries and a flush fails.
Scheduler failures: When the retry scheduler can't schedule a retry (for example, due to resource constraints).
The DLQ feature requires:
storage.path must be configured (filesystem storage must be enabled).
storage.keep.rejected must be set to On.
Rejected chunks are stored in a subdirectory under storage.path. For example, with the following configuration:
Rejected chunks are stored at /var/log/flb-storage/rejected/.
Each DLQ file is named using this format:
For example: kube_var_log_containers_test_400_http_0x7f8b4c.flb
The file contains the original chunk data in the internal format of Fluent Bit, preserving all records and metadata.
The DLQ feature enables the following capabilities:
Data preservation: Invalid or rejected chunks are preserved instead of being permanently lost.
Root cause analysis: Investigate why specific data failed to be delivered without impacting live processing.
Data recovery: Replay or transform rejected chunks after fixing the underlying issue.
Debugging: Analyze the exact content of problematic records.
To examine DLQ chunks, you can use the storage metrics endpoint (when storage.metrics is enabled) or directly inspect the files in the rejected directory.
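For example, a minimal sketch of both approaches, assuming the HTTP server is enabled on its default port and storage.path is set to /var/log/flb-storage/:
# Query the storage layer metrics (requires http_server and storage.metrics enabled)
curl -s http://127.0.0.1:2020/api/v1/storage
# List the rejected chunks preserved by the DLQ
ls -l /var/log/flb-storage/rejected/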
A Service section will look like this:
This configuration sets an optional buffering mechanism where the path for buffered data is /var/log/flb-storage/. It uses normal synchronization mode, doesn't run a checksum, and uses up to a maximum of 5 MB of memory when processing backlog data. Additionally, the dead letter queue is enabled, and rejected chunks are stored in /var/log/flb-storage/rejected/.
Optionally, any input plugin can configure its storage preference. The following table describes the options available:
The following example configures a service offering filesystem buffering capabilities and two input plugins: the first using filesystem buffering and the second using memory only.
If certain chunks use the filesystem storage.type, it's possible to control the size of the logical queue for an output plugin. The following table describes the options available:
The following example buffers records with CPU usage samples in the filesystem and delivers them to the Google Stackdriver service, limiting the logical queue (buffering) to 5M:
If Fluent Bit is offline because of a network issue, it will continue buffering CPU samples, keeping a maximum of 5 MB of the newest data.
Fluent Bit provides integrated support for Transport Layer Security (TLS) and its predecessor Secure Sockets Layer (SSL). This section refers only to TLS for both implementations.
Both input and output plugins that perform Network I/O can optionally enable TLS and configure the behavior. The following table describes the properties available:
To use TLS on input plugins, you must provide both a certificate and a private key.
The listed properties can be enabled in the configuration file, specifically in each output plugin section or directly through the command line.
The following output plugins can take advantage of the TLS feature:
The following input plugins can take advantage of the TLS feature:
In addition, other plugins implement a subset of TLS support, with restricted configuration:
By default, the HTTP input plugin uses plain TCP. Run the following command to enable TLS:
In the previous command, the two properties tls and tls.verify are set for demonstration purposes. Always enable verification in production environments.
The same behavior can be accomplished using a configuration file:
By default, the HTTP output plugin uses plain TCP. Run the following command to enable TLS:
In the previous command, the properties tls and tls.verify are enabled for demonstration purposes. Always enable verification in production environments.
The same behavior can be accomplished using a configuration file:
The following command generates a 4096-bit RSA key pair and a certificate that's signed using SHA-256 with the expiration date set to 30 days in the future. In this example, test.host.net is set as the common name. This example opts out of DES, so the private key is stored in plain text.
Fluent Bit supports TLS Server Name Indication (SNI). If you are serving multiple host names on a single IP address (for example, using virtual hosting), you can make use of tls.vhost to connect to a specific hostname.
By default, TLS verification of host names isn't done automatically. As an example, you can extract the X509v3 Subject Alternative Name from a certificate:
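A minimal sketch of one way to do this with openssl, using the illustrative certificate path from the later example:
# Print the Subject Alternative Name section of a certificate
openssl x509 -in /path/to/fluent-x509v3-alt-name.crt -noout -text | grep -A1 'Subject Alternative Name'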
This certificate covers only my.fluent-aggregator.net, so connecting with a different hostname should fail.
To fully verify the alternative name and demonstrate the failure, enable tls.verify_hostname:
The outgoing connection will fail and disconnect:
none
tls.key_file
Absolute path to private Key file.
none
tls.key_passwd
Optional password for tls.key_file file.
none
tls.max_version
Specify the maximum version of TLS.
none
tls.min_version
Specify the minimum version of TLS.
none
tls.verify
Force certificate validation.
on
tls.vhost
Hostname to be used for TLS SNI extension.
none
tls.verify_hostname
Force TLS verification of host names.
off
tls
Enable or disable TLS support.
off
tls.debug
Set TLS debug verbosity level. Accepted values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational) and 4 (Verbose).
1
tls.ca_file
Absolute path to CA certificate file.
none
tls.ca_path
Absolute path to scan for certificate files.
none
tls.ciphers
Specify TLS ciphers up to TLSv1.2.
none
tls.crt_file
Absolute path to Certificate file.
none
fluent-bit -i http \
-p port=9999 \
-p tls=on \
-p tls.verify=off \
-p tls.crt_file=self_signed.crt \
-p tls.key_file=self_signed.key \
-o stdout \
-m '*'
pipeline:
inputs:
- name: http
port: 9999
tls: on
tls.verify: off
tls.crt_file: self_signed.crt
tls.key_file: self_signed.key
outputs:
- name: stdout
match: '*'
[INPUT]
name http
port 9999
tls on
tls.verify off
tls.crt_file self_signed.crt
tls.key_file self_signed.key
[OUTPUT]
Name stdout
Match *
fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something \
-p tls=on \
-p tls.verify=off \
-m '*'
pipeline:
inputs:
- name: cpu
tag: cpu
outputs:
- name: http
match: '*'
host: 192.168.2.3
port: 80
uri: /something
tls: on
tls.verify: off
[INPUT]
Name cpu
Tag cpu
[OUTPUT]
Name http
Match *
Host 192.168.2.3
Port 80
URI /something
tls on
tls.verify off
openssl req -x509 \
-newkey rsa:4096 \
-sha256 \
-nodes \
-keyout self_signed.key \
-out self_signed.crt \
-subj "/CN=test.host.net"
pipeline:
inputs:
- name: cpu
tag: cpu
outputs:
- name: forward
match: '*'
host: 192.168.10.100
port: 24224
tls: on
tls.verify: off
tls.ca_file: '/etc/certs/fluent.crt'
tls.vhost: 'fluent.example.com'
[INPUT]
Name cpu
Tag cpu
[OUTPUT]
Name forward
Match *
Host 192.168.10.100
Port 24224
tls on
tls.verify on
tls.ca_file /etc/certs/fluent.crt
tls.vhost fluent.example.com
X509v3 Subject Alternative Name:
DNS:my.fluent-aggregator.net
pipeline:
inputs:
- name: cpu
tag: cpu
outputs:
- name: forward
match: '*'
host: other.fluent-aggregator.net
port: 24224
tls: on
tls.verify: on
tls.verify_hostname: on
tls.ca_file: '/path/to/fluent-x509v3-alt-name.crt'
[INPUT]
Name cpu
Tag cpu
[OUTPUT]
Name forward
Match *
Host other.fluent-aggregator.net
Port 24224
tls on
tls.verify on
tls.verify_hostname on
tls.ca_file /path/to/fluent-x509v3-alt-name.crt
[2024/06/17 16:51:31] [error] [tls] error: unexpected EOF with reason: certificate verify failed
[2024/06/17 16:51:31] [debug] [upstream] connection #50 failed to other.fluent-aggregator.net:24224
[2024/06/17 16:51:31] [error] [output:forward:forward.0] no upstream connections available
storage.backlog.mem_limit
If storage.path is set, Fluent Bit looks for data chunks that weren't delivered and are still in the storage layer. These are called backlog data. Backlog chunks are filesystem chunks that were left over from a previous Fluent Bit run; chunks that couldn't be sent before exit that Fluent Bit will pick up when restarted. Fluent Bit will check the storage.backlog.mem_limit value against the current memory usage from all up chunks for the input. If the up chunks currently consume less memory than the limit, it will bring the backlog chunks up into memory so they can be sent by outputs.
5M
storage.backlog.flush_on_shutdown
When enabled, Fluent Bit will attempt to flush all backlog filesystem chunks to their destination during the shutdown process. This can help ensure data delivery before Fluent Bit stops, but can increase shutdown time. Accepted values: Off, On.
Off
storage.metrics
If http_server option is enabled in the main [SERVICE] section, this option registers a new endpoint where internal metrics of the storage layer can be consumed. For more details refer to the section.
off
storage.delete_irrecoverable_chunks
When enabled, irrecoverable chunks will be deleted during runtime, and any other irrecoverable chunk located in the configured storage path directory will be deleted when Fluent Bit starts. Accepted values: Off, On.
Off
storage.keep.rejected
When enabled, the dead-letter queue feature stores failed chunks that can't be delivered. Accepted values: Off, On.
Off
storage.rejected.path
When specified, the dead-letter queue is stored in a subdirectory (stream) under storage.path. The default value rejected is used at runtime if not set.
none
storage.path
Set an optional location in the file system to store streams and chunks of data. If this parameter isn't set, Input plugins can only use in-memory buffering.
none
storage.sync
Configure the synchronization mode used to store the data in the file system. Using full increases the reliability of the filesystem buffer and ensures that data is guaranteed to be synced to the filesystem even if Fluent Bit crashes. On Linux, full corresponds with the MAP_SYNC option for memory mapped files. Accepted values: normal, full.
normal
storage.checksum
Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm. Accepted values: Off, On.
Off
storage.max_chunks_up
If the input plugin has enabled filesystem storage type, this property sets the maximum number of chunks that can be up in memory. Use this setting to control memory usage when you enable storage.type filesystem.
128
storage.type
Specifies the buffering mechanism to use. Accepted values: memory, filesystem.
memory
storage.pause_on_chunks_overlimit
Specifies if the input plugin should pause (stop ingesting new data) when the storage.max_chunks_up value is reached.
off
storage.total_limit_size
Limit the maximum disk space size in bytes for buffering chunks in the filesystem for the current output logical destination.
none
[input] tail.1 paused (mem buf overlimit)
[input] tail.1 resume (mem buf overlimit)
pipeline:
inputs:
- name: tcp
listen: 0.0.0.0
port: 5170
format: none
tag: tcp-logs
mem_buf_limit: 50MB
[INPUT]
Name tcp
Listen 0.0.0.0
Port 5170
Format none
Tag tcp-logs
Mem_Buf_Limit 50MB
[input] tcp.1 paused (mem buf overlimit)
[input] tail.1 paused (storage buf overlimit)
[input] tail.1 resume (storage buf overlimit)
service:
storage.path: /var/log/flb-storage/
storage.keep.rejected: on
storage.rejected.path: rejected
<sanitized_tag>_<status_code>_<output_name>_<unique_id>.flb
service:
flush: 1
log_level: info
storage.path: /var/log/flb-storage/
storage.sync: normal
storage.checksum: off
storage.backlog.mem_limit: 5M
storage.backlog.flush_on_shutdown: off
storage.keep.rejected: on
storage.rejected.path: rejected
[SERVICE]
flush 1
log_level info
storage.path /var/log/flb-storage/
storage.sync normal
storage.checksum off
storage.backlog.mem_limit 5M
storage.backlog.flush_on_shutdown off
storage.keep.rejected on
storage.rejected.path rejected
service:
flush: 1
log_level: info
storage.path: /var/log/flb-storage/
storage.sync: normal
storage.checksum: off
storage.max_chunks_up: 128
storage.backlog.mem_limit: 5M
pipeline:
inputs:
- name: cpu
storage.type: filesystem
- name: mem
storage.type: memory
[SERVICE]
flush 1
log_level info
storage.path /var/log/flb-storage/
storage.sync normal
storage.checksum off
storage.max_chunks_up 128
storage.backlog.mem_limit 5M
[INPUT]
name cpu
storage.type filesystem
[INPUT]
name mem
storage.type memory
service:
flush: 1
log_level: info
storage.path: /var/log/flb-storage/
storage.sync: normal
storage.checksum: off
storage.max_chunks_up: 128
storage.backlog.mem_limit: 5M
pipeline:
inputs:
- name: cpu
storage.type: filesystem
outputs:
- name: stackdriver
match: '*'
storage.total_limit_size: 5M
[SERVICE]
flush 1
log_level info
storage.path /var/log/flb-storage/
storage.sync normal
storage.checksum off
storage.max_chunks_up 128
storage.backlog.mem_limit 5M
[INPUT]
name cpu
storage.type filesystem
[OUTPUT]
name stackdriver
match *
storage.total_limit_size 5M
The Dead Letter Queue (DLQ) feature preserves chunks that fail to be delivered to output destinations. This enables troubleshooting delivery failures without losing data.
To enable the DLQ, add the following to your Service section:
service:
storage.path: /var/log/flb-storage/
storage.keep.rejected: on
storage.rejected.path: rejected
[SERVICE]
storage.path /var/log/flb-storage/
storage.keep.rejected on
storage.rejected.path rejected
Chunks are copied to the DLQ when:
An output plugin returns an unrecoverable error.
A chunk exhausts all configured retry attempts.
Retries are disabled (retry_limit: no_retries) and the flush fails.
The scheduler fails to schedule a retry.
DLQ files are stored in the configured path (for example, /var/log/flb-storage/rejected/) with names that include the tag, status code, and output plugin name. This helps identify which records failed and why.
For example, a file named kube_var_log_containers_test_400_http_0x7f8b4c.flb indicates a chunk with tag kube.var.log.containers.test that failed with status code 400 when sending to the http output.
DLQ files remain on disk until manually removed. Monitor disk usage and implement a cleanup policy.
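For example, a minimal cleanup sketch, assuming the rejected directory from the earlier configuration and a hypothetical seven-day retention:
# Remove DLQ chunk files older than seven days (adjust the path and retention to your needs)
find /var/log/flb-storage/rejected/ -name '*.flb' -mtime +7 -delete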
For more details on DLQ configuration, see Buffering and Storage.
Tap can be used to generate events or records detailing which messages pass through Fluent Bit, at what time, and which filters affect them.
Ensure that the container image supports Fluent Bit Tap (available in Fluent Bit 2.0+):
If the --enable-chunk-trace option is present, your Fluent Bit version supports Fluent Bit Tap, but it's disabled by default. Use this option to enable it.
You can start Fluent Bit with tracing activated from the beginning by using the trace-input and trace-output properties:
The following warning indicates the -Z or --enable-chunk-trace option is missing:
Set properties for the output using the --trace-output-property option:
With that option set, the stdout plugin emits traces in json_lines format:
All three options can also be defined using the more flexible --trace option:
This example defines the Tap pipeline using this configuration: input=dummy.0 output=stdout output.format=json_lines which defines the following:
input: dummy.0 listens to the tag or alias dummy.0.
output: stdout outputs to a stdout plugin.
output.format: json_lines sets the stdout format to json_lines.
Tap support can also be activated and deactivated using the embedded web server:
In another terminal, activate Tap by either using the instance id of the input (dummy.0) or its alias. The alias is more predictable, and is used here:
This response means Tap is active. The terminal with Fluent Bit running should now look like this:
All the records that display are those emitted by the activities of the dummy plugin.
This example takes the same steps but demonstrates how the mechanism works with more complicated configurations.
This example follows a single input, out of many, and which passes through several filters.
To ensure the window isn't cluttered by the records generated by the input plugins, send all of them to the null output.
Activate with the following curl command:
You should start seeing output similar to the following:
When activating Tap, any plugin parameter can be given. These parameters can be used to modify the output format, the name of the time key, the format of the date, and other details.
The following example uses the parameter "format": "json" to demonstrate how to show stdout in JSON format.
First, run Fluent Bit enabling Tap:
In another terminal, activate Tap including the output (stdout), and the parameters wanted ("format": "json"):
In the first terminal, you should see the output similar to the following:
This parameter shows stdout in JSON format.
See output plugins for additional information.
This filter record is an example to explain the details of a Tap record:
type: Defines the stage the event is generated:
1: Input record. This is the unadulterated input record.
2: Filtered record. This is a record after it was filtered. One record is generated per filter.
3: Pre-output record. This is the record right before it's sent for output.
This example is a record generated by the manipulation of a record by a filter so it has the type 2.
start_time and end_time: Records the start and end of an event, and is different for each event type:
type 1: When the input is received, both the start and end time.
type 2: The time when filtering is matched until it has finished processing.
trace_id: A string composed of a prefix and a number which is incremented with each record received by the input during the Tap session.
plugin_instance: The plugin instance name as generated by Fluent Bit at runtime.
plugin_alias: If an alias is set this field will contain the alias set for a plugin.
records: An array of all the records being sent. Fluent Bit handles records in chunks of multiple records, and chunks are indivisible; the Tap output preserves this grouping. Each record consists of its timestamp followed by the actual data, which is a composite type of keys and values.
When the service is running, you can export metrics to see the overall status of the data flow of the service. There are other use cases where you might need to know the current status of the service internals, like the current status of the internal buffers. Dump Internals can help provide this information.
Fluent Bit v1.4 introduced the Dump Internals feature, which can be triggered from the command line by sending the CONT Unix signal.
Run the following kill command to signal Fluent Bit:
The pidof command identifies the process ID of Fluent Bit.
Fluent Bit will dump the following information to the standard output interface (stdout):
The input plugins dump provides insights for every input instance configured.
Overall ingestion status of the plugin.
overlimit
If the plugin has been configured with Mem_Buf_Limit, this entry reports whether the plugin is over the limit at the moment of the dump. Over the limit prints yes, otherwise no.
mem_size
Current memory size in use by the input plugin in-memory.
mem_limit
Limit set by Mem_Buf_Limit.
When an input plugin ingests data into the engine, a Chunk is created. A Chunk can contain multiple records. At flush time, the engine creates a Task that contains the routes for the associated Chunk.
The Task dump describes the tasks associated to the input plugin:
total_tasks
Total number of active tasks associated to data generated by the input plugin.
new
Number of tasks not yet assigned to an output plugin. Tasks are in new status for a very short period of time. This value is normally very low or zero.
running
Number of active tasks being processed by output plugins.
size
Amount of memory used by the Chunks being processed (total chunk size).
The Chunks dump tells more details about all the chunks that the input plugin has generated and are still being processed.
Depending on the buffering strategy and limits imposed by configuration, some Chunks might be up (in memory) or down (filesystem).
total_chunks
Total number of Chunks generated by the input plugin that are still being processed by the engine.
up_chunks
Total number of Chunks loaded in memory.
down_chunks
Total number of Chunks stored in the filesystem but not loaded in memory yet.
busy_chunks
Chunks marked as busy (being flushed) or locked. Busy Chunks are immutable and are either ready to be processed or being processed.
Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. The Storage Layer entry contains a total summary of Chunks registered by Fluent Bit:
total chunks
Total number of Chunks.
mem chunks
Total number of Chunks memory-based.
fs chunks
Total number of Chunks filesystem based.
up
Total number of filesystem chunks up in memory.
An input plugin to ingest OpenTelemetry logs, metrics, and traces
The OpenTelemetry input plugin lets you receive data based on the OpenTelemetry specification from various OpenTelemetry exporters, the OpenTelemetry Collector, or the Fluent Bit OpenTelemetry output plugin.
Fluent Bit has a compliant implementation that fully supports OTLP/HTTP and OTLP/GRPC. A single configured port (4318 by default) supports both transport methods.
When raw_traces is set to false (default), the traces endpoint (/v1/traces) processes incoming trace data using the unified JSON parser with strict validation. The endpoint accepts both protobuf and JSON encoded payloads. When raw_traces is set to true, any data forwarded to the traces endpoint will be packed and forwarded as a log message without processing, validation, or conversion to the Fluent Bit internal trace format.
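As a quick sketch, raw_traces can be toggled from the command line for a local test; the port shown is the default:
# Forward trace payloads as raw log messages instead of parsing them
fluent-bit -i opentelemetry -p port=4318 -p raw_traces=on -o stdout -m '*'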
Fluent Bit exposes the following endpoints for data ingestion based on the OpenTelemetry protocol:
For OTLP/HTTP:
Logs
/v1/logs
Metrics
/v1/metrics
For OTLP/GRPC:
Logs
/opentelemetry.proto.collector.log.v1.LogService/Export
/opentelemetry.proto.collector.logs.v1.LogsService/Export
The OpenTelemetry input plugin supports the following telemetry data types:
A sample configuration file to get started will look something like the following:
With this configuration, Fluent Bit listens on port 4318 for data. You can now send telemetry data to the endpoints /v1/metrics for metrics, /v1/traces for traces, and /v1/logs for logs.
A sample curl request to POST JSON encoded log data would be:
Fluent Bit includes enhanced support for OpenTelemetry traces with improved JSON parsing, error handling, and validation capabilities.
Fluent Bit provides a unified interface for processing OpenTelemetry trace data in JSON format. The parser converts OpenTelemetry JSON trace payloads into the Fluent Bit internal trace representation, supporting the full OpenTelemetry trace specification including:
Resource spans with attributes
Instrumentation scope information
Span data (names, IDs, timestamps, status)
Span events and links
The unified parser handles the OpenTelemetry JSON encoding format, which wraps attribute values in type-specific containers (for example, stringValue, intValue, doubleValue, boolValue).
The OpenTelemetry input plugin provides detailed error status information when processing trace data. If trace processing fails, the plugin returns specific error codes that help identify the issue:
FLB_OTEL_TRACES_ERR_INVALID_JSON - Invalid JSON format
FLB_OTEL_TRACES_ERR_INVALID_TRACE_ID - Invalid trace ID format or length
FLB_OTEL_TRACES_ERR_INVALID_SPAN_ID - Invalid span ID format or length
The OpenTelemetry specification defines three valid span status codes. When processing trace data, the plugin accepts the following status code values (case-insensitive):
OK - The operation completed successfully
ERROR - The operation has an error
UNSET - The status isn't set (default)
Any other status code value triggers FLB_OTEL_TRACES_ERR_STATUS_FAILURE and causes the trace data to be rejected. The status code must be provided as a string in the status.code field of the span object.
When trace validation fails, the following behavior applies:
Trace data is dropped: Invalid trace data isn't processed or forwarded. The trace payload is rejected immediately.
Error logging: The plugin logs an error message with the specific error status code to help diagnose issues. Error messages include the error code number and description.
No retry mechanism: Failed requests aren't automatically retried. The client must resend corrected trace data.
HTTP response codes
Fluent Bit enforces strict validation for trace and span IDs to ensure data integrity:
Trace IDs: Must be exactly 32 hexadecimal characters (16 bytes)
Span IDs: Must be exactly 16 hexadecimal characters (8 bytes)
Parent Span IDs: Must be exactly 16 hexadecimal characters (8 bytes) when present
The validation process:
Verifies the ID length matches the expected size
Validates that all characters are valid hexadecimal digits (0-9, a-f, A-F)
Decodes the hexadecimal string to binary format
Rejects invalid IDs with appropriate error codes
Invalid IDs result in error status codes (FLB_OTEL_TRACES_ERR_INVALID_TRACE_ID, FLB_OTEL_TRACES_ERR_INVALID_SPAN_ID, and so on) and the trace data is rejected to prevent processing of corrupted or malformed trace information.
The following example shows a valid OpenTelemetry JSON trace payload that can be sent to the /v1/traces endpoint:
Trace IDs must be exactly 32 hex characters and span IDs must be exactly 16 hex characters. Invalid IDs will be rejected with appropriate error messages.
In the example, the status.code field uses "OK". Valid status code values are "OK", "ERROR", and "UNSET" (case-insensitive). Any other value triggers FLB_OTEL_TRACES_ERR_STATUS_FAILURE and causes the trace to be rejected.
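A minimal sketch of a conforming request, using illustrative IDs (32 and 16 hexadecimal characters) and a resource attribute wrapped in its stringValue type container:
# Post a small JSON trace payload to a local OpenTelemetry input (default port)
curl -s --header "Content-Type: application/json" --request POST \
  --data '{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"test"}}]},"scopeSpans":[{"scope":{},"spans":[{"traceId":"0123456789abcdef0123456789abcdef","spanId":"0123456789abcdef","name":"test-span","startTimeUnixNano":"1700000000000000000","endTimeUnixNano":"1700000001000000000","status":{"code":"OK"}}]}]}]}' \
  http://127.0.0.1:4318/v1/traces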
docker run --rm -ti fluent/fluent-bit:latest --help | grep trace
-Z, --enable-chunk-trace enable chunk tracing, it can be activated either through the http api or the command line
--trace-input input to start tracing on startup.
--trace-output output to use for tracing on startup.
--trace-output-property set a property for output tracing on startup.
--trace setup a trace pipeline on startup. Uses a single line, ie: "input=dummy.0 output=stdout output.format='json'"
$ fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout
...
[0] dummy.0: [[1689971222.068537501, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1689971223.068556121, {}], {"message"=>"dummy"}]
[0] trace: [[1689971222.068677045, {}], {"type"=>1, "trace_id"=>"0", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971222, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971222, "end_time"=>1689971222}]
[1] trace: [[1689971222.068735577, {}], {"type"=>3, "trace_id"=>"0", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971222, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971222, "end_time"=>1689971222}]
[0] dummy.0: [[1689971224.068586317, {}], {"message"=>"dummy"}]
[0] trace: [[1689971223.068626923, {}], {"type"=>1, "trace_id"=>"1", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971223, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971223, "end_time"=>1689971223}]
[1] trace: [[1689971223.068675735, {}], {"type"=>3, "trace_id"=>"1", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971223, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971223, "end_time"=>1689971223}]
[2] trace: [[1689971224.068689341, {}], {"type"=>1, "trace_id"=>"2", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971224, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971224, "end_time"=>1689971224}]
[3] trace: [[1689971224.068747182, {}], {"type"=>3, "trace_id"=>"2", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971224, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971224, "end_time"=>1689971224}]
^C[2023/07/21 16:27:05] [engine] caught signal (SIGINT)
[2023/07/21 16:27:05] [ warn] [engine] service will shutdown in max 5 seconds
[2023/07/21 16:27:05] [ info] [input] pausing dummy.0
[0] dummy.0: [[1689971225.068568875, {}], {"message"=>"dummy"}]
[2023/07/21 16:27:06] [ info] [engine] service has stopped (0 pending tasks)
[2023/07/21 16:27:06] [ info] [input] pausing dummy.0
[2023/07/21 16:27:06] [ warn] [engine] service will shutdown in max 1 seconds
[0] trace: [[1689971225.068654038, {}], {"type"=>1, "trace_id"=>"3", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971225, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971225, "end_time"=>1689971225}]
[1] trace: [[1689971225.068695829, {}], {"type"=>3, "trace_id"=>"3", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971225, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971225, "end_time"=>1689971225}]
[2023/07/21 16:27:07] [ info] [engine] service has stopped (0 pending tasks)
[2023/07/21 16:27:07] [ info] [output:stdout:stdout.0] thread worker #0 stopping...
[2023/07/21 16:27:07] [ info] [output:stdout:stdout.0] thread worker #0 stopped
[2023/07/21 16:27:07] [ info] [output:stdout:stdout.0] thread worker #0 stopping...
[2023/07/21 16:27:07] [ info] [output:stdout:stdout.0] thread worker #0 stopped
[2023/07/21 16:26:42] [ warn] [chunk trace] enable chunk tracing via the configuration or command line to be able to activate tracing.
$ fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout --trace-output-property=format=json_lines
...
[0] dummy.0: [[1689971340.068565891, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1689971341.068632477, {}], {"message"=>"dummy"}]
{"date":1689971340.068745,"type":1,"trace_id":"0","plugin_instance":"dummy.0","records":[{"timestamp":1689971340,"record":{"message":"dummy"}}],"start_time":1689971340,"end_time":1689971340}
{"date":1689971340.068825,"type":3,"trace_id":"0","plugin_instance":"dummy.0","records":[{"timestamp":1689971340,"record":{"message":"dummy"}}],"start_time":1689971340,"end_time":1689971340}
[0] dummy.0: [[1689971342.068613646, {}], {"message"=>"dummy"}]
{"date":1689971340.068745,"type":1,"trace_id":"0","plugin_instance":"dummy.0","records":[{"timestamp":1689971340,"record":{"message":"dummy"}}],"start_time":1689971340,"end_time":1689971340}
fluent-bit -Z -i dummy -o stdout -f 1 --trace="input=dummy.0 output=stdout output.format=json_lines"
$ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest -Z -H -i dummy -p alias=input_dummy -o stdout -f 1
...
[0] dummy.0: [1666346597.203307010, {"message"=>"dummy"}]
[0] dummy.0: [1666346598.204103793, {"message"=>"dummy"}]
$ curl 127.0.0.1:2020/api/v1/trace/input_dummy
{"status":"ok"}
...
[0] dummy.0: [1666346616.203551736, {"message"=>"dummy"}]
[0] trace: [1666346617.205221952, {"type"=>1, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346617, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346617, "end_time"=>1666346617}]
[0] dummy.0: [1666346617.205131790, {"message"=>"dummy"}]
[0] trace: [1666346617.205419358, {"type"=>3, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346617, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346617, "end_time"=>1666346617}]
[0] trace: [1666346618.204110867, {"type"=>1, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346618, "record"=>{[0] dummy.0: [1666346618.204049246, {"message"=>"dummy"}]
"message"=>"dummy"}}], "start_time"=>1666346618, "end_time"=>1666346618}]
[0] trace: [1666346618.204198654, {"type"=>3, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346618, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346618, "end_time"=>1666346618}]
$ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest \
-Z -H \
-i dummy -p alias=dummy_0 -p \
dummy='{"dummy": "dummy_0", "key_name": "foo", "key_cnt": "1"}' \
-i dummy -p alias=dummy_1 -p dummy='{"dummy": "dummy_1"}' \
-i dummy -p alias=dummy_2 -p dummy='{"dummy": "dummy_2"}' \
-F record_modifier -m 'dummy.0' -p record="powered_by fluent" \
-F record_modifier -m 'dummy.1' -p record="powered_by fluent-bit" \
-F nest -m 'dummy.0' \
-p operation=nest -p wildcard='key_*' -p nest_under=data \
-o null -m '*' -f 1
$ curl 127.0.0.1:2020/api/v1/trace/dummy_0
{"status":"ok"}
...
[0] trace: [1666349359.325597543, {"type"=>1, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349359, "end_time"=>1666349359}]
[0] trace: [1666349359.325723747, {"type"=>2, "start_time"=>1666349359, "end_time"=>1666349359, "trace_id"=>"trace.0", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
[0] trace: [1666349359.325783954, {"type"=>2, "start_time"=>1666349359, "end_time"=>1666349359, "trace_id"=>"trace.0", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
[0] trace: [1666349359.325913783, {"type"=>3, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349359, "end_time"=>1666349359}]
[0] trace: [1666349360.323826619, {"type"=>1, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349360, "end_time"=>1666349360}]
[0] trace: [1666349360.323859618, {"type"=>2, "start_time"=>1666349360, "end_time"=>1666349360, "trace_id"=>"trace.1", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
[0] trace: [1666349360.323900784, {"type"=>2, "start_time"=>1666349360, "end_time"=>1666349360, "trace_id"=>"trace.1", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
[0] trace: [1666349360.323926366, {"type"=>3, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349360, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349360, "end_time"=>1666349360}]
[0] trace: [1666349361.324223752, {"type"=>1, "trace_id"=>"trace.2", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349361, "end_time"=>1666349361}]
[0] trace: [1666349361.324263959, {"type"=>2, "start_time"=>1666349361, "end_time"=>1666349361, "trace_id"=>"trace.2", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
[0] trace: [1666349361.324283250, {"type"=>2, "start_time"=>1666349361, "end_time"=>1666349361, "trace_id"=>"trace.2", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
[0] trace: [1666349361.324294291, {"type"=>3, "trace_id"=>"trace.2", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349361, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349361, "end_time"=>1666349361}]
^C[2022/10/21 10:49:23] [engine] caught signal (SIGINT)
[2022/10/21 10:49:23] [ warn] [engine] service will shutdown in max 5 seconds
[2022/10/21 10:49:23] [ info] [input] pausing dummy_0
[2022/10/21 10:49:23] [ info] [input] pausing dummy_1
[2022/10/21 10:49:23] [ info] [input] pausing dummy_2
[2022/10/21 10:49:23] [ info] [engine] service has stopped (0 pending tasks)
[2022/10/21 10:49:23] [ info] [input] pausing dummy_0
[2022/10/21 10:49:23] [ info] [input] pausing dummy_1
[2022/10/21 10:49:23] [ info] [input] pausing dummy_2
[0] trace: [1666349362.323272011, {"type"=>1, "trace_id"=>"trace.3", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349362, "end_time"=>1666349362}]
[0] trace: [1666349362.323306843, {"type"=>2, "start_time"=>1666349362, "end_time"=>1666349362, "trace_id"=>"trace.3", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
[0] trace: [1666349362.323323884, {"type"=>2, "start_time"=>1666349362, "end_time"=>1666349362, "trace_id"=>"trace.3", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
[0] trace: [1666349362.323334509, {"type"=>3, "trace_id"=>"trace.3", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349362, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}], "start_time"=>1666349362, "end_time"=>1666349362}]
[2022/10/21 10:49:24] [ warn] [engine] service will shutdown in max 1 seconds
[2022/10/21 10:49:25] [ info] [engine] service has stopped (0 pending tasks)
[2022/10/21 10:49:25] [ info] [output:stdout:stdout.0] thread worker #0 stopping...
[2022/10/21 10:49:25] [ info] [output:stdout:stdout.0] thread worker #0 stopped
[2022/10/21 10:49:25] [ info] [output:null:null.0] thread worker #0 stopping...
[2022/10/21 10:49:25] [ info] [output:null:null.0] thread worker #0 stopped
$ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest -Z -H -i dummy -p alias=input_dummy -o stdout -f 1
...
[0] dummy.0: [1674805465.976012761, {"message"=>"dummy"}]
[0] dummy.0: [1674805466.973669512, {"message"=>"dummy"}]
$ curl 127.0.0.1:2020/api/v1/trace/input_dummy -d '{"output":"stdout", "params": {"format": "json"}}'
{"status":"ok"}
...
[0] dummy.0: [1674805635.972373840, {"message"=>"dummy"}]
[{"date":1674805634.974457,"type":1,"trace_id":"0","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805634,"record":{"message":"dummy"}}],"start_time":1674805634,"end_time":1674805634},{"date":1674805634.974605,"type":3,"trace_id":"0","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805634,"record":{"message":"dummy"}}],"start_time":1674805634,"end_time":1674805634},{"date":1674805635.972398,"type":1,"trace_id":"1","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805635,"record":{"message":"dummy"}}],"start_time":1674805635,"end_time":1674805635},{"date":1674805635.972413,"type":3,"trace_id":"1","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805635,"record":{"message":"dummy"}}],"start_time":1674805635,"end_time":1674805635}]
[0] dummy.0: [1674805636.973970215, {"message"=>"dummy"}]
[{"date":1674805636.974008,"type":1,"trace_id":"2","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805636,"record":{"message":"dummy"}}],"start_time":1674805636,"end_time":1674805636},{"date":1674805636.974034,"type":3,"trace_id":"2","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805636,"record":{"message":"dummy"}}],"start_time":1674805636,"end_time":1674805636}]
{
"type": 2,
"start_time": 1666349231,
"end_time": 1666349231,
"trace_id": "trace.1",
"plugin_instance": "nest.2",
"records": [{
"timestamp": 1666349231,
"record": {
"dummy": "dummy_0",
"powered_by": "fluent",
"data": {
"key_name": "foo",
"key_cnt": "1"
}
}
}]
}
kill -CONT `pidof fluent-bit`
...
[engine] caught signal (SIGCONT)
[2020/03/23 17:39:02] Fluent Bit Dump
===== Input =====
syslog_debug (syslog)
│
├─ status
│  ├─ overlimit : no
│  ├─ mem size  : 60.8M (63752145 bytes)
│  └─ mem limit : 61.0M (64000000 bytes)
│
├─ tasks
│ ├─ total tasks : 92
│ ├─ new : 0
│ ├─ running : 92
│ └─ size : 171.1M (179391504 bytes)
│
└─ chunks
└─ total chunks : 92
├─ up chunks : 35
├─ down chunks: 57
└─ busy chunks: 92
├─ size : 60.8M (63752145 bytes)
└─ size err: 0
===== Storage Layer =====
total chunks : 92
├─ mem chunks : 0
└─ fs chunks : 92
├─ up : 35
    └─ down : 57
host
The hostname.
localhost
http2
Enable HTTP/2 protocol support for the OpenTelemetry receiver.
true
listen
The network address to listen on.
0.0.0.0
log_level
Specifies the log level for this plugin. If not set here, the plugin uses the global log level specified in the service section of your configuration file.
info
log_suppress_interval
Suppresses log messages from this plugin that appear similar within the specified time interval. A value of 0 disables suppression.
0
logs_body_key
Specify a body key.
none
logs_metadata_key
Key name to store OpenTelemetry logs metadata in the record.
otlp
mem_buf_limit
Set a memory buffer limit for the input plugin. If the limit is reached, the plugin will pause until the buffer is drained. The value is in bytes. If set to 0, the buffer limit is disabled.
0
net.accept_timeout
Set maximum time allowed to establish an incoming connection. This time includes the TLS handshake.
10s
net.accept_timeout_log_error
On client accept timeout, specify if it should log an error. When disabled, the timeout is logged as a debug message.
true
net.backlog
Set the backlog size for listening sockets.
128
net.io_timeout
Set maximum time a connection can stay idle.
0s
net.keepalive
Enable or disable keepalive support.
true
net.share_port
Allow multiple plugins to bind to the same port.
false
port
The port for Fluent Bit to listen for incoming connections.
4318
profiles_support
Enables the experimental profiles signal support. Feel free to test it, but don't enable it in production environments.
false
raw_traces
Forward traces without processing. When set to false (default), traces are processed using the unified JSON parser with strict validation. When set to true, trace data is forwarded as raw log messages without validation or processing.
false
routable
If set to true, the data generated by the plugin will be routable, meaning that it can be forwarded to other plugins or outputs. If set to false, the data will be discarded.
true
storage.pause_on_chunks_overlimit
Enable pausing an input when it reaches its chunks limit.
none
storage.type
Sets the storage type for this input, one of: filesystem, memory or memrb.
memory
successful_response_code
Allows for setting a successful response code. Supported values: 200, 201, or 204.
201
tag
Set a tag for the events generated by this input plugin.
none
tag_from_uri
By default, the tag will be created from the URI. For example, v1_metrics from /v1/metrics. This must be set to false if using tag.
true
tag_key
Record accessor key to use for generating tags from incoming records.
none
threaded
Enable threading on an input.
false
thread.ring_buffer.capacity
Set custom ring buffer capacity when the input runs in threaded mode.
1024
thread.ring_buffer.window
Set custom ring buffer window percentage for threaded inputs.
5
tls
Enable or disable TLS/SSL support.
off
tls.ca_file
Absolute path to CA certificate file.
none
tls.ca_path
Absolute path to scan for certificate files.
none
tls.crt_file
Absolute path to Certificate file.
none
tls.ciphers
Specify TLS ciphers up to TLSv1.2.
none
tls.debug
Set TLS debug level. Accepts 0 (No debug), 1 (Error), 2 (State change), 3 (Informational), and 4 (Verbose).
1
tls.key_file
Absolute path to private Key file.
none
tls.key_passwd
Optional password for tls.key_file file.
none
tls.max_version
Specify the maximum version of TLS.
none
tls.min_version
Specify the minimum version of TLS.
none
tls.verify
Force certificate validation.
on
tls.verify_hostname
Enable or disable hostname verification.
off
tls.vhost
Hostname to be used for TLS SNI extension.
none
Traces
/v1/traces
Metrics
/opentelemetry.proto.collector.metric.v1.MetricService/Export
/opentelemetry.proto.collector.metrics.v1.MetricsService/Export
Traces
/opentelemetry.proto.collector.trace.v1.TraceService/Export
/opentelemetry.proto.collector.traces.v1.TracesService/Export
FLB_OTEL_TRACES_ERR_INVALID_PARENT_SPAN_ID - Invalid parent span ID
FLB_OTEL_TRACES_ERR_STATUS_FAILURE - Invalid span status code
FLB_OTEL_TRACES_ERR_INVALID_ATTRIBUTES - Invalid attribute format
FLB_OTEL_TRACES_ERR_INVALID_EVENT_ENTRY - Invalid span event
FLB_OTEL_TRACES_ERR_INVALID_LINK_ENTRY - Invalid span link
HTTP/1.1: Returns 400 Bad Request with an error message when validation fails. Returns the configured successful_response_code (default 201 Created) when processing succeeds.
gRPC: Returns gRPC status 2 (UNKNOWN) with message "Serialization error." when validation fails. Returns gRPC status 0 (OK) with an empty ExportTraceServiceResponse when processing succeeds.
alias
Sets an alias for multiple instances of the same input plugin. If no alias is specified, a default name will be assigned using the plugin name followed by a dot and a sequence number.
none
buffer_max_size
Maximum size of the HTTP request buffer in KB, MB, or GB.
4M
buffer_chunk_size
Size of each buffer chunk allocated for HTTP requests (advanced users only).
512K
encode_profiles_as_log
Encode profiles received as text and ingest them in the logging pipeline.
true
Logs
Stable
Stable
Stable
Metrics
Unimplemented
Stable
Stable
Traces
Stable
Stable
host
Stable
pipeline:
inputs:
- name: opentelemetry
listen: 127.0.0.1
port: 4318
outputs:
- name: stdout
match: '*'
[INPUT]
name opentelemetry
listen 127.0.0.1
port 4318
[OUTPUT]
name stdout
match *
curl --header "Content-Type: application/json" --request POST --data '{"resourceLogs":[{"resource":{},"scopeLogs":[{"scope":{},"logRecords":[{"timeUnixNano":"1660296023390371588","body":{"stringValue":"{\"message\":\"dummy\"}"},"traceId":"","spanId":""}]}]}]}' http://0.0.0.0:4318/v1/logs
{
"resourceSpans": [
{
"resource": {
"attributes": [
{
"key": "service.name",
"value": {
"stringValue": "my-service"
}
}
]
},
"scopeSpans": [
{
"scope": {
"name": "my-instrumentation",
"version": "1.0.0"
},
"spans": [
{
"traceId": "0123456789abcdef0123456789abcdef",
"spanId": "0123456789abcdef",
"name": "my-span",
"kind": 1,
"startTimeUnixNano": "1660296023390371588",
"endTimeUnixNano": "1660296023391371588",
"status": {
"code": "OK"
},
"attributes": [
{
"key": "http.method",
"value": {
"stringValue": "GET"
}
}
]
}
]
}
]
}
]
}
size
Amount of bytes used by the Chunk.
size err
Number of Chunks in an error state where its size couldn't be retrieved.
down
Total number of filesystem chunks down (not loaded in memory).
A plugin based on Prometheus Node Exporter to collect system and host level metrics
Prometheus Node Exporter is a popular way to collect system level metrics from operating systems, such as CPU, disk, network, and process statistics. Fluent Bit includes a node exporter metrics plugin that builds off the Prometheus design to collect system level metrics without having to manage two separate processes or agents.
The Node exporter metrics plugin contains a subset of collectors and metrics available from Prometheus Node exporter.
scrape_interval sets the default for all scrapes. To set granular scrape intervals, set the specific interval, for example collector.cpu.scrape_interval. When a granular scrape interval is set to a value greater than 0, it overrides the global default; otherwise, the global default is used.
The plugin top-level scrape_interval setting is the global default. Any custom settings for individual scrape_intervals override that specific metric scraping interval.
Each collector.xxx.scrape_interval option only overrides the interval for that specific collector and updates the associated set of provided metrics.
Overridden intervals only change the collection interval, not the interval for publishing the metrics which is taken from the global setting.
For example, if the global interval is set to 5 and an override interval of 60 is used for a specific collector, the published metrics are still reported every five seconds, but the values from that collector stay the same for 60 seconds until it's scraped again.
This helps with down-sampling when collecting metrics.
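As a sketch of how an override interacts with the global interval, the following command line collects all metrics every 5 seconds but refreshes CPU metrics only every 60 seconds. The flags follow the -i/-p/-o form used in the container example later on this page; the values are illustrative:
# Global scrape every 5 seconds; the cpu collector refreshes only every 60 seconds.
fluent-bit -i node_exporter_metrics \
    -p scrape_interval=5 \
    -p collector.cpu.scrape_interval=60 \
    -o prometheus_exporter \
    -p port=2021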
collector.cpu.scrape_interval
The rate in seconds at which cpu metrics are collected from the host operating system.
0
collector.cpufreq.scrape_interval
The rate in seconds at which cpufreq metrics are collected from the host operating system.
0
collector.diskstats.scrape_interval
The rate in seconds at which diskstats metrics are collected from the host operating system.
0
collector.filefd.scrape_interval
The rate in seconds at which filefd metrics are collected from the host operating system.
0
The following table describes the available collectors as part of this plugin. They're enabled by default and respect the original metrics name, descriptions, and types from Prometheus Exporter. You can use your current dashboards without any compatibility problem.
The Version column specifies the Fluent Bit version where the collector is available.
cpu
Exposes CPU statistics.
Linux, macOS
1.8
cpufreq
Exposes CPU frequency statistics.
Linux
1.8
diskstats
Exposes disk I/O statistics.
Linux, macOS
This input always runs in its own thread.
In the following configuration file, the input plugin node_exporter_metrics collects metrics every two seconds and exposes them through the Prometheus Exporter output plugin on HTTP/TCP port 2021.
You can test that the metrics are being exposed by using curl:
When deploying Fluent Bit in a container, you need to specify additional settings to ensure that Fluent Bit has access to the host operating system. The following Docker command deploys Fluent Bit with specific mount paths and settings enabled to ensure that Fluent Bit can collect from the host. These metrics are then exposed over port 2021.
If you use dashboards for monitoring, Grafana is one option. The Fluent Bit source code repository contains a docker-compose example.
Download the Fluent Bit source code:
Start the service and view your dashboard:
Open your browser and use the address http://127.0.0.1:3000.
When asked for the credentials to access Grafana, use admin for the username and password.
By default, the Grafana dashboard plots the data from the last 24 hours. Change it to Last 5 minutes to see the most recent data being collected.
The plugin implements a subset of the available collectors in the original Prometheus Node exporter. If you would like a specific collector prioritized, open a GitHub issue by using the following template:
# Node Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collect host metrics on Linux and expose
# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
service:
flush: 1
log_level: info
pipeline:
inputs:
- name: node_exporter_metrics
tag: node_metrics
scrape_interval: 2
outputs:
- name: prometheus_exporter
match: node_metrics
host: 0.0.0.0
port: 2021
# Node Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collect host metrics on Linux and expose
# them through a Prometheus HTTP end-point.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
flush 1
log_level info
[INPUT]
name node_exporter_metrics
tag node_metrics
scrape_interval 2
[OUTPUT]
name prometheus_exporter
match node_metrics
host 0.0.0.0
port 2021
git clone https://github.com/fluent/fluent-bit
cd fluent-bit/docker_compose/node-exporter-dashboard/
docker-compose up --force-recreate -d --build
curl http://127.0.0.1:2021/metrics
docker run -ti -v /proc:/host/proc \
-v /sys:/host/sys \
-p 2021:2021 \
fluent/fluent-bit:1.8.0 \
/fluent-bit/bin/fluent-bit \
-i node_exporter_metrics \
-p path.procfs=/host/proc \
-p path.sysfs=/host/sys \
-o prometheus_exporter \
-p "add_label=host $HOSTNAME" \
-f 1
docker-compose down
collector.filesystem.scrape_interval
The rate in seconds at which filesystem metrics are collected from the host operating system.
0
collector.hwmon.chip-exclude
Regex of chips to exclude for the hwmon collector.
Not set by default.
collector.hwmon.chip-include
Regex of chips to include for the hwmon collector.
Not set by default.
collector.hwmon.scrape_interval
The rate in seconds at which hwmon metrics are collected from the host operating system.
0
collector.hwmon.sensor-exclude
Regex of sensors to exclude for the hwmon collector.
Not set by default.
collector.hwmon.sensor-include
Regex of sensors to include for the hwmon collector.
Not set by default.
collector.loadavg.scrape_interval
The rate in seconds at which loadavg metrics are collected from the host operating system.
0
collector.meminfo.scrape_interval
The rate in seconds at which meminfo metrics are collected from the host operating system.
0
collector.netdev.scrape_interval
The rate in seconds at which netdev metrics are collected from the host operating system.
0
collector.netstat.scrape_interval
The rate in seconds at which netstat metrics are collected from the host operating system.
0
collector.nvme.scrape_interval
The rate in seconds at which nvme metrics are collected from the host operating system.
0
collector.processes.scrape_interval
The rate in seconds at which system-level process metrics are collected from the host operating system.
0
collector.sockstat.scrape_interval
The rate in seconds at which sockstat metrics are collected from the host operating system.
0
collector.stat.scrape_interval
The rate in seconds at which stat metrics are collected from the host operating system.
0
collector.systemd.scrape_interval
The rate in seconds at which systemd metrics are collected from the host operating system.
0
collector.textfile.path
Specify path or directory to collect textfile metrics from the host operating system.
Not set by default.
collector.textfile.scrape_interval
The rate in seconds at which textfile metrics are collected from the host operating system.
0
collector.thermalzone.scrape_interval
The rate in seconds at which thermal_zone metrics are collected from the host operating system.
0
collector.time.scrape_interval
The rate in seconds at which time metrics are collected from the host operating system.
0
collector.uname.scrape_interval
The rate in seconds at which uname metrics are collected from the host operating system.
0
collector.vmstat.scrape_interval
The rate in seconds at which vmstat metrics are collected from the host operating system.
0
diskstats.ignore_device_regex
Specify the regular expression for the diskstats to prevent collection of or ignore.
^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\\d+n\\d+p)\\d+$
filesystem.ignore_filesystem_type_regex
Specify the regular expression for the filesystem types to prevent collection of or ignore.
^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
filesystem.ignore_mount_point_regex
Specify the regular expression for the mount points to prevent collection of or ignore.
^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
metrics
Specify which metrics are collected from the host operating system. These metrics depend on /procfs, /sysfs, systemd, or custom files. The actual values of metrics will be read from /proc, /sys, or systemd as needed. cpu, cpufreq, meminfo, diskstats, filesystem, stat, loadavg, vmstat, netdev, netstat, sockstat, filefd, nvme, and processes depend on procfs. cpufreq, hwmon, and thermal_zone depend on sysfs. systemd depends on systemd services. textfile requires explicit path configuration using collector.textfile.path.
"cpu,cpufreq,meminfo,diskstats,filesystem,uname,stat,time,loadavg,vmstat,netdev,netstat,sockstat,filefd,systemd,nvme,thermal_zone,hwmon"
path.procfs
The mount point used to collect process information and metrics.
/proc
path.rootfs
The root filesystem mount point.
/
path.sysfs
The path in the filesystem used to collect system metrics.
/sys
scrape_interval
The rate in seconds at which metrics are collected from the host operating system.
5
systemd_exclude_pattern
Regular expression to determine which units are excluded in the metrics produced by the systemd collector.
.+\\.(automount|device|mount|scope|slice)
systemd_include_pattern
Regular expression to determine which units are included in the metrics produced by the systemd collector.
Not applied unless explicitly set.
systemd_include_service_task_metrics
Determines if the collector will include service task metrics.
false
systemd_service_restart_metrics
Determines if the collector will include service restart metrics.
false
systemd_unit_start_time_metrics
Determines if the collector will include unit start time metrics.
false
1.8
filefd
Exposes file descriptor statistics from /proc/sys/fs/file-nr.
Linux
1.8.2
filesystem
Exposes filesystem statistics from /proc/*/mounts.
Linux
2.0.9
hwmon
Exposes hardware monitoring metrics from /sys/class/hwmon.
Linux
2.2.0
loadavg
Exposes load average.
Linux, macOS
1.8
meminfo
Exposes memory statistics.
Linux, macOS
1.8
netdev
Exposes network interface statistics such as bytes transferred.
Linux, macOS
1.8.2
netstat
Exposes network statistics from /proc/net/netstat.
Linux
2.2.0
nvme
Exposes nvme statistics from /proc.
Linux
2.2.0
processes
Exposes processes statistics from /proc.
Linux
2.2.0
sockstat
Exposes socket statistics from /proc/net/sockstat.
Linux
2.2.0
stat
Exposes various statistics from /proc/stat. This includes boot time, forks, and interruptions.
Linux
1.8
systemd
Exposes statistics from systemd.
Linux
2.1.3
textfile
Exposes custom metrics from text files. Requires collector.textfile.path to be set.
Linux
2.2.0
thermal_zone
Exposes thermal statistics from /sys/class/thermal/thermal_zone/*.
Linux
2.2.1
time
Exposes the current system time.
Linux
1.8
uname
Exposes system information as provided by the uname system call.
Linux, macOS
1.8
vmstat
Exposes statistics from /proc/vmstat.
Linux
1.8.2

Fluent Bit uses CMake as its build system.
To build and install Fluent Bit from source, you must also install the following packages:
bison
build-essential
cmake (version 3.31.6 or later)
flex
libssl-dev
libyaml-dev
pkg-config
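On Debian or Ubuntu systems, these prerequisites can typically be installed in one step. The use of apt is an assumption about your distribution:
sudo apt-get update
sudo apt-get install -y bison build-essential cmake flex libssl-dev libyaml-dev pkg-config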
Additionally, certain input or output plugins might depend on additional components. For example, some plugins require Kafka.
If you already know how CMake works, you can skip this section and review the available .
The following steps explain how to build and install the project with the default options.
Change to the build/ directory inside the Fluent Bit sources:
Let CMake configure the project, specifying where the root path is located:
This command displays a series of results similar to:
Start the compilation process using the make command:
This command displays results similar to:
Fluent Bit provides configurable options to CMake that can be enabled or disabled.
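As a minimal sketch, any option listed in the following tables can be toggled at configure time with a -D flag:
# Toggle any FLB_* option with -D<OPTION>=On/Off when invoking CMake.
cd build/
cmake -DFLB_RELEASE=On -DFLB_TRACE=Off ../
make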
Input plugins gather information from a specific source type like network interfaces, some built-in metrics, or through a specific input device.
The following input plugins are available:
Processor plugins handle the events within the processor pipelines to allow modifying, enriching, or dropping events.
The following table describes the processors available:
Filter plugins let you modify, enrich or drop records.
The following table describes the filters available on this version:
Output plugins let you flush the information to some external interface, service, or terminal.
The following table describes the output plugins available:
To continue installing the binary on the system, use make install:
If the command indicates insufficient permissions, prefix the command with sudo.
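For example:
sudo make install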
FLB_AWS_ERROR_REPORTER
Build with AWS error reporting support
No
FLB_BENCHMARKS
Enable benchmarks
No
FLB_BINARY
Build executable
Yes
FLB_CHUNK_TRACE
Enable chunk traces
Yes
FLB_COVERAGE
Build with code-coverage
No
FLB_CONFIG_YAML
Enable YAML configuration support
Yes
FLB_CORO_STACK_SIZE
Set coroutine stack size
FLB_CUSTOM_CALYPTIA
Enable Calyptia Support
Yes
FLB_ENFORCE_ALIGNMENT
Enable limited platform specific aligned memory access
No
FLB_EXAMPLES
Build examples
Yes
FLB_HTTP_SERVER
Enable HTTP Server
Yes
FLB_INOTIFY
Enable Inotify support
Yes
FLB_JEMALLOC
Build with Jemalloc support
No
FLB_KAFKA
Enable Kafka support
Yes
FLB_LUAJIT
Enable Lua scripting support
Yes
FLB_METRICS
Enable metrics support
Yes
FLB_MTRACE
Enable mtrace support
No
FLB_PARSER
Build with Parser support
Yes
FLB_POSIX_TLS
Force POSIX thread storage
No
FLB_PROFILES
Enable profiles support
Yes
FLB_PROXY_GO
Enable Go plugins support
Yes
FLB_RECORD_ACCESSOR
Enable record accessor
Yes
FLB_REGEX
Build with Regex support
Yes
FLB_RELEASE
Build with release mode (-O2 -g -DNDEBUG)
No
FLB_SHARED_LIB
Build shared library
Yes
FLB_SIGNV4
Enable AWS Signv4 support
Yes
FLB_SIMD
Enable SIMD support
No
FLB_SQLDB
Enable SQL embedded database support
Yes
FLB_STATIC_CONF
Build binary using static configuration files. The value of this option must be a directory containing configuration files.
FLB_STREAM_PROCESSOR
Enable Stream Processor
Yes
FLB_TLS
Build with SSL/TLS support
Yes
FLB_UNICODE_ENCODER
Build with Unicode (UTF-16LE, UTF-16BE) encoding support
Yes (if C++ compiler found)
FLB_UTF8_ENCODER
Build with UTF8 encoding support
Yes
FLB_WASM
Build with Wasm runtime support
Yes
FLB_WASM_STACK_PROTECT
Build with WASM runtime with strong stack protector flags
No
FLB_WAMRC
Build with Wasm AOT compiler executable
No
FLB_WINDOWS_DEFAULTS
Build with predefined Windows settings
Yes
FLB_ZIG
Enable zig integration
Yes
FLB_TESTS_INTERNAL_FUZZ
Enable internal fuzz tests
No
FLB_TESTS_OSSFUZZ
Enable OSS-Fuzz build
No
FLB_TESTS_RUNTIME
Enable runtime tests
No
FLB_TRACE
Enable trace mode
No
FLB_VALGRIND
Enable Valgrind support
No
Enable Docker input plugin
On
Enable Docker events input plugin
On
Enable Dummy input plugin
On
Enable Linux eBPF input plugin
Off
Enable Elasticsearch (Bulk API) input plugin
On
Enable Exec input plugin
On
Enable Exec WASI input plugin
On
Enable Fluent Bit metrics input plugin
On
Enable Forward input plugin
On
Enable GPU metrics input plugin
On
Enable Head input plugin
On
Enable Health input plugin
On
Enable HTTP input plugin
On
Enable Kafka input plugin
On
Enable Kernel log input plugin
On
Enable Kubernetes Events input plugin
On
Enable Memory input plugin
On
Enable MQTT Broker input plugin
On
Enable Network I/O metrics input plugin
On
Enable NGINX metrics input plugin
On
Enable Node exporter metrics input plugin
On
Enable OpenTelemetry input plugin
On
Enable Podman metrics input plugin
On
Enable Process input plugin
On
Enable Process exporter metrics input plugin
On
Enable Prometheus remote write input plugin
On
Enable Prometheus scrape metrics input plugin
On
Enable Prometheus textfile input plugin
On
Enable Random input plugin
On
Enable Serial input plugin
On
Enable StatsD input plugin
On
Enable Standard input plugin
On
Enable Syslog input plugin
On
Enable Systemd input plugin
On
Enable Tail input plugin
On
Enable TCP input plugin
On
Enable Thermal input plugin
On
Enable UDP input plugin
On
Enable Windows Event Log input plugin (Windows Only)
Off
Enable Windows Event Log input plugin using winevt.h API (Windows Only)
Off
Enable Windows exporter metrics input plugin
On
Enable Windows system statistics input plugin
Off
Enable sampling processor
On
Enable SQL processor
On
Enable Geoip2 filter
On
Enable Grep filter
On
Enable Kubernetes metadata filter
On
Enable Log derived metrics filter
On
Enable Lua scripting filter
On
Enable Modify filter
On
Enable Multiline stack trace filter
On
Enable Nest filter
On
Enable Nightfall filter
On
Enable Parser filter
On
Enable Record Modifier filter
On
Enable Rewrite Tag filter
On
Enable Stdout filter
On
Enable Sysinfo filter
On
Enable Tensorflow filter
Off
Enable Throttle filter
On
Enable Type Converter filter
On
Enable Wasm filter
On
Enable Google BigQuery output plugin
On
Enable Google Chronicle output plugin
On
Enable Amazon CloudWatch output plugin
On
Enable Counter output plugin
On
Enable Datadog output plugin
On
Enable Elastic Search output plugin
On
Enable Exit output plugin
On
Enable File output plugin
On
Enable Flow counter output plugin
On
Enable Forward output plugin
On
Enable GELF output plugin
On
Enable HTTP output plugin
On
Enable InfluxDB output plugin
On
Enable Kafka output
On
Enable Kafka REST Proxy output plugin
On
Enable Amazon Kinesis Data Firehose output plugin
On
Enable Amazon Kinesis Data Streams output plugin
On
FLB_OUT_LIB
Enable Library output plugin
On
Enable LogDNA output plugin
On
Enable Loki output plugin
On
Enable NATS output plugin
On
Enable New Relic output plugin
On
Enable NULL output plugin
On
Enable OpenSearch output plugin
On
Enable OpenTelemetry output plugin
On
Enable Oracle Cloud Infrastructure Logging output plugin
On
Enable PostgreSQL output plugin
Off
Enable Plot output plugin
On
Enable Prometheus exporter output plugin
On
Enable Prometheus remote write output plugin
On
Enable Amazon S3 output plugin
On
Enable Apache Skywalking output plugin
On
Enable Slack output plugin
On
Enable Splunk output plugin
On
Enable Stackdriver output plugin
On
Enable STDOUT output plugin
On
Enable Syslog output plugin
On
Enable Treasure Data output plugin
On
Enable TCP/TLS output plugin
On
Enable UDP output plugin
On
Enable Vivo exporter output plugin
On
FLB_ALL
Enable all features available
No
FLB_ARROW
Build with Apache Arrow support
No
FLB_AVRO_ENCODER
Build with Avro encoding support
No
FLB_AWS
Enable AWS support
Yes
FLB_BACKTRACE
Enable stack trace support
Yes
FLB_DEBUG
Build with debug mode (-g)
No
FLB_SMALL
Optimize for small size
No
FLB_TESTS_INTERNAL
Enable internal tests
No
FLB_MSGPACK_TO_JSON_INIT_BUFFER_SIZE
Determine initial buffer size for msgpack to json conversion in terms of memory used by payload.
2.0
FLB_MSGPACK_TO_JSON_REALLOC_BUFFER_SIZE
Determine percentage of reallocation size when msgpack to json conversion buffer runs out of memory.
0.1
Enable Blob input plugin
On
Enable Collectd input plugin
On
Enable CPU input plugin
On
Enable Disk I/O Metrics input plugin
On
Enable content modifier processor
On
Enable metrics label manipulation processor
On
Enable metrics selector processor
On
Enable OpenTelemetry envelope processor
On
Enable AWS metadata filter
On
Enable Checklist filter
On
Enable AWS ECS metadata filter
On
Enable Expect data test filter
On
Enable Microsoft Azure output plugin
On
Enable Microsoft Azure storage blob output plugin
On
Enable Azure Data Explorer (Kusto) output plugin
On
Enable Azure Log Ingestion output plugin
On
Learn how to monitor your Fluent Bit data pipelines
Fluent Bit includes features for monitoring the internals of your pipeline, connecting to Prometheus and Grafana, performing health checks, and integrating with external services:
Fluent Bit includes an HTTP server for querying internal information and monitoring metrics of each running plugin.
You can integrate the monitoring interface with Prometheus.
To get started, enable the HTTP server from the configuration file. The following configuration instructs Fluent Bit to start an HTTP server on TCP port 2020 and listen on all network interfaces:
Start Fluent Bit with the corresponding configuration chosen previously:
Fluent Bit starts and generates output in your terminal:
Use curl to gather information about the HTTP server. The following command sends the command output to the jq program, which outputs human-readable JSON data to the terminal.
Fluent Bit exposes the following endpoints for monitoring.
The following descriptions apply to v1 metric endpoints.
/api/v1/metrics/prometheus endpoint
The following descriptions apply to metrics output in Prometheus format by the /api/v1/metrics/prometheus endpoint.
The following terms are key to understanding how Fluent Bit processes metrics:
Record: a single message collected from a source, such as a single long line in a file.
Chunk: log records ingested and stored by Fluent Bit input plugin instances. A batch of records in a chunk are tracked together as a single unit.
The Fluent Bit engine attempts to fit records into chunks of at most 2 MB, but the size can vary at runtime. Chunks are then sent to an output. An output plugin instance can successfully send the full chunk to the destination and mark it as successful, or it can fail the chunk entirely if an unrecoverable error is encountered, or it can ask for the chunk to be retried.
/api/v1/storage endpoint
The following descriptions apply to metrics output in JSON format by the /api/v1/storage endpoint.
The following descriptions apply to v2 metric endpoints.
/api/v2/metrics/prometheus or /api/v2/metrics endpoints
The following descriptions apply to metrics output in Prometheus format by the /api/v2/metrics/prometheus or /api/v2/metrics endpoints.
The following terms are key to understanding how Fluent Bit processes metrics:
Record: a single message collected from a source, such as a single long line in a file.
Chunk: log records ingested and stored by Fluent Bit input plugin instances. A batch of records in a chunk are tracked together as a single unit.
The Fluent Bit engine attempts to fit records into chunks of at most 2 MB, but the size can vary at runtime. Chunks are then sent to an output. An output plugin instance can either successfully send the full chunk to the destination and mark it as successful, or it can fail the chunk entirely if an unrecoverable error is encountered, or it can ask for the chunk to be retried.
The following are detailed descriptions for the metrics collected by the storage layer.
Introduced in Fluent Bit 4.0.6, the fluentbit_output_latency_seconds histogram metric captures end-to-end latency from the time a chunk is created by an input plugin until it's successfully delivered by an output plugin. This provides observability into chunk-level pipeline performance and helps identify slowdowns or bottlenecks in the output path.
The histogram uses the following default bucket boundaries, designed around the Fluent Bit typical flush interval of 1 second:
These boundaries provide:
High resolution around 1 s latency: Captures normal operation near the default flush interval.
Small backpressure detection: Identifies minor delays in the 1-2.5 s range.
Bottleneck identification: Detects retry cycles, network stalls, or plugin bottlenecks in higher ranges.
Complete coverage: The +Inf bucket ensures all latencies are captured.
When exposed through the Fluent Bit built-in HTTP server, the metric appears in Prometheus format:
Performance monitoring: Monitor overall pipeline health by tracking latency percentiles:
Bottleneck detection: Identify specific input/output pairs experiencing high latency:
SLA monitoring: Track how many chunks are delivered within acceptable time bounds:
Alerting: Create alerts for degraded pipeline performance:
Query the service uptime with the following command:
The command prints output similar to the following:
Query internal metrics in JSON format with the following command:
The command prints output similar to the following:
Query internal metrics in Prometheus Text 0.0.4 format:
This command returns the same metrics in Prometheus format instead of JSON:
By default, plugins configured at runtime get an internal name in the format plugin_name.ID. For monitoring purposes, this can be confusing if many plugins of the same type are configured. To distinguish them, each configured input or output section can be given an alias that's used as the parent name for the metric.
The following example sets an alias on the INPUT section of the configuration file, which uses the cpu input plugin:
When querying the related metrics, the aliases are returned instead of the plugin name:
You can create Grafana dashboards and alerts using Fluent Bit exposed Prometheus style metrics.
The provided dashboard is heavily inspired by existing community dashboards, with a few key differences, such as the use of the instance label, stacked graphs, and a focus on Fluent Bit metrics.
Sample alerts are also available.
Fluent Bit supports the following configurations to set up the health check.
Not every error log counts toward the health check. Only specific errors and retry failures are counted, such as the examples shown in the configuration table descriptions.
Within the window defined by the HC_Period setting, if the actual error count exceeds HC_Errors_Count, or the retry failure count exceeds HC_Retry_Failure_Count, Fluent Bit is considered unhealthy. The health endpoint returns an HTTP status 500 and an error message. Otherwise, the endpoint returns HTTP status 200 and an ok message.
The equation to calculate this behavior is:
HC_Errors_Count and HC_Retry_Failure_Count apply only to output plugins, and each is a sum of errors or retry failures across all running output plugins.
The following configuration examples show how to define these settings:
Use the following command to call the health endpoint:
With the example configuration, the health status is determined by the following equation:
If this equation evaluates to TRUE, then Fluent Bit is unhealthy.
If this equation evaluates to FALSE, then Fluent Bit is healthy.
is a hosted service that lets you monitor your Fluent Bit agents including data flow, metrics, and configurations.
make install
cd build/
cmake ../
-- The C compiler identification is GNU 4.9.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- The CXX compiler identification is GNU 4.9.2
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
...
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- Looking for accept4
-- Looking for accept4 - not found
-- Configuring done
-- Generating done
-- Build files have been written to: /home/edsiper/coding/fluent-bit/build
make
Scanning dependencies of target msgpack
[ 2%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/unpack.c.o
[ 4%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/objectc.c.o
[ 7%] Building C object lib/msgpack-1.1.0/CMakeFiles/msgpack.dir/src/version.c.o
...
[ 19%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_file.c.o
[ 21%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_rconf.c.o
[ 23%] Building C object lib/monkey/mk_core/CMakeFiles/mk_core.dir/mk_string.c.o
...
Scanning dependencies of target fluent-bit-static
[ 66%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_pack.c.o
[ 69%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_input.c.o
[ 71%] Building C object src/CMakeFiles/fluent-bit-static.dir/flb_output.c.o
...
Linking C executable ../bin/fluent-bit
[100%] Built target fluent-bit-bin
/api/v1/storage
Get internal metrics of the storage layer / buffered data. This endpoint is available only if the storage.metrics property is enabled in the SERVICE section.
JSON
/api/v1/health
Display the Fluent Bit health check result.
String
/api/v2/metrics
Display internal metrics per loaded plugin.
/api/v2/metrics/prometheus
Display internal metrics per loaded plugin ready in Prometheus Server format.
Prometheus Text 0.0.4
/api/v2/reload
Execute hot reloading or get the status of hot reloading.
JSON
name: the name or alias for the output instance
The number of log records dropped by the output. These records hit an unrecoverable error or retries expired for their chunk.
counter
records
fluentbit_output_errors_total
name: the name or alias for the output instance
The number of chunks that hit an error that's either unrecoverable or can't be retried. This metric represents the number of times a chunk failed, and doesn't correspond with the number of error messages visible in the Fluent Bit log output.
counter
chunks
fluentbit_output_proc_bytes_total
name: the name or alias for the output instance
The number of bytes of log records that this output instance sent successfully. This metric represents the total byte size of all unique chunks sent by this output. If a record isn't sent due to some error, it doesn't count towards this metric.
counter
bytes
fluentbit_output_proc_records_total
name: the name or alias for the output instance
The number of log records that this output instance sent successfully. This metric represents the total record count of all unique chunks sent by this output. If a record isn't sent successfully, it doesn't count towards this metric.
counter
records
fluentbit_output_retried_records_total
name: the name or alias for the output instance
The number of log records that experienced a retry. This metric is calculated at the chunk level; the count increases when an entire chunk is marked for retry. An output plugin might perform multiple actions that generate many error messages when uploading a single chunk.
counter
records
fluentbit_output_retries_failed_total
name: the name or alias for the output instance
The number of times that retries expired for a chunk. Each plugin configures a Retry_Limit, which applies to chunks. When the Retry_Limit is exceeded, the chunk is discarded and this metric is incremented.
counter
chunks
fluentbit_output_retries_total
name: the name or alias for the output instance
The number of times this output instance requested a retry for a chunk.
counter
chunks
fluentbit_uptime
The number of seconds that Fluent Bit has been running.
counter
seconds
process_start_time_seconds
The Unix Epoch timestamp for when Fluent Bit started.
gauge
seconds
chunks.fs_chunks_down
The count of chunks that are only in the file system.
chunks
input_chunks.{plugin name}.status.overlimit
Indicates whether the input instance exceeded its configured Mem_Buf_Limit.
boolean
input_chunks.{plugin name}.status.mem_size
The size of memory that this input is consuming to buffer logs in chunks.
bytes
input_chunks.{plugin name}.status.mem_limit
The buffer memory limit (Mem_Buf_Limit) that applies to this input plugin.
bytes
input_chunks.{plugin name}.chunks.total
The current total number of chunks owned by this input instance.
chunks
input_chunks.{plugin name}.chunks.up
The current number of chunks that are in memory for this input. If file system storage is enabled, chunks that are "up" are also stored in the filesystem layer.
chunks
input_chunks.{plugin name}.chunks.down
The current number of chunks that are "down" in the filesystem for this input.
chunks
input_chunks.{plugin name}.chunks.busy
Chunks that are being processed or sent by outputs and aren't eligible to have new data appended.
chunks
input_chunks.{plugin name}.chunks.busy_size
The sum of the byte size of each chunk which is currently marked as busy.
bytes
name: the name or alias for the input instance
The number of log records this input ingested successfully.
counter
records
fluentbit_filter_bytes_total
name: the name or alias for the filter instance
The number of bytes of log records that this filter instance has ingested successfully.
counter
bytes
fluentbit_filter_records_total
name: the name or alias for the filter instance
The number of log records this filter has ingested successfully.
counter
records
fluentbit_filter_added_records_total
name: the name or alias for the filter instance
The number of log records added by the filter into the data pipeline.
counter
records
fluentbit_filter_drop_records_total
name: the name or alias for the filter instance
The number of log records dropped by the filter and removed from the data pipeline.
counter
records
fluentbit_output_dropped_records_total
name: the name or alias for the output instance
The number of log records dropped by the output. These records hit an unrecoverable error or retries expired for their chunk.
counter
records
fluentbit_output_errors_total
name: the name or alias for the output instance
The number of chunks that hit an error that's either unrecoverable or can't be retried. This metric represents the number of times a chunk failed, and doesn't correspond with the number of error messages visible in the Fluent Bit log output.
counter
chunks
fluentbit_output_proc_bytes_total
name: the name or alias for the output instance
The number of bytes of log records that this output instance sent successfully. This metric represents the total byte size of all unique chunks sent by this output. If a record isn't sent due to some error, it doesn't count towards this metric.
counter
bytes
fluentbit_output_proc_records_total
name: the name or alias for the output instance
The number of log records that this output instance sent successfully. This metric represents the total record count of all unique chunks sent by this output. If a record isn't sent successfully, it doesn't count towards this metric.
counter
records
fluentbit_output_retried_records_total
name: the name or alias for the output instance
The number of log records that experienced a retry. This metric is calculated at the chunk level; the count increases when an entire chunk is marked for retry. An output plugin might perform multiple actions that generate many error messages when uploading a single chunk.
counter
records
fluentbit_output_retries_failed_total
name: the name or alias for the output instance
The number of times that retries expired for a chunk. Each plugin configures a Retry_Limit, which applies to chunks. When the Retry_Limit is exceeded, the chunk is discarded and this metric is incremented.
counter
chunks
fluentbit_output_retries_total
name: the name or alias for the output instance
The number of times this output instance requested a retry for a chunk.
counter
chunks
fluentbit_output_latency_seconds
input: the name of the input plugin instance, output: the name of the output plugin instance
End-to-end latency from chunk creation to successful delivery. Provides observability into chunk-level pipeline performance.
histogram
seconds
fluentbit_uptime
hostname: the hostname of the machine running Fluent Bit
The number of seconds that Fluent Bit has been running.
counter
seconds
fluentbit_process_start_time_seconds
hostname: the hostname of the machine running Fluent Bit
The Unix Epoch time stamp for when Fluent Bit started.
gauge
seconds
fluentbit_build_info
hostname: the hostname, version: the version of Fluent Bit, os: OS type
Build version information. The returned value originates from the Unix Epoch timestamp recorded when the configuration context was initialized.
gauge
seconds
fluentbit_hot_reloaded_times
hostname: the hostname of the machine running Fluent Bit
The number of times hot reloading has been executed.
gauge
seconds
None
The total number of chunks saved to the file system.
gauge
chunks
fluentbit_storage_fs_chunks_up
None
The count of chunks that are both in file system and in memory.
gauge
chunks
fluentbit_storage_fs_chunks_down
None
The count of chunks that are only in the file system.
gauge
chunks
fluentbit_storage_fs_chunks_busy
None
The total number of chunks that are in a busy state.
gauge
chunks
fluentbit_storage_fs_chunks_busy_bytes
None
The total size in bytes of chunks that are in a busy state.
gauge
bytes
fluentbit_input_storage_overlimit
name: the name or alias for the input instance
Indicates whether the input instance exceeded its configured Mem_Buf_Limit.
gauge
boolean
fluentbit_input_storage_memory_bytes
name: the name or alias for the input instance
The size of memory that this input is consuming to buffer logs in chunks.
gauge
bytes
fluentbit_input_storage_chunks
name: the name or alias for the input instance
The current total number of chunks owned by this input instance.
gauge
chunks
fluentbit_input_storage_chunks_up
name: the name or alias for the input instance
The current number of chunks that are in memory for this input. If file system storage is enabled, chunks that are "up" are also stored in the filesystem layer.
gauge
chunks
fluentbit_input_storage_chunks_down
name: the name or alias for the input instance
The current number of chunks that are "down" in the filesystem for this input.
gauge
chunks
fluentbit_input_storage_chunks_busy
name: the name or alias for the input instance
Chunks that are being processed or sent by outputs and aren't eligible to have new data appended.
gauge
chunks
fluentbit_input_storage_chunks_busy_bytes
name: the name or alias for the input instance
The sum of the byte size of each chunk which is currently marked as busy.
gauge
bytes
fluentbit_output_upstream_total_connections
name: the name or alias for the output instance
The sum of the connection counts for each output plugin.
gauge
bytes
fluentbit_output_upstream_busy_connections
name: the name or alias for the output instance
The sum of the connections in a busy state for each output plugin.
gauge
bytes
/
Fluent Bit build information.
JSON
/api/v1/uptime
Return uptime information in seconds.
JSON
/api/v1/metrics
Display internal metrics per loaded plugin.
JSON
/api/v1/metrics/prometheus
Display internal metrics per loaded plugin in Prometheus Server format.
Prometheus Text 0.0.4
fluentbit_input_bytes_total
name: the name or alias for the input instance
The number of bytes of log records that this input instance has ingested successfully.
counter
bytes
fluentbit_input_records_total
name: the name or alias for the input instance
The number of log records this input ingested successfully.
counter
records
chunks.total_chunks
The total number of chunks of records that Fluent Bit is currently buffering.
chunks
chunks.mem_chunks
The total number of chunks that are currently buffered in memory. Chunks can be both in memory and on the file system at the same time.
chunks
chunks.fs_chunks
The total number of chunks saved to the filesystem.
chunks
chunks.fs_chunks_up
The count of chunks that are both in file system and in memory.
chunks
fluentbit_input_bytes_total
name: the name or alias for the input instance
The number of bytes of log records that this input instance has ingested successfully.
counter
bytes
fluentbit_input_ingestion_paused
name: the name or alias for the input instance
Indicates whether the input instance ingestion is currently paused (1) or not (0).
gauge
boolean
fluentbit_input_chunks.storage_chunks
None
The total number of chunks of records that Fluent Bit is currently buffering.
gauge
chunks
fluentbit_storage_mem_chunk
None
The total number of chunks that are currently buffered in memory. Chunks can be both in memory and on the file system at the same time.
gauge
chunks
Health_Check
Enable Health check feature
Off
HC_Errors_Count
The error count threshold for meeting the unhealthy requirement. This is a sum across all output plugins within a defined HC_Period. Example output error: [2022/02/16 10:44:10] [ warn] [engine] failed to flush chunk '1-1645008245.491540684.flb', retry in 7 seconds: task_id=0, input=forward.1 > output=cloudwatch_logs.3 (out_id=3)
5
HC_Retry_Failure_Count
The retry failure count threshold for meeting the unhealthy requirement. This is a sum across all output plugins within a defined HC_Period. Example retry failure: [2022/02/16 20:11:36] [ warn] [engine] chunk '1-1645042288.260516436.flb' cannot be retried: task_id=0, input=tcp.3 > output=cloudwatch_logs.1
5
HC_Period
The time period, in seconds, over which error and retry failure data points are counted.
[SERVICE]
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_PORT 2020
[INPUT]
Name cpu
[OUTPUT]
Name stdout
Match *
service:
http_server: on
http_listen: 0.0.0.0
http_port: 2020
pipeline:
inputs:
- name: cpu
outputs:
- name: stdout
match: '*'
fluentbit_output_dropped_records_total
fluentbit_input_records_total
fluentbit_storage_fs_chunks
60
# For YAML configuration.
$ fluent-bit --config fluent-bit.yaml
# For classic configuration.
$ fluent-bit --config fluent-bit.conf
...
[2020/03/10 19:08:24] [ info] [engine] started
[2020/03/10 19:08:24] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
$ curl -s http://127.0.0.1:2020 | jq
{
"fluent-bit": {
"version": "0.13.0",
"edition": "Community",
"flags": [
"FLB_HAVE_TLS",
"FLB_HAVE_METRICS",
"FLB_HAVE_SQLDB",
"FLB_HAVE_TRACE",
"FLB_HAVE_HTTP_SERVER",
"FLB_HAVE_FLUSH_LIBCO",
"FLB_HAVE_SYSTEMD",
"FLB_HAVE_VALGRIND",
"FLB_HAVE_FORK",
"FLB_HAVE_PROXY_GO",
"FLB_HAVE_REGEX",
"FLB_HAVE_C_TLS",
"FLB_HAVE_SETJMP",
"FLB_HAVE_ACCEPT4",
"FLB_HAVE_INOTIFY"
]
}
}
0.5, 1.0, 1.5, 2.5, 5.0, 10.0, 20.0, 30.0, +Inf
# HELP fluentbit_output_latency_seconds End-to-end latency in seconds
# TYPE fluentbit_output_latency_seconds histogram
fluentbit_output_latency_seconds_bucket{le="0.5",input="random.0",output="stdout.0"} 0
fluentbit_output_latency_seconds_bucket{le="1.0",input="random.0",output="stdout.0"} 1
fluentbit_output_latency_seconds_bucket{le="1.5",input="random.0",output="stdout.0"} 6
fluentbit_output_latency_seconds_bucket{le="2.5",input="random.0",output="stdout.0"} 6
fluentbit_output_latency_seconds_bucket{le="5.0",input="random.0",output="stdout.0"} 6
fluentbit_output_latency_seconds_bucket{le="10.0",input="random.0",output="stdout.0"} 6
fluentbit_output_latency_seconds_bucket{le="20.0",input="random.0",output="stdout.0"} 6
fluentbit_output_latency_seconds_bucket{le="30.0",input="random.0",output="stdout.0"} 6
fluentbit_output_latency_seconds_bucket{le="+Inf",input="random.0",output="stdout.0"} 6
fluentbit_output_latency_seconds_sum{input="random.0",output="stdout.0"} 6.0015411376953125
fluentbit_output_latency_seconds_count{input="random.0",output="stdout.0"} 6
# 95th percentile latency
histogram_quantile(0.95, rate(fluentbit_output_latency_seconds_bucket[5m]))
# Average latency
rate(fluentbit_output_latency_seconds_sum[5m]) / rate(fluentbit_output_latency_seconds_count[5m])
# Outputs with highest average latency
topk(5, rate(fluentbit_output_latency_seconds_sum[5m]) / rate(fluentbit_output_latency_seconds_count[5m]))
# Percentage of chunks delivered within 2 seconds
(
rate(fluentbit_output_latency_seconds_bucket{le="2.0"}[5m]) /
rate(fluentbit_output_latency_seconds_count[5m])
) * 100
# Example Prometheus alerting rule
- alert: FluentBitHighLatency
expr: histogram_quantile(0.95, rate(fluentbit_output_latency_seconds_bucket[5m])) > 5
for: 2m
labels:
severity: warning
annotations:
summary: "Fluent Bit pipeline experiencing high latency"
description: "95th percentile latency is {{ $value }}s for {{ $labels.input }} -> {{ $labels.output }}"curl -s http://127.0.0.1:2020/api/v1/uptime | jq{
"uptime_sec": 8950000,
"uptime_hr": "Fluent Bit has been running: 103 days, 14 hours, 6 minutes and 40 seconds"
}
curl -s http://127.0.0.1:2020/api/v1/metrics | jq
{
"input": {
"cpu.0": {
"records": 8,
"bytes": 2536
}
},
"output": {
"stdout.0": {
"proc_records": 5,
"proc_bytes": 1585,
"errors": 0,
"retries": 0,
"retries_failed": 0
}
}
}
curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus
fluentbit_input_records_total{name="cpu.0"} 57 1509150350542
fluentbit_input_bytes_total{name="cpu.0"} 18069 1509150350542
fluentbit_output_proc_records_total{name="stdout.0"} 54 1509150350542
fluentbit_output_proc_bytes_total{name="stdout.0"} 17118 1509150350542
fluentbit_output_errors_total{name="stdout.0"} 0 1509150350542
fluentbit_output_retries_total{name="stdout.0"} 0 1509150350542
fluentbit_output_retries_failed_total{name="stdout.0"} 0 1509150350542service:
http_server: on
http_listen: 0.0.0.0
http_port: 2020
pipeline:
inputs:
- name: cpu
alias: server1_cpu
outputs:
- name: stdout
alias: raw_output
match: '*'
[SERVICE]
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_PORT 2020
[INPUT]
Name cpu
Alias server1_cpu
[OUTPUT]
Name stdout
Alias raw_output
Match *
{
"input": {
"server1_cpu": {
"records": 8,
"bytes": 2536
}
},
"output": {
"raw_output": {
"proc_records": 5,
"proc_bytes": 1585,
"errors": 0,
"retries": 0,
"retries_failed": 0
}
}
}
health status = (HC_Errors_Count > HC_Errors_Count config value) OR
(HC_Retry_Failure_Count > HC_Retry_Failure_Count config value) IN
the HC_Period interval
service:
http_server: on
http_listen: 0.0.0.0
http_port: 2020
health_check: on
hc_errors_count: 5
hc_retry_failure_count: 5
hc_period: 5
pipeline:
inputs:
- name: cpu
outputs:
- name: stdout
match: '*'
[SERVICE]
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_PORT 2020
Health_Check On
HC_Errors_Count 5
HC_Retry_Failure_Count 5
HC_Period 5
[INPUT]
Name cpu
[OUTPUT]
Name stdout
Match *
curl -s http://127.0.0.1:2020/api/v1/health
Health status = (HC_Errors_Count > 5) OR (HC_Retry_Failure_Count > 5) IN 5 seconds
Fluent Bit container images are available on Docker Hub, ready for production usage. The current available images can be deployed in multiple architectures.
Use the following command to start Docker with Fluent Bit:
Use the following command to start Fluent Bit while using a configuration file:
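A minimal sketch of that command, assuming a fluent-bit.yaml file in the current directory; the mount path mirrors the container's default configuration location and might need adjusting for your setup:
# Mount a local configuration file and tell Fluent Bit to use it.
docker run -ti -v $(pwd)/fluent-bit.yaml:/fluent-bit/etc/fluent-bit.yaml \
    cr.fluentbit.io/fluent/fluent-bit \
    -c /fluent-bit/etc/fluent-bit.yaml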
The following table describes the Linux container tags that are available on Docker Hub repository:
It's strongly suggested that you always use the latest image of Fluent Bit.
Container images for Windows Server 2019 and Windows Server 2022 are provided for v2.0.6 and later. These can be found as tags on the same Docker Hub registry.
Fluent Bit production stable images are based on Distroless. Focusing on security, these images contain only the Fluent Bit binary, minimal system libraries, and basic configuration.
Debug images are available for all architectures (for 1.9.0 and later), and contain a full Debian shell and package manager that can be used to troubleshoot or for testing purposes.
From a deployment perspective, there's no need to specify an architecture. The container client tool that pulls the image gets the proper layer for the running architecture.
Version 1.9 and 2.0 container images are signed using Cosign/Sigstore. Verify these signatures using cosign:
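A sketch of a keyed verification, assuming the Fluent Bit release public key has already been downloaded and saved as fluent-bit.pub (a hypothetical local file name), using a published tag:
# Verify an image signature against a locally saved public key (file name hypothetical).
cosign verify --key fluent-bit.pub fluent/fluent-bit:2.0.6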
Replace cosign with the binary installed if it has a different name (for example, cosign-linux-amd64).
Keyless signing is also provided but is still experimental:
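A sketch of keyless verification as supported by older cosign releases; the COSIGN_EXPERIMENTAL variable is explained in the following note:
COSIGN_EXPERIMENTAL=1 cosign verify fluent/fluent-bit:2.0.6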
COSIGN_EXPERIMENTAL=1 is used to allow verification of images signed in keyless mode. To learn more about keyless signing, see the documentation.
Download the latest stable image from the 2.0 series:
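For example, assuming the 2.0 series alias tag on Docker Hub:
docker pull fluent/fluent-bit:2.0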
After the image is in place, run the following test which makes Fluent Bit measure CPU usage by the container:
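A sketch of that test, using the cpu input and stdout output with a one-second flush interval:
# -i cpu: collect CPU usage; -o stdout: print results; -f 1: flush every second.
docker run -ti fluent/fluent-bit:2.0 \
    /fluent-bit/bin/fluent-bit -i cpu -o stdout -f 1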
That command makes Fluent Bit measure CPU usage every second and flush the results to the standard output.
Alpine Linux uses Musl C library instead of Glibc. Musl isn't fully compatible with Glibc, which generated many issues in the following areas when used with Fluent Bit:
Memory Allocator: To run properly in high-load environments, Fluent Bit uses Jemalloc as a default memory allocator which reduces fragmentation and provides better performance. Jemalloc can't run smoothly with Musl and requires extra work.
The Alpine Linux Musl function bootstrap has a compatibility issue when loading Golang shared libraries. This causes problems when trying to load Golang output plugins in Fluent Bit.
Alpine Linux Musl Time format parser doesn't support Glibc extensions.
The Fluent Bit maintainers prefer Distroless and Debian base images for security and maintenance reasons.
The reasons for using distroless are well covered elsewhere. In summary:
Include only what you need, reduce the attack surface available.
Reduces size and improves performance.
Reduces false positives on scans (and reduces resources required for scanning).
Reduces supply chain security requirements to only what you need.
With any choice, there are downsides:
No shell or package manager to update or add things.
Generally, dynamic updating is a bad idea in containers because the time at which it's done affects the outcome: two containers started at different times using the same base image can perform differently or get different dependencies.
A better approach is to rebuild a new image version. You can do this with Distroless, but it's harder and requires multistage builds or similar to provide the new dependencies.
Using exec to access a container will potentially impact resource limits.
For debugging, debug containers are available now in K8S:
This can be a significantly different container from the one you want to investigate, with lots of extra tools or even a different base.
No resource limits applied to this container, which can be good or bad.
Runs in pod namespaces. It's another container that can access everything the others can.
Might need the pod's architecture to share volumes or other information.
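A sketch of attaching an ephemeral debug container to a running pod; the pod name is hypothetical and the debug image tag comes from the table below:
# Attach an ephemeral debug container with a shell to the target pod.
kubectl debug my-fluent-bit-pod -it --image=fluent/fluent-bit:4.1.1-debug -- sh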
docker run -ti cr.fluentbit.io/fluent/fluent-bit
4.1.1-debug
| Tag | Manifest architectures | Description |
| --- | ---------------------- | ----------- |
| 4.2.0-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.2.0 | x86_64, arm64v8, arm32v7, s390x | Release v4.2.0 |
| 4.1.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.1.2 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.1.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.1.1 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.1.0-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.1.0 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.12-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.12 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.11-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.11 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.10-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.10 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.9-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.9 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.8-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.8 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.7-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.7 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.6-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.6 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.5-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.5 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.4-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.4 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.3-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.3 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.1 | x86_64, arm64v8, arm32v7, s390x | Release |
| 4.0.0-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.0 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.10-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.10 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.9-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.9 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.8-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.8 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.7-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.7 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.6-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.6 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.5-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.5 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.4-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.4 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.3-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.3 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.2 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.2.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.2.1 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.10-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.10 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.9-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.9 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.8-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.8 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.7-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.7 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.6-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.6 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.5-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.5 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.4-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.4 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.3-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.3 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.2 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.1 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.1.0-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.1.0 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.7-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.7 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.6-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.6 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.5-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.5 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.4-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.4 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.3-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.3 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.2 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.1 | x86_64, arm64v8, arm32v7, s390x | Release |
| 3.0.0-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 3.0.0 | x86_64, arm64v8, arm32v7, s390x | Release |
| 2.2.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 2.2.2 | x86_64, arm64v8, arm32v7, s390x | Release |
| 2.2.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 2.2.1 | x86_64, arm64v8, arm32v7, s390x | Release |
| 2.2.0-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.2.0 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.10-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.10 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.9-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.9 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.8-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.8 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.7-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.7 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.6-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.6 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.5 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.5-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.3 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.3-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.2 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.2-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.1 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.1-debug | x86_64, arm64v8, arm32v7 | v2.1.x releases (production + debug) |
| 2.1.0 | x86_64, arm64v8, arm32v7 | Release |
| 2.1.0-debug | x86_64, arm64v8, arm32v7 | v2.1.x releases (production + debug) |
| 2.0.11 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.11-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.10 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.10-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.9 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.9-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.8 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.8-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.6 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.6-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.5 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.5-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.4 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.4-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.3 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.3-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.2 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.2-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.1 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.1-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 2.0.0 | x86_64, arm64v8, arm32v7 | Release |
| 2.0.0-debug | x86_64, arm64v8, arm32v7 | v2.0.x releases (production + debug) |
| 1.9.9 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.9-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.8 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.8-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.7 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.7-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.6 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.6-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.5 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.5-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.4 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.4-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.3 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.3-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.2 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.2-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.1 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.1-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
| 1.9.0 | x86_64, arm64v8, arm32v7 | Release |
| 1.9.0-debug | x86_64, arm64v8, arm32v7 | v1.9.x releases (production + debug) |
Distroless containers offer several benefits:

- They help prevent unauthorized processes or users from interacting with the container.
- There's less need to harden the container (and the container runtime, Kubernetes, and so on).
- Faster CI/CD processes.

There are also trade-offs:

- Applications must be set up to properly expose information for debugging, rather than relying on traditional debugging approaches of connecting to processes or dumping memory. This is an upfront cost rather than a runtime cost, but it shifts work left in the development process, so it's hopefully a net reduction overall.
- Don't assume Distroless is secure: nothing is secure, and exploits still exist, so it doesn't remove the need to secure your system.
- Sometimes you need to use a common base image, such as for audits, security, health checks, and so on.
- Ephemeral debug containers require more recent versions of Kubernetes and the container runtime, plus RBAC that allows them (see the check after this list).
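Because ephemeral containers are a separate pod subresource, you can check ahead of time whether your RBAC permits them. A small sketch (the namespace is a hypothetical placeholder):

```shell
# Ephemeral debug containers are created by updating the
# pods/ephemeralcontainers subresource, so that's what RBAC must permit.
kubectl auth can-i update pods/ephemeralcontainers --namespace logging
```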
To use your own configuration, mount it over the default configuration file:

```shell
docker run -ti -v ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf \
  cr.fluentbit.io/fluent/fluent-bit
```

Or mount a YAML configuration file and point Fluent Bit at it with `-c`:

```shell
docker run -ti -v ./fluent-bit.yaml:/fluent-bit/etc/fluent-bit.yaml \
  cr.fluentbit.io/fluent/fluent-bit \
  -c /fluent-bit/etc/fluent-bit.yaml
```
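As a minimal sketch of the kind of YAML file you might mount here (a hypothetical pipeline that reads CPU metrics and writes them to standard output):

```yaml
# fluent-bit.yaml: a minimal, hypothetical pipeline
service:
  flush: 1
  log_level: info
pipeline:
  inputs:
    - name: cpu
  outputs:
    - name: stdout
      match: '*'
```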
Container images are signed with cosign. To verify a release against the Fluent Bit public key:

```shell
$ cosign verify --key "https://packages.fluentbit.io/fluentbit-cosign.pub" fluent/fluent-bit:2.0.6

Verification for index.docker.io/fluent/fluent-bit:2.0.6 --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key

[{"critical":{"identity":{"docker-reference":"index.docker.io/fluent/fluent-bit"},"image":{"docker-manifest-digest":"sha256:c740f90b07f42823d4ecf4d5e168f32ffb4b8bcd87bc41df8f5e3d14e8272903"},"type":"cosign container image signature"},"optional":{"release":"2.0.6","repo":"fluent/fluent-bit","workflow":"Release from staging"}}]
```

Keyless verification is also possible using cosign's experimental mode:

```shell
COSIGN_EXPERIMENTAL=1 cosign verify fluent/fluent-bit:2.0.6
```

To try an image, pull a tag and run it with a simple pipeline that collects CPU usage metrics and prints them to standard output every second:

```shell
docker pull cr.fluentbit.io/fluent/fluent-bit:2.0
docker run -ti cr.fluentbit.io/fluent/fluent-bit:2.0 \
  -i cpu -o stdout -f 1
```

The output looks similar to:

```text
[2019/10/01 12:29:02] [ info] [engine] started
[0] cpu.0: [1504290543.000487750, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]
```
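Once a tag verifies, you might pin deployments to the exact digest reported in the signature payload rather than a mutable tag. A minimal sketch, using the `docker-manifest-digest` value from the verification output above:

```shell
# Pull by immutable digest instead of a mutable tag, so the image
# can't change underneath you after verification.
docker pull fluent/fluent-bit@sha256:c740f90b07f42823d4ecf4d5e168f32ffb4b8bcd87bc41df8f5e3d14e8272903
```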
[0] cpu.0: [1504290543.000487750, {"cpu_p"=>0.750000, "user_p"=>0.250000, "system_p"=>0.500000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>1.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>1.000000, "cpu2.p_cpu"=>1.000000, "cpu2.p_user"=>1.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>0.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>0.000000}]