syslog
Here are 453 public repositories matching this topic...
I have an application which receives log messages from a firewall. The logs are written into MongoDB. My goal is to process 30,000 messages per second (more or less constantly, 24/7, not as a transient peak value).
As a peak value I expect approximately 50,000 messages per second.
With several settings I reached up to 20,000 msg/sec, but that is not sufficient for our live traffic. The MongoDB ho…
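One common lever for raising per-second insert throughput, independent of the reporter's actual setup, is client-side batching: accumulate documents and hand them to the database in one bulk call instead of one round trip per message. The sketch below is purely illustrative; `BatchWriter` and `flush` are hypothetical names, and `flush` stands in for a real bulk call such as a MongoDB `insert_many`.

```python
# Hedged sketch of client-side batching for write throughput.
# Assumption: one bulk write of N documents is far cheaper than N
# individual writes. `flush` is a hypothetical stand-in for the real
# database call (e.g. collection.insert_many).

class BatchWriter:
    def __init__(self, flush, batch_size=1000):
        self.flush = flush            # callable taking a list of documents
        self.batch_size = batch_size
        self.buffer = []

    def write(self, doc):
        self.buffer.append(doc)
        if len(self.buffer) >= self.batch_size:
            self.drain()

    def drain(self):
        # Flush whatever is buffered, including a partial tail batch.
        if self.buffer:
            self.flush(self.buffer)
            self.buffer = []

batches = []
writer = BatchWriter(batches.append, batch_size=3)
for i in range(7):
    writer.write({"msg": i})
writer.drain()
# batches now holds groups of sizes 3, 3, 1
```

In a real deployment the batch size and a time-based flush (so quiet periods still drain the buffer) would both need tuning against the sustained 30k msg/sec target.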
Currently, when rsyslog starts, it checks whether a pidfile exists, and if one exists, rsyslog refuses to start.
However, if rsyslog crashes or is killed with -9, it does not have a chance to remove the pidfile, and so a replacement cannot be started.
As an enhancement, rather than depending only on the existence of a pid file, rsyslog should read the pid file and check to see if the
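The usual way to implement the check the enhancement asks for is to read the pid out of the file and probe it with signal 0, which tests for process existence without delivering anything. A minimal sketch of that idea (not rsyslog's actual code; function names are hypothetical):

```python
import os

def pid_is_running(pid: int) -> bool:
    """Return True if a process with this pid exists.

    Sending signal 0 performs only the existence/permission check;
    EPERM means the process exists but belongs to someone else.
    """
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True
    return True

def pidfile_is_stale(path: str) -> bool:
    """True if the pidfile names a pid that is no longer running."""
    try:
        with open(path) as f:
            pid = int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return False   # no file (or garbage contents): nothing to call stale
    return not pid_is_running(pid)
```

A daemon using this would, on startup, remove a pidfile that `pidfile_is_stale` reports as stale instead of refusing to start. Note the residual race: the pid could have been recycled by an unrelated process, which is why some daemons also hold a lock on the pidfile.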
Unlike all the existing charts, which graph a string (most likely) against a number, this graphs numbers on both axes. So in addition to the configuration changes needed for passing the right field to chartjs, the PR for this should also change the "preferred type" to "number" for the x axis when the chart type is scatter plot.
I hope the author can provide the method described above.
Add DataDog webhook


I have noticed when ingesting backlog (older timestamped data) that the "Messages per minute" line graph and the "sources" data do not line up.
The messages per minute appear to be correct for the ingest rate, but the sources breakdown below it only shows messages for each type that fall within the time window by timestamp. This means that in the last hour, if you've ingested logs from 2 days ago, the data is
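The mismatch described above comes down to which clock each panel uses: an ingest-rate graph counts messages by arrival time, while a timestamp-filtered breakdown counts only messages whose embedded timestamp falls inside the window. A small sketch with hypothetical data (not the dashboard's actual code) makes the divergence during a backlog ingest concrete:

```python
from datetime import datetime, timedelta

now = datetime(2022, 4, 22, 12, 0)

# Each record: (arrival_time, embedded_timestamp, source).
# Backlog ingest: everything arrives "now", but two of the embedded
# timestamps are two days old.
records = [
    (now, now - timedelta(days=2), "firewall"),
    (now, now - timedelta(days=2), "firewall"),
    (now, now, "router"),
]

window_start = now - timedelta(hours=1)

# "Messages per minute" style count: bucket by arrival time.
by_arrival = sum(1 for arr, _, _ in records if arr >= window_start)

# "Sources" style count: bucket by the message's own timestamp.
by_timestamp = sum(1 for _, ts, _ in records if ts >= window_start)

print(by_arrival, by_timestamp)   # all 3 arrived, but only 1 is "in window"
```

Under these assumptions the ingest graph reports 3 messages for the last hour while the timestamp-filtered breakdown reports 1, which is exactly the kind of disagreement the report describes.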