I just deployed the #ELK stack to demonstrate log analysis to an audience of IT Ops engineers 👨‍💻. It was easy to ingest #MongoDB logs, process them through the #LogStash JSON filter, and get #Kibana to show a count of connections from each client node. Before the demo, I had also looked at #FluentD, which Red Hat uses in #OpenShift as part of the #EFK stack.
#elk #mongodb #logstash #kibana #fluentd #openshift #efk
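For anyone curious, the Logstash side of that demo can be sketched roughly like this. This is a hypothetical minimal pipeline, not the exact config from the demo; the log path and hosts are assumptions, and it relies on mongod (4.4+) writing one JSON document per log line:

```
# Hedged sketch: parse MongoDB's JSON log lines so Kibana can
# aggregate connections per client. Path and hosts are assumptions.
input {
  file { path => "/var/log/mongodb/mongod.log" }
}
filter {
  json { source => "message" }   # mongod >= 4.4 emits structured JSON logs
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```

With the JSON fields indexed, a Kibana terms aggregation on the client address field gives the per-node connection count.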
@technoprenerd I definitely recommend #fluentbit if you are new to the world of log transport. Learning #fluentbit and #fluentd is hugely beneficial to being able to gather data from systems, applications, and operations at scale. If you run into any issues, I'm in the #Matrix and #Libera channels for it and there's an official Slack org for it, too.
#libera #matrix #fluentd #fluentbit
@technoprenerd #Fluentd/ #fluentbit paired with #OpenSearch.
#opensearch #fluentbit #fluentd
@vwbusguy What's your preferred method for installing #Fluentd plugins with #Ansible ?
For example, I'd like to add the #systemd journal plugin.
https://github.com/One-com/fluent-plugin-journal-parser
@markstos #fluentd is definitely more powerful, but it's also more complex and it helps to know at least a little #ruby. You run the exact same agent on the client as the aggregation servers and it all depends on the config. #fluentbit is lightweight with a much simpler config. If you're just getting started, I'd definitely recommend trying #fluentbit first at this point.
@markstos Run it on every server, though #fluentbit is now more common than #fluentd in our deployments. It's deployed and managed via #Ansible AWX.
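For plugin installs specifically, a task built on `ansible.builtin.gem` is one reasonable sketch — not necessarily how it's done in AWX here. The `executable` path is an assumption and varies by package (td-agent ships `td-agent-gem`; fluent-package ships `fluent-gem`):

```yaml
# Hypothetical sketch: install a Fluentd plugin with the bundled gem tool.
# Adjust `executable` to wherever your package puts fluent-gem.
- name: Install fluent-plugin-journal-parser
  ansible.builtin.gem:
    name: fluent-plugin-journal-parser
    state: present
    executable: /usr/sbin/fluent-gem
    user_install: false
  notify: restart fluentd   # assumes a handler of this name exists
```

Using the bundled gem tool rather than the system `gem` keeps the plugin inside Fluentd's own Ruby environment.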
@vwbusguy Do you run #fluentd on every server or do you use something like systemd-journal-upload to upload to centralized fluentd instances?
https://www.freedesktop.org/software/systemd/man/systemd-journal-upload.html
@markstos I'm the mod/owner of the #fluentd channel on #Matrix and @liberachat, by the way, so feel free to reach out there as well. There's also an official Slack channel for the project if you prefer that. To be clear, I don't work for CNCF, TD, etc., I'm just a community member who's been using it for years and wants to help out where I can.
@vwbusguy Even after I rewrote my fluent-bit config to not send its own logs to CloudWatch, it still took the server into a death spiral after a few days of running, just like before. So I'll be trying #fluentd instead of #fluentbit!
🐻 #Bearcheology: #Fluentd: for better log management: https://bearstech.com/societe/blog/normaliser-les-logs/
And I've lost about 1 hour figuring out why a perfectly valid certificate was giving "expired certificate" errors. Old CA certificates because of a 3-year old #Docker image, that's why...
#Fluentd and ELK/EFK stack explained https://www.youtube.com/watch?v=JZ7J0eSrTbA
Been reading some parts of fluentd's documentation and doing some testing. It seems very nice. #fluentd
Ha! Finally figured out what the problem with my #fluentd logs was: I had an infinite loop in there.
I basically had a match on "service1.task1.**" which added different tags to different log types, like this:
tag += "typeA"
Those then ended up as "service1.task1.typeA", which matched the initial "service1.task1.**" filter again and produced infinite recursion and a stack overflow.
One more bug fixed. 🎉
#fluentd #homelab #100DaysOfHomelab #selfhosted
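For posterity, the shape of the bug and fix in rewrite_tag_filter terms. The field name below is hypothetical; the pattern is the real one from the post:

```
# BROKEN: the rewritten tag service1.task1.typeA still matches
# service1.task1.**, so the event re-enters this block forever.
<match service1.task1.**>
  @type rewrite_tag_filter
  <rule>
    key log_type          # hypothetical field carrying the log type
    pattern /^typeA$/
    tag ${tag}.typeA      # infinite recursion
  </rule>
</match>

# FIXED: a prefix pushes the event outside the original pattern,
# e.g.  tag typed.${tag}.typeA
```

The general rule: a rewritten tag must never match the pattern that produced it.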
And now back to my Fluentd woes. I'm getting a "stack too deep" error, and it occurs in the Forward Input plugin.
I can also reliably reproduce it by accessing my Nextcloud instance.
The stack trace seems to point to the tag rewrite plugin, but sadly I haven't yet gotten Fluentd to show me the problematic log record.
One side note, though: it seems to only crash a single thread. Fluentd itself still works fine, which is nice.
I'm amazed at how far #fluentbit has come, even in the past year. I honestly feel like I'm at the point of defaulting to it over full #fluentd now.