Kafka clusters offer several points for monitoring. At the network level, you can monitor connections between Kafka nodes, ZooKeeper, and clients. At the host level, you can monitor Kafka's resource usage, such as CPU, memory, and disk. And Kafka itself provides log files, an API to query offsets, and JMX support for monitoring internal process metrics.
In this blog post, the first in a series that shows you how to use Beats to monitor a Kafka cluster, we'll focus on collecting and parsing Kafka logs with Filebeat and Elasticsearch Ingest Node. After indexing the Kafka logs into Elasticsearch, we'll finish by building Kibana dashboards to visualize the data.
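To give a flavor of the Filebeat side before we dig in, here is a minimal sketch of a Filebeat 5.x configuration that tails Kafka's server log and forwards it through an ingest pipeline. The log path and the pipeline name are assumptions — adjust them to your installation:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/kafka/server.log*   # assumed log location; depends on your Kafka install
  multiline:
    # Kafka log lines start with a bracketed timestamp, e.g. "[2016-12-19 ...]".
    # Lines that don't are stack-trace continuations and get appended to the
    # previous event.
    pattern: '^\['
    negate: true
    match: after

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: kafka-logs             # hypothetical ingest pipeline name
```

The multiline settings matter for Kafka in particular: without them, every line of a Java stack trace would be indexed as a separate event.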
This blog post is based on the Elastic Stack version 5.1.1. All configuration files, dashboards, and sample log files can be found on GitHub.
Our setup consists of a Kafka cluster of three nodes, named kafka0, kafka1, and kafka2. Each node runs Kafka version 0.10.1 and a set of Beats that monitor the node itself. The Beats send everything they collect to Elasticsearch. For visualization we will use Kibana.
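Before Kibana can visualize anything, Elasticsearch Ingest Node needs to parse the raw Kafka log lines into structured fields. Below is a hedged sketch of such a pipeline; the grok pattern assumes Kafka's default log4j layout (`[timestamp] LEVEL message (component)`), and both the pipeline and field names are illustrative, not the exact ones used later in this series:

```json
PUT _ingest/pipeline/kafka-logs
{
  "description": "Sketch: parse Kafka server log lines into structured fields",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "\\[%{TIMESTAMP_ISO8601:kafka.log.timestamp}\\] %{LOGLEVEL:kafka.log.level} %{GREEDYDATA:kafka.log.message} \\(%{DATA:kafka.log.component}\\)"
        ]
      }
    },
    {
      "date": {
        "field": "kafka.log.timestamp",
        "formats": ["yyyy-MM-dd HH:mm:ss,SSS"],
        "target_field": "@timestamp"
      }
    }
  ]
}
```

The date processor maps Kafka's log timestamp onto `@timestamp`, so events are ordered by when Kafka logged them rather than when Filebeat shipped them.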
Meanwhile, producers and consumer groups are actively using the Kafka cluster.