How to set up Kafka, Kafka Connect, and Zookeeper using Docker Compose.

Anand Varne
4 min read · Jun 28, 2020


At Digite, we recently completed a use case for our applications (SwiftKanban and SwiftEnterprise) to provide push-based notifications to various collaboration platforms, such as Microsoft Teams and Slack, with the SLO of reducing the load that pull-based requests place on our APIs. After a detailed exploration, we found that Apache Kafka satisfies this purpose. In this article I list the steps we took to set up Kafka, Kafka Connect, and Zookeeper using Docker Compose.

Introduction

  1. Kafka: Kafka is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.
  2. Kafka Connect: Kafka Connect is a tool that comes with Kafka that imports/exports data to Kafka. It is an extensible tool that runs connectors, which implement the custom logic for interacting with an external system. In this quickstart, we’ll see how to run Kafka Connect with simple connectors that import data from a file to a Kafka topic and export data from a Kafka topic to a file.
  3. Zookeeper: Zookeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.

Prerequisite and Assumption

Linux OS with Docker and Docker-compose installed on it.

Steps to write docker-compose

  1. Create a docker-compose.yml file in your directory.
  2. Create a zookeeper service with basic configuration in docker-compose.yml.
Zookeeper service code snippet
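
A minimal sketch of the Zookeeper service (the original article showed this as a screenshot; the image tag, version, and port here are assumptions — adapt them to your environment):

```yaml
version: "3"
services:
  zookeeper:
    image: bitnami/zookeeper:latest
    ports:
      - "2181:2181"
    environment:
      # Allow unauthenticated clients to connect (see the note below)
      - ALLOW_ANONYMOUS_LOGIN=yes
```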

In this snippet, I set the ALLOW_ANONYMOUS_LOGIN environment variable to yes so that unauthenticated clients can connect. If you have authorization enabled for Zookeeper, you can skip this property and connect to Zookeeper with credentials. See the Zookeeper configuration documentation for more details.

3. Then we have to create a Kafka service.

Kafka service code snippet
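
A sketch of what the Kafka service might look like (again, the image, version, ports, and host paths are assumptions, not the article's exact configuration):

```yaml
  kafka:
    image: bitnami/kafka:latest
    ports:
      - "9092:9092"
    environment:
      # Reach Zookeeper via its compose service name and exposed port
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      # Write broker logs to a path we can mount to the host
      - KAFKA_CFG_LOG_DIRS=/opt/bitnami/kafka/logs
      - ALLOW_PLAINTEXT_LISTENER=yes
    volumes:
      # Hypothetical host paths -- adjust to your environment
      - /data/kafka/server.properties:/opt/bitnami/kafka/config/server.properties
      - /data/kafka/logs:/opt/bitnami/kafka/logs
    depends_on:
      - zookeeper
```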
  • The bitnami Kafka image looks for configuration files (server.properties, log4j.properties, etc.) in the /bitnami/kafka/config/ directory. This directory can be changed by setting the KAFKA_MOUNTED_CONF_DIR environment variable. Alternatively, you can mount a file directly, in the form <path-to-server.properties-on-host>:/opt/bitnami/kafka/config/server.properties, e.g. /data/kafka/server.properties:/opt/bitnami/kafka/config/server.properties.
  • To access the Kafka service log from the host server, set the KAFKA_CFG_LOG_DIRS environment variable to a specific path and mount that folder to the host server, as in the Kafka service snippet above.

NOTE: As this is a non-root container, the mounted files and directories must have the proper permissions for the UID 1001.

  • KAFKA_CFG_ZOOKEEPER_CONNECT tells Kafka how to reach the Zookeeper service. Since we are using Docker Compose, you can use the Zookeeper service name and its exposed port directly, e.g. zookeeper:2181.
  • See the Kafka configuration environment variables documentation for reference.
  • Do not forget to map the Kafka container port to the host machine’s port.

4. After creating the Kafka service, we need to create the Kafka Connect service.

Kafka connect service code snippet
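
A sketch of the Kafka Connect service, reusing the same image as the Kafka service and overriding its entrypoint (the port and host path are assumptions):

```yaml
  kafka_connect:
    image: bitnami/kafka:latest
    # Run Kafka Connect in distributed mode instead of the default broker entrypoint
    command: /opt/bitnami/kafka/bin/connect-distributed.sh /opt/bitnami/kafka/config/connect-distributed.properties
    ports:
      - "8083:8083"
    volumes:
      # Hypothetical host path -- adjust to your environment
      - /data/kafka/connect-distributed.properties:/opt/bitnami/kafka/config/connect-distributed.properties
    depends_on:
      - kafka
```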

Here we use the same Docker image as for Kafka. The difference is that we execute the connect-distributed.sh script instead of the default entrypoint of the bitnami Kafka image; this is done via the command field in the service definition.

For this, you need to map your connect-distributed.properties file to the container's connect-distributed.properties using a Docker volume mount.

5. Your full docker-compose file then looks like the one below.
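
Putting the pieces above together, the complete file might look like this sketch (image tags, ports, and host paths are assumptions — adjust them to your setup):

```yaml
version: "3"
services:
  zookeeper:
    image: bitnami/zookeeper:latest
    ports:
      - "2181:2181"
    environment:
      # Allow unauthenticated clients (skip if authorization is enabled)
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: bitnami/kafka:latest
    ports:
      - "9092:9092"
    environment:
      # Reach Zookeeper via its compose service name
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CFG_LOG_DIRS=/opt/bitnami/kafka/logs
      - ALLOW_PLAINTEXT_LISTENER=yes
    volumes:
      # Hypothetical host paths -- adjust to your environment
      - /data/kafka/server.properties:/opt/bitnami/kafka/config/server.properties
      - /data/kafka/logs:/opt/bitnami/kafka/logs
    depends_on:
      - zookeeper

  kafka_connect:
    image: bitnami/kafka:latest
    # Start Kafka Connect in distributed mode instead of a broker
    command: /opt/bitnami/kafka/bin/connect-distributed.sh /opt/bitnami/kafka/config/connect-distributed.properties
    ports:
      - "8083:8083"
    volumes:
      - /data/kafka/connect-distributed.properties:/opt/bitnami/kafka/config/connect-distributed.properties
    depends_on:
      - kafka
```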

6. Now navigate to the directory where the docker-compose file is present and execute docker-compose up -d. This will start your containers in detached (daemon) mode.

fig. 1 Zookeeper container logs after the successful start

To check the Zookeeper container logs, execute docker-compose logs -f zookeeper. Refer to the container log snippet above for a successful start of the Zookeeper container.

fig. 2 Kafka container logs after the successful start

To check the Kafka container logs, execute docker-compose logs -f kafka. Refer to the container log snippet above for a successful start of the Kafka container.

fig. 3 Kafka connect container logs after the successful start

To check the Kafka Connect container logs, execute docker-compose logs -f kafka_connect. Refer to the container log snippet above for a successful start of the Kafka Connect container.

fig. 4 Test service using telnet command

You can validate the services using the telnet command, e.g. telnet localhost 2181 for Zookeeper.

We have successfully implemented this at Digite, Inc.

