A Cloud Guru Hands-on Lab
Creating a Kafka Cluster with Confluent

In this hands-on lab, we will install and configure a three-broker Kafka cluster using Confluent Community. We will start with three plain Ubuntu servers and build a working Kafka cluster. Kafka is a powerful tool for messaging and data stream processing, and Confluent adds features on top of it while simplifying much of the installation process.


Path Info

Level: Intermediate
Duration: 30m
Published: Oct 18, 2019


Table of Contents

  1. Challenge

    Install the Confluent Community Package on the Broker Nodes

    1. On all three nodes, add the GNU Privacy Guard (GPG) key and the package repository, then install Confluent Community and Java. The format should look like this:
    wget -qO - | sudo apt-key add -
    sudo add-apt-repository "deb [arch=amd64] stable main"
    sudo apt-get update && sudo apt-get install -y openjdk-8-jre-headless confluent-community-2.12
  2. Challenge

    Configure Zookeeper

    1. On all three nodes, edit the hosts file:
    sudo vi /etc/hosts
    2. Add entries to the hosts file mapping each server's private IP address to the hostnames zoo1, zoo2, and zoo3.
    3. Edit the Zookeeper config file:
    sudo vi /etc/kafka/
    4. Delete the contents of the config file and add the following:
    5. Set up the Zookeeper ID for each server:
    sudo vi /var/lib/zookeeper/myid
    6. On each server, set the contents of /var/lib/zookeeper/myid to the server's ID: enter 1 on Node 1, 2 on Node 2, and 3 on Node 3:
    <server id 1, 2, or 3>
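    The Zookeeper configuration referenced in step 4 was not captured above. As a sketch, a typical three-node ensemble config for this kind of setup looks like the following (ports 2888 and 3888 are Zookeeper's conventional quorum and leader-election ports; the exact values used in the original lab may differ):

    ```
    # Sketch of a three-node Zookeeper ensemble config (values are assumptions)
    tickTime=2000
    dataDir=/var/lib/zookeeper/
    clientPort=2181
    initLimit=5
    syncLimit=2
    # One entry per ensemble member: <hostname>:<quorum port>:<election port>
    server.1=zoo1:2888:3888
    server.2=zoo2:2888:3888
    server.3=zoo3:2888:3888
    ```

    The server.N entries are what tie the ensemble together: N must match the ID each node wrote to /var/lib/zookeeper/myid.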
  3. Challenge

    Configure Kafka

    1. Edit the Kafka config file:
    sudo vi /etc/kafka/
    2. Edit the broker.id and zookeeper.connect settings in the config file. Set the broker ID to the appropriate ID for each server (1 on Node 1, 2 on Node 2, and so on).

    3. Set zookeeper.connect to point to the Zookeeper ensemble: zoo1, zoo2, and zoo3, each on port 2181.
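    As a sketch, the relevant lines of the Kafka config on Node 1 would look like this (assuming a three-entry Zookeeper connection string; the other nodes change only broker.id):

    ```
    # Unique ID for this broker: 1 on Node 1, 2 on Node 2, 3 on Node 3
    broker.id=1
    # Comma-separated list of the Zookeeper ensemble members
    zookeeper.connect=zoo1:2181,zoo2:2181,zoo3:2181
    ```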
  4. Challenge

    Start Zookeeper and Kafka

    1. Start and enable the Zookeeper and Kafka services:
    sudo systemctl start confluent-zookeeper
    sudo systemctl enable confluent-zookeeper
    sudo systemctl start confluent-kafka
    sudo systemctl enable confluent-kafka
    2. Both services should be active (running) on all three servers. Check the services to make sure they are running:
    sudo systemctl status confluent*
    3. We can test our cluster by listing the current topics:
    kafka-topics --list --bootstrap-server localhost:9092
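    To go a step further and verify replication across the cluster, we can create a topic that spans all three brokers and then describe it (the topic name test-topic is just an example, not part of the original lab):

    ```shell
    # Create a topic with one partition replicated to all three brokers
    kafka-topics --create \
      --topic test-topic \
      --partitions 1 \
      --replication-factor 3 \
      --bootstrap-server localhost:9092

    # Describe the topic to confirm the leader, replicas, and ISR span all three brokers
    kafka-topics --describe \
      --topic test-topic \
      --bootstrap-server localhost:9092
    ```

    A replication factor of 3 only succeeds if all three brokers have registered with Zookeeper, so a successful create is itself a good health check.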

    The command should complete without errors. On a brand-new cluster the topic list will be empty or contain only internal topics.
