Apache Kafka Tutorial for Beginners | Part 1


Apache Kafka



Part 1: Kafka Introduction

Apache Kafka is a distributed publish-subscribe messaging system that receives data from different source applications and makes the data available to target applications in real time.

Kafka is written in Scala and Java and is often associated with real-time event stream processing for big data.

What is the need for Apache Kafka?



Without a message broker, applications are tightly coupled: the source application has to know about and connect to every target application directly.
This leads to redundant integration code in the parent application, i.e. the website application.
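
To make the problem concrete, here is a small hypothetical sketch (in Java) of the tightly coupled design. The class names EmailService, AnalyticsService and TightlyCoupledWebsite, and the order-placed flow, are invented purely for illustration; none of this is Kafka code.

// Hypothetical stand-ins for two target applications.
class EmailService {
    void sendConfirmation(String order) { System.out.println("Email sent for " + order); }
}

class AnalyticsService {
    void recordOrder(String order) { System.out.println("Analytics recorded " + order); }
}

// The parent (website) application has to call every target directly.
public class TightlyCoupledWebsite {
    private final EmailService emailService = new EmailService();
    private final AnalyticsService analyticsService = new AnalyticsService();

    // Every new target application means one more direct call here (redundant
    // integration code), and a failure in any target can break the website flow.
    public void onOrderPlaced(String orderJson) {
        emailService.sendConfirmation(orderJson);
        analyticsService.recordOrder(orderJson);
    }

    public static void main(String[] args) {
        new TightlyCoupledWebsite().onOrderPlaced("{\"orderId\": 42}");
    }
}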


How Kafka Helps Solve This




  1. The website acts as the producer, publishing the information/message to the message broker.
  2. The website's job is complete once the message has been published to the message broker.
  3. Consumers that are interested in the message/information connect to the message broker to fetch and process it (see the sketch after this list).
  4. Now all the applications are decoupled from one another.
  5. You can add as many consumers as you wish.
  6. Some of the other message brokers in the market are RabbitMQ, ActiveMQ, IBM MQ, etc.
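
Below is a minimal sketch of this flow using Kafka's Java client (the kafka-clients library). It assumes a broker running on localhost:9092; the topic name "orders" and the class names are chosen only for illustration. The producer and the consumer know nothing about each other, only about the broker and the topic.

// WebsiteProducer.java -- the website publishes an event and its job is done.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class WebsiteProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish the message to the broker; no knowledge of any consumer.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"orderId\": 42}"));
        }
    }
}

// EmailConsumer.java -- one of possibly many interested consumers.
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class EmailConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "email-service"); // each consuming application uses its own group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("Processing " + record.value());
                }
            }
        }
    }
}

Adding another interested application (analytics, billing, and so on) is just another consumer with its own group.id; the website code does not change, which is exactly the decoupling described above.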


Kafka vs RabbitMQ



Kafka’s code base, which was originally developed at LinkedIn to provide a mechanism for the parallel loading of data into Hadoop systems, became an open source project under the Apache Software Foundation in 2011.

In 2014, the developers at LinkedIn who created Kafka started a company called Confluent to facilitate Kafka deployments and support enterprise-level Kafka-as-a-service products.

Happy Learning!!!
