
Apache Kafka


Date: Mar 01, 2021


Kafka's popularity and growth are at an all-time high. It has become so popular that it is starting to overshadow the fame of its namesake, the author Franz Kafka. Its popularity is evident from the fact that over 500 Fortune companies use Kafka.

These companies include the top seven banks, nine of the top ten telecom companies, the top ten travel companies, eight of the top ten insurance companies, and more. Netflix, LinkedIn, and Microsoft are a few of the names that process four-comma messages (1,000,000,000,000) per day with Kafka.

Now, you must be wondering what makes Kafka so popular, right? Well, if you have this question, you are not alone. And you have come to the right place, as here we are going to discuss every facet of Kafka, including its origin story, how it works, key differentiators, use cases, and much more.


What is Apache Kafka?


Apache Kafka is an open-source streaming platform developed by the Apache Software Foundation. It was originally developed as a messaging queue at LinkedIn; however, over the years Kafka has evolved into far more than just a messaging queue. It has become a robust tool for working with data streams, and it supports many different use cases.

One of the biggest benefits of Kafka is that it can be scaled up whenever required. To scale up, all you need to do is add new nodes (servers) to the Kafka cluster.

Kafka is also known for handling a high volume of data per unit of time. It also allows processing data in real time thanks to its low latency. Kafka is written in Java and Scala; however, it is compatible with other programming languages as well.

Kafka can also connect to external systems for import and export through Kafka Connect. What's more, it provides Kafka Streams, a Java stream-processing library. Kafka uses a binary TCP-based protocol that relies on a "message set" abstraction, which groups messages together to cut the overhead of network round trips.

This results in larger sequential disk operations, larger network packets, and contiguous memory blocks, which enable Kafka to convert a stream of random message writes into linear writes.
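The batching idea behind message sets can be illustrated with a toy sketch. This is not Kafka's actual wire format; the function name and batch size are illustrative, and the point is simply that grouping records amortizes the per-round-trip overhead:

```python
# Toy illustration (not Kafka's real protocol): group individual messages
# into "message sets" so each network call carries a batch of records
# instead of a single one.

def batch_messages(messages, max_batch_size):
    """Group messages into batches of at most max_batch_size."""
    batches = []
    current = []
    for msg in messages:
        current.append(msg)
        if len(current) == max_batch_size:
            batches.append(current)
            current = []
    if current:  # flush the final, possibly smaller, batch
        batches.append(current)
    return batches

msgs = [f"event-{i}" for i in range(10)]
batches = batch_messages(msgs, max_batch_size=4)
print(len(batches))             # 3 round trips instead of 10
print([len(b) for b in batches])  # [4, 4, 2]
```

Ten messages become three batches, so the fixed cost of a network round trip is paid three times rather than ten.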

There are several factors that make Kafka different from traditional counterparts such as RabbitMQ. First, Kafka retains messages for a period of time after consumption (the default is seven days), whereas RabbitMQ removes a message as soon as it receives the consumer's acknowledgment.

Not only this, RabbitMQ also pushes messages to consumers while keeping track of their load. It decides how many messages should be in processing by each of the consumers.

Kafka, on the other hand, lets consumers fetch messages themselves; this is known as pulling. Kafka is designed to scale horizontally by adding nodes. This is quite different from traditional messaging queues, which are expected to scale vertically by adding more power to a single machine.

This is one of the most important factors that differentiates Kafka from other traditional messaging systems.
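The retain-and-pull versus delete-on-acknowledgment difference can be sketched with two tiny in-memory models. The class and method names here are illustrative, not real client APIs:

```python
# Hedged sketch: Kafka-style pulling from a retained log vs. a
# RabbitMQ-style queue that deletes a message on acknowledgment.

class RetainedLog:
    """Kafka-style: the log keeps every message; each consumer pulls
    from its own offset, so consumption is non-destructive."""
    def __init__(self):
        self.messages = []

    def append(self, msg):
        self.messages.append(msg)

    def pull(self, offset, max_records=10):
        return self.messages[offset:offset + max_records]

class AckQueue:
    """RabbitMQ-style: a message is removed once acknowledged."""
    def __init__(self):
        self.messages = []

    def push(self, msg):
        self.messages.append(msg)

    def pop_and_ack(self):
        return self.messages.pop(0) if self.messages else None

log = RetainedLog()
for m in ("a", "b", "c"):
    log.append(m)

# Two independent consumers can each read the full log.
print(log.pull(offset=0))  # ['a', 'b', 'c']
print(log.pull(offset=0))  # ['a', 'b', 'c'] again: nothing was deleted

q = AckQueue()
for m in ("a", "b", "c"):
    q.push(m)
q.pop_and_ack()
print(q.messages)          # ['b', 'c']: 'a' is gone for every consumer
```

In the retained log, a second consumer (or the same consumer replaying) sees all three messages; in the acknowledge-and-delete queue, a consumed message is gone for everyone.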


The origin story at LinkedIn


Kafka was built around the year 2010 at LinkedIn by Jun Rao, Jay Kreps, and Neha Narkhede. The main problem Kafka was intended to solve was the low-latency ingestion of large amounts of event data from the LinkedIn website into a lambda architecture that harnessed real-time event processing systems and Hadoop.

"Real-time" processing was the key at that time, since there was no solution for this kind of ingress for real-time applications.

There were good solutions for ingesting data into offline batch systems, but they leaked implementation details to downstream users. Furthermore, they used a push model that could easily overwhelm any consumer. Most importantly, they were not designed for the real-time use case.

As for the traditional messaging queues, they guarantee delivery and support features such as protocol mediation, transactions, and message-consumption tracking. However, they were overkill for the use case LinkedIn was working on.

At that point everyone, including LinkedIn, was looking to come up with learning algorithms. But algorithms are nothing without data. Getting data from the source systems and moving it around reliably was a tough ask, and the existing enterprise messaging solutions and batch-based solutions didn't solve the problem.

Kafka was built to become that ingestion backbone. In 2011, Kafka was already ingesting more than one billion events per day. Currently, the ingestion rates reported by LinkedIn are somewhere around one trillion messages per day.


Terminologies related to Kafka


To understand how Kafka works, you need to understand how streaming applications work. And for that, you need to understand various concepts and terminologies, such as:


Event

An event is the very first thing everyone should understand to grasp how streaming applications work. An event is nothing but an atomic piece of data. For instance, when a user registers with the system, that action creates an event. An event can also be seen as a message with data.

The registration event is the message where information such as the user's name, email, location, etc. is included. Kafka is the platform that works on streams of events.



Producers

Producers continuously write events to Kafka; this is precisely why they are called producers. Producers come in many forms: entire applications, components of an application, web servers, monitoring agents, IoT devices, etc.

A weather sensor can produce weather events every hour that include information on humidity, temperature, wind speed, and much more. Similarly, the component of a website responsible for user registrations can produce an event such as "new user registered". In simple words, a producer is anything that creates data.



Consumers

Consumers are the entities that use data. In simple words, they receive and use the data written by producers. It is also important to note that entities such as whole applications, components of applications, monitoring systems, etc. can act as producers as well as consumers.

Whether an entity will be a producer or a consumer depends on the architecture of the system. In general, though, entities such as data analytics applications, databases, data lakes, etc. act as consumers, as they typically need to store the produced data somewhere.



Cluster

Kafka acts as a middleman between producers and consumers. The Kafka system is also referred to as a Kafka cluster, since it consists of multiple parts. These parts are called nodes.



Brokers

The software processes that run on the nodes are called brokers. Because of brokers, Kafka is also categorized as a distributed system.



Replicas

The data in a Kafka cluster is distributed among several brokers. The cluster also keeps several copies of the same data; these copies are called replicas.

The presence of replicas makes Kafka reliable, stable, and fault-tolerant. Even if something bad happens to a broker, the data is not lost, as it remains safe with the other replicas, and another broker begins performing the functions of the faulty one.



Topics

Producers are responsible for publishing events to Kafka topics. Consumers can get access to the data simply by subscribing to those topics. A Kafka topic is nothing but an immutable log of events. Every topic can serve data to many consumers, which is why producers are also called publishers and, similarly, consumers are called subscribers.



Partitions

Partitions are what let Kafka spread a topic's data across brokers. Every Kafka topic is divided into a number of partitions, and each partition can be placed on a different node.
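A common way a producer chooses a partition is by hashing the message key, so all messages with the same key land in the same partition (preserving per-key ordering). A minimal sketch, assuming a three-partition topic; real Kafka clients use a murmur2 hash over the key bytes, and `zlib.crc32` here is just a stand-in:

```python
# Illustrative sketch of key-based partition assignment. The hash function
# is a stand-in, not Kafka's actual partitioner.
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a message key deterministically onto one of the partitions."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions

NUM_PARTITIONS = 3
keys = ["user-1", "user-2", "user-1", "user-3", "user-2"]
assignments = {key: partition_for(key, NUM_PARTITIONS) for key in keys}

# The same key is always routed to the same partition, which is what
# preserves per-key message ordering.
assert assignments["user-1"] == partition_for("user-1", NUM_PARTITIONS)
print(assignments)
```

Because the mapping is deterministic, every event for `user-1` ends up in one partition, while different keys spread the load across the cluster.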



Message

A unit or record within Kafka is called a message. Every message has a value, a key, and, optionally, headers.



Offset

Every message in a partition is assigned an offset. An offset is an integer that increases monotonically. Moreover, it serves as a unique identifier for the message within the partition.



Consumer lag

A consumer is said to experience lag when it reads from a partition at a slower rate than the rate at which messages are being produced. Lag is expressed as the number of offsets the consumer is behind the end of the partition. The time needed to catch up on, or recover from, the lag depends on how many messages per second the consumer is able to consume.
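Lag is just arithmetic over offsets. A small worked sketch, with made-up numbers for the production and consumption rates:

```python
# Hedged sketch: consumer lag is the distance between the partition's end
# offset and the consumer's committed offset.

def consumer_lag(end_offset: int, committed_offset: int) -> int:
    """Number of messages the consumer still has to read."""
    return end_offset - committed_offset

# The partition's log ends at offset 1000; the consumer has committed
# offset 940, so it is 60 messages behind.
lag = consumer_lag(end_offset=1000, committed_offset=940)
print(lag)  # 60

# Rough catch-up time if the consumer processes 30 msg/s while the
# producer keeps adding 10 msg/s: the lag shrinks at 20 msg/s.
catch_up_seconds = lag / (30 - 10)
print(catch_up_seconds)  # 3.0
```

Note that if the consumption rate never exceeds the production rate, the lag only grows and the consumer never catches up.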


How does it work?


Now that we have had a look at the various terminologies associated with Kafka, let's see how it works. Kafka receives information from a large number of data sources and organizes it into "topics". These data sources can be something as simple as a transactional log of the grocery records for each store.

The topics could be "number of oranges sold" or "number of sales between 10 AM and 1 PM". These topics can be analyzed by anyone who wants insight into the data.

You might think this sounds very much like how a conventional database works. However, compared to a conventional database, Kafka is much better suited for something as big as a national chain of grocery stores processing thousands of apple sales every minute.

Kafka achieves this feat with the help of the Producer, which acts as an interface between applications and topics. Kafka's store of segmented and ordered data is called the Kafka Topic Log.

This data stream is generally used to feed stream-processing pipelines such as Storm or Spark. Moreover, it is also used to fill data lakes such as Hadoop's distributed databases.

Like the Producer, the Consumer is another interface, one that allows topic logs to be read. Not only this, it also enables the information held in them to be passed on to other applications that might need it.

The moment you put all these components together with the other common parts of a big data analytics framework, Kafka begins to form the central system. Through it, data passes from input and capture applications to storage lakes and processing engines.
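The grocery-store flow described above can be sketched end to end in a few lines. Everything here is an illustrative in-memory toy, not a real client: a producer appends sales events to a topic log, and two independent downstream consumers (a "data lake" sink and an analytics job) each read the same log:

```python
# Toy end-to-end sketch: one producer, one topic log, two consumers.
from collections import defaultdict

topic = []  # the "topic log": an append-only, ordered list of events

def produce(event):
    topic.append(event)

# Producer: grocery stores reporting sales.
produce({"store": 1, "item": "orange", "qty": 3})
produce({"store": 2, "item": "apple", "qty": 5})
produce({"store": 1, "item": "apple", "qty": 2})

# Consumer 1: a "data lake" that stores every raw event.
data_lake = [event for event in topic]

# Consumer 2: an analytics job answering "how many of each item sold?"
totals = defaultdict(int)
for event in topic:
    totals[event["item"]] += event["qty"]

print(len(data_lake))  # 3
print(dict(totals))    # {'orange': 3, 'apple': 7}
```

Both consumers read the full log independently; neither one's reading affects what the other sees, which is the property the next section builds on.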


Why use Kafka?


There is a plethora of choices when it comes to publish/subscribe messaging systems. This begs the question of what makes Kafka a standout choice for developers. Let's find out.


Multiple producers

Kafka comes with the ability to manage multiple producers seamlessly. It can handle multiple producers whether those clients are using the same topic or many different topics. This makes the system consistent and ideal for aggregating data from multiple frontend systems.

For instance, a site that serves content to users through a number of microservices can have a single topic for page views, which every service writes to using a common format. As a result, a consumer application receives a single stream of the site's page views, without any need to coordinate consuming from multiple topics.


Multiple consumers

Apart from multiple producers, Kafka is also designed for multiple consumers to read a single stream of messages without interfering with one another. This is in sharp contrast to many queuing systems, where a message, once consumed by one client, becomes inaccessible to the rest of the clients.

Multiple Kafka consumers can also choose to share a stream, which ensures that the whole group processes a given message only once.
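The "shared stream" behavior comes from dividing a topic's partitions among the members of a consumer group, so each partition (and therefore each message) is handled by exactly one member. A hedged sketch using a simple round-robin split, standing in for Kafka's actual assignment strategies (range, round-robin, sticky):

```python
# Illustrative consumer-group sketch: partitions are divided among the
# group's members so each message is processed by exactly one consumer.

def assign_partitions(partitions, consumers):
    """Round-robin partition assignment across group members."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

partitions = [0, 1, 2, 3, 4, 5]
group = ["consumer-a", "consumer-b", "consumer-c"]
assignment = assign_partitions(partitions, group)
print(assignment)

# Every partition belongs to exactly one group member, so within the
# group each message is consumed only once.
assigned = sorted(p for ps in assignment.values() for p in ps)
assert assigned == partitions
```

A separate consumer group gets its own independent assignment over the same partitions, which is how Kafka supports both "share the work" and "everyone sees everything" patterns at once.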


Disk-Based Retention


Managing multiple consumers is not the only trick in Kafka's arsenal. With durable message retention, Kafka frees its consumers from having to work in real time. Messages are first committed to disk and are then kept according to configurable retention rules. This allows different streams of messages to have different amounts of retention, depending on the needs of the consumer.
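Time-based retention can be sketched as a filter over a timestamped log: messages older than the retention window are dropped regardless of whether anyone has consumed them. The function and the sample timestamps below are illustrative; only the seven-day default comes from the text above:

```python
# Illustrative sketch of time-based retention (Kafka's default is 7 days).

RETENTION_SECONDS = 7 * 24 * 3600  # 7 days

def enforce_retention(log, now):
    """Keep only (timestamp, message) pairs inside the retention window."""
    return [(ts, msg) for ts, msg in log if now - ts <= RETENTION_SECONDS]

now = 1_000_000_000
log = [
    (now - 8 * 24 * 3600, "too old, will be dropped"),
    (now - 3 * 24 * 3600, "still retained"),
    (now - 60, "fresh"),
]
retained = enforce_retention(log, now)
print([msg for _, msg in retained])  # ['still retained', 'fresh']
```

A slow or offline consumer therefore has the whole retention window, not just a fleeting in-flight moment, to come back and read its messages.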




Conclusion

In this article, we have provided you with a complete guide to Kafka. First, we discussed what Apache Kafka is. Then we covered its origin story and journey at LinkedIn. We also had a look at all the terminologies associated with Kafka. Then we discussed how it works and the top reasons why one should use Kafka, and we also saw some of Kafka's best use cases.
