Category Archives: Hazelcast

What is Hazelcast?

Hazelcast is an open-source, distributed in-memory data grid.

Over the past decade, the amount of data we handle has changed dramatically, and the problem of how to store and serve that data has evolved with it. Data used to live in a relational database. But what does it live in now, and what should it live in?

Take Twitter. Where do you think your tweets are stored? In an Oracle or MySQL database?

Well, the answer is: no!

Twitter actually keeps this data in memory. Why? Because it is fast: memory access is orders of magnitude faster than disk. And thanks to that speed, Twitter can propagate your tweets to your followers in near real time.

And the data keeps getting bigger: organizations in industries like e-commerce, telecom, and finance are already moving their data from disk-backed databases into memory to serve it faster.

But there is a problem here: we cannot store all of our data in a single machine's memory. It simply does not fit.

So we need a solution that manages hundreds of servers as one big unit of memory, partitioning the data and storing it across those servers.
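The core idea behind that partitioning can be sketched in a few lines: hash each key to a fixed partition, then spread the partitions across the cluster members. This is only an illustrative sketch, not Hazelcast's actual algorithm; the partition count of 271 does match Hazelcast's default, but the round-robin owner assignment is a simplification.

```java
import java.util.List;

public class PartitionSketch {
    // Hazelcast's default partition count; assumed here for illustration.
    static final int PARTITION_COUNT = 271;

    // A key always hashes to the same partition, regardless of cluster size.
    static int partitionId(String key) {
        return Math.floorMod(key.hashCode(), PARTITION_COUNT);
    }

    // Partitions are spread across members; here, simple round-robin.
    static String ownerOf(int partitionId, List<String> members) {
        return members.get(partitionId % members.size());
    }

    public static void main(String[] args) {
        List<String> members = List.of("node-1", "node-2", "node-3");
        String key = "user:42"; // hypothetical key
        int pid = partitionId(key);
        System.out.println(key + " -> partition " + pid
                + " -> " + ownerOf(pid, members));
    }
}
```

Because the key-to-partition mapping is fixed, clients on any node can compute where a key lives without asking a central coordinator.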

Hazelcast is one such solution.

Hazelcast takes the hard work out of distributed programming and provides a simple, easy-to-use API for building scalable, cloud-ready applications.
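To see how simple that API is, here is a minimal "hello world" member, assuming the Hazelcast jar is on the classpath; the map name "tweets" is just an illustrative choice.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class HelloHazelcast {
    public static void main(String[] args) {
        // Starts a cluster member inside this JVM; members on the same
        // network discover each other and form a cluster automatically.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // A distributed map: entries are partitioned across the cluster,
        // but the API looks like an ordinary java.util.Map.
        Map<String, String> tweets = hz.getMap("tweets");
        tweets.put("tweet:1", "Hello, Hazelcast!");
        System.out.println(tweets.get("tweet:1"));

        hz.shutdown();
    }
}
```

Run the same program on a second machine on the same network and the two JVMs form a cluster, with the "tweets" map shared between them.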

And Twitter is not the only company with big data that needs to be stored and processed at the speed of memory. Take financial services: exchanges need to process tens of thousands of transactions per second, they need to be up all the time, and they need to scale easily to handle unpredictable volume spikes.

Five of the top ten US banks already use Hazelcast. And it is not only financial firms: companies like Mozilla use Hazelcast to boost the performance of their applications.

Here are some key features of Hazelcast:

1. Data in the cluster is almost evenly distributed (partitioned) across all nodes.

2. If a member goes down, its backup replica, which holds the same data, dynamically redistributes that data, including ownership and any locks on it, to the remaining live nodes. As a result, no data is lost.

3. When a new node joins the cluster, it takes over ownership (responsibility) of, and the load for, some of the data in the cluster.
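The failover behavior in point 2 can be sketched as follows: each partition has an owner and a backup on another member, so when the owner dies, the backup already holds the data and is simply promoted. This is an illustrative sketch only; real Hazelcast chooses replica members differently.

```java
import java.util.ArrayList;
import java.util.List;

public class FailoverSketch {
    // In this sketch the owner is chosen round-robin by partition id...
    static String ownerOf(int pid, List<String> members) {
        return members.get(pid % members.size());
    }

    // ...and the backup is simply the next member in the list.
    static String backupOf(int pid, List<String> members) {
        return members.get((pid + 1) % members.size());
    }

    public static void main(String[] args) {
        List<String> members =
                new ArrayList<>(List.of("node-1", "node-2", "node-3"));
        int pid = 4; // some partition
        String owner = ownerOf(pid, members);   // node-2
        String backup = backupOf(pid, members); // node-3

        members.remove(owner); // node-2 crashes
        // The backup member already holds the partition's data, so in
        // this sketch it is promoted directly to be the new owner.
        System.out.println(backup + " takes over partition " + pid);
    }
}
```

The key point is that promotion needs no data transfer at failure time: the copy is already in the backup member's memory before the owner goes down.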

There is no single cluster master that could become a single point of failure. Every node in the cluster has equal rights and responsibilities; none is superior, and there is no dependency on any external 'server' or 'master' concept.

Wait for more :-)