Integrity: For Your Data, For Your Business

As your organization moves into the era of in-memory computing to gain speed at scale, recognize that this new breed of vendors has a lot to live up to. You’re selecting from a critically important, yet relatively young, set of technology providers.

The Balancing Act

The general rule that most young enterprise software companies follow is to focus on the exciting new capabilities that attract customers during initial evaluation, while promising to smooth the product’s “rough edges” over time.

This is where customer vigilance is paramount: Those “rough edges” are often exactly the capabilities customers expect in a production-grade system, especially the wide range of features that contribute to system robustness.

However, robustness issues are rarely encountered during proof-of-concept phases and readiness reviews. They usually only surface when something goes wrong in production. That is why an inadequate evaluation can lead to disastrous consequences once the system is live.

The Distributed Challenge

The in-memory computing era is driven by a massive leap in demand for processing speed and scale. Holding data in memory provides radically faster read/write speeds than spinning disks. However, achieving that speed at scale also requires the software to be built on a distributed architecture.

Distributed architectures are very challenging to make robust. They are composed of multiple software “nodes” communicating over the network. Because the software does not control the network, it is not uncommon for nodes to lose communication with one another (known as a “network partition”). This can produce disastrous issues related to data integrity.

When the data processing nodes lose communication, they can get out of sync. The application layer above isn’t aware that a network partition has occurred, so it keeps sending processing instructions to the system. The non-communicating nodes can then either a) stop responding, or b) keep responding but with inconsistent data, leading to incorrect results. In this way, a loss of data integrity can have disastrous business consequences.
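To make the failure mode concrete, here is a deliberately simplified Java sketch. It is not Hazelcast code, and the class and field names are hypothetical: two toy replicas accept writes from the application, but once the simulated link between them is cut, each keeps answering with its own, now divergent, copy of the data.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: two replicas of the same key-value data
// diverge once replication between them stops, i.e., a simulated
// "network partition" leading to split-brain.
public class SplitBrainSketch {

    // A toy replica that applies writes locally and forwards them to a
    // peer only while the (simulated) network link is up.
    static class Replica {
        final String name;
        final Map<String, Integer> data = new HashMap<>();
        Replica peer;
        boolean linkUp = true;

        Replica(String name) { this.name = name; }

        void put(String key, int value) {
            data.put(key, value);              // apply locally
            if (linkUp && peer != null) {
                peer.data.put(key, value);     // replicate while connected
            }
        }

        Integer get(String key) { return data.get(key); }
    }

    public static void main(String[] args) {
        Replica nodeA = new Replica("A");
        Replica nodeB = new Replica("B");
        nodeA.peer = nodeB;
        nodeB.peer = nodeA;

        // Before the partition: a write reaches both nodes.
        nodeA.put("balance", 100);
        System.out.println("B sees balance = " + nodeB.get("balance")); // 100

        // Simulate a network partition: replication stops.
        nodeA.linkUp = false;
        nodeB.linkUp = false;

        // The application keeps writing, unaware of the partition.
        nodeA.put("balance", 150);   // applied only on A
        nodeB.put("balance", 70);    // applied only on B

        // Both nodes keep answering, but with inconsistent data.
        System.out.println("A sees balance = " + nodeA.get("balance")); // 150
        System.out.println("B sees balance = " + nodeB.get("balance")); // 70
    }
}
```

The sketch shows only the symptom: both halves of the partition stay available but disagree. A robust distributed system has to detect this condition and resolve or prevent the divergence rather than silently serving conflicting answers.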

For a distributed system to be robust, it must solve the challenge of network partitions. This is one of the hardest problems in software engineering, yet it is a critical area for new project owners (both business and technical) to scrutinize during vendor selection.

Data Integrity Delivered – Again

Hazelcast has proven its ability to deliver profound value to hundreds of the world’s largest organizations. Over the years, our world-class technical team has tackled some of the hardest challenges in system robustness and data integrity, and today we announced another huge leap forward in this regard.

For Hazelcast, integrity is everything. Because it’s critical for your data and your business, it’s critical to us.