Unifying streaming and stored data

IT spending is expected to return to 2019 levels by next year, with digitization initiatives accelerated after COVID. Spending on enterprise software — including database, analytics, and business intelligence — will grow the fastest of all, Gartner says.

Deriving value from data, whether from insights or transactions, is at the root of improving business outcomes. That perennial importance is why the database and analytics market is valued at nearly $200 billion.

Digitization has produced a new vector of value in the pursuit to derive value from data: real time. A Forrester Consulting report found that more than 80% of executives believe in the need for real-time decision-making based on instantaneous insights into events and market conditions.

And yet… there’s a wide gap between need and ability. More than two-thirds of the executives Forrester spoke with said their organizations weren’t able to obtain real-time, data-driven insights and actions.

A deluge of data

Digitization is producing an enormous amount of real-time data. It’s pouring in from servers, devices, sensors, and IoT things, so much so that it’s estimated that more data will be generated in the next three years than in the last 30.

All new data is born in real time. In that moment it contains unique value about what just happened. However, that value is perishable; as time passes, the data loses its time-based relevance.

Executives want to find value by leveraging real-time data, but most are failing because of fresh data overload. A large majority of executives (70%) surveyed for the Dell Technologies 2020 Digital Transformation Index said their organizations are creating more data than they can analyze or understand.

With this deluge of real-time data comes a macro challenge: a new type of data silo. Real-time processing requires different technologies than stored-data processing does, because the natures of the two types of data are very different:

  • The unique value of real-time data perishes within moments.
  • Real-time data tends to be an atomic payload without deeper context.
  • The informational values are different; one describes what just happened, and the other describes history.

In other words, while real-time data contains time-critical information about an event that just occurred, it lacks rich context that can be found in records of stored data.

What good is it to know that a specific customer just viewed a retail item online if that event cannot be combined instantaneously with the context of that unique customer’s profile and history? When a financial market transaction just occurred, how can its financial risk be profiled without combining it with the performance history of those involved in the transaction? When event data from a manufacturing sensor shows an aberrant blip, how can the need for preventive action be assessed without knowing the recent maintenance history?
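To make that enrichment concrete, here is a minimal, purely illustrative Python sketch of the first scenario: a click event is joined with the customer’s stored profile before a decision is made. Every name and value in it is hypothetical and stands in for whatever streaming engine and record store an organization actually runs.

    # Minimal illustrative sketch: enrich a perishable click event with the
    # customer's stored profile before acting on it. All names and data are
    # hypothetical placeholders.

    CUSTOMER_PROFILES = {  # stands in for the stored, contextual record store
        "c-1001": {"segment": "premium", "recent_views": ["shoes", "jacket"]},
    }

    def enrich(event: dict) -> dict:
        """Join the real-time event with that customer's stored context."""
        profile = CUSTOMER_PROFILES.get(event["customer_id"], {})
        return {**event,
                "segment": profile.get("segment", "unknown"),
                "recent_views": profile.get("recent_views", [])}

    def decide(enriched: dict) -> str:
        """Act on the combined picture, e.g. choose a real-time offer."""
        if enriched["segment"] == "premium" and enriched["item"] in enriched["recent_views"]:
            return "show-personalized-offer"
        return "no-action"

    event = {"customer_id": "c-1001", "item": "jacket", "ts": 1625097600.0}
    print(decide(enrich(event)))  # -> show-personalized-offer

The same shape applies to the other two examples: the market transaction joined with the participants’ performance history, and the sensor reading joined with the machine’s maintenance records.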

The world of data has permanently changed. The dominant force is now real-time data, alongside the rich contextual stores that remain. This shift presents powerful potential for creating valuable business outcomes — if it can be properly harnessed.

Not the database way

Databases sit between applications and historical data. They excel at performing transactions and queries on that stored data — but only for traditional applications. Both the functionality and the performance of databases were designed to address a previous era of expectations. Digitization has introduced a step-function change in performance requirements: microseconds now matter, and this is out of the reach of database architectures.

Additionally, databases were not designed to process real-time data that originates at Point A and is in transit to Point B. They must therefore plug into engines that can perform that type of processing. These interfaces introduce significant latency, which is the sworn enemy of real-time data, whose value perishes quickly with time. Even if cobbling together multiple systems can be achieved, it adds cost and architectural complexity that must be supported and maintained.

In order to unify the processing of real-time and stored data, a new category of data processing platform is needed. This platform must leverage the existence of databases and support applications that utilize both types of data.

This multi-function platform includes a streaming engine for ingestion, transformation, distribution, and synchronization of data. To meet ultra-low latency requirements for data processing, it must be based on in-memory technology, and to meet the dual requirements of scale and resilience, it must be a distributed architecture. With this combination, the platform can deliver sub-millisecond responses while performing millions of complex transactions per second.
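As a rough sketch of that pipeline shape (ingestion, transformation, enrichment against an in-memory context store, distribution), the following Python example runs the stages synchronously in a single process. It is illustrative only; all names are hypothetical, and a real platform would execute these stages distributed, in parallel, and entirely in memory to reach the latencies described above.

    # Minimal sketch of the pipeline shape described above: ingest an event,
    # transform it, enrich it against an in-memory context store, and
    # distribute the result. Single-process and synchronous for clarity;
    # all names and data are hypothetical.

    import time
    from typing import Callable, Iterable

    IN_MEMORY_CONTEXT = {"sensor-7": {"last_service_days_ago": 92}}  # stored context

    def ingest() -> Iterable[dict]:
        # Stand-in for a connector reading from devices, logs, or message queues.
        yield {"sensor_id": "sensor-7", "vibration": 0.91, "ts": time.time()}

    def transform(event: dict) -> dict:
        # Flag aberrant readings as they arrive.
        return {**event, "alert": event["vibration"] > 0.8}

    def enrich(event: dict) -> dict:
        # Merge the event with its stored maintenance context.
        return {**event, **IN_MEMORY_CONTEXT.get(event["sensor_id"], {})}

    def distribute(event: dict) -> None:
        # Push the decision to a downstream system (here, just print it).
        if event["alert"] and event.get("last_service_days_ago", 0) > 90:
            print("schedule preventive maintenance for", event["sensor_id"])

    def run(stages: list[Callable[[dict], dict]]) -> None:
        for event in ingest():
            for stage in stages:
                event = stage(event)
            distribute(event)

    run([transform, enrich])

The point of the sketch is the shape, not the mechanics: each stage sees the event while its value is still fresh, and the context lookup happens in memory rather than through a round trip to a separate database.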

A new data processing platform

We are now producing more fresh data than enterprises can process, and deriving value from it requires merging it with rich context from databases. It’s time to augment IT architectures to include a new data processing platform designed for the real-time world, one that can deliver insights and actions at the speed demanded by real-time digital operations to capture value at every moment.

Kelly Herrell is CEO of Hazelcast, the maker of a streaming and memory-first application platform for fast, stateful, data-intensive workloads.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to [email protected]

Copyright © 2021 IDG Communications, Inc.
