In-Memory Computing Meets Cloud Native Computing
In this week’s episode of The New Stack Context podcast, we delve into an area of computing not usually discussed in cloud native circles: in-memory computing. This form of distributed technology has been around for decades. The idea is to band together the memory of multiple servers, or cloud compute instances, so it acts as one gigantic pool of memory. Sounds like a good fit with Kubernetes’ scalable computing, yes? Instead of an application waiting on the results of a database query, data can be returned to the user more quickly — by an order of magnitude, by some estimates — from an in-memory data store spread across multiple servers.
Recently, Mike Yawn, a senior solution architect at Hazelcast, contributed a post to TNS explaining how in-memory technology can make microservices run more smoothly. Hazelcast offers an in-memory data grid, Hazelcast IMDG, along with the stream-processing software Hazelcast Jet.
In his post, Yawn explains, “Just as our application services can be scaled up or down to meet workload demands, our operational data store is also elastically scalable — additional nodes can be added to the data grid, and the software will automatically re-balance the data partitions to take advantage of the increased capacity (when scaling up) or consolidate data onto fewer nodes (when scaling down). Backups of each data partition are automatically maintained so that in the event of an unplanned node outage, no data is lost.”
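The rebalancing Yawn describes can be sketched in a few lines: keys hash to a fixed set of partitions, each partition has an owner node and a backup node, and when the cluster grows or shrinks only the partition-to-node mapping changes. This is a minimal illustration in Python; the class-free structure, the tiny partition count, and the round-robin assignment are assumptions for clarity, not Hazelcast's actual implementation (which uses many more partitions, 271 by default, and a smarter migration strategy).

```python
import zlib

# Assumed small partition count for illustration; real grids use far more.
PARTITION_COUNT = 16

def partition_for(key: str) -> int:
    """Map a key to a fixed partition, independent of cluster size."""
    return zlib.crc32(key.encode()) % PARTITION_COUNT

def assign_partitions(nodes: list) -> dict:
    """Spread partitions across the current nodes, round-robin.

    Each partition's backup is kept on a *different* node than its
    owner, so an unplanned single-node outage loses no data.
    """
    assignment = {}
    for p in range(PARTITION_COUNT):
        owner = nodes[p % len(nodes)]
        backup = nodes[(p + 1) % len(nodes)] if len(nodes) > 1 else None
        assignment[p] = (owner, backup)
    return assignment

# Scaling up: a third node joins. Keys keep their partition, so the
# grid migrates whole partitions rather than rehashing every key.
before = assign_partitions(["node-a", "node-b"])
after = assign_partitions(["node-a", "node-b", "node-c"])
moved = sum(1 for p in range(PARTITION_COUNT) if before[p][0] != after[p][0])
print(f"partitions whose owner changed after scale-up: {moved}")
```

Because the key-to-partition hash never changes, scaling only moves partition ownership; a production grid would also migrate partitions incrementally in the background rather than all at once.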
We wanted to know more about how in-memory computing could be used with microservices, so we invited him on the show. In the podcast, Yawn talks about replacing the term “operational data store” with “digital integration hub,” in the hopes that the terminology will be more welcoming to potential users. While an in-memory data grid offers caching much like key-value databases such as Redis, it also offers additional computing capacity, which can process that data on the fly, where it lives.
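That “process the data on the fly” point is the key difference from a plain cache: rather than fetching a value, modifying it, and writing it back (two network round trips), the client ships a small function to the node that owns the entry. Hazelcast exposes this idea through constructs such as its EntryProcessor; the sketch below uses simplified stand-in classes to show the shape of the pattern, not Hazelcast's real API.

```python
class GridNode:
    """One grid member, owning a slice of the key space (simplified stand-in)."""

    def __init__(self):
        self.store = {}

    def execute_on_key(self, key, fn):
        # The function runs locally on the owning node; only the small
        # result crosses the network, not the whole value.
        self.store[key] = fn(self.store.get(key))
        return self.store[key]

node = GridNode()
node.store["page:home"] = {"views": 41}

# Increment a counter in place, where the data lives.
result = node.execute_on_key(
    "page:home",
    lambda v: {**v, "views": v["views"] + 1},
)
print(result)  # {'views': 42}
```

With a pure key-value cache the client would have to pull the whole value over the wire, mutate it, and push it back, racing any concurrent writer; running the update on the owning node avoids both the extra hop and the race.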
He also spoke about the growing Kubernetes deployments among the company’s users, who tend to be larger enterprises, such as banks. “We support Kubernetes because our users tell us it’s important to them,” Yawn tells us.
Be sure to check out this week’s TNS Context podcast for more, which goes live, as it does every week, on Friday.