Spring boot 2.5 and Hazelcast embedded distributed cache on k8s (ft. Java 11)

At my day job I am currently working with a Spring boot microservices stack that consumes and orchestrates many HTTP APIs. The data in most of these downstream APIs does not change a lot, but it is read many times. Naturally this hints at using caching to improve performance. The code base already uses a simple HashMap (not ConcurrentHashMap, mind you) to do some local caching on a per-request basis within each micro-service. The next step would be to use a proper caching technology following one of the common caching patterns for microservices. I decided to look at the first option (i.e. Embedded Distributed Cache) and see how quickly I could implement one in my spare time (how hard can it be, I asked myself). I found an excellent resource from Piotr Minkowski, so I began following his recipe and doing some of my own cooking (and I think I found some ingredients of my own along the way).

Setting up local k8s tools

Since we run our workload on Kubernetes, the first thing that I wanted to do was to quickly set up k8s locally for testing. On Windows 10 with WSL2 the easiest way for me to get a working k8s environment was to install Docker Desktop (with the WSL2 backend) and enable Kubernetes. With Docker you get the kubectl command line (which has kustomize built in, via kubectl apply -k). This will also add a hosts entry for kubernetes.docker.internal. In order to have a great developer experience, I would recommend the following tools.

  • kubectx/kubens (to easily switch context/namespace of k8s; needed if you have several of them set up)
  • stern (to easily look at logs of different k8s pods)
  • skaffold (to easily bring up a local development cluster)
  • helm (to install helm charts, kind of like homebrew or winget, but for k8s)

Another thing I did to make my life easier was to add an ingress controller (i.e. nginx). This helps direct the http traffic to the right services easily. I use the helm option, shown below. Occasionally I need to clean up the k8s and docker objects to get a totally fresh k8s deployment, at which point I just need to redo the install command, as the helm repo is already there.
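If you go the helm route, the install looks roughly like this (chart location as documented on the ingress-nginx project page):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx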
 
Make sure you can run a kubectl get pods command and that you can connect to the default context. You should be able to see the nginx controller.

Basic Spring boot app and the magic of @Cacheable

There are plenty of resources out there to get you started with Spring boot and k8s. The first step is to create a project from start.spring.io and add spring-web, actuator, and the cache abstraction.



The next step is to use Spring boot declarative caching. Declarative caching (i.e. just telling the framework to make something cacheable and it makes it happen for you) has been in Spring core since 2011, so it is not a new thing. If you just add the @EnableCaching annotation to your application and @Cacheable annotations to some of your public methods, you can already start taking advantage of it. But the basic setup uses ConcurrentHashMap as its default cache provider (so be aware).
One quirk that I learned about @Cacheable while working on the sample is the special case of calling a public method that has this annotation from within the same class: the call bypasses the Spring proxy, so the cache is never consulted. I would recommend always calling such methods from another layer (i.e. the controller layer can call a cacheable service method, and so on).
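To make that pitfall concrete, here is a minimal sketch (the class and method names are hypothetical, not from the repo):

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class GreetingService {

    @Cacheable("greetings")
    public String greetingFor(String name) {
        // imagine an expensive downstream call here
        return "Hello " + name;
    }

    public String greetTwice(String name) {
        // BAD: self-invocation goes around the Spring proxy, so the
        // @Cacheable interceptor never runs and nothing is cached
        return greetingFor(name) + " " + greetingFor(name);
    }
}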
 
In my example repo (at tag/step2) I created a simple controller that fetches a customer by id from a service, and the service takes 5 seconds before returning its result (customers are just a list of names, but their updated date is set to when the service is called).
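The repo is the authoritative version; below is a rough sketch of the idea (names and shapes assumed from the description above, with a plain class since Java 11 has no records):

import java.time.Instant;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
class CustomerController {

    private final CustomerService service;

    CustomerController(CustomerService service) {
        this.service = service;
    }

    // the cacheable method is called from another layer, so the proxy applies
    @GetMapping("/customer/{id}")
    Customer customer(@PathVariable long id) {
        return service.findById(id);
    }
}

@Service
class CustomerService {

    @Cacheable("customers")
    public Customer findById(long id) {
        try {
            Thread.sleep(5000); // simulate a slow downstream API
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return new Customer(id, "customer-" + id, Instant.now());
    }
}

class Customer {
    private final long id;
    private final String name;
    private final Instant updated;

    Customer(long id, String name, Instant updated) {
        this.id = id;
        this.name = name;
        this.updated = updated;
    }

    public long getId() { return id; }
    public String getName() { return name; }
    public Instant getUpdated() { return updated; }
}

Go ahead and try it locally: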
 
curl -H "Accept: application/json" -H "Content-Type: application/json" http://localhost:8080/customer/1
 

Setting up the k8s deployment, service and probes

For making this project k8s friendly I personally like this guide by Dr. Syer from the Spring boot team, but I have some tweaks of my own. For example, at least for testing locally on docker-desktop, you don't need another registry (so the plain image name just works). Also, you can create your kubernetes deployment and service yaml files with --dry-run=client, and you can use the alpha feature to init skaffold with buildpacks. See the following steps.
 
mkdir -p k8s && cd k8s;
 
kubectl create deployment cachingdemo --image=cachingdemo --dry-run=client -o=yaml > 01-deployment.yaml;
 
kubectl create service clusterip cachingdemo --tcp=8080:8080 --dry-run=client -o=yaml > 02-service.yaml;
 
You will need to add the kubernetes probes for your service (they go in the deployment's container spec).
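Spring boot (since 2.3) auto-configures liveness and readiness health groups under the actuator when it detects it is running on kubernetes, so a minimal sketch of the probe section for the container could look like this (port 8080 as above):

livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080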



Then create a kustomization file manually and reference your deployment and service from it.
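A minimal kustomization.yaml for the two files above:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - 01-deployment.yaml
  - 02-service.yaml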

After this you can create a skaffold file like this: skaffold init --XXenableBuildpacksInit
That should pick up your k8s folder with the right kustomization.
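The generated skaffold.yaml should look roughly like the sketch below; the schema version and builder image depend on your skaffold version, so treat these values as assumptions rather than the exact output:

apiVersion: skaffold/v2beta13
kind: Config
build:
  artifacts:
    - image: cachingdemo
      buildpacks:
        builder: gcr.io/buildpacks/builder:v1
deploy:
  kustomize:
    paths:
      - k8s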
 

As I previously mentioned, I added an nginx ingress as well to make life easy, but we need to have an ingress config file in our project to let kubernetes know what ingress rules to apply. I also add a custom header, x-pod-hostname, to show which pod a response comes from.
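A minimal ingress for the docker-desktop host could look like this (the resource name is an assumption; the x-pod-hostname header is not added by nginx here — one way to add it is from the application itself, e.g. by reading the HOSTNAME environment variable that kubernetes sets in each pod):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cachingdemo
spec:
  ingressClassName: nginx
  rules:
    - host: kubernetes.docker.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cachingdemo
                port:
                  number: 8080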

If you are following along, your code should resemble tag/step3. I am using 3 replicas of the cachingdemo service. If you check out the code at tag/step3 and do a skaffold dev on it, you should be able to poke the API a few times (for any of the customers 1, 2, ... 100):

curl -v -H "Accept: application/json" -H "Content-Type: application/json" http://kubernetes.docker.internal/customer/1

Note that it will be slow the first time each pod serves a given customer object, and after that the object is cached for the lifetime of the pod.

It was also good to note that I did not need to do any extra steps for the kubernetes probes to work.

Hazelcast Embedded Distributed Cache

Of all the steps so far, I found this one to be the easiest. I will not cover the theoretical part of the cluster setup, which is already covered in Piotr's excellent blog post, but I will summarize the steps I went through.
 
One has to include the required dependencies (in my case the hazelcast and hazelcast-spring artifacts; since Hazelcast 4.1 the kubernetes discovery support ships in the core jar). Then it's a matter of configuring a hazelcast configuration bean and letting it connect to the other instances. And that's it.
In the EmbeddedConfig we make sure that only kubernetes auto-discovery is enabled and we have an eviction policy set up. We are using the DNS-based setup for kubernetes; there is a more involved way to set up discovery using the kubernetes API. Please see the pros/cons of both approaches on the hazelcast kubernetes discovery project page. You may also like to read about the details of what's under the hood, written by Rafal Leszko.
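A sketch of such a config bean, assuming Hazelcast 4.x (the version managed by Spring boot 2.5). The service-dns value and the TTL are illustrative; DNS-based discovery also requires a headless kubernetes service, and the cached values (e.g. Customer) must be serializable:

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.config.MapConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EmbeddedConfig {

    @Bean
    public Config hazelcastConfig() {
        Config config = new Config();

        // disable every discovery mechanism except kubernetes
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false);
        join.getTcpIpConfig().setEnabled(false);
        join.getKubernetesConfig()
                .setEnabled(true)
                // DNS mode: members are resolved through a headless service
                .setProperty("service-dns", "cachingdemo-hazelcast.default.svc.cluster.local");

        // evict cached customers five minutes after they are written
        config.addMapConfig(new MapConfig("customers").setTimeToLiveSeconds(300));

        return config;
    }
}

With this bean in place, Spring boot auto-configures a HazelcastInstance from it and, with hazelcast-spring on the classpath, a Hazelcast-backed CacheManager, so the @Cacheable annotations from earlier need no changes.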



Now run the application with a new skaffold dev, do the same curl as before, and see if you get cached results. Notice that once something is cached, it can be served from any pod, and after a certain time the cache entry is evicted.

If you are following along, your code should now look like tag/step4.
 
Tip: you can use stern cachingdemo to easily view all the logs. You will see in the logs when cluster members are discovered and when they are removed. I would recommend waiting for the cluster to become stable.

When the cluster is stable, one of the nodes should log something like the following:

[cachingdemo] Members {size:3, ver:3} [
[cachingdemo]   Member [10.1.1.95]:5701 - e95a5caa-38f6-4682-aef9-89daa93c3d31
[cachingdemo]   Member [10.1.1.97]:5701 - b08403ed-6a47-4b69-ace7-3e1de71d7f44
[cachingdemo]   Member [10.1.1.96]:5701 - 82d6687f-9468-440a-a22e-e503bf50a423 this
[cachingdemo] ]
[cachingdemo]

Performance/Security

I have not done any extensive performance testing of my own; that would be the next logical step to explore. But suffice it to say that there are some pros and cons to this approach (as listed in the hazelcast blog post about caching patterns). I am a bit curious about how long the cluster needs to stabilize, and how the latency of this approach compares to, say, a client-server setup with redis.

The data in the embedded distributed approach is very easily available to the application, and it is colocated with the application; anything that can reach the application can potentially reach the cached data, so this should be considered in your design.


Overall I am very pleased to have tried this technology myself. I plan to continue hacking around in this area.

