Recently, I've been playing with internal microservices. I was used to working with micro, which did the job pretty well for small things.
Recently I've also come to work with fastify, which is quite interesting as well. I love the internal hooks system, but also the vast variety of data validation and features it offers out of the box.
I have also been used to working with in-memory caching through Keyv.
Keyv is simple and can be used with a huge variety of adapters, in-memory or backed by databases such as Redis, MongoDB, SQLite and so on. I have been using Keyv as my main tool for managing small caching units in my codebase: when I can easily identify and cache computed data that can be invalidated later on, it saves a bit of compute and bandwidth and makes things respond quicker when the same internal data is requested multiple times.
Most of my internal microservices are single threaded, but some of them I run on every core of my machine. This is why I started using Cluster to spawn multiple processes of an internal service when needed.
Single threaded way
What was the issue?
The first question is: how was I doing caching? Since I was not doing that much caching, I needed it quickly and did not have a huge amount of data to store (all of my data expires after a TTL of a few seconds or minutes). So the basic in-memory Keyv store was a good fit. It can be achieved this way:
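Here is a minimal, self-contained sketch of that pattern. A real Keyv instance (`new Keyv()` is in-memory by default) exposes the same async `get`/`set` API; the tiny Map-based stand-in and the `getUserStats`/`computeUserStats` names are illustrative, not from the original post:

```javascript
// Map-based stand-in for `new Keyv()` so the sketch runs on its own.
const cache = {
  store: new Map(),
  async get(key) { return this.store.get(key); },
  async set(key, value, ttl) {
    this.store.set(key, value);
    // Like Keyv, expire the entry after `ttl` milliseconds.
    if (ttl) setTimeout(() => this.store.delete(key), ttl).unref();
  },
};

// Hypothetical expensive computation we want to avoid repeating.
async function computeUserStats(userId) {
  return { userId, total: 42 }; // stands in for a DB query or heavy work
}

async function getUserStats(userId) {
  const key = `stats:${userId}`;
  const cached = await cache.get(key);
  if (cached !== undefined) return cached; // cache hit: skip the work
  const fresh = await computeUserStats(userId);
  await cache.set(key, fresh, 30_000); // keep it for 30 seconds
  return fresh;
}
```

Swapping the stand-in for a real Keyv instance changes nothing in `getUserStats`, which is the whole point of the next section.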
That way you'll have a classic workflow function which naively tries to get a computed value from the cache. If that value has been cached before, take it; otherwise, get it the usual way and cache it afterward. However, what happens if your load is spread across forks and each fork has its own context? You won't be able to share in-memory data easily without a specific mechanism to share it in-between. An easy solution to this is an independent redis instance gathering your cache independently of your running processes.
So the easiest way to set up redis on a server was actually to do it through Docker.
Everything is explained on the Docker Hub page, but in short:
- Make sure docker is installed on your machine
- run the redis container this way:
docker run --name redis -p host:external_port:6379 -d redis
Pay attention: if you are exposing your redis instance publicly, make sure it has protected mode on and user/password authentication. Check the Security notes on redis' Docker Hub page.
That way you'll be able to use Keyv and point it to your redis instance.
npm install --save @keyv/redis
Then just reference your redis instance in Keyv:
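A minimal sketch of that wiring, assuming `keyv` and `@keyv/redis` are installed; the connection string (credentials, host, port) is a placeholder for your own instance:

```javascript
const Keyv = require('keyv');

// Point Keyv at the redis instance instead of the default in-memory store.
// The get/set API stays exactly the same as before.
const keyv = new Keyv('redis://user:pass@localhost:6379');

// Handle connection errors, otherwise they are swallowed silently.
keyv.on('error', (err) => console.error('redis connection error', err));
```

Since every fork now talks to the same redis instance, a value cached by one worker is immediately visible to all the others.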
The advantage of working with Keyv is that you won't need to change your logic or workflow whatever adapter you are using, and this is very helpful. I was glad to enhance my caching logic, make it no longer tied to a single process and point it to my redis instance in the blink of an eye 👀.
That's it for today, hope you enjoyed it.