This post is a basic introduction to caching and how it works. We will use Redis for caching in a Node.js application to show how caching can improve speed and performance while reducing load on server resources.
What is Caching?
Caching (pronounced "cashing") is the process of storing data in a cache, a temporary storage area. A cache is faster and cheaper for the client (or server) to reach than a permanent data store located on a different service, such as a database or an external API endpoint, which takes more time and resources to query.
For example: The files you automatically request by looking at a Web page are stored on your hard disk in a cache subdirectory under the directory for your browser.
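The idea can be sketched as a tiny in-memory cache with per-entry expiration. This is an illustration of the concept only, not how Redis works internally; names like `SimpleCache` and `ttlMs` are made up for the example:

```javascript
// Minimal sketch of a cache: an in-memory store where each entry
// carries an expiration timestamp.
class SimpleCache {
  constructor() {
    this.store = new Map();
  }

  // Save a value together with the time at which it should expire.
  set(key, value, ttlMs) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  // Return the value if present and not expired; otherwise undefined.
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}
```

A real cache like Redis adds persistence, eviction policies, and network access on top of this basic "store now, read back fast, expire later" idea.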
Now let's take a look at Redis.
What is Redis?
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes with radius queries, and streams. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster. (Source: Redis Official)
Let's Build
Install Redis
If you are a macOS user, install it with Homebrew using the command below:
brew install redis
Other helpful commands:
brew services start redis # start Redis server
brew services stop redis # stop Redis server
brew services restart redis # restart Redis server
For other platforms, follow the guide on redis.io/download.
Project Setup
Create a new directory and initialize the project:
mkdir redis-tutorial
cd redis-tutorial
npm init -y
Create an index.js file and install the dependencies. Note that the callback-style code below targets node_redis v3 (the v4 client switched to a promise-based API) and the CommonJS build of node-fetch (v3 is ESM-only), so pin those majors:
npm install express node-fetch@2 redis@3 --save
The Code
Now we have Redis installed and running. Add the following code to index.js:
const express = require('express');
const fetch = require('node-fetch');
const redis = require('redis');

const app = express();

// Create a Redis client connected to the local instance (default port 6379)
const redisClient = redis.createClient(6379);

redisClient.on('error', (err) => {
  console.log(err);
});

// Get user posts
app.get('/posts', (req, res) => {
  const postsRedisKey = 'user:posts';

  return redisClient.get(postsRedisKey, (err, posts) => {
    if (posts) {
      // Cache hit: serve the stored response
      return res.json({ source: 'cache', data: JSON.parse(posts) });
    } else {
      // Cache miss: fetch directly from the remote API
      fetch('https://jsonplaceholder.typicode.com/posts')
        .then(result => result.json())
        .then(posts => {
          // Save the API response in Redis and
          // expire it after 3600 seconds (one hour)
          redisClient.setex(postsRedisKey, 3600, JSON.stringify(posts));

          return res.json({ source: 'API', data: posts });
        })
        .catch(error => {
          console.log(error);
          return res.status(500).json({ error: error.toString() });
        });
    }
  });
});

app.listen(3000, () => {
  console.log('Server listening on port:', 3000);
});
How It Works
- When a request hits /posts, the route handler checks Redis first using the user:posts key.
- If the data exists in Redis, it is returned immediately from the cache.
- If not, the handler fetches the posts from the JSONPlaceholder API, stores the result in Redis with a one-hour expiration (3600 seconds), and returns the response.
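This cache-aside flow can also be sketched independently of Express and Redis, with the store and the fetcher injected, which makes the logic easy to reuse and test. The names here (`getWithCache`, `fetchFresh`) are illustrative, not part of any library:

```javascript
// Cache-aside sketch: try the cache first, fall back to the source of
// truth on a miss, then populate the cache for subsequent calls.
// `store` is anything with get/setex-style async methods;
// `fetchFresh` is any async function returning fresh data.
async function getWithCache(store, key, ttlSeconds, fetchFresh) {
  const cached = await store.get(key);
  if (cached !== undefined && cached !== null) {
    return { source: 'cache', data: JSON.parse(cached) };
  }

  const fresh = await fetchFresh();
  await store.setex(key, ttlSeconds, JSON.stringify(fresh));
  return { source: 'API', data: fresh };
}
```

In index.js, `store` would be the (promisified) Redis client and `fetchFresh` the call to the JSONPlaceholder API.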
Testing It Out
Start your node server (make sure Redis is up and running):
node index.js
Go to localhost:3000/posts — the first request fetches from the API:
- 1st request: ~761ms (source: API)
- 2nd request: ~28ms (source: Redis)
The difference is dramatic. The cached response is over 27x faster than the original API call.
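You can reproduce a rough comparison yourself. This small helper just times any async call in milliseconds; the exact numbers above will vary with your network and machine, and `timeIt` is a made-up name for the sketch:

```javascript
// Time an async call in milliseconds using Node's high-resolution clock.
async function timeIt(fn) {
  const start = process.hrtime.bigint();
  const result = await fn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  return { result, elapsedMs };
}
```

Calling it twice against http://localhost:3000/posts (for example with node-fetch) should show the first request taking far longer than the second, cached one.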
Conclusion
Serving data from Redis is much faster than fetching it from the API on every request. Redis is a very powerful in-memory data store that we can use in our applications for caching, and saving and retrieving data requires very little overhead.