After implementing the primary-replica architecture, most applications should be able to scale to a few hundred thousand users, and some simple applications might be able to reach a million users.
However, for some read-heavy applications, a primary-replica architecture may not handle traffic spikes well. For our e-commerce example, flash sale events like Black Friday sales in the United States could easily overload the databases. If the load is heavy enough, some users might not even be able to load the sales page.
The next logical step for handling such situations is to add a cache layer to optimize read operations.
Redis is a popular in-memory cache for this purpose. Redis reduces the read load on a database by caching frequently accessed data in memory. This allows for faster access, since the data is retrieved from the cache instead of the slower database. By reducing the number of read operations performed on the database, Redis helps reduce the load on the database cluster and improve its overall scalability. As summarized by Jeff Dean et al., in-memory access is 1,000X faster than disk access.
For our example application, we deploy the cache using the read-through caching strategy. With this strategy, data is first checked in the cache before being read from the database. If the data is found in the cache, it is returned immediately; otherwise, it is loaded from the database and stored in the cache for future use.
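The read-through flow above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: a plain dict stands in for Redis (a real deployment would use a Redis client such as redis-py, with a TTL on each entry), and `fetch_product_from_db` and the `PRODUCTS_DB` table are hypothetical placeholders for an actual database query.

```python
# A plain dict stands in for Redis in this sketch; a real deployment
# would use a Redis client (e.g. redis-py) and set a TTL on entries.
CACHE = {}

# Hypothetical product table standing in for the real database.
PRODUCTS_DB = {"sku-1": {"name": "Widget", "price": 999}}

def fetch_product_from_db(sku):
    # Placeholder for a real database query.
    return PRODUCTS_DB.get(sku)

def get_product(sku):
    # 1. Check the cache first.
    if sku in CACHE:
        return CACHE[sku]
    # 2. On a cache miss, load the data from the database...
    product = fetch_product_from_db(sku)
    # 3. ...and store it in the cache for future reads.
    if product is not None:
        CACHE[sku] = product
    return product
```

The first call to `get_product("sku-1")` misses the cache and hits the database; subsequent calls are served from memory, which is what takes the read pressure off the database cluster during a traffic spike.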
There are other caching strategies and operational considerations when deploying a caching layer at scale. For example, with…