Caching is a technique used in computer science and web development to store and reuse previously computed or fetched data to improve performance and reduce latency. There are several caching techniques, each designed for specific use cases and scenarios. Here are some of the most common caching techniques:
1. Memory Cache:
- In-Memory Cache: Data is stored in the system’s main memory (RAM). This is the fastest form of caching but is limited by the amount of available memory.
- Distributed Memory Cache: Data is stored in a distributed network of in-memory caches, often used in a clustered or cloud-based environment for high availability.
2. Page Cache:
- Web Page Caching: Entire web pages are cached, including HTML, CSS, JavaScript, and images. This reduces server load and improves page load times for users.
3. Object Cache:
- Object Caching: Objects, such as database query results, API responses, or serialized data, are cached in memory to reduce the need for expensive calculations or database queries.
4. Content Delivery Network (CDN):
- CDN Caching: CDNs cache static assets (e.g., images, stylesheets) on distributed edge servers, reducing the latency and load on the origin server.
5. Database Caching:
- Query Result Caching: The results of frequently executed database queries are cached to avoid redundant database access.
- Database Table Caching: Entire database tables or parts of tables are cached to reduce database load.
6. Object-Relational Mapping (ORM) Caching:
- ORM Result Caching: Result sets from ORM queries (e.g., Hibernate, Entity Framework) are cached to improve data access performance.
7. Full-Page Caching:
- Full-Page Caching: Entire rendered web pages are cached, including both static and dynamic content, to reduce the load on the server and speed up page rendering.
8. Proxy Server Caching:
- Proxy Caching: Intermediate proxy servers, such as reverse proxies, cache content to serve cached versions to clients, reducing server load.
9. Content Fragment Caching:
- Content Fragment Caching: Smaller pieces of content, such as specific sections of a webpage, are cached to improve load times for frequently accessed portions of a page.
10. Opcode Caching:
- Opcode Caching: In PHP and similar interpreted languages, opcode caching stores compiled bytecode in memory to avoid recompiling scripts on each request.
11. Browser Caching:
- Browser Caching: Web browsers cache resources like images, stylesheets, and scripts locally, so they don’t need to be re-downloaded on subsequent visits, improving page load times.
12. Session Caching:
- Session Caching: Session data for user sessions is cached to reduce the load on the session store, which is often a database.
13. CDN Page Caching:
- CDN Page Caching: CDNs can cache entire HTML pages, serving cached versions to users based on their location, improving page load times.
14. Lazy Loading:
- Lazy Loading: Instead of loading all content upfront, some resources (e.g., images) are loaded only when they are needed, reducing initial load times.
The choice of caching technique depends on the specific use case, the type of data being cached, and the goals of performance optimisation. Often, a combination of caching techniques is used to achieve the best results for a given application or system.
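To make the in-memory object caching idea concrete, here is a minimal sketch of a cache with a per-entry time-to-live (TTL), written in plain Python. The `TTLCache` class and the `user:42` key are purely illustrative; a production system would more likely use a library or a distributed store such as Redis or Memcached.

```python
import time

class TTLCache:
    """A minimal in-memory object cache with a per-entry time-to-live."""

    def __init__(self, ttl_seconds=60):
        self._ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # cache miss
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # entry has expired; treat as a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self._ttl)

# Usage: check the cache before doing expensive work
# (the dict below stands in for a database query or API call).
cache = TTLCache(ttl_seconds=30)
if cache.get("user:42") is None:
    cache.set("user:42", {"id": 42, "name": "Ada"})
print(cache.get("user:42"))
```

The TTL guards against serving stale data indefinitely: an expired entry is simply dropped and refetched on the next access.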
Caching Policies:
Caches are managed by caching policies, or eviction algorithms, that determine how items are stored, retrieved, and evicted when the cache reaches capacity. Here are some common caching policies:
1. FIFO (First-In-First-Out):
- In FIFO caching, the first item added to the cache is the first one to be removed when the cache reaches its capacity. It follows a queue-like behaviour.
2. LIFO (Last-In-First-Out):
- LIFO caching, also known as stack caching, removes the most recently added item first. It follows a stack-like behaviour.
3. LRU (Least Recently Used):
- LRU caching removes the least recently used item when the cache is full. It’s based on the idea that if an item hasn’t been accessed recently, it’s less likely to be used again soon.
4. LFU (Least Frequently Used):
- LFU caching removes the least frequently used item when the cache is full. It counts how often each item is accessed and evicts the one with the lowest access frequency.
5. MRU (Most Recently Used):
- MRU caching removes the most recently used item when the cache is full. It’s the opposite of LRU caching.
6. Random Replacement:
- Random replacement policy selects a cached item randomly for eviction when the cache is full. It doesn’t consider access patterns or usage history.
7. 2Q (Two-Queue):
- The 2Q caching strategy maintains two queues: a FIFO queue for items seen only once and a main LRU queue for items accessed more than once. New items enter the FIFO queue and are promoted to the LRU queue if they are accessed again; eviction targets the FIFO queue first, which keeps one-off accesses from flushing out the working set.
8. ARC (Adaptive Replacement Cache):
- ARC is an adaptive policy that balances recency and frequency: it splits the cache into a list of items used once and a list of items used repeatedly, and dynamically adjusts how much space each list gets based on recent hits (tracked via "ghost" lists of recently evicted keys). It aims to combine the strengths of LRU and LFU to improve cache hit rates.
9. Clock (or Second Chance):
- Clock caching is a variation of FIFO that uses a circular buffer. Items are given a “second chance” before they are evicted, based on a reference bit that is set when an item is accessed.
10. MQ (Multi-Queue):
- Multi-Queue caching maintains multiple queues of varying priorities. Items are placed in different queues based on their access patterns and are evicted from lower-priority queues first.
The choice of caching policy depends on the specific use case and the access patterns of the data. Policies like LRU and LFU work well when accesses show recency or frequency locality, while random replacement may be preferred when no access pattern is known or when simplicity matters. The right policy, combined with the right caching mechanism, optimises performance and resource utilisation for a given application or system.
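The LRU policy described above can be sketched in a few lines with Python's `collections.OrderedDict`, which remembers insertion order and lets us move a key to the end on each access. The class name `LRUCache` is just for this example.

```python
from collections import OrderedDict

class LRUCache:
    """Least-Recently-Used cache: evicts the item unused for the longest time."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # oldest entries first, newest last

    def get(self, key):
        if key not in self._items:
            return None  # miss
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used

# Usage: with capacity 2, adding a third item evicts the stalest one.
cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now the most recently used
cache.put("c", 3)      # evicts "b", the least recently used
print(cache.get("b"))  # → None
print(cache.get("a"))  # → 1
```

Swapping `popitem(last=False)` for `popitem(last=True)` and dropping the `move_to_end` calls would turn this same skeleton into a LIFO cache, which shows how small the difference between eviction policies can be in code.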