<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Saksham Malhotra on Medium]]></title>
        <description><![CDATA[Stories by Saksham Malhotra on Medium]]></description>
        <link>https://medium.com/@communicate.saksham?source=rss-91623b5127df------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*xxrQEj8ZTm781SZq5BkQPw.png</url>
            <title>Stories by Saksham Malhotra on Medium</title>
            <link>https://medium.com/@communicate.saksham?source=rss-91623b5127df------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 10:07:51 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@communicate.saksham/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Denormalization in DBMS: A Path Toward Scalability Often Overlooked]]></title>
            <link>https://medium.com/@communicate.saksham/denormalization-in-dbms-a-path-toward-scalability-often-overlooked-d3ada2fec3a1?source=rss-91623b5127df------2</link>
            <guid isPermaLink="false">https://medium.com/p/d3ada2fec3a1</guid>
            <category><![CDATA[scalability]]></category>
            <category><![CDATA[database]]></category>
            <category><![CDATA[data]]></category>
            <category><![CDATA[system-design-interview]]></category>
            <category><![CDATA[computer-science]]></category>
            <dc:creator><![CDATA[Saksham Malhotra]]></dc:creator>
            <pubDate>Sun, 26 Jan 2025 13:48:22 GMT</pubDate>
            <atom:updated>2025-01-26T13:48:22.563Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*9Iiw-bUpjr_yql-S" /><figcaption>Photo by <a href="https://unsplash.com/@campaign_creators?utm_source=medium&amp;utm_medium=referral">Campaign Creators</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>In database design, normalization is widely regarded as the standard approach. It organizes data to eliminate redundancy, enhance data integrity, and optimize storage efficiency. However, another approach, known as <strong>denormalization</strong>, often goes unnoticed despite its potential to boost system scalability.</p><p>Denormalization deliberately introduces redundancy into database structures to balance storage efficiency against performance. This article explores the concept, its significance in achieving scalability, and why many developers shy away from it.</p><h3>What is Denormalization?</h3><p>Denormalization is the process of reorganizing database structures by combining data from multiple normalized tables into a single table or restructuring the schema. This reduces the complexity of joins and optimizes read performance, particularly in high-demand systems.</p><h3>Benefits of Denormalization</h3><h4>1. Improved Performance for Read-Heavy Operations</h4><p>Many modern applications are read-heavy, with far more data retrievals than updates. Denormalization reduces the reliance on joins, significantly speeding up query performance.</p><h4>2. Enhanced Scalability</h4><p>As data volumes grow, normalized databases often struggle to meet real-time application demands. Denormalization restructures data to align better with access patterns, enabling distributed systems to handle large-scale workloads.</p><h4>3. Simpler Queries</h4><p>Queries in denormalized databases are often more straightforward, as they require fewer joins. 
This simplicity improves developer efficiency and reduces query execution times.</p><h4>4. Reduced Overhead for Distributed Systems</h4><p>In distributed environments, such as those using Cassandra or DynamoDB, joins can be computationally expensive. Denormalization minimizes this overhead, boosting system efficiency.</p><h3>Comparison</h3><p>Here is an example of a <strong>normalized</strong> schema:</p><pre>CREATE TABLE Customers (<br>    CustomerID INT PRIMARY KEY,<br>    Name VARCHAR(100),<br>    Email VARCHAR(100)<br>);<br><br>CREATE TABLE Products (<br>    ProductID INT PRIMARY KEY,<br>    Name VARCHAR(100),<br>    Price DECIMAL(10, 2)<br>);<br><br>CREATE TABLE Orders (<br>    OrderID INT PRIMARY KEY,<br>    CustomerID INT,<br>    ProductID INT,<br>    Quantity INT,<br>    OrderDate DATETIME,<br>    FOREIGN KEY (CustomerID) REFERENCES Customers(CustomerID),<br>    FOREIGN KEY (ProductID) REFERENCES Products(ProductID)<br>);</pre><blockquote>With this schema, fetching all orders with customer and product details would require a <strong>JOIN</strong>:</blockquote><pre>SELECT <br>    o.OrderID, <br>    c.Name AS CustomerName, <br>    p.Name AS ProductName, <br>    p.Price, <br>    o.Quantity, <br>    o.OrderDate<br>FROM Orders o<br>JOIN Customers c ON o.CustomerID = c.CustomerID<br>JOIN Products p ON o.ProductID = p.ProductID;</pre><p>In a denormalized schema, by contrast, we can combine frequently accessed data into a single table to eliminate the need for joins:</p><pre>CREATE TABLE Orders (<br>    OrderID INT PRIMARY KEY,<br>    CustomerName VARCHAR(100),<br>    CustomerEmail VARCHAR(100),<br>    ProductName VARCHAR(100),<br>    ProductPrice DECIMAL(10, 2),<br>    Quantity INT,<br>    OrderDate DATETIME<br>);</pre><blockquote>Now, retrieving all orders becomes simpler and faster:</blockquote><pre>SELECT <br>    OrderID, <br>    CustomerName, <br>    ProductName, <br>    ProductPrice, <br>    Quantity, <br>    OrderDate<br>FROM 
Orders;</pre><p><strong>Trade-Off</strong>: While the query is faster, data redundancy is introduced (e.g., customer and product data are repeated for every order).</p><h3>When to Consider Denormalization</h3><ol><li><strong>Read-Heavy Applications</strong>: Systems like analytics platforms, e-commerce websites, or social media feeds benefit greatly from denormalized designs.</li><li><strong>Distributed Databases</strong>: NoSQL databases such as MongoDB and DynamoDB often rely on denormalization to optimize scalability and performance.</li><li><strong>Caching Use Cases</strong>: Denormalized schemas work well with caching systems, reducing latency for frequently accessed data.</li><li><strong>Real-Time Systems</strong>: Applications with real-time data requirements, like monitoring dashboards or event-driven architectures, find denormalization especially useful.</li></ol><h3>Balancing Normalization and Denormalization</h3><p>Adopting denormalization doesn’t mean abandoning normalization entirely. A hybrid approach is often ideal: maintain normalized data for core operations and use denormalization for performance-critical, read-heavy workflows. Features like materialized views and optimized indexing in modern databases can support this balance.</p><h3>Conclusion</h3><p>Denormalization offers a valuable strategy for designing scalable, high-performance systems. While it introduces redundancy, this trade-off often results in faster query performance, making it an excellent choice for read-heavy or distributed systems.</p><p>However, embracing denormalization requires a shift in mindset — from adhering strictly to theoretical best practices to considering practical needs. 
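</p><p>The read-path difference is easy to see even outside SQL. The sketch below (plain JavaScript, with all table and field names illustrative, mirroring the schemas above) contrasts a normalized read, which needs three lookups plus an in-memory join, with a denormalized read, which is a single lookup at the cost of duplicated customer and product data:</p>

```javascript
// Normalized layout: orders reference customers and products by ID.
const customers = new Map([[1, { name: "Alice", email: "alice@example.com" }]]);
const products = new Map([[10, { name: "Keyboard", price: 49.99 }]]);
const orders = new Map([[100, { customerId: 1, productId: 10, quantity: 2 }]]);

// Reading one order needs two extra lookups: the application-level "join".
function readOrderNormalized(orderId) {
  const order = orders.get(orderId);
  const customer = customers.get(order.customerId);
  const product = products.get(order.productId);
  return {
    orderId,
    customerName: customer.name,
    productName: product.name,
    price: product.price,
    quantity: order.quantity,
  };
}

// Denormalized layout: customer and product fields are copied into each
// order, so a read is a single lookup; the duplication is the trade-off.
const ordersDenormalized = new Map([
  [100, { customerName: "Alice", productName: "Keyboard", price: 49.99, quantity: 2 }],
]);

function readOrderDenormalized(orderId) {
  return { orderId, ...ordersDenormalized.get(orderId) };
}
```

<p>In a real database the extra lookups are joins or network round-trips, which is exactly the work the denormalized layout removes from the read path.</p><p>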
By thoughtfully incorporating denormalization, developers can design databases that meet modern application demands without sacrificing performance.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d3ada2fec3a1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Understanding Rendezvous Hashing: A Scalable and Balanced Approach]]></title>
            <link>https://medium.com/@communicate.saksham/understanding-rendezvous-hashing-a-scalable-and-balanced-approach-b5dbff84d0d9?source=rss-91623b5127df------2</link>
            <guid isPermaLink="false">https://medium.com/p/b5dbff84d0d9</guid>
            <category><![CDATA[hashing-algorithm]]></category>
            <category><![CDATA[hashing]]></category>
            <category><![CDATA[system-design-concepts]]></category>
            <category><![CDATA[scalable-server-solutions]]></category>
            <category><![CDATA[system-design-interview]]></category>
            <dc:creator><![CDATA[Saksham Malhotra]]></dc:creator>
            <pubDate>Wed, 15 Jan 2025 17:15:11 GMT</pubDate>
            <atom:updated>2025-01-15T17:15:11.121Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*YAuWAhudECJXAAxd" /><figcaption>Photo by <a href="https://unsplash.com/@jentheodore?utm_source=medium&amp;utm_medium=referral">Jen Theodore</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>In the world of distributed systems, efficient load distribution and fault tolerance are critical for performance and scalability. Hashing plays a central role in achieving these goals, with <strong>consistent hashing</strong> being a popular choice. However, <strong>rendezvous hashing</strong> (also known as highest-random-weight hashing) offers a compelling alternative with unique advantages in scalability and balancing. Let’s dive into how rendezvous hashing works and why it might be a better approach in certain scenarios.</p><h4>How it Works</h4><ol><li><strong>Assign Weights to Resources and Nodes:</strong> For each resource (e.g., a request or data shard) and each node, a hash function generates a pseudo-random weight.</li><li><strong>Select the Node with the Highest Weight:</strong> The resource is assigned to the node with the highest weight for it.</li><li><strong>Recompute for Changes:</strong> If a node joins or leaves the system, weights are recalculated, and resources are reassigned to the node with the next-highest weight. This minimizes disruption compared to consistent hashing.</li></ol><h3>Advantages of Rendezvous Hashing</h3><p><strong>1. Improved Scalability</strong></p><ul><li>Rendezvous hashing scales seamlessly with the addition of new nodes. The algorithm’s simplicity ensures that adding nodes requires recalculating weights only for affected resources, avoiding the ring-based complexity of consistent hashing.</li></ul><p><strong>2. 
Minimal Resource Reallocation</strong></p><ul><li>When nodes are added or removed, only the resources mapped to the affected nodes need to be reassigned. This results in lower data movement compared to consistent hashing, especially for large-scale systems.</li></ul><p><strong>3. Better Load Balancing</strong></p><ul><li>The highest-weight selection ensures an even distribution of resources across nodes. Combined with a good hash function, it reduces the chances of hot spots or overloaded nodes.</li></ul><p><strong>4. Simpler Implementation</strong></p><ul><li>Unlike consistent hashing, which requires maintaining a virtual node structure, rendezvous hashing operates without additional metadata or structural dependencies.</li></ul><h3>Use Cases for Rendezvous Hashing</h3><ol><li><strong>Distributed Caches</strong>: Systems like Memcached can use rendezvous hashing for distributing keys across cache servers, ensuring minimal disruption during scaling events.</li><li><strong>Task Scheduling</strong>: Assign tasks to workers in distributed processing frameworks like Apache Spark or Kubernetes, where balancing workload dynamically is crucial.</li><li><strong>Content Delivery Networks (CDNs)</strong>: Efficiently map requests to the nearest or most available edge servers.</li><li><strong>Load Balancers</strong>: Map incoming requests to backend servers based on capacity and availability.</li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b5dbff84d0d9" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Load Balancers: What, Why, and How]]></title>
            <link>https://medium.com/@communicate.saksham/load-balancers-what-why-and-how-12c0dae0c25b?source=rss-91623b5127df------2</link>
            <guid isPermaLink="false">https://medium.com/p/12c0dae0c25b</guid>
            <category><![CDATA[algorithms]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[load-balancer]]></category>
            <category><![CDATA[system-design-interview]]></category>
            <category><![CDATA[system-design-concepts]]></category>
            <dc:creator><![CDATA[Saksham Malhotra]]></dc:creator>
            <pubDate>Thu, 09 Jan 2025 12:01:36 GMT</pubDate>
            <atom:updated>2025-01-09T12:01:36.300Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ppwjcwkNnjAfwAC1" /><figcaption>Photo by <a href="https://unsplash.com/@revolok?utm_source=medium&amp;utm_medium=referral">Blake Goodell</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>In modern distributed systems, the efficient handling of traffic and resource allocation is critical for ensuring performance, reliability, and scalability. Load balancers play a pivotal role in achieving these objectives by distributing incoming network traffic across multiple servers or resources. This document explores the need for load balancers, their types, and the algorithms they employ to optimize traffic distribution.</p><h3>Why Are Load Balancers Needed?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8gi-wKg6wAXp1vQSRL2cNQ.png" /><figcaption>Need of load balancers</figcaption></figure><ol><li>Scalability: Load balancers enable systems to handle increasing amounts of traffic by distributing requests across multiple servers, thus avoiding bottlenecks.</li><li>High Availability &amp; Redundancy: By rerouting traffic from failed servers to operational ones, load balancers ensure minimal downtime and continuous service availability.</li><li>Efficient Resource Utilization: They optimize resource usage by evenly distributing workloads, preventing overloading of certain servers while others remain underutilized.</li><li>Improved Performance: Balancing the load reduces response times and enhances user experience.</li><li>Security: Load balancers can incorporate security features such as SSL termination, DDoS mitigation, and request filtering.</li></ol><blockquote>In the world of distributed systems, a load balancer is not just a tool — it’s the silent hero that ensures smooth operations behind the scenes</blockquote><h3>Types of Load Balancers</h3><p>Load balancers can be categorized 
based on where they operate in the network stack and their deployment architecture.</p><h4>Based on Network Stack Layers:</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NhCEXOKyU5mOqfUEZ8XDRQ.png" /><figcaption>Load balancers based on Network Layer</figcaption></figure><ol><li><strong>Layer 4 Load Balancers: </strong>Operate at the transport layer (TCP/UDP). Route traffic based on IP address and port. Example: HAProxy in Layer 4 mode.</li><li><strong>Layer 7 Load Balancers: </strong>Operate at the application layer (HTTP/HTTPS). Make routing decisions based on HTTP headers, URLs, or cookies. Example: NGINX, AWS Application Load Balancer.</li></ol><h4>Based on Deployment Architecture:</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0HcCn1J0nGbLL6B-qmzVJA.png" /><figcaption>Load balancers based on deployment</figcaption></figure><ol><li><strong>Hardware Load Balancers: </strong>Specialized appliances designed for high performance and enterprise-level applications. Example: F5 BIG-IP, Citrix ADC.</li><li><strong>Software Load Balancers: </strong>Flexible and cost-effective solutions deployed on commodity hardware or cloud infrastructure. Example: NGINX, HAProxy.</li><li><strong>Cloud-Based Load Balancers: </strong>Managed solutions offered by cloud providers, eliminating the need for on-premises hardware. Example: AWS Elastic Load Balancer, Azure Load Balancer.</li></ol><blockquote>Performance isn’t about one powerful server; it’s about many servers working seamlessly together, orchestrated by a smart load balancer</blockquote><h3>Algorithms Used in Load Balancers</h3><p>Load balancers use various algorithms to decide how traffic is distributed. 
Some commonly used algorithms include:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FeJU44QWkX0JBImyJdKsdw.png" /><figcaption>Algorithms for Load balancers</figcaption></figure><ol><li><strong>Round Robin: </strong>Distributes requests to each server in the pool sequentially in a circular manner. This algorithm is simple and effective for environments with evenly distributed workloads and similar server capacities.</li><li><strong>Least Connections: </strong>Sends traffic to the server with the fewest active connections. This algorithm is particularly useful in scenarios where requests differ significantly in resource consumption.</li><li><strong>IP Hash: </strong>Maps incoming requests to servers based on a hash of the client’s IP address. This ensures session persistence, meaning a client is consistently routed to the same server, which is critical for stateful applications.</li><li><strong>Weighted Round Robin: </strong>Extends the Round Robin algorithm by assigning weights to servers based on their capacity. Servers with higher capacity receive a proportionally larger share of the traffic.</li><li><strong>Random: </strong>Randomly selects a server to handle each request. This algorithm is suitable for basic scenarios where server capacities are uniform and traffic patterns are simple.</li><li><strong>Geographic Proximity: </strong>Routes requests to the server closest to the client’s geographical location. This reduces latency and improves user experience, particularly for globally distributed systems.</li></ol><h3>Conclusion</h3><p>Load balancers are integral to modern IT infrastructures, ensuring systems remain scalable, reliable, and efficient. By understanding the various types of load balancers and their underlying algorithms, organizations can make informed decisions to meet their specific requirements. 
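</p><p>To make the algorithms concrete, here is a minimal JavaScript sketch of two of the strategies above, round robin and least connections (illustrative only, not the implementation of any particular load balancer):</p>

```javascript
// Round robin: hand requests to servers in a fixed circular order.
function makeRoundRobin(servers) {
  let next = 0;
  return () => servers[next++ % servers.length];
}

// Least connections: pick the server with the fewest active connections.
// `connections` maps server name -> current active connection count.
function leastConnections(connections) {
  let best = null;
  for (const [server, count] of connections) {
    if (best === null || count < connections.get(best)) best = server;
  }
  return best;
}

const pickNext = makeRoundRobin(["app-1", "app-2", "app-3"]);
pickNext(); // "app-1"
pickNext(); // "app-2"

leastConnections(new Map([["app-1", 5], ["app-2", 2], ["app-3", 7]])); // "app-2"
```

<p>A weighted round robin variant follows the same shape: repeat higher-capacity servers in the rotation, or track a weight next to each connection count.</p><p>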
As systems continue to evolve, load balancing will remain a cornerstone of robust and high-performing architectures.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=12c0dae0c25b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Unveiling the Magic Behind JavaScript Execution: From Parsing to the Event Loop]]></title>
            <link>https://medium.com/@communicate.saksham/unveiling-the-magic-behind-javascript-execution-from-parsing-to-the-event-loop-9aa6189209fc?source=rss-91623b5127df------2</link>
            <guid isPermaLink="false">https://medium.com/p/9aa6189209fc</guid>
            <category><![CDATA[interview-preparation]]></category>
            <category><![CDATA[event-loop]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[javascript-tips]]></category>
            <category><![CDATA[parser]]></category>
            <dc:creator><![CDATA[Saksham Malhotra]]></dc:creator>
            <pubDate>Sun, 05 Jan 2025 17:16:27 GMT</pubDate>
            <atom:updated>2025-01-05T17:16:27.489Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*NDVXku305gRNT1QD" /><figcaption>Photo by <a href="https://unsplash.com/@wocintechchat?utm_source=medium&amp;utm_medium=referral">Christina @ wocintechchat.com</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>JavaScript powers the web, but have you ever wondered what happens under the hood when you run a simple script? Behind the scenes, your code undergoes a fascinating journey, from parsing to bytecode generation, and finally, to execution. This article breaks down these steps and uncovers how the JavaScript runtime, including the event loop, ensures smooth execution of both synchronous and asynchronous tasks.</p><h3>The Journey Begins: Parsing Your Code</h3><p>When the browser encounters a &lt;script&gt; tag in your HTML:</p><ol><li><strong>Fetching the Script</strong>: For external scripts, the browser fetches the JavaScript file from the specified server or source.</li><li><strong>Byte Stream Decoding</strong>: The retrieved JavaScript file is a byte stream that needs decoding. The <strong>byte stream decoder</strong> generates tokens based on the bytes.</li><li><strong>Abstract Syntax Tree (AST)</strong>: These tokens are passed to a parser, which constructs an Abstract Syntax Tree (AST). During this process:</li></ol><ul><li>The parser creates AST nodes from the tokens.</li><li>Syntax rules are checked to ensure code validity.</li></ul><h3>Ignition Interpreter: Generating Bytecode</h3><p>Once the AST is ready:</p><ol><li><strong>Bytecode Generation</strong>: The <strong>Ignition Interpreter</strong> generates bytecode using the AST.</li><li><strong>Registers</strong>: Registers such as r0, r1, r2, and the accumulator (a0) are used for efficient memory management and operations. 
The a0 register helps locate keys in objects efficiently.</li><li><strong>Optimization</strong>: The generated bytecode is optimized by specialized optimizers to improve performance.</li><li><strong>Bytecode Interpretation</strong>: Finally, the bytecode is executed by the <strong>bytecode interpreter</strong>, bringing your code to life.</li></ol><h3>Inline Caches: A Boost for Performance</h3><p>To make execution faster, <strong>inline caches</strong> store metadata about frequently accessed objects. This reduces lookup times and enhances the interpreter’s efficiency.</p><h3>The Event Loop: Orchestrating JavaScript Execution</h3><p>Once the code is ready to execute, the JavaScript runtime manages the execution process using the <strong>event loop</strong>. Here’s how it works:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/551/1*cz5NdlD36gbiOWKIZV0K0w.png" /><figcaption>Components of JS runtime</figcaption></figure><h4>Components of the JavaScript Runtime</h4><ol><li><strong>Call Stack</strong>: Tracks the execution context of running functions in a Last-In-First-Out (LIFO) manner.</li><li><strong>Web APIs</strong>: Asynchronous tasks like setTimeout, fetch, and DOM events are handled here.</li><li><strong>Task Queue</strong>: Holds tasks such as setTimeout callbacks and event listeners.</li><li><strong>Microtask Queue</strong>: Dedicated to promises and other high-priority asynchronous operations (e.g., .then, .catch, await, queueMicrotask, MutationObserver).</li></ol><h4>Event Loop Workflow</h4><ul><li>The event loop continuously checks the <strong>call stack</strong> for running tasks.</li><li>If the call stack is empty, it first drains the <strong>microtask queue</strong>; only then does it move one task from the <strong>task queue</strong> to the call stack.</li></ul><h4>Microtasks vs. Task Queue</h4><p>Microtasks always take priority over the task queue. 
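</p><p>This priority is easy to observe. Running the snippet below in a browser console or Node records the promise callback before the setTimeout callback, even though both are scheduled before the synchronous script finishes:</p>

```javascript
const order = [];

order.push("script start");

// Task queue: this callback waits until the microtask queue is drained.
setTimeout(() => order.push("task: setTimeout callback"), 0);

// Microtask queue: this runs as soon as the call stack is empty.
Promise.resolve().then(() => order.push("microtask: promise callback"));

order.push("script end");

// Final contents of `order`:
// script start, script end, microtask: promise callback, task: setTimeout callback
```

<p>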
For example:</p><ol><li>A fetch request resolves to a promise.</li><li>The promise’s .then() callback is added to the <strong>microtask queue</strong>.</li><li>Before processing tasks from the task queue, the event loop ensures all microtasks are completed.</li></ol><h4>Freezing Points and Chaining Microtasks</h4><p>One microtask can schedule another, potentially creating a “freezing point” for tasks in the task queue. This behavior ensures microtasks maintain a high priority but can also delay lower-priority tasks.</p><h3>Promisifying Callback APIs</h3><p>For better control over asynchronous behavior, callback-based APIs can be converted to promises. Here’s an example:</p><pre>const promisify = (fn) =&gt; {<br>  return (...args) =&gt;<br>    new Promise((resolve, reject) =&gt; {<br>      fn(...args, (err, result) =&gt; {<br>        if (err) return reject(err);<br>        resolve(result);<br>      });<br>    });<br>};</pre><h3>Conclusion</h3><p>From parsing to bytecode optimization, and the seamless management of tasks and microtasks by the event loop, JavaScript’s execution engine is a marvel of modern computing. Whether you’re a beginner or an experienced developer, understanding these underlying mechanics can help you write more efficient and optimized code.</p><p>So the next time you write a setTimeout or fetch request, you&#39;ll know exactly how JavaScript works its magic behind the scenes.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9aa6189209fc" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>