<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Tai Titans on Medium]]></title>
        <description><![CDATA[Stories by Tai Titans on Medium]]></description>
        <link>https://medium.com/@taititansofficial?source=rss-449d0fcbb03d------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*XoNaZlRh9aSeMELZPRFqTw.jpeg</url>
            <title>Stories by Tai Titans on Medium</title>
            <link>https://medium.com/@taititansofficial?source=rss-449d0fcbb03d------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 18:22:49 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@taititansofficial/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Beyond the Benchmarks: Understanding fasthttp vs. net/http Architecture]]></title>
            <link>https://medium.com/@taititansofficial/beyond-the-benchmarks-understanding-fasthttp-vs-net-http-architecture-44d435e7f342?source=rss-449d0fcbb03d------2</link>
            <guid isPermaLink="false">https://medium.com/p/44d435e7f342</guid>
            <category><![CDATA[backend]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[http-request]]></category>
            <category><![CDATA[fiber]]></category>
            <category><![CDATA[golang]]></category>
            <dc:creator><![CDATA[Tai Titans]]></dc:creator>
            <pubDate>Sun, 04 Jan 2026 14:46:29 GMT</pubDate>
            <atom:updated>2026-01-04T14:46:29.106Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LQ6w169_Rf91t5dX9qEnVA.png" /></figure><p>If you are coming from Node.js (Express) to Go, <strong>Fiber</strong> feels like home. It’s ergonomic, incredibly fast, and consistently tops the benchmark charts.</p><p>But as senior engineers, we know there is no such thing as a “free lunch” in software architecture. Fiber achieves its blazing speed not by magic, but by making specific trade-offs in memory management — specifically, by building on top of fasthttp instead of Go’s standard net/http.</p><p>If you don’t understand these trade-offs, you risk introducing subtle race conditions and memory leaks into your production environment. Today, let’s look under the hood at the <strong>Lifecycle of a Fiber Context</strong> and why it matters.</p><h3>The Engine: Standard Lib vs. Fasthttp</h3><p>To understand Fiber, we must understand the engine powering it: <strong>fasthttp</strong>.</p><h3>1. net/http: The Safe Sedan</h3><p>In the standard Go library (net/http), every incoming request spawns a new goroutine and <strong>allocates new objects</strong> for the Request and Response.</p><ul><li><strong>Pros:</strong> It’s safe. Once a request is handled, the memory is garbage collected. You can pass the request object to background goroutines without fear.</li><li><strong>Cons:</strong> Heavy allocation creates pressure on the Garbage Collector (GC) under high load.</li></ul><h3>2. fasthttp: The F1 Car</h3><p>Fasthttp (and thus Fiber) prioritizes <strong>zero memory allocations in hot paths</strong>.</p><p>It achieves this by using sync.Pool to reuse request contexts. When a request comes in, Fiber grabs an existing, empty context from a pool. When the response is sent, that context is reset (wiped clean) and thrown back into the pool to serve the next user.</p><p>This approach drastically reduces GC overhead, but it introduces a critical constraint: <strong>the Context object is mutable and not thread-safe.</strong></p><h3>The Trap: Context Lifetime &amp; Goroutines</h3><p>Here is the most common mistake developers make when moving to Fiber.</p><p>Imagine you want to process a job in the background (e.g., sending an email or logging analytics) after responding to the user.</p><h3>The Wrong Way (a Production Incident Waiting to Happen)</h3><pre>func Handler(c *fiber.Ctx) error {<br>    // DANGER: passing the Context pointer to a goroutine<br>    go func(ctx *fiber.Ctx) {<br>        time.Sleep(1 * time.Second) // simulate async work<br>        fmt.Println(&quot;User ID:&quot;, ctx.Params(&quot;id&quot;))<br>    }(c)<br>    return c.SendString(&quot;Response Sent&quot;)<br>}</pre><p><strong>Why this fails:</strong></p><ul><li>The handler returns “Response Sent”.</li><li>Fiber assumes the request is done. It <strong>resets</strong> c and puts it back in the pool.</li><li>A new request comes in. Fiber assigns that <strong>same</strong> c object to the new user.</li><li>Your goroutine wakes up after 1 second and reads ctx.Params(&quot;id&quot;). <strong>Surprise! You are now reading the ID of the NEW user, not the original one.</strong></li></ul>
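<p>To see the mechanics behind this, here is a minimal, self-contained sketch of the sync.Pool pattern that fasthttp builds on. It is illustrative only (a stand-in type, not fasthttp’s actual code), but it reproduces the exact failure mode: a pointer held past Put() can observe the next request’s data.</p><pre>package main<br><br>import (<br>    &quot;fmt&quot;<br>    &quot;sync&quot;<br>)<br><br>// requestCtx is a stand-in for fasthttp&#39;s pooled request context.<br>type requestCtx struct {<br>    userID string<br>}<br><br>// reset wipes the context so it can serve the next request.<br>func (c *requestCtx) reset() { c.userID = &quot;&quot; }<br><br>var ctxPool = sync.Pool{<br>    New: func() any { return new(requestCtx) },<br>}<br><br>func main() {<br>    // Request 1: acquire a context from the pool and use it.<br>    c := ctxPool.Get().(*requestCtx)<br>    c.userID = &quot;alice&quot;<br><br>    // Handler returns: the framework resets c and recycles it.<br>    c.reset()<br>    ctxPool.Put(c)<br><br>    // Request 2: the pool may hand out the very same object.<br>    c2 := ctxPool.Get().(*requestCtx)<br>    c2.userID = &quot;bob&quot;<br><br>    // A goroutine still holding the old pointer now sees the new data.<br>    fmt.Println(c.userID) // usually prints &quot;bob&quot;: c and c2 alias the same struct<br>}</pre>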
<p>This is a classic <strong>Race Condition</strong> that is extremely hard to debug because it only happens under load.</p><h3>The Senior Fix (Copy by Value)</h3><p>To fix this, you must extract the <strong>data</strong> you need before spawning the goroutine.</p><pre>func Handler(c *fiber.Ctx) error {<br>    // SAFE: extract the data (primitive types/copies) first<br>    userID := c.Params(&quot;id&quot;)<br>    userIP := c.IP()<br>    go func(id, ip string) {<br>        time.Sleep(1 * time.Second)<br>        // Use the local variables, not the Context<br>        fmt.Println(&quot;User ID:&quot;, id, &quot;IP:&quot;, ip)<br>    }(userID, userIP)<br>    return c.SendString(&quot;Response Sent&quot;)<br>}</pre><h3>The Confusion: ctx.Context() vs. ctx.UserContext()</h3><p>Another point of confusion arises when integrating Fiber with libraries like GORM, Redis, or OpenTelemetry, which expect a standard context.Context.</p><p>Fiber provides two methods that look similar but behave very differently:</p><h3>1. ctx.Context() (The Engine Context)</h3><p>This returns the underlying *fasthttp.RequestCtx.</p><ul><li>It implements the context.Context interface but is <strong>tied to the HTTP request lifecycle</strong>.</li><li><strong>Warning:</strong> It resets when the request ends. Do not use it for long-running background tasks.</li></ul><h3>2. ctx.UserContext() (The Logical Context)</h3><p>This is where you store your <strong>Business Logic Context</strong>.</p><ul><li>It returns a standard context.Context (usually context.Background() or one injected by middleware).</li><li><strong>Best Practice:</strong> Use it when calling your Service layer or Databases, and when propagating Trace IDs (OpenTelemetry).</li></ul><p><strong>Example pattern:</strong></p><pre>// Controller layer<br>func (l *AuthController) Login(c *fiber.Ctx) error {<br>    // Parse the body<br>    req := new(LoginRequest)<br>    if err := c.BodyParser(req); err != nil {<br>        return err<br>    }<br>    // Pass the &#39;UserContext&#39; to the service layer.<br>    // This keeps GORM, tracing, and timeouts working end to end.<br>    result, err := l.service.Login(c.UserContext(), req)<br>    if err != nil {<br>        return err<br>    }<br>    return c.JSON(result)<br>}</pre><h3>Conclusion: When to Use What?</h3><p>Choosing between standard Go and Fiber isn’t just about syntax; it’s about architectural requirements.</p><ul><li><strong>Choose standard net/http (or Gin/Echo) if:</strong> you prioritize absolute safety, need HTTP/2 or gRPC support out of the box, and want 100% compatibility with the standard Go ecosystem without wrappers.</li><li><strong>Choose Fiber if:</strong> you are building a high-performance Gateway or a Real-time system (AdTech, Gaming), or you simply love the Express.js-style API — and <strong>you are disciplined enough to manage memory lifetimes correctly.</strong></li></ul><p>Fiber is a Ferrari. It’s fast, but you need to know how to drive it.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Beyond RAM: A Deep Dive into the Redis Network Stack]]></title>
            <link>https://medium.com/@taititansofficial/beyond-ram-a-deep-dive-into-the-redis-network-stack-6804e689a28e?source=rss-449d0fcbb03d------2</link>
            <guid isPermaLink="false">https://medium.com/p/6804e689a28e</guid>
            <category><![CDATA[backend]]></category>
            <category><![CDATA[redis]]></category>
            <category><![CDATA[golang]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Tai Titans]]></dc:creator>
            <pubDate>Sat, 03 Jan 2026 16:29:26 GMT</pubDate>
            <atom:updated>2026-01-03T16:29:26.868Z</atom:updated>
<content:encoded><![CDATA[<p>When senior engineers discuss why Redis is fast, the conversation is almost always cut short by a single phrase: <em>“It’s in-memory.”</em></p><p>While storing data in RAM is undoubtedly the foundation of its sub-millisecond performance, it is not the whole story. If you write a naive, single-threaded TCP server in C that just stores a value in a variable, it won’t necessarily match Redis’s real-world throughput under load.</p><p>In modern distributed architecture, the bottleneck is rarely CPU cycles or memory bandwidth — it is <strong>network I/O</strong>.</p><p>To truly understand Redis performance, we must look beyond the memory hierarchy and dive into the communication layer. We need to understand how the backend application “talks” to Redis across the wire. This article explores the architectural pillars of the Redis network stack: the protocol, the event loop, and latency-busting techniques like Pipelining.</p><h3>1. RESP: The Art of Parsing Efficiency</h3><p>Most modern backend communication relies on HTTP/JSON or gRPC (Protobuf). While robust, these carry significant overhead in parsing and metadata. Redis bypasses this by using its own protocol: <strong>RESP (Redis Serialization Protocol)</strong>.</p><p>RESP is a bridge between human readability and binary efficiency. It is text-based (you can read it via Telnet), but it is painstakingly designed to be parsed by a machine with minimal CPU cycles.</p><p>Consider a standard HTTP JSON request. The server must parse headers, handle encoding, and scan the JSON string character-by-character to find delimiters like braces {} or quotes &quot;&quot;.</p><p>Now, look at how a Redis client sends a SET user:100 &quot;john_doe&quot; command in raw RESP:</p><pre>*3\r\n             (Array of 3 elements)<br>$3\r\nSET\r\n      (Bulk String, length 3)<br>$8\r\nuser:100\r\n (Bulk String, length 8)<br>$8\r\njohn_doe\r\n (Bulk String, length 8)</pre><p><strong>Why this is faster:</strong></p><ul><li><strong>Predictive Parsing:</strong> By reading the length prefix (e.g., $8), the Redis server knows exactly how many bytes to read next. It does not need to scan the payload looking for a terminator character.</li><li><strong>Zero Metadata Overhead:</strong> No HTTP headers, no cookies. Just raw data structures.</li></ul><p>The parsing logic is effectively O(N), where N is the number of bytes, with extremely low constant factors compared to JSON parsing.</p><h3>2. The Myth of the Single Thread (I/O Multiplexing)</h3><p>One of the most common technical interview questions is: <em>“Redis is single-threaded. How does it handle 10,000 concurrent connections?”</em></p><p>If Redis used a standard blocking I/O model, one slow client connection could halt the entire server. If it used a thread-per-connection model, the overhead of context switching and memory usage for 10,000 threads would crush the CPU.</p><p>The answer lies in <strong>I/O Multiplexing</strong> and the <strong>Event Loop</strong>.</p><p>Redis utilizes the operating system’s non-blocking I/O primitives — specifically epoll (on Linux) or kqueue (on BSD/macOS).</p><h3>The Mechanism:</h3><ul><li><strong>Registration:</strong> When a backend application connects to Redis, Redis registers that socket descriptor with the kernel via epoll_ctl.</li><li><strong>The Wait:</strong> The single Redis thread enters an event loop where it calls epoll_wait. It effectively says to the OS kernel: <em>&quot;Here is a list of 10,000 sockets. Wake me up only when one or more of them actually have data ready to be read.&quot;</em></li><li><strong>Execution:</strong> When a packet arrives, the kernel wakes up Redis. Redis iterates through only the “ready” sockets, processes the commands (which is fast because it’s RAM), writes the response to an output buffer, and goes back to sleep. (See the sketch below.)</li></ul>
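<p>To make the loop tangible, here is a heavily simplified Go sketch of the same epoll pattern, written against the standard library’s raw syscall bindings. It is Linux-only, skips error handling, and does no real RESP parsing; Redis’s actual event loop (the ae library, in C) is far more careful, but the accept/register/wait/serve shape is identical.</p><pre>//go:build linux<br><br>package main<br><br>import (<br>    &quot;net&quot;<br>    &quot;syscall&quot;<br>)<br><br>func main() {<br>    // Listen on a TCP port and extract the raw file descriptor.<br>    ln, _ := net.Listen(&quot;tcp&quot;, &quot;:6380&quot;)<br>    f, _ := ln.(*net.TCPListener).File()<br>    listenFd := int(f.Fd())<br>    syscall.SetNonblock(listenFd, true)<br><br>    // Create the epoll instance and register the listening socket.<br>    epfd, _ := syscall.EpollCreate1(0)<br>    syscall.EpollCtl(epfd, syscall.EPOLL_CTL_ADD, listenFd,<br>        &amp;syscall.EpollEvent{Events: syscall.EPOLLIN, Fd: int32(listenFd)})<br><br>    events := make([]syscall.EpollEvent, 128)<br>    buf := make([]byte, 4096)<br>    for {<br>        // Sleep until the kernel reports sockets with data ready.<br>        n, _ := syscall.EpollWait(epfd, events, -1)<br>        for i := 0; i &lt; n; i++ {<br>            fd := int(events[i].Fd)<br>            if fd == listenFd {<br>                // New client: accept it and add it to the epoll set.<br>                connFd, _, _ := syscall.Accept(listenFd)<br>                syscall.SetNonblock(connFd, true)<br>                syscall.EpollCtl(epfd, syscall.EPOLL_CTL_ADD, connFd,<br>                    &amp;syscall.EpollEvent{Events: syscall.EPOLLIN, Fd: int32(connFd)})<br>                continue<br>            }<br>            // Ready client: read the command, reply, go back to waiting.<br>            if nread, _ := syscall.Read(fd, buf); nread &lt;= 0 {<br>                syscall.Close(fd) // client disconnected<br>                continue<br>            }<br>            syscall.Write(fd, []byte(&quot;+OK\r\n&quot;)) // canned RESP reply<br>        }<br>    }<br>}</pre><p>Notice that the only place this single thread ever blocks is epoll_wait; no individual client can stall it.</p>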
<p>This architecture maximizes CPU cache locality and eliminates race conditions entirely, allowing a single core to saturate a network link.</p><h3>3. Connection Pooling and TCP Tuning</h3><p>In high-throughput systems, the <strong>TCP 3-Way Handshake</strong> adds latency you cannot afford on every request.</p><p>Robust Redis clients (like go-redis for Go, or Jedis for Java) use <strong>Connection Pooling</strong>: they establish persistent sockets at startup and reuse them, avoiding both the OS overhead of creating file descriptors and the network latency of a handshake for every command.</p><h3>Disabling Nagle’s Algorithm (TCP_NODELAY)</h3><p>Furthermore, Redis clients typically set the TCP_NODELAY flag on the socket.</p><p>By default, TCP uses <strong>Nagle’s Algorithm</strong> to improve network efficiency by buffering small packets and combining them into larger segments. This is great for bandwidth but terrible for latency. If your backend sends a tiny command like GET x, Nagle might hold the transmission for milliseconds while waiting for more data.</p><p>Setting TCP_NODELAY disables this behavior, forcing the kernel to push data onto the wire immediately.</p>
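<p>In Go, this knob lives directly on the standard library’s TCP connection. A minimal sketch follows; note that Go’s net package already enables TCP_NODELAY by default, so the explicit call mainly documents intent (passing false would re-enable Nagle’s buffering for bulk-transfer workloads):</p><pre>package main<br><br>import (<br>    &quot;log&quot;<br>    &quot;net&quot;<br>)<br><br>func main() {<br>    conn, err := net.Dial(&quot;tcp&quot;, &quot;localhost:6379&quot;)<br>    if err != nil {<br>        log.Fatal(err)<br>    }<br>    defer conn.Close()<br><br>    // Disable Nagle&#39;s Algorithm so small commands hit the wire immediately.<br>    // (Go defaults to no-delay; SetNoDelay(false) would turn Nagle back on.)<br>    if tcp, ok := conn.(*net.TCPConn); ok {<br>        if err := tcp.SetNoDelay(true); err != nil {<br>            log.Fatal(err)<br>        }<br>    }<br>}</pre>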
<h3>4. The Game Changer: Pipelining</h3><p>We have optimized the protocol and the socket handling. But we still face an immutable rule of physics: the speed of light.</p><p>Even if Redis took zero time to process a command, the <strong>Round Trip Time (RTT)</strong> between your backend and the Redis server dictates the maximum throughput of a synchronous model. If the RTT is 1ms, you are mathematically capped at 1,000 requests per second per connection.</p><p>Redis Pipelining changes this equation. It allows the client to send multiple commands to the socket <em>without waiting for the previous responses</em>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PJDz8QZo8-07wIyzSc9A4Q.png" /></figure><p>The client buffers commands and flushes them to the network in one go. Redis reads the batch, executes the commands sequentially, and sends back a combined response. This reduces the total cost from N x RTT to effectively 1 x RTT.</p><h3>Benchmarking Pipelining in Go</h3><p>Let’s prove this with a benchmark using the excellent github.com/redis/go-redis/v9 library. We will compare setting 10,000 keys sequentially versus using a pipeline.</p><p><em>(Note: ensure you have a Redis instance running locally on port 6379 for this test.)</em></p><pre>package main<br><br>import (<br>    &quot;context&quot;<br>    &quot;fmt&quot;<br>    &quot;testing&quot;<br><br>    &quot;github.com/redis/go-redis/v9&quot;<br>)<br><br>// getClient sets up a Redis client for the benchmarks.<br>func getClient() *redis.Client {<br>    return redis.NewClient(&amp;redis.Options{<br>        Addr:     &quot;localhost:6379&quot;,<br>        Password: &quot;&quot;, // no password set<br>        DB:       0,  // use default DB<br>    })<br>}<br><br>// Benchmark 1: sequential requests (request-reply model)<br>func BenchmarkSequentialSet(b *testing.B) {<br>    rdb := getClient()<br>    defer rdb.Close()<br>    ctx := context.Background()<br>    // Optionally flush the DB for a clean state: rdb.FlushDB(ctx)<br><br>    b.ResetTimer()<br>    for i := 0; i &lt; b.N; i++ {<br>        key := fmt.Sprintf(&quot;seq_key:%d&quot;, i)<br>        if err := rdb.Set(ctx, key, &quot;value&quot;, 0).Err(); err != nil {<br>            b.Fatal(err)<br>        }<br>    }<br>}<br><br>// Benchmark 2: pipelined requests<br>func BenchmarkPipelinedSet(b *testing.B) {<br>    rdb := getClient()<br>    defer rdb.Close()<br>    ctx := context.Background()<br><br>    b.ResetTimer()<br>    // Create a pipeline; commands are queued in memory, not sent yet.<br>    pipe := rdb.Pipeline()<br>    for i := 0; i &lt; b.N; i++ {<br>        key := fmt.Sprintf(&quot;pipe_key:%d&quot;, i)<br>        pipe.Set(ctx, key, &quot;value&quot;, 0)<br>        // Flush in batches of 100 to bound the client-side buffer.<br>        if (i+1)%100 == 0 {<br>            if _, err := pipe.Exec(ctx); err != nil {<br>                b.Fatal(err)<br>            }<br>        }<br>    }<br>    // Execute any remaining queued commands.<br>    if _, err := pipe.Exec(ctx); err != nil &amp;&amp; err != redis.Nil {<br>        b.Fatal(err) // redis.Nil is ignored in case the pipe was empty<br>    }<br>}<br><br>// To run this benchmark:<br>// 1. Save as main_test.go<br>// 2. Run: go test -bench=. -benchmem</pre><h4>The Results</h4><p>Running this on a local machine (where the RTT is already very low, around 0.1ms) shows a dramatic difference. On a real network where the RTT is 1ms or more, the gap is even starker.</p><pre>BenchmarkSequentialSet-10      45616      26046 ns/op      208 B/op      6 allocs/op<br>BenchmarkPipelinedSet-10      396820       3012 ns/op      120 B/op      3 allocs/op</pre><p><strong>Interpretation:</strong></p><ul><li><strong>Sequential:</strong> ~26,000 nanoseconds (26µs) per operation.</li><li><strong>Pipelined:</strong> ~3,000 nanoseconds (3µs) per operation.</li></ul><p>Pipelining delivered roughly an <strong>8.6x speedup</strong> even on localhost. By batching network calls, we shifted the bottleneck from network latency back to raw CPU processing power.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Go Context Deep Dive: Anatomy of Immutability and Memory Management]]></title>
            <link>https://medium.com/@taititansofficial/go-context-deep-dive-anatomy-of-immutability-and-memory-management-c49068fe9b1b?source=rss-449d0fcbb03d------2</link>
            <guid isPermaLink="false">https://medium.com/p/c49068fe9b1b</guid>
            <category><![CDATA[systems-thinking]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[golang]]></category>
            <category><![CDATA[backend]]></category>
            <dc:creator><![CDATA[Tai Titans]]></dc:creator>
            <pubDate>Sat, 13 Dec 2025 06:18:25 GMT</pubDate>
            <atom:updated>2025-12-13T06:18:25.481Z</atom:updated>
<content:encoded><![CDATA[<p>As backend engineers working with Go, we interact with context.Context in almost every function signature. It is the lifeblood of request scoping, cancellation, and value propagation.</p><p>However, I often see middleware code that looks like this:</p><pre>func SetCtx(r *ghttp.Request, key, value interface{}) {<br>    // 1. Get the current context<br>    ctx := r.Context()<br><br>    // 2. &quot;Set&quot; the value?<br>    ctx = context.WithValue(ctx, key, value)<br><br>    // 3. Update the request<br>    r.SetCtx(ctx)<br>}</pre><p>At first glance, this looks like a simple setter. But if you look closer, you realize context doesn&#39;t behave like a mutable map. You can&#39;t just &quot;add&quot; a key to it. You have to overwrite the ctx variable and explicitly re-assign it to the request.</p><p>Why? Because <strong>Go Context is immutable.</strong></p><p>To truly master Go concurrency patterns, we need to look beyond the syntax and understand what actually happens in memory. Let’s dive deep into the data structures and lifecycle of the Context.</p><h3>1. The Onion Model: The Reverse Linked List (For Values)</h3><p>The most important mental model to establish is that <strong>a Context is not a bucket; it is a Linked List.</strong></p><p>When you call context.WithValue(parent, key, value), Go does not modify the parent. Instead, it allocates a completely new struct (a child) that holds:</p><ul><li>The new key and value.</li><li>A pointer <strong>backwards</strong> to the parent.</li></ul><p>I like to visualize this as an onion, or a reverse linked list:</p><ul><li><strong>Layer 1 (Root):</strong> context.Background() (the empty core).</li><li><strong>Layer 2:</strong> Middleware A adds a RequestID (points to Root).</li><li><strong>Layer 3:</strong> Your SetCtx function adds UserData (points to Layer 2).</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cQJHQXeqRp1x9M-xhCn3Hg.png" /></figure><h3>How Lookup Works (Bottom-Up)</h3><p>When you call ctx.Value(key), Go traverses this list <strong>from bottom to top</strong> (Child -&gt; Parent -&gt; Root):</p><ul><li>It checks the current node.</li><li>If the key isn’t found, it follows the pointer to the parent.</li><li>It repeats until it hits the Root (which returns nil).</li></ul><p>This explains why we must re-assign the context to the Request (r.SetCtx(ctx)). If we didn&#39;t, the Request would still be pointing to the &quot;Layer 2&quot; node, and our new data would be effectively lost in memory, detached from the request flow.</p>
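<p>You can watch this bottom-up traversal with nothing but the standard library. Here is a minimal sketch (the key type and values are invented for illustration):</p><pre>package main<br><br>import (<br>    &quot;context&quot;<br>    &quot;fmt&quot;<br>)<br><br>// A private key type avoids collisions, as the context docs recommend.<br>type ctxKey string<br><br>func main() {<br>    // Each WithValue call allocates a NEW node pointing back to its parent.<br>    root := context.Background()                            // Layer 1: the empty core<br>    ctx := context.WithValue(root, ctxKey(&quot;requestID&quot;), 42) // Layer 2<br>    ctx = context.WithValue(ctx, ctxKey(&quot;user&quot;), &quot;alice&quot;)   // Layer 3<br><br>    // Value() walks bottom-up: Layer 3 -&gt; Layer 2 -&gt; root.<br>    fmt.Println(ctx.Value(ctxKey(&quot;user&quot;)))      // alice (found immediately)<br>    fmt.Println(ctx.Value(ctxKey(&quot;requestID&quot;))) // 42 (found one layer up)<br>    fmt.Println(ctx.Value(ctxKey(&quot;missing&quot;)))   // &lt;nil&gt; (walked to the root)<br><br>    // The parent is untouched: children never leak data upwards.<br>    fmt.Println(root.Value(ctxKey(&quot;user&quot;))) // &lt;nil&gt;<br>}</pre>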
<h3>2. The Hidden Tree: Top-Down Signal Propagation (For Cancellation)</h3><p>Here is where the design gets fascinating. While WithValue builds a Linked List for data, WithCancel (and its siblings WithTimeout and WithDeadline) simultaneously builds a completely different structure: <strong>a Tree</strong>.</p><p>This duality exists because the requirements are opposite:</p><ul><li><strong>Data Lookup</strong> asks: <em>“What does my parent have?”</em> (Bottom-Up)</li><li><strong>Cancellation</strong> commands: <em>“Shut down all my children!”</em> (Top-Down)</li></ul><h3>The cancelCtx Struct</h3><p>When you call ctx, cancel := context.WithCancel(parent), Go creates a specialized cancelCtx struct. Unlike the simple valueCtx, this struct carries heavier machinery, including a children map:</p><pre>type cancelCtx struct {<br>    Context<br>    // ...<br>    children map[canceler]struct{} // the &quot;Tree&quot; structure<br>    // ...<br>}</pre><h3>The Domino Effect</h3><p>When a cancellable context is created, it registers itself with its nearest cancellable ancestor. This builds a tree in which each parent knows about all of its cancellable children.</p><p>When cancel() is called, it:</p><ul><li>Closes the Done channel.</li><li>Iterates over the children map.</li><li>Calls cancel() on every child.</li></ul><p>This creates a <strong>Top-Down propagation</strong>. If the root context (e.g., the HTTP request) times out, the signal cascades down the tree, instantly killing database queries, HTTP calls, and calculation workers spawned from that request.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pRDuwLjBSgSgILlKkWKKcg.png" /></figure><h3>3. Under the Hood: The 4 Hidden Structs</h3><p>If we look at the Go source code (src/context/context.go), Context is just an interface. Behind the scenes, the runtime uses four private structs to implement it. Understanding these reveals the hybrid nature of the library:</p><ul><li><strong>emptyCtx</strong>: This is context.Background() and context.TODO(). It’s essentially an empty integer. It has no parent and is the root of every context tree.</li><li><strong>valueCtx</strong>: Created by WithValue. Acts as a node in a <strong>Singly Linked List</strong>. Optimized for immutability.</li><li><strong>cancelCtx</strong>: Created by WithCancel. Uses a <strong>Map/Tree</strong> structure to track children for cancellation propagation.</li><li><strong>timerCtx</strong>: Created by WithTimeout and WithDeadline. Embeds cancelCtx and adds a timer that automatically fires the cancellation signal.</li></ul><p><strong>Key Takeaway:</strong> Context is a hybrid data structure. It behaves like a Linked List for data retrieval (Value()) and like a Tree for signal propagation (Done()).</p><h3>4. Lifecycle and Garbage Collection</h3><p>A common question is: <em>“When does this context disappear? Does creating so many layers cause memory leaks?”</em></p><p>In a standard HTTP server (net/http, or frameworks like Gin and GoFrame), the http.Request struct holds the pointer to the “youngest” (most recent) context node:</p><pre>type Request struct {<br>    // ...<br>    ctx context.Context<br>}</pre><p>The lifecycle is tied directly to the Request object:</p><ul><li><strong>Creation:</strong> When a request hits the server, a root context is created.</li><li><strong>Expansion:</strong> As the request passes through middleware, new valueCtx nodes are allocated on the heap, forming the linked list.</li><li><strong>Cleanup:</strong> Once the handler returns and the HTTP response is sent, the Request object becomes unreachable. Since the Request was the only thing holding the pointer to the tail of the context chain, the entire chain becomes unreachable.</li></ul><p>The <strong>Go Garbage Collector (GC)</strong> then sees these structs as “garbage” and reclaims the memory.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aDvI8nWUyFNcmybPPQELMA.png" /></figure><p><strong>Warning:</strong> This is why you should be careful about passing a request context to a background goroutine. If the request finishes, the context is technically “done” (cancelled), and accessing values from a stale request context is an anti-pattern.</p>
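<p>The cascade is easy to observe directly. In this minimal sketch (the variable names are invented for illustration), cancelling a request-level context instantly fires the Done channel of a context derived from it, exactly as a dying HTTP request would abort an in-flight database call:</p><pre>package main<br><br>import (<br>    &quot;context&quot;<br>    &quot;errors&quot;<br>    &quot;fmt&quot;<br>    &quot;time&quot;<br>)<br><br>func main() {<br>    // Simulate a request-scoped context tree.<br>    reqCtx, cancelReq := context.WithCancel(context.Background())<br><br>    // A child for a &quot;database call&quot;, with its own safety timeout.<br>    dbCtx, cancelDB := context.WithTimeout(reqCtx, 5*time.Second)<br>    defer cancelDB()<br><br>    // A worker waits on the child&#39;s Done channel.<br>    done := make(chan struct{})<br>    go func() {<br>        &lt;-dbCtx.Done() // closes the instant ANY ancestor is cancelled<br>        fmt.Println(&quot;db call aborted:&quot;, dbCtx.Err())<br>        close(done)<br>    }()<br><br>    // &quot;The request finished&quot;: the signal cascades down the tree.<br>    cancelReq()<br>    &lt;-done<br><br>    // The child reports the parent&#39;s cancellation, not its own timeout.<br>    fmt.Println(errors.Is(dbCtx.Err(), context.Canceled)) // true<br>}</pre>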
<h3>5. Why This Design?</h3><p>Why didn’t the Go team just use a mutable, thread-safe map?</p><ul><li><strong>Lock-Free Concurrency:</strong> In a highly concurrent backend, locking (a Mutex) is expensive. Because Context is immutable, multiple goroutines can read from the same context simultaneously <strong>without any locks</strong>.</li><li><strong>Scope Isolation:</strong> If a function adds a value to the context, that value is only visible to that function and its children. It does not “pollute” the context for the caller (the parent). This prevents the kind of side effects that are notorious for causing bugs in other languages.</li></ul><h3>Summary</h3><p>The next time you write ctx = context.WithValue(ctx, k, v), remember:</p><ul><li>You aren’t changing the object; you are extending a <strong>Linked List</strong>.</li><li>Simultaneously, you might be participating in a <strong>Cancellation Tree</strong>.</li><li>The ctx travels with the Request pointer, and the <strong>Garbage Collector</strong> handles the cleanup automatically once the Request is done.</li></ul><p>Understanding these internals allows us to write more efficient Go code and avoid pitfalls like context leaks or over-stuffed context chains.</p>]]></content:encoded>
        </item>
    </channel>
</rss>