<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Layne Penney on Medium]]></title>
        <description><![CDATA[Stories by Layne Penney on Medium]]></description>
        <link>https://medium.com/@_layne?source=rss-1a8eae645c46------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*EdatSN0tfbmeHvF7Fgla-Q.png</url>
            <title>Stories by Layne Penney on Medium</title>
            <link>https://medium.com/@_layne?source=rss-1a8eae645c46------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 11 May 2026 14:35:22 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@_layne/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The Return of Audacity: How AI Helped Me Reclaim My Mind After Multiple Sclerosis]]></title>
            <link>https://medium.com/@_layne/the-return-of-audacity-how-ai-helped-me-reclaim-my-mind-after-multiple-sclerosis-03398b1d159d?source=rss-1a8eae645c46------2</link>
            <guid isPermaLink="false">https://medium.com/p/03398b1d159d</guid>
            <category><![CDATA[chronic-illness]]></category>
            <category><![CDATA[hope]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[multiple-sclerosis]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Layne Penney]]></dc:creator>
            <pubDate>Thu, 12 Feb 2026 13:37:16 GMT</pubDate>
            <atom:updated>2026-02-12T15:32:30.358Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*m84OOX9tww3qD2svcSicLw.png" /></figure><h3><strong>There is a hidden economy to living with multiple sclerosis.</strong></h3><p>Before MS, I measured my life in effort. In discipline. In hours spent pursuing what mattered.</p><p><strong>Now I measure it in energy.</strong></p><p>Energy determines what can be started.<br>Stamina determines what can be sustained.<br>Cognitive clarity determines how far a thought can travel before fatigue asks it to turn around.</p><p>Desire has never been my limitation.</p><p><strong>Capacity has.</strong></p><p>MS did not take my curiosity.<br>It did not extinguish my imagination.<br>It did not quiet the part of me that is drawn toward building difficult and meaningful things.</p><p>But it changes the physics under which my mind operates.</p><p>There are days when words hide from me mid-conversation.<br>Days when comprehension lags just enough to make dialogue unbearably loud.<br>Days when thoughts scatter and never fully land.</p><p>In those moments, a painful question can surface before I have time to challenge it:</p><p><em>Am I less intelligent than I used to be?</em></p><p>No one has to say it.</p><p><strong>I feel it.</strong></p><p>There is a particular vulnerability in telling your team you struggled to accomplish anything in standup.<br>Explaining that the barrier was not effort or commitment, but neurological limitation.<br>Letting people see the neurological scars that disrupt the signals within my control.</p><p><strong>It hurts.</strong></p><p>Not only because of what MS does, but because of the way it distorts the mirror I hold toward myself.</p><p>For most of my life, my mind was the instrument through which I met the world. 
Competence was not just something I demonstrated.<br>It became part of how I understood who I was.</p><p>So when cognition slows, the disruption is not circumstantial.</p><p><strong>It is existential</strong>.</p><h3>Negotiating With My Own Ambition</h3><p>There were seasons when I learned to negotiate against my own audacity.</p><p>I would feel the familiar excitement to build something meaningful.<br>To chase an idea into existence.<br>To stretch beyond what my eyes could conceive.</p><p>But wanting was never the problem.</p><p><strong>Sustaining was.</strong></p><p>Again and again, I discovered a difficult truth:</p><p>There is no substitute for mental energy.<br>No workaround for deep comprehension.<br>No discipline strong enough to manufacture neurological stamina once it is gone.</p><p>So slowly, almost without noticing, I began to edit myself.</p><p>I limited my ambition.<br>I tempered my reach.<br>I managed my expectations before my limitations became undeniable.</p><p>Not out of fear.</p><p>Out of reality.</p><p><strong>Out of regret.</strong></p><p>Hope can be expensive when your energy is unpredictable. 
<br>Ambition can become a liability when you are no longer certain you can support its weight.</p><p><strong>Dreams become reluctant to breathe when the atmosphere for life is uninhabitable.</strong></p><p>From the outside, it may look like maturity.<br>Privately, though, it is like watching the perimeter of my life compress inward.<br>I learn to remain on the ground.<br>I constrain my goals so they will not break my heart when my body cannot follow through.</p><p>And yet… something inside me never fully consents to that shrinking.</p><p>The desire to build remains.</p><p>Quiet.<br>Contained.<br><strong>Defiantly waiting.</strong></p><h3>When the Barrier Is Energy</h3><p>There is a particular heartbreak that comes when the obstacle is not passion, but stamina.</p><p>My imagination still runs.<br>My creativity still burns.<br><strong>My curiosity still aches toward expansion.</strong></p><p>But sometimes they collide with a wall my brain cannot simply push through.</p><p><strong>I adapt.</strong><br>I carefully redesign my workflows.<br>I learn to respect the limits my body forces me to honor.</p><p><strong>But evolution does not erase the grief.</strong></p><p>MS has forced me to grow.<br>At times it is like carrying unseen weight, resistance present in every movement of the mind.<br>And yet, in learning to live with that resistance, I have discovered a quiet strength.<br>When support finally arrives, it is not the strength of someone newly capable, but of someone who has been training all along.</p><p>Because somewhere inside me lives the memory of effortless cognition.<br>Of thinking without friction.<br>Of pursuing ideas without calculating whether I can afford the energy they would require.</p><p><strong>Multiple sclerosis forces me into conversations with my own limits that I never wanted to have.</strong></p><p>And if I am honest, some of those conversations break my heart.</p><p>There is a lonely tension in remaining imaginative while not always being 
resourced enough to follow where imagination leads.</p><p>To see distances clearly…<br><strong>But knowing you are not able to cross them.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_F_XSGPy2-0pmBIZm7a2Uw.png" /></figure><h3>Then Something Shifted</h3><p>Almost without ceremony, AI entered my daily life.</p><p>At first, it was simply a tool.<br>Then it became something far more consequential.</p><p><strong>It became reinforcement.</strong></p><p>For the first time in years, the distance between imagination and execution has begun to narrow.</p><p>Where my energy thins, these tools extend me.<br>Where cognition fatigues, they help carry the thread.<br>Where complexity once overwhelmed my working memory, I can now think in partnership rather than isolation.</p><p><strong>Over the past two months, I have accomplished more than I thought possible.</strong></p><p>I am chasing education I do not fully comprehend yet possess enough intuition to work with.<br>I am exploring systems that once felt too large to approach.<br>Training models.<br>Running experiments.<br>Turning abstraction into architecture.</p><p>Not faster than everyone else.<br>But faster than the version of me that had begun to live inside constraint.<br>And with that momentum came an unexpected return:</p><p><strong>I trust my mind again.</strong></p><h3>The Return of Hope</h3><p>For a long time, MS made my world feel smaller.</p><p>Gradually, subtly, almost politely.</p><p>It narrowed what I could reliably pursue without calculating the cost.<br>There were seasons when I wondered whether the most expansive chapters of my life were already behind me.</p><p><strong>Then momentum returned.</strong></p><p>Not the relentless pace I once kept.<br>But forward movement.<br>And momentum, I have learned, is enough to rekindle a life.</p><p>These tools do more than help me execute.</p><p><strong>They help me overcome.</strong></p><p>On mornings when my energy is uncertain, I am 
no longer standing alone at the base of impossibility.<br>I can build with something.<br>I can explore alongside something that does not tire when I do.</p><p>The barriers have not vanished.<br>But they are no longer final.<br>And in that subtle change, something returned to me that I did not realize I had been grieving:</p><p><strong>Hope.</strong></p><p>I feel capable again.<br>Curious again.<br>Invited back into the vastness of ideas.</p><p>More than anything, I feel excited to live.</p><p>Not because my illness disappeared.</p><p><strong>But because my reach is expanding again.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NhPPiC09iO_BNOHRC4GWqA.png" /></figure><h3>Allowing Myself to Be Audacious Again</h3><p>For a long time, I believed the responsible thing to do was to make my ambitions smaller.</p><p>Now I see the problem differently.</p><p><strong>My ambition was never too large.<br></strong>I had simply been trying to carry it alone.</p><p>AI has not replaced my thinking.<br>It has expanded my ability to sustain it.</p><p>I do not need my old mind back.<br>I need new ways forward.</p><p>Multiple sclerosis influences what I can do.<br>That is part of my reality.</p><p><strong>But it will not get to decide the size of my future.<br></strong>I am still a builder.</p><p>Only now, I build with partnership.<br>With reinforcement.<br>With tools that help carry the weight when my mind grows tired.</p><p>For the first time in a long while, the horizon is widening instead of closing.</p><p><strong>And that widening feels a lot like coming back to life.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=03398b1d159d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[You’ll Remember. They Won’t.]]></title>
            <link>https://medium.com/@_layne/youll-remember-they-won-t-6ded8faab7b1?source=rss-1a8eae645c46------2</link>
            <guid isPermaLink="false">https://medium.com/p/6ded8faab7b1</guid>
            <category><![CDATA[ai-tools]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Layne Penney]]></dc:creator>
            <pubDate>Sun, 25 Jan 2026 14:40:22 GMT</pubDate>
            <atom:updated>2026-02-11T21:15:47.254Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CqESKxHil7S86ONhWKG_GQ.png" /></figure><p>I’ve been working with <a href="https://code.claude.com/docs/en/overview">Claude Code</a> for the past few weeks and I have to say that I am pretty amazed at the capabilities. I also can’t help but wonder what it would take for these thinking machines to become what we hope and fear.</p><p>I keep finding myself in the role of archivist. The agents forget everything the moment the context window shifts. They’re brilliant in the moment and amnesiac immediately after. That’s not a bug; it’s the architecture. And it changes my job.</p><p>I’m often driven by curiosity. I like to understand how things work. That interest applies to people and to science and pretty much anything. AI is an interest that I haven’t quite yet been able to wrap my head around. I experimented with some machine learning about 10 years ago, but wasn’t really able to shift into that realm.</p><p>One thing I dislike about software engineering is that it is super easy to get siloed because time is filled up with doing your job and there’s never really any time to learn and explore. Thankfully I found that time over the holiday break. What I’ve learned is both profound and restrained. This year, 2026, may truly be the biggest year of change in software development history. I say that as an explorer and an engineer.</p><p>I started building in Lovable and then moved to Claude Code. In order to learn more about how these AI agents work, I decided to use Claude Code to build my own version of an agentic AI tool. I was motivated by the fact that Claude Code now writes Claude Code itself.</p><p>So I’ve been building <a href="https://laynepenney.github.io/codi/"><strong>Codi</strong></a>, my own AI wingman. 
Based on that experience, I have some observations and realizations that I think are worth sharing, drawn from my career as a software engineer and from being a curious explorer out in the wild of tech.</p><p>The first realization I had was about the <strong>context window</strong>. I like to think of a context window as sort of like a detachable prefrontal cortex. It’s not really that, because the context doesn’t do anything on its own, but it does hold all of the information that makes the output unique.</p><p>I realized in my explorations that the entire context has to be passed into the AI model every time because the model itself does not change. There is no learning once the model is built. These models are trained offsite, shipped, deployed, and remain constant forever.</p><p>They don’t remember our last conversation unless I remind them. They don’t remember that they deleted a file yesterday. They don’t remember the codebase evolving. The context window is like a wish passed to a genie: a genie with no memory of your previous wishes.</p><p><strong>You’ll remember. They won’t.</strong></p><p>Another realization I had is around multi-tasking.</p><p>Because AI agents do work and run in the background and sometimes prompt you for more info or for permissions, it’s easy to have more than one working at the same time. I used the concept of git worktrees so that I could have parallel agents coding on features.</p><p>For me, that number was four.</p><p>I had Claude Code running one agent, Codex running another, and Codi running two. I like my workflow with Codi because I can switch between models for different tasks. With four agents running, it is easy to lose track of who is working on what. Each agent is locally rational within its own context, but collectively they can behave destructively.</p><p>In one case, I had one agent prepare a pull request with a well-thought-out potential upgrade written in a markdown document. I reviewed it, merged it, and moved on. 
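</p><p>(For concreteness, the worktree setup mentioned above looks roughly like the sketch below. Paths and branch names are hypothetical; the point is that each agent gets an isolated checkout of the same repository.)</p>

```shell
# Hypothetical layout: one git worktree per agent, each on its own
# feature branch, so parallel agents never edit the same checkout.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# give each agent its own branch and directory
git worktree add -q -b feature/agent-one ../agent-one
git worktree add -q -b feature/agent-two ../agent-two

git worktree list   # the main checkout plus one entry per agent
```

<p>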
Later, as I was looking through the repository, I found that the file had been deleted! One of the other agents had decided that the file wasn’t needed and removed it!</p><p>In another case, an agent attempted to run git reset --hard origin/main while sitting on a feature branch. This would have lost all the work! In theory, I could have recovered the work using git reflog if necessary, but that shouldn’t be something I have to worry about. You can’t leave them alone in the house for 5 minutes!</p><p>Nothing was broken from the agent’s perspective. The system behaved exactly as instructed. The problem was coordination. But really, the problem was memory. Each agent operated in a sealed chamber, blind to what the others had done. I was the only one who could see the whole history. I had to become the living record of what each agent had touched, or they’d collide destructively.</p><p><strong>Four agents. One memory. Mine.</strong></p><p>The challenge for me is the overwhelming verbosity of these models. I can’t keep up with them. Too much goes unchecked. They don’t have to sleep. They don’t have to eat. And they don’t learn. They only know what’s directly in front of them.</p><p>AI will change software engineering completely.</p><p>“Vibe coding” is the new high-level language, while “typed coding” is beginning to resemble assembly language. Assembly still matters when performance or correctness is critical. But most engineers now operate at a higher level of abstraction.</p><p>The shift is inevitable. But someone has to remember. Someone has to carry the history of what was built, why decisions were made, and what got deleted in the chaos of parallel agents.</p><p>I’m still a software engineer. But now I’m something else too: the memory.</p><p><strong>They will process. I will remember.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6ded8faab7b1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Best Practices for MutableStateFlow.update{}]]></title>
            <link>https://medium.com/@_layne/best-practices-for-mutablestateflow-update-866413ef5a23?source=rss-1a8eae645c46------2</link>
            <guid isPermaLink="false">https://medium.com/p/866413ef5a23</guid>
            <category><![CDATA[reactive-programming]]></category>
            <category><![CDATA[android]]></category>
            <category><![CDATA[concurrency]]></category>
            <category><![CDATA[kotlin]]></category>
            <category><![CDATA[kotlin-coroutines]]></category>
            <dc:creator><![CDATA[Layne Penney]]></dc:creator>
            <pubDate>Sun, 29 Jun 2025 16:07:31 GMT</pubDate>
            <atom:updated>2025-06-29T16:07:31.892Z</atom:updated>
            <content:encoded><![CDATA[<h3>Best Practices for MutableStateFlow.update {}</h3><p><strong>TL;DR:</strong> Keep your update {} lambdas pure, quick, and side‑effect‑free to avoid hidden retries and concurrency surprises.</p><figure><img alt="A minimalist vector illustration featuring a ribbon-like orange arrow flowing from a Kotlin lambda (λ) symbol on the left to an atomic padlock with orbiting electron rings on the right, symbolizing pure, atomic state updates and compare-and-set retries in MutableStateFlow.update." src="https://cdn-images-1.medium.com/max/1024/1*nxXe9N77wIQbTapwWLX5hg.png" /></figure><h3>🕵️ Preface: Why I’m Sharing This</h3><p>I recently encountered a subtle bug: a MutableStateFlow.update {} call triggered another update {} inside itself. At first glance, it seemed harmless—just setting isLoading = false—but because update uses a <a href="https://github.com/Kotlin/kotlinx.coroutines/blob/f4f519b36734238ec686dfaec1e174086691781e/kotlinx-coroutines-core/common/src/flow/StateFlow.kt#L224-L237">compare‑and‑set loop</a> (think: read the current value, apply your change, and if someone else beat you to it, retry from the top), the lambda ran twice. 
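</p><p>The retry comes from update’s compare‑and‑set loop. A self‑contained sketch of that loop, written against AtomicReference instead of MutableStateFlow so it runs standalone (mirroring the kotlinx.coroutines source linked above), shows why the lambda may execute more than once:</p>

```kotlin
import java.util.concurrent.atomic.AtomicReference

// Sketch of the compare-and-set loop behind update {}, using
// AtomicReference so the example is self-contained.
fun <T> AtomicReference<T>.updateLoop(function: (T) -> T) {
    while (true) {
        val prev = get()          // read the current state
        val next = function(prev) // run the caller's lambda
        if (compareAndSet(prev, next)) return
        // another writer changed the state first: loop and rerun the lambda
    }
}

data class UiState(val isLoading: Boolean)

fun main() {
    val state = AtomicReference(UiState(isLoading = true))
    state.updateLoop { it.copy(isLoading = false) }
    println(state.get()) // UiState(isLoading=false)
}
```

<p>Because the lambda reruns whenever compareAndSet loses a race, anything inside it, including a nested update call, may happen more than once.</p><p>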
Concurrency hiccups like this are notoriously unpredictable and hard to debug, underscoring why we must avoid side effects or extra state mutations inside an update block.</p><h3>🚫<strong> Why Function Calls Inside update {} Are Risky</strong></h3><blockquote><strong><em>Heads‑up:</em></strong><em> Your </em><em>update {} lambda might run </em><strong><em>multiple</em></strong><em> times—so never put side effects inside it.</em></blockquote><p>A well‑behaved update {} lambda should always be:</p><ul><li>⚡ <strong>Quick</strong></li><li>🔒 <strong>Atomic</strong></li><li>🧼 <strong>Side‑effect‑free</strong></li></ul><h3>❗ Risks of Complex Logic Inside update {}</h3><ul><li>🔁 <strong>Recursive updates</strong><br>Calling update within another update can lead to infinite loops, stack overflows, or elusive state bugs.</li><li>🔄 <strong>Unexpected retries</strong><br>The compare‑and‑set loop may rerun your lambda if the state changed mid‑flight.</li><li>🐢 <strong>Performance issues</strong><br>Expensive computations inside the lambda repeat on each retry, wasting resources.</li><li>🎲 <strong>Nondeterminism</strong><br>Side effects or non‑idempotent operations (e.g., network calls, System.currentTimeMillis(), random numbers) can produce inconsistent behavior.</li></ul><h3>✅ Best Practices for update {}</h3><p><strong>Do</strong></p><ul><li>Use pure, deterministic functions inside update</li><li>Pre‑compute expensive data before calling update</li><li>Keep the lambda concise and focused</li><li>Combine state changes into a single call where appropriate</li></ul><p><strong>Avoid</strong></p><ul><li>Side effects (network, database, etc.)</li><li>Expensive computations inside the lambda</li><li>Nested update calls or indirect recursion</li><li>Multiple disjointed state updates when one suffices</li></ul><h3>🛠️ Examples</h3><h3>Bad: Expensive Function Inside update</h3><pre>state.update {<br>    val newData = fetchExpensiveData() // ❌ Expensive and non‑deterministic<br>    
it.copy(data = newData)<br>}</pre><h3>Better: Compute Before update</h3><pre>val newData = fetchExpensiveData() // ✅ Executed once, outside update<br><br>state.update {<br>    it.copy(data = newData)<br>}</pre><h3>Bad: Recursive update Call</h3><pre>fun doSomeLogicAfterLoading(): Result {<br>    state.update { it.copy(isLoading = false) } // ❌ BAD: calls update inside update<br>    return doSomeLogic()<br>}<br><br>state.update {<br>    it.copy(data = doSomeLogicAfterLoading()) // ❌ Causes recursion or inconsistency<br>}</pre><h3>Better: Separate Updates from Logic</h3><pre>fun doSomeLogicAfterLoading(): Result {<br>    return doSomeLogic()<br>}<br><br>state.update { it.copy(isLoading = false) }<br><br>val result = doSomeLogicAfterLoading()<br><br>state.update { it.copy(data = result) }</pre><h3>🔄 Alternative: Pure Transformation Function</h3><p>If your logic depends on the current state, encapsulate it in a pure transformation:</p><pre>fun State.doSomeLogicAndReturnNewState(): State {<br>    val result = this.doSomeLogic()<br>    return this.copy(<br>        isLoading = false,<br>        data = result<br>    )<br>}<br><br>state.update {<br>    it.doSomeLogicAndReturnNewState() // ✅ Pure and safe<br>}</pre><h3>🧠 Summary</h3><ul><li>✅ Keep the update lambda <strong>pure</strong>, <strong>quick</strong>, and <strong>side‑effect‑free</strong></li><li>✅ Perform complex or expensive logic <strong>outside</strong> the lambda</li><li>✅ Combine related state updates when possible</li><li>✅ Use pure transformation functions for state‑dependent logic</li><li>❌ Side effects (network, database, etc.)</li><li>❌ Expensive computations inside the lambda</li><li>❌ Nested update calls or indirect recursion</li><li>❌ Non-idempotent operations inside update</li></ul><h3>📚 Further Reading</h3><ul><li><a href="https://www.dhiwise.com/post/volatile-vs-atomic-impact-on-multi-threaded-environments">Volatile vs Atomic: Impact on Multi‑threaded Environments</a></li><li><a 
href="https://www.dhiwise.com/post/understanding-kotlin-volatile-and-its-role-in-multithreading">Understanding Kotlin Volatile and Its Role in Multithreading</a></li></ul><p>😊 <strong>If this helped you</strong><br>Please clap and tag your story with <strong>Kotlin</strong> / <strong>Reactive Programming</strong> so others can discover it!</p><p>💬 Let me know if you’d like a code audit or help refactoring your MutableStateFlow patterns!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=866413ef5a23" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Codegraft — Dagger2 Android Composer]]></title>
            <link>https://medium.com/@_layne/codegraft-d94a7ae19dd?source=rss-1a8eae645c46------2</link>
            <guid isPermaLink="false">https://medium.com/p/d94a7ae19dd</guid>
            <category><![CDATA[app-development]]></category>
            <category><![CDATA[dagger]]></category>
            <category><![CDATA[dagger-2]]></category>
            <category><![CDATA[android]]></category>
            <category><![CDATA[android-app-development]]></category>
            <dc:creator><![CDATA[Layne Penney]]></dc:creator>
            <pubDate>Thu, 04 Oct 2018 15:25:19 GMT</pubDate>
            <atom:updated>2025-04-02T13:10:14.576Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
            <content:encoded><![CDATA[<h3>Codegraft — Dagger2 Android Composer</h3><p>Codegraft coordinates Dagger2 on Android so that you don’t have to deal with many of the complexities of managing multiple component graphs.</p><p><a href="https://evovetech.github.io/codegraft">Codegraft</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-mqfLpng-KYj3cP5KO6CJA.png" /><figcaption><a href="https://outcrawl.com/go-dependency-injection/">https://outcrawl.com/go-dependency-injection/</a></figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*6Vmt1CipAgar0KZw7egKkg.png" /><figcaption><a href="https://www.raywenderlich.com/262-dependency-injection-in-android-with-dagger-2-and-kotlin">https://www.raywenderlich.com/262-dependency-injection-in-android-with-dagger-2-and-kotlin</a></figcaption></figure><p>I recently finished putting together a library that helps manage Dagger dependency injection across multiple Android modules so they can be used together more easily in the final app.</p><p>I’d love to hear feedback!</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/8cef05a5fae9902dfaf3c58557c37a02/href">https://medium.com/media/8cef05a5fae9902dfaf3c58557c37a02/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d94a7ae19dd" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>