<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Pedro V F C Leite on Medium]]></title>
        <description><![CDATA[Stories by Pedro V F C Leite on Medium]]></description>
        <link>https://medium.com/@pedro.v.f.c.leite?source=rss-43186db7f4c0------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*Yij4FA7WwpYECpfNvBtvLA.png</url>
            <title>Stories by Pedro V F C Leite on Medium</title>
            <link>https://medium.com/@pedro.v.f.c.leite?source=rss-43186db7f4c0------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 13:11:51 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@pedro.v.f.c.leite/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[From Skeptic to Believer: My Journey into AI-Assisted Development]]></title>
            <link>https://medium.com/@pedro.v.f.c.leite/from-skeptic-to-believer-my-journey-into-ai-assisted-development-c99b82a3108d?source=rss-43186db7f4c0------2</link>
            <guid isPermaLink="false">https://medium.com/p/c99b82a3108d</guid>
            <category><![CDATA[claude]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[github-copilot]]></category>
            <category><![CDATA[vscode]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Pedro V F C Leite]]></dc:creator>
            <pubDate>Fri, 06 Feb 2026 12:00:55 GMT</pubDate>
            <atom:updated>2026-02-06T12:00:55.763Z</atom:updated>
            <content:encoded><![CDATA[<blockquote><em>A three-week experiment in building an enterprise admin portal with AI assistance</em></blockquote><h3>The Experiment</h3><p>Could a single developer, working with AI assistance, build a scalable enterprise admin portal in three weeks? My team wanted to find out. I was chosen for the experiment.</p><h4><strong>What we set out to build:</strong></h4><ul><li>An admin portal with tenant management, user CRUD, role-based navigation, and a dashboard</li><li>Frontend in Angular 20+ with a NestJS backend-for-frontend</li><li>Nx monorepo with shared component libraries</li><li>Authentication integration with an existing identity provider</li><li>Internationalization infrastructure</li><li>Real-time notifications via WebSockets</li></ul><h4><strong>What “production-ready” meant for this experiment:</strong></h4><ul><li>All 40 acceptance criteria <strong>addressed</strong> (35 complete; 5 partial due to external API dependencies, portal infrastructure ready and waiting for integration)</li><li>Zero TypeScript compilation errors, zero linting violations</li><li>Unit tests passing across shared libraries and the BFF (CI enforced)</li><li>CI/CD pipeline configured and running (green on main at the end of the experiment)</li><li>Docker containerization for both frontend and backend</li><li>Documented architecture (10 chapters of technical documentation)</li><li>A repeatable workflow: Nx boundaries enforced, strict TypeScript mode, standardized patterns across UI and API layers</li></ul><h4><strong>What was not in scope:</strong></h4><ul><li>Multi-region deployment</li><li>Full penetration testing</li><li>Disaster recovery setup</li><li>Complete accessibility audit</li><li>Performance load testing at scale</li></ul><h4><strong>Existing infrastructure we leveraged:</strong></h4><ul><li>An authentication service was already deployed from a previous project (I exported its configuration and integrated with it rather than building authentication from scratch)</li></ul><p>The “14–16 weeks traditional estimate” was a rough-sizing estimate provided by an experienced architect based on the scope and typical delivery constraints.</p><h3><strong>The Beginning: The Wrong Approach</strong></h3><p>The first three days did not go as planned.</p><p>I started with what seemed like a logical approach: give the AI the Figma screenshots and say <strong><em>“Build this.”</em></strong></p><p>The result was not good. Layouts with misaligned components and inconsistent margins. The grid system was completely ignored. The navigation was pure static HTML, clicking on menu items did nothing. No routing, no state management, just styled dead links. Mobile did not work. Tablet was worse.</p><p>Three days into the experiment, I had something that looked vaguely like an application but would fall apart the moment you tried to use it.</p><p>Time to change the approach.</p><h3><strong>The Realization: AI Is Not Magic</strong></h3><p>After deleting everything and starting over, I had a moment of clarity. I had been treating AI as a magic wand, wave it at a problem and the solution appears.</p><p>But AI does not do magic. What AI does is amplification. It amplifies whatever you give it.</p><p>Give it vague, context-free requests? You get vague, context-free code. Give it screenshots without architecture? You get pixel-approximations without structure.</p><p>AI optimizes for plausibility given constraints. 
Vague constraints yield plausible junk.</p><p>Here is something else that initially bothered me: the same question, phrased identically, could produce different answers. Not wrong answers. Just different. Coming from a mathematics background where a well-posed problem has one correct answer, this felt strange.</p><p>But software engineering rarely has one correct answer. It has trade-offs. Different valid approaches exist for most problems, and the “best” one depends on constraints and context. Once I accepted this, I stopped fighting the tool and started working with it.</p><p>Practically, this led to a key habit: <strong>freeze decisions.</strong> When the AI suggested multiple plausible approaches, I would choose one, document it (an ADR-style decision note), and treat it as a constraint going forward. That’s how I kept the agent from oscillating and kept the codebase consistent.</p><h3><strong>The Turning Point: Architecture First</strong></h3><p>After those first three days, I changed everything.</p><p><strong>Before (Failed Approach):</strong></p><ul><li>Figma screenshots as the only input</li><li>“Build this” as my prompt</li><li>Zero technical context</li><li>One giant prompt to do everything</li></ul><p><strong>After (What Actually Worked):</strong></p><ul><li>Architecture documentation written first</li><li>Phased development plan with clear milestones</li><li>Defined technology stack and patterns</li><li>Libraries and infrastructure before features</li><li>Iterative conversation, building on accumulated context</li></ul><p>Instead of asking the AI to guess what I wanted, I told it exactly what I wanted. I documented the architecture before writing a single line of code. I defined the development phases. I created the foundational libraries first, establishing patterns that would be reused everywhere.</p><p>Only then did I start building features.</p><h3><strong>Division of Labor: What I Did vs What AI Did</strong></h3><p>To be clear about who did what:</p><p><strong>My responsibilities:</strong></p><ul><li>Architecture decisions and boundary definitions</li><li>Mapping requirements to acceptance criteria</li><li>Code review of every generated file</li><li>Integration validation and debugging</li><li>Security decisions (token storage, redirect URIs, CORS)</li><li>Risk assessment and trade-off choices</li><li>Final approval on all patterns</li></ul><p><strong>What AI handled:</strong></p><ul><li>Scaffolding and boilerplate generation</li><li>Repetitive implementation (CRUD operations, similar components)</li><li>Test generation based on specifications</li><li>Documentation drafts</li><li>Debugging suggestions when I hit issues</li><li>Configuration file generation</li></ul><p>The AI was a highly productive assistant. I remained the architect and decision-maker.</p><h3>The Build: 18 Days of Real Progress</h3><h4>Building the Foundations</h4><p>The first day after the reset was pure setup: monorepo workspace, frontend framework configuration, backend service scaffolding, styling systems, linting, formatting, git hooks. Done in about eight hours.</p><p>The next couple of days were about creating shared libraries. The atomic components that would be the building blocks for everything else. Buttons, inputs, cards, modals, the primitives. Plus all the TypeScript interfaces and contracts.</p><p>By the end of that phase, I had 8 libraries generated, 7 atomic components, 25+ TypeScript interfaces, and 69 passing tests.</p><p>Next came authentication. 
I exported the configuration from our existing identity provider, provided the AI with the service URL and settings, and asked it to integrate. One day later, the entire authentication flow was working, token validation, silent refresh, the complete integration.</p><p>Then the application shell: header with branding, collapsible sidebar navigation, footer, tenant selector, dynamic breadcrumbs.</p><p>One week after the reset. Base infrastructure complete.</p><p><strong>A quick note on “scalable,” because the word is vague:</strong> In this context, “scalable” did not mean “multi-region with auto-scaling load tests.” It meant the codebase could scale in <em>scope and complexity</em> without collapsing:</p><ul><li>Nx boundaries enforced separation (apps vs libs, UI vs domain vs data access)</li><li>Shared patterns were established early (component APIs, state approach, error handling, DTOs/contracts)</li><li>Feature work became additive rather than disruptive</li><li>New modules could be added without rewriting core foundations</li></ul><h4>Features and Refinement</h4><p>The remaining two weeks were about building on those foundations. Dashboard with KPI cards. Activity feeds. Internationalization infrastructure. CRUD operations. Global search with keyboard shortcuts. Real-time notifications via WebSockets.</p><p>Each day had a clear objective. Each phase built on the previous one.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fdatawrapper.dwcdn.net%2FmKZLs%2F1%2F&amp;display_name=Datawrapper&amp;url=https%3A%2F%2Fdatawrapper.dwcdn.net%2FmKZLs%2F1%2F&amp;image=https%3A%2F%2Fdatawrapper.dwcdn.net%2FmKZLs%2Fplain-s.png%3Fv%3D1&amp;type=text%2Fhtml&amp;schema=dwcdn" width="600" height="340" frameborder="0" scrolling="no"><a href="https://medium.com/media/dc252c3aa4b349582d7f6ff740f37bd7/href">https://medium.com/media/dc252c3aa4b349582d7f6ff740f37bd7/href</a></iframe><p>The 5 partial items were features dependent on backend APIs that were still being built by another team, the portal infrastructure was ready, waiting for integration.</p><h3>What I Learned</h3><h4>1. Context Is Everything</h4><p>AI without context produces code that compiles but does not work. When I gave the AI architecture documentation, development plans, and clear specifications, it produced code that fit together. When I gave it screenshots and said “build this,” I got unusable output.</p><h4>2. Vague Prompts = Vague Results</h4><p><em>“Make a login”</em> produces something that vaguely resembles a login.</p><p><em>“Integrate with the identity provider at this URL using OAuth 2.0 with PKCE flow, handling token refresh silently, storing tokens in memory, and extracting roles from the token claims”</em> produces what you actually need.</p><h4>3. Sequential Phases Matter</h4><p>Trying to build everything at once results in code that does not fit together. Building in phases (foundations first, then libraries, then features) means each layer builds on the last. The AI maintains context. Patterns established early propagate forward.</p><h4>4. Trust, But Verify</h4><p>AI sometimes generates code that compiles perfectly but does not make sense. You remain the architect. You remain responsible for the code. Review everything.</p><h4>5. Adapt Rather Than Demand Perfection</h4><p>It is easier to adapt a well-made component than to specify every detail upfront. When the AI generates a solid component, tweaking it takes minutes. Trying to get perfection from the first prompt wastes time.</p><h4>6. 
You Must Teach It Your Sense of Readability</h4><p>AI often defaults to patterns that are widely considered “best practice,” such as heavy file separation and strict structural purity.</p><p>That is not always what I want.</p><p>In multiple cases, I had to ask the AI to rewrite code to be more <strong>humanly readable</strong>, not necessarily more abstract or more “correct,” but easier to understand at a glance. In practice, that often meant explicitly asking it to <strong>separate concerns</strong> by splitting logic, templates, styles, and tests into their own files, instead of collapsing everything into a single file for convenience.</p><p>These decisions were not about right versus wrong. They were about <strong>taste, ergonomics, and maintainability for real humans</strong>.</p><p>AI does not have taste. It borrows it from whatever context you provide.<br> If you care about how code feels to read and work with, you must teach that explicitly.</p><h4>7. Commit Small, Step-by-Step Improvements (Or You’ll Lose Time)</h4><p>One rule I had to enforce was committing progress <strong>increment by increment</strong>.</p><p>AI can diverge quickly. If you let it make large, multi-file changes in one go, it becomes surprisingly hard to keep track of what actually changed, why it changed, and whether it violated an earlier decision or pattern. The bigger the batch, the more review becomes “trust me bro,” and that’s where mistakes slip in.</p><p>So I worked in small slices:</p><ul><li>one component at a time,</li><li>one flow at a time,</li><li>one refactor at a time,</li><li>verify, commit, move on.</li></ul><p>When I ignored this and got greedy, asking for too much at once, I paid for it. A couple of times, I lost <strong>one to two hours</strong> unwinding changes, restoring clarity, and re-establishing the original direction.</p><p>The takeaway: <strong>AI makes iteration cheap, but only if you keep the loop tight.</strong> Small commits keep the system understandable, the diffs reviewable, and the project aligned with the architecture.</p><h3>The Quality Question</h3><p>One concern I hear: <em>“If it is so fast, the quality must be terrible.”</em></p><p>In this experiment, quality was higher than I expected, not because AI is magic, but because of how the workflow enforced discipline:</p><ul><li>Tests were written alongside the code, not deferred</li><li>Documentation was generated as we developed</li><li>Code consistency was higher because the same patterns repeated with fewer human-driven style variations</li><li>Architecture was defined upfront and followed throughout</li></ul><p><strong>The velocity came from consistency, not from cutting corners.</strong></p><p>That said, I would not claim “absolute quality” or “100% coverage.” What I can say: the codebase passed our team’s code review standards, the CI pipeline was green, and the acceptance criteria were addressed (with the partial items clearly identified as integration-dependent).</p><h3>What This Changes</h3><h4>For Developers</h4><p>This is not about replacement. It is about amplification. If you understand architecture, AI helps you implement faster. If you do not know what you want, AI will not save you. It amplifies your knowledge, including the gaps in it.</p><h4>For Teams</h4><p>The economics shift. Projects that required teams of 8 for months might require smaller teams for shorter periods. 
This does not mean fewer developers; it means developers can take on more ambitious scope.</p><h4>For Planning</h4><p>Traditional estimates may need recalibration. But be careful: AI-assisted development still requires someone who understands architecture, can review code, and can catch the gotchas. The time savings come from execution, not from skipping expertise.</p><h3>A Personal Reflection</h3><p>There’s a line from <em>Limitless</em> (2011) that stuck with me throughout this experiment:</p><blockquote>“The pill doesn’t make you smarter. It makes you more of what you already are.”</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/498/0*kHg97dyq5URJScnS.gif" /></figure><p>Copilot works the same way.</p><p>It doesn’t give you judgment. It doesn’t give you architectural thinking. It doesn’t give you taste.</p><p>It amplifies whatever you bring to the table.</p><p>If you bring clear architecture, discipline, and strong fundamentals, AI multiplies their impact.<br> If you bring vague thinking and weak foundations, it will happily multiply those too… just faster.</p><p>That realization fundamentally changed how I approach AI-assisted development.</p><h3>The Conclusion</h3><p>The experiment answered the question: yes, one developer with AI assistance could deliver a functional, well-structured admin portal in three weeks.</p><p>I started this journey skeptical of tools that promised too much. I ended it with a more nuanced view: AI-assisted development works, but it works because of what you bring to it. Architecture knowledge matters more, not less. The ability to specify what you want matters more, not less. Judgment about what is right matters more, not less.</p><p>The AI is a force multiplier. Bring something real to the table (context, clarity, specifications, architectural thinking) and the results are significant. Skip the preparation, and you will spend your time debugging output that looks plausible but does not work.</p><h3>The Next Phase: Teaching the Tool</h3><p>The experiment proved that AI-assisted development works when constraints are clear and stable. The obvious next step is to stop relying on implicit knowledge and make those constraints explicit.</p><p>The goal is not to let AI freely generate code, but to <strong>encode architectural intent directly into the project</strong> so the tool operates inside a narrow, safe problem space.</p><p>In practice, this means turning judgment into structure:</p><ul><li>Architectural rules are written down and non-negotiable</li><li>Approved patterns are documented and reused by default</li><li>Folder structure, naming, and boundaries are enforced</li><li>Feature work follows a predefined, repeatable path</li></ul><p>Instead of starting from a blank prompt, feature development begins inside a constrained template that already respects the architecture.</p><p>In that model, a Business Analyst does not write code. They describe behavior and acceptance criteria within a bounded framework. Copilot generates implementation that already conforms to the system’s rules. A developer still reviews and approves, but no longer needs to hand-craft every feature from scratch.</p><p>This does not eliminate risk. Business logic is subtle, and AI can still produce plausible nonsense if boundaries are weak. 
But the experiment made one thing clear: <strong>AI performs best when it is not allowed to guess</strong>.</p><p>The question going forward is no longer <em>“Can AI write code?”</em> It is <em>“How narrow and safe can we make the space in which it operates?”</em></p><p>If that space is well-defined, even partial success changes the economics: less time spent on scaffolding, more time spent on architecture, edge cases, and real complexity.</p><p>That is the direction worth exploring next.</p><h3>Key Takeaways</h3><p><strong>Do:</strong></p><ul><li>Document architecture before generating code</li><li>Use sequential phases (foundations → libraries → features)</li><li>Review every generated file</li><li>Be specific in what you ask for</li><li>Validate against real endpoints and documentation</li><li>Freeze decisions when multiple valid solutions exist (document and enforce them)</li></ul><p><strong>Do Not:</strong></p><ul><li>Try to build everything at once</li><li>Go from designs directly to code without architecture</li><li>Trust AI output without verification</li><li>Use vague prompts and hope AI figures out what you mean</li></ul><p><em>The numbers in this article are from a real experiment. The gotchas are real failures I encountered. The lessons are what I took away from the experience.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c99b82a3108d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an AI-Powered “Keeper of Specs”: How We Solved Documentation Overload with GitHub Copilot]]></title>
            <link>https://medium.com/@pedro.v.f.c.leite/building-an-ai-powered-keeper-of-specs-how-we-solved-documentation-overload-with-github-copilot-dde605b3bb4d?source=rss-43186db7f4c0------2</link>
            <guid isPermaLink="false">https://medium.com/p/dde605b3bb4d</guid>
            <category><![CDATA[documentation]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[react]]></category>
            <category><![CDATA[docusaurus]]></category>
            <dc:creator><![CDATA[Pedro V F C Leite]]></dc:creator>
            <pubDate>Sun, 11 Jan 2026 09:38:48 GMT</pubDate>
            <atom:updated>2026-01-11T09:38:48.277Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*faxBZOl01-TK5deNTHy6Sg.png" /></figure><blockquote><em>A story about turning 500+ pages of technical documentation into an intelligent, conversational assistant — built entirely through AI-assisted development.</em></blockquote><h3><strong>The Problem: Drowning in Documentation</strong></h3><p>If you’ve ever worked on a large enterprise software platform, you know the feeling. Hundreds of architectural decision records. Dozens of API specifications. Installation guides, change logs, blog posts, and design documents scattered across folders. Each one carefully written, meticulously reviewed, and almost impossible to find when you actually need it.</p><p>That was our reality.</p><p>Our documentation site had grown to include:</p><ul><li><strong>ADRs (Architectural Decision Records)</strong> explaining every major technical decision</li><li><strong>Specifications</strong> for every feature and integration</li><li><strong>Installation Guides</strong> for deployment and configuration</li><li><strong>Change Logs</strong> documenting every release</li><li><strong>Blog Posts</strong> sharing team knowledge and updates</li></ul><p>The irony wasn’t lost on us. We had invested heavily in documentation, yet developers still pinged each other on Teams asking, “Where’s the spec for X?” or “What’s the reasoning behind Y?” The knowledge was there, but finding it felt like searching for a needle in a haystack.</p><h3><strong>The Vision: A “Knower of All Things”</strong></h3><p>I had a dream. A simple one, really:</p><blockquote>What if we could just… ask the documentation?</blockquote><p>Not search it. Not browse it. <strong>Ask it.</strong> In natural language. Like talking to a colleague who had read every single page and remembered every detail.</p><p>I wanted an AI assistant that could:</p><ul><li>Answer questions about our platform in plain English</li><li>Always cite its sources so developers could dig deeper</li><li>Never make things up; strictly grounded in our actual documentation</li><li>Feel like a helpful colleague, not a generic chatbot</li></ul><p>I called this vision <strong>“The Keeper of Specs”</strong> — an AI that would be the ultimate authority on our product documentation.</p><h3><strong>How It Started: A Single Prompt</strong></h3><p>Here’s where the story gets interesting. I didn’t start by writing code. I started by talking to GitHub Copilot.</p><p>The very first prompt was deceptively simple:</p><blockquote>“I want to add an AI chat that indexes this documentation and uses Azure OpenAI to answer questions about it”</blockquote><p>That single sentence kicked off an extraordinary development journey. Over the course of several iterative conversations, Copilot helped me build:</p><ol><li><strong>A custom Docusaurus plugin</strong> that scans and indexes all markdown files at build time</li><li><strong>A React chat component</strong> with a floating UI, markdown rendering, and source links</li><li><strong>Azure OpenAI integration</strong> for intelligent response generation</li><li><strong>A sophisticated search engine</strong> using TF-IDF and BM25 ranking algorithms</li></ol><p>The entire system was built through conversation: refining, iterating, and improving with each prompt.</p><h3>The Architecture: RAG to the Rescue</h3><p>The solution follows a <strong>Retrieval-Augmented Generation (RAG)</strong> pattern. 
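</p><p>A minimal sketch of that loop in TypeScript (illustrative only: answerQuestion, searchIndex, openAiClient, and deploymentName are placeholder names, not the actual implementation; the three steps behind it are spelled out just below):</p><pre>// Illustrative sketch of the RAG flow, not the production code<br>async function answerQuestion(question: string) {<br>  // 1. Retrieve: rank indexed chunks against the question (BM25 + field weights)<br>  const chunks = searchIndex.search(question, { topK: 8 });<br><br>  // 2. Augment: inject the retrieved chunks into the prompt as grounding context<br>  const context = chunks<br>    .map((c) => `## ${c.title}\n${c.content}\nSource: ${c.url}`)<br>    .join('\n\n');<br><br>  // 3. Generate: ask the model, constrained to the provided context<br>  const completion = await openAiClient.chat.completions.create({<br>    model: deploymentName,<br>    messages: [<br>      { role: 'system', content: systemPrompt + '\n\nDOCUMENTATION CONTEXT:\n' + context },<br>      { role: 'user', content: question },<br>    ],<br>  });<br>  return completion.choices[0].message.content ?? '';<br>}</pre><p>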
If you’re not familiar with RAG, here’s the core idea:</p><ol><li><strong>Retrieve</strong> the most relevant documentation chunks based on the user’s question</li><li><strong>Augment</strong> the AI’s prompt with that context</li><li><strong>Generate</strong> a response that’s grounded in the retrieved information</li></ol><p>This approach solves the hallucination problem. Instead of relying on the AI’s training data (which might be outdated or wrong), we force it to answer only based on our actual documentation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VNTpZrg_OQdSjR-9vIlOBA.png" /></figure><h3>The Fine-Tuning Journey</h3><p>The first version worked, but it wasn’t great. Here’s what we learned and refined:</p><h4>1. Chunking Strategy: Hybrid Approach</h4><p>Our initial approach indexed whole documents. The problem? Documents are too long, and context windows are limited. We evolved to a <strong>hybrid chunking strategy</strong>:</p><ul><li><strong>Document-level chunks</strong> (1,500 chars) for overall context</li><li><strong>Section-level chunks</strong> (800–1,200 chars) for specific topics</li><li><strong>Overlapping windows</strong> (150–250 chars) to avoid cutting important context mid-sentence</li></ul><pre>const CONFIG = {<br>  docSummaryMaxChars: 1500, // Document summary<br>  sectionTargetSize: 1000, // Section chunks<br>  sectionMinSize: 300, // Minimum viable chunk<br>  sectionOverlap: 200, // Overlap between chunks<br>};</pre><h4>2. Search Relevance: Beyond Simple TF-IDF</h4><p>Basic TF-IDF wasn’t enough. We enhanced the search with:</p><ul><li><strong>BM25 scoring</strong> with tuned parameters (k1=1.5, b=0.5)</li><li><strong>Field weighting</strong> (titles and section headers weighted 3x higher than content)</li><li><strong>Synonym expansion</strong> (e.g., “performance” matches “speed”, “optimization”, “latency”)</li><li><strong>Fuzzy matching</strong> as a fallback for typos and variations</li></ul><pre>const fieldWeights = {<br>  title: 3.0, // Most important<br>  sectionHeader: 3.0, // Section headers are navigation targets<br>  keywords: 2.5, // Explicitly tagged<br>  description: 1.5, // Summary content<br>  content: 1.0, // Bulk text<br>  path: 0.5, // URL paths<br>};<br></pre><h4>3. Grounding: The System Prompt That Changed Everything</h4><p>The biggest improvement came from engineering the system prompt. Early versions would confidently make up information. We fixed this with strict grounding rules:</p><pre>const systemPrompt = `You are the Keeper of Specs, a documentation assistant.<br><br>## CRITICAL INSTRUCTIONS:<br><br>### GROUNDING RULES (MUST FOLLOW):<br>1. ONLY use information from the DOCUMENTATION CONTEXT below<br>2. NEVER guess or infer information not explicitly stated<br>3. NEVER present assumptions as facts<br>4. ALWAYS cite sources with document title and URL<br><br>### IF CONTEXT IS INSUFFICIENT:<br>You MUST respond with:<br>1. What specific information is missing<br>2. Which document would likely contain it<br>3. A reformulated question you could answer<br><br>### CITATION REQUIREMENTS:<br>- Every factual claim must be traceable to a source<br>- Use the exact document title from chunk metadata<br>- Copy URLs exactly as shown`;</pre><p>This prompt engineering made the AI honest. When it doesn’t know something, it says so-and suggests where to look.</p><h4>4. 
Response Quality: Structured Formats</h4><p>We mandated a consistent response structure:</p><pre>**Answer:**<br>[Concise answer with inline citations]<br><br>**Recommended Next Steps:** (if applicable)<br><br>- [Actionable items]<br><br>**Sources:**<br><br>- [Document Title](/path) - [what info came from here]</pre><p>This made answers scannable and verifiable.</p><h3>The Technical Stack</h3><p>For those who want to replicate this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/492/1*xyO_rvclUD3pERO_Wj_qkQ.png" /></figure><h4>The Docusaurus Plugin</h4><p>The heart of the system is a custom plugin that runs at build time:</p><pre>module.exports = function docIndexerPlugin(context, options) {<br>  return {<br>    name: &quot;docusaurus-plugin-ai-chat-indexer&quot;,<br><br>    async contentLoaded({ actions }) {<br>      const { setGlobalData } = actions;<br>      // Scan all markdown in docs/, adr/, specs/, etc.<br>      // Extract frontmatter, split into chunks<br>      // Generate searchable index<br>    },<br><br>    async postBuild({ outDir }) {<br>      // Write ai-chat-index.json for production<br>    },<br>  };<br>};</pre><h4>The Document Indexer</h4><p>A TypeScript class handles runtime search, operating on chunks shaped like this:</p><pre>interface DocumentChunk {<br>  id: string;<br>  title: string;<br>  sectionHeader?: string;<br>  content: string;<br>  path: string;<br>  url?: string;<br>  category: string;<br>  keywords?: string[];<br>  type?: &quot;document&quot; | &quot;section&quot;;<br>}</pre><h3>What We Learned</h3><h4>1. AI-Assisted Development is Real</h4><p>This entire feature was built through iterative prompting with GitHub Copilot. Not just code completion: actual architectural discussions, debugging sessions, and refinement. The AI suggested patterns I wouldn’t have considered.</p><h4>2. Grounding is Everything</h4><p>Without strict grounding, LLMs will hallucinate confidently. The system prompt that says “I don’t know” is more valuable than one that makes things up.</p><h4>3. Search Quality Matters More Than AI Quality</h4><p>Garbage in, garbage out. If your retrieval returns irrelevant chunks, the best AI in the world can’t save you. We spent more time on BM25 tuning than on Azure OpenAI configuration.</p><h4>4. Chunking is an Art</h4><p>Too big and you waste context window. Too small and you lose meaning. Overlapping chunks help, but there’s no perfect size; it depends on your content structure.</p><h4>5. Source Citations Build Trust</h4><p>Developers are skeptical (rightfully so). When every answer includes clickable source links, adoption skyrockets. People can verify, and they learn to trust.</p><h3>The Result: The Keeper of Specs in Action</h3><p>Today, the Keeper of Specs lives as a floating chat bubble on every page of our documentation site. Developers ask questions like:</p><ul><li><em>“How does the event error handling work?”</em></li><li><em>“What’s the caching strategy for case loading?”</em></li><li><em>“Who approved the multitenancy ADR?”</em></li></ul><p>And they get answers, with sources. Grounded. Verifiable. 
Helpful.</p><p>The dream of having an AI that truly “knows” our entire product spec is now reality.</p><h3>Try It Yourself</h3><p>If you want to build something similar, here’s my advice:</p><ol><li><strong>Start with a clear use case.</strong> Ours was “answer questions about documentation.”</li><li><strong>Use RAG, not fine-tuning.</strong> It’s simpler, cheaper, and your context stays current.</li><li><strong>Invest in chunking and search.</strong> The retrieval step is more important than you think.</li><li><strong>Engineer your system prompt ruthlessly.</strong> Grounding prevents hallucination.</li><li><strong>Use AI to build AI.</strong> GitHub Copilot accelerated our development dramatically.</li></ol><p>The era of static documentation is ending. The future is conversational, intelligent, and grounded.</p><p>Welcome to the age of the Keeper of Specs.</p><p><em>This article was written to share our journey building an AI-powered documentation assistant. The entire feature was developed using GitHub Copilot, demonstrating how AI can accelerate development while maintaining code quality and architectural consistency.</em></p><p><strong>Tags:</strong> #AI #AzureOpenAI #RAG #Docusaurus #DeveloperExperience #Documentation #GitHubCopilot #MachineLearning</p><p><em>Pedro V. Leite is a software engineer passionate about developer experience and AI-assisted development. Follow for more stories about building intelligent tools.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=dde605b3bb4d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A Symmetric Nx Monorepo Architecture for Scalable Portals and BFFs]]></title>
            <link>https://medium.com/@pedro.v.f.c.leite/a-symmetric-nx-monorepo-architecture-for-scalable-portals-and-bffs-8d9b83fbbd65?source=rss-43186db7f4c0------2</link>
            <guid isPermaLink="false">https://medium.com/p/8d9b83fbbd65</guid>
            <category><![CDATA[monorepo]]></category>
            <category><![CDATA[scalability]]></category>
            <category><![CDATA[nx]]></category>
            <category><![CDATA[angular]]></category>
            <dc:creator><![CDATA[Pedro V F C Leite]]></dc:creator>
            <pubDate>Fri, 19 Dec 2025 08:30:35 GMT</pubDate>
            <atom:updated>2025-12-19T08:30:35.354Z</atom:updated>
            <content:encoded><![CDATA[<p>This article proposes a monorepo architecture for teams that want to start a project cleanly and scale it to multiple portals, multiple backend-for-frontend (BFF) services, and a growing library ecosystem — without losing governance or making deployments painful.</p><p>The core idea is simple: <strong>use Nx to manage one repository containing many deployable apps, and enforce a symmetric library model on both frontend and backend</strong>. If you learn the UI layering once, you can apply the same mental model to NestJS features and platform code.</p><h4>What this architecture is for</h4><p>You should consider this approach if you expect:</p><ul><li>Multiple Angular applications over time (e.g., portal-a, portal-b, …)</li><li>Multiple NestJS BFFs aligned to those portals (e.g., bff-portal-a, bff-portal-b, …)</li><li>A shared design system and UI reuse across portals</li><li>Shared backend “platform” capabilities across BFFs (auth, logging, error handling, caching, downstream orchestration patterns)</li><li>CI/CD that remains fast as the repository grows (affected-only builds/tests)</li></ul><h4>The core principle: symmetry and strict dependency direction</h4><p>Most monorepos fail because “shared” becomes a dumping ground. This design avoids that by enforcing:</p><ol><li><strong>Symmetric layers</strong> on both FE and BE:</li></ol><ul><li><strong>atomic</strong>: stable primitives, minimal dependencies, rarely changed</li><li><strong>shared</strong>: reusable composites, still generic</li><li><strong>projects</strong>: portal-specific composition, isolated per portal folder</li></ul><p>2. <strong>One-way dependency flow</strong> (no cyclic or sideways imports)</p><p>That gives you scale without chaos.</p><h4>Repository structure</h4><p>A representative structure looks like this:</p><pre>apps/<br>  portal-a/                 # Angular SPA<br>  portal-b/                 # Angular SPA<br>  bff-portal-a/             # NestJS BFF<br>  bff-portal-b/             # NestJS BFF<br><br>libs/<br>  # FRONTEND (Angular)<br>  ui-atomic/                # Immutable UI primitives<br>  ui/                       # Shared composite UI<br>  ui-projects/<br>    portal-a/               # Portal-specific UI composition<br>    portal-b/<br><br>  # BACKEND (NestJS) — same layering model<br>  bff-atomic/               # Immutable backend primitives<br>  bff/                      # Shared backend composites<br>  bff-projects/<br>    portal-a/               # Portal-specific backend composition<br>    portal-b/<br><br>  # CROSS-CUTTING<br>  shared-types/             # DTO/contracts (types-only)<br>  shared-utils/             # Pure utilities (env-neutral)<br>  design-tokens/            # Theme tokens (frontend foundation)<br>  api-clients/              # Typed downstream clients (backend foundation)</pre><p>This is intentionally explicit. 
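</p><p>In practice, each app and library in this tree carries Nx tags that encode its layer and scope, and the one-way dependency rules described later in this article are enforced through the @nx/enforce-module-boundaries lint rule. A minimal, illustrative sketch (the tag names and constraints here are assumptions, not taken from a real workspace):</p><pre>// .eslintrc.js (workspace root): the one-way flow becomes a lint error<br>// Each project declares tags such as ['scope:frontend', 'layer:shared'] in its project.json<br>module.exports = {<br>  overrides: [<br>    {<br>      files: ['*.ts'],<br>      rules: {<br>        '@nx/enforce-module-boundaries': ['error', {<br>          depConstraints: [<br>            { sourceTag: 'layer:atomic', onlyDependOnLibsWithTags: ['layer:foundation'] },<br>            { sourceTag: 'layer:shared', onlyDependOnLibsWithTags: ['layer:atomic', 'layer:foundation'] },<br>            { sourceTag: 'scope:frontend', onlyDependOnLibsWithTags: ['scope:frontend', 'scope:shared'] },<br>            { sourceTag: 'scope:backend', onlyDependOnLibsWithTags: ['scope:backend', 'scope:shared'] },<br>          ],<br>        }],<br>      },<br>    },<br>  ],<br>};</pre><p>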
It scales because each new portal has a predictable “home” on both sides.</p><h4>Frontend layers (Angular)</h4><p><strong>ui-atomic: immutable primitives</strong></p><p>Examples: button, input, dropdown, icon wrappers, basic form controls.</p><ul><li>Stable API surface</li><li>Minimal dependencies (ideally only design-tokens and minimal shared-utils)</li><li>Changes are rare and require strong review because everything depends on it</li></ul><p><strong>ui: shared composites</strong></p><p>Examples: layouts, navigation patterns, reusable widgets.</p><ul><li>Built by composing ui-atomic</li><li>Still portal-agnostic</li><li>No business logic, no portal assumptions</li></ul><p><strong>ui-projects/portal-x: portal-specific composition</strong></p><p>Examples: portal-specific UI wrappers, variations, styling decisions, portal-specific layouts.</p><ul><li>Each portal gets its own folder</li><li>No cross-portal imports (portal-a must not import portal-b)</li><li>This prevents polluting shared UI with portal-specific behavior</li></ul><h4>Backend layers (NestJS) — mirrored to UI</h4><p><strong>bff-atomic: immutable backend primitives</strong></p><p>Think “backend primitives” instead of “UI primitives”:</p><ul><li>base error types and error mapping building blocks</li><li>request context primitives (correlation ID, tenant, user)</li><li>standardized HTTP wrappers (timeouts/retries)</li><li>foundational logging structures</li></ul><p>Minimal dependencies, stable surface area.</p><p><strong>bff: shared backend composites</strong></p><p>Reusable platform modules:</p><ul><li>shared interceptors, exception filters</li><li>standardized auth integration patterns</li><li>caching wrappers</li><li>generic orchestration helpers and response shaping patterns</li></ul><p>Built on top of bff-atomic, still portal-agnostic.</p><p><strong>bff-projects/portal-x: portal-specific backend composition</strong></p><p>This is where portal-specific orchestration lives as reusable modules for that portal:</p><ul><li>portal-specific controllers/modules</li><li>mapping helpers and facades</li><li>composition of downstream calls for portal-x</li></ul><p>Again: no cross-portal imports.</p><h4>Cross-cutting contracts and utilities</h4><p><strong>shared-types (types-only)</strong></p><p>This is your shared API contract layer:</p><ul><li>DTOs and enums used by portals and BFFs</li></ul><p>Hard rule: types only. No Angular code, no Node APIs, no runtime logic.</p><p><strong>shared-utils</strong></p><p>Pure utilities that work in both browser and Node (no environment coupling).</p><p><strong>api-clients</strong></p><p>Typed downstream clients for BFFs (REST/GraphQL wrappers). Keeps orchestration consistent and testable.</p><p><strong>design-tokens</strong></p><p>Theme and design primitives consumed by ui-atomic. 
Prevents hard-coded styling constants scattered across portals.</p><h4>The dependency network: one-way imports</h4><p>This architecture stays scalable because imports are predictable:</p><p><strong>Frontend</strong></p><p>portal-x → ui-projects/portal-x → ui → ui-atomic → design-tokens</p><p><strong>Backend</strong></p><p>bff-portal-x → bff-projects/portal-x → bff → bff-atomic</p><p><strong>Shared layers</strong></p><ul><li>shared-types and shared-utils can be used broadly, under strict constraints</li><li>backend libraries never import frontend libraries (ever)</li></ul><p>In practice, these rules must be enforced by Nx tagging and lint constraints; otherwise teams will bypass them.</p><h4>Why this scales operationally</h4><p><strong>Adding a new portal becomes mechanical</strong></p><p>To add portal-c, you do not redesign architecture. You add:</p><ul><li>apps/portal-c</li><li>apps/bff-portal-c</li><li>libs/ui-projects/portal-c</li><li>libs/bff-projects/portal-c</li></ul><p>Everything else is shared and already governed.</p><p><strong>Teams get clean ownership boundaries</strong></p><ul><li>design-system owners maintain ui-atomic and design-tokens</li><li>shared UI owners maintain ui</li><li>portal teams own ui-projects/portal-x</li><li>platform/backend owners maintain bff-atomic and bff</li><li>portal backend teams own bff-projects/portal-x</li></ul><p>This prevents the classic “everyone touches everything” failure mode.</p><p><strong>CI remains fast as the repo grows</strong></p><p>Nx can run only what is affected:</p><ul><li>change portal-a only → build/test portal-a only</li><li>change shared-types → build/test impacted portals and BFFs</li><li>change ui-atomic → build/test everything that depends on it (intended)</li></ul><h4>Deployment stays simple: one repo, many deployable artifacts</h4><p>Monorepo does not imply monolithic release.</p><ul><li>Each portal-x builds to static assets and deploys independently (CDN/object storage/Nginx).</li><li>Each bff-portal-x builds to its own container image and deploys independently (Kubernetes/ECS/etc.).</li><li>Libraries do not deploy; they are build-time reuse.</li></ul><p>This keeps the operational model clean even as the repository grows.</p><h4>Straight talk: trade-offs</h4><p>This architecture is not “lightweight.” It buys scalability by enforcing structure.</p><ul><li>You must enforce boundaries, or the repo will degrade.</li><li>ui-atomic must remain stable, or you create churn everywhere.</li><li>shared-types must be treated as an API contract surface, or you create tight coupling and painful breaking changes.</li><li>You get long-term velocity at the cost of upfront discipline.</li></ul><h4>Conclusion</h4><p>If you want a codebase that can grow from two portals to twenty without collapsing under its own weight, this symmetric Nx monorepo model is the right foundation. It replaces ad-hoc sharing with a controlled dependency network, gives each portal a clear place to evolve without contaminating shared code, and keeps both frontend and backend understandable by using the same layering logic end-to-end.</p><p>The payoff is not theoretical: you get faster delivery as the repo grows, cleaner ownership boundaries, safer refactors, and deployability that remains straightforward because each portal and BFF is still an independent artifact. The cost is discipline — strict import rules, stable atomic layers, and contracts treated as contracts. 
If your team is willing to enforce those rules, this architecture won’t just scale; it will stay maintainable while scaling.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8d9b83fbbd65" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>