<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Kalhara Perera on Medium]]></title>
        <description><![CDATA[Stories by Kalhara Perera on Medium]]></description>
        <link>https://medium.com/@adorekasun?source=rss-fec47da242ec------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*tld_EsCcfUiaFDKiIXR6Iw.jpeg</url>
            <title>Stories by Kalhara Perera on Medium</title>
            <link>https://medium.com/@adorekasun?source=rss-fec47da242ec------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 19:15:13 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@adorekasun/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Your Model is Lying to You. Loss Functions Catch It.]]></title>
            <link>https://medium.com/@adorekasun/your-model-is-lying-to-you-loss-functions-catch-it-4e30d65e6049?source=rss-fec47da242ec------2</link>
            <guid isPermaLink="false">https://medium.com/p/4e30d65e6049</guid>
            <category><![CDATA[loss-function]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[frontend-development]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[Kalhara Perera]]></dc:creator>
            <pubDate>Thu, 14 May 2026 15:38:32 GMT</pubDate>
            <atom:updated>2026-05-14T15:38:32.862Z</atom:updated>
<content:encoded><![CDATA[<p><em>A frontend developer’s guide to one of ML’s most important concepts</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Y66iTjvpelPE2q6XqzSjoQ.png" /></figure><p>If you’ve been building UIs for a while, you already understand feedback loops.</p><p>User does something. App responds. If the response is wrong, you show an error. If it’s right, you move forward.</p><p>That loop is everywhere in frontend work. Form validation. API error handling. Loading states. You build systems that know when something went wrong.</p><p>Machine learning has the exact same idea baked into its core. It’s called a loss function, and once you understand what it actually does, a huge chunk of ML suddenly starts making sense.</p><p><strong>What is a Loss Function, Really? 🎯</strong></p><p>Let’s skip the math notation for a second.</p><blockquote><strong>A loss function is just a way to measure how wrong your model was.</strong></blockquote><p>That’s it.</p><p>You feed your model some input. It makes a prediction. The loss function compares that prediction to the correct answer and spits out a single number representing the gap between what the model said and what was actually true.</p><p>The bigger the gap, the bigger the number. The smaller the gap, the smaller the number. Perfect prediction? The number is <strong>zero</strong>.</p><p>Training a model is basically a game where you’re trying to make this number as small as possible over millions of attempts.</p><p><strong>The Form Validation Analogy 💻</strong></p><p>Here’s the mental model I use.</p><p>Think about email validation on a sign-up form.</p><pre>const validateEmail = (email) =&gt; {<br>  const regex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;<br>  if (!regex.test(email)) {<br>    return { error: true, message: &quot;Invalid email format&quot; };<br>  }<br>  return { error: false };<br>};</pre><p>When the user types something wrong, you catch it. You tell them exactly what went wrong. They fix it. They try again.</p><p>A loss function does the same thing for a model:</p><ul><li>📥 Input goes in</li><li>🤖 Model makes a prediction</li><li>🔍 Loss function checks it against the real answer</li><li>🔴 If wrong, it returns a “how wrong” score</li><li>🔁 Model adjusts and tries again</li></ul><p>The loop is identical. The only difference is that instead of a user fixing the mistake, the model fixes itself automatically using an algorithm. But that’s a story for the next post.</p><p><strong>Why One Number is So Powerful 🧠</strong></p><p>You might be thinking: why reduce everything to one number?</p><p>Because one number is actionable.</p><p>In frontend work, when Lighthouse gives you a performance score of 43, you don’t need to read an essay. You know you have work to do. The number tells you everything you need to know to start moving.</p><p>Loss works the same way. When your model’s loss is 2.4 and you need it closer to 0, you have a clear direction. Go down. Make it smaller. Keep going.</p><p>Without a loss function, training is completely blind. The model would have no feedback, no direction, no way to know if it’s getting better or worse. It’s like deploying a UI with no console, no error boundaries, and no monitoring. You’d have no idea what’s breaking or why.</p>
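<p>To make that feedback signal concrete, here is the idea in miniature. This is a hypothetical helper for illustration, not a real library API:</p><pre>// The ML analogue of the email validator earlier: instead of<br>// { error: true/false }, it returns a graded &quot;how wrong&quot; score.<br>const loss = (predicted, actual) =&gt; Math.abs(predicted - actual);<br><br>loss(0.9, 1.0); // 0.1, nearly right: small loss<br>loss(0.1, 1.0); // 0.9, badly wrong: large loss<br>loss(1.0, 1.0); // 0, perfect prediction: zero loss</pre>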
<p><strong>Not All Loss Functions Are Equal 🔢</strong></p><p>Here’s where it gets more nuanced.</p><p>Different problems need different ways of measuring mistakes.</p><p><strong>~Mean Squared Error (MSE)</strong></p><p>This one punishes big mistakes very harshly.</p><p>If your model is off by 10, MSE doesn’t count that as “10 bad.” It counts it as “100 bad” (10 squared). The further off you are, the worse your score gets, and not linearly: double the error and the penalty quadruples.</p><p>This is great for problems like predicting house prices or temperatures, where being massively wrong is catastrophic and should be treated that way.</p><p>Think of it like a performance budget in frontend. Going 2x over budget is not twice as bad, it’s disastrous. The penalty needs to reflect that.</p><p><strong>~Mean Absolute Error (MAE)</strong></p><p>This one is more forgiving. A mistake of 10 is just counted as 10. Simple and honest.</p><p>It treats all errors proportionally, without amplifying the big ones. Useful when outliers in your data are normal and you don’t want them to dominate the training signal.</p><p><strong>~Cross-Entropy Loss</strong></p><p>This is the go-to for classification problems, like “is this email spam or not” or “is this image a cat, dog, or bird.”</p><p>It works with probabilities. If the model says there’s a 90% chance something is a cat and it’s actually a cat, low loss. If it says there’s a 10% chance and it’s actually a cat, high loss. It penalizes overconfident wrong answers very heavily.</p><p><strong>Picking the Wrong One Breaks Everything 🚨</strong></p><p>This is the part most beginner tutorials gloss over.</p><blockquote><strong>Loss function choice is not just a detail. It shapes how your model learns.</strong></blockquote><p>Use MSE on a classification problem and your model starts treating class labels like continuous numbers, which makes no sense.</p><p>Use a loss function that’s too forgiving and your model will converge on “good enough” and stop improving before it reaches useful accuracy.</p><p>It’s like choosing the wrong rendering strategy in Next.js. Static generation, server-side rendering, client-side rendering. They all “work” in some sense. But picking the wrong one for your use case means you’re either killing performance, serving stale data, or making the user wait unnecessarily.</p><p>The right loss function for the right problem is one of those decisions that separates an ML project that works from one that technically runs but produces garbage.</p><p><strong>A Concrete Example 🏠</strong></p><p>Let’s say you’re building a model that predicts house prices.</p><ul><li>Training example: House in a specific neighborhood</li><li>Actual price: <em>$300,000</em></li><li>Model predicts: <em>$310,000</em></li></ul><p>Loss is small. The model was close. Good.</p><p>Now the same model on a different house:</p><ul><li>Actual price: <em>$300,000</em></li><li>Model predicts: <em>$600,000</em></li></ul><p>That’s a $300,000 mistake. The loss function needs to return a number that clearly communicates “this is very, very bad.” If it doesn’t, the model has no reason to urgently fix this kind of mistake, and it’ll keep making them.</p><p>This is exactly how MSE helps here. That $300,000 error doesn’t just add 300,000 to the loss. It adds 90,000,000,000. The model feels that. And it adjusts accordingly.</p>
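<p>Here is a quick sketch of how both metrics would score those two predictions (illustrative helpers, not a library API):</p><pre>const mse = (preds, actuals) =&gt;<br>  preds.reduce((sum, p, i) =&gt; sum + (p - actuals[i]) ** 2, 0) / preds.length;<br><br>const mae = (preds, actuals) =&gt;<br>  preds.reduce((sum, p, i) =&gt; sum + Math.abs(p - actuals[i]), 0) / preds.length;<br><br>mse([310000, 600000], [300000, 300000]); // 45,050,000,000: the $300k miss dominates<br>mae([310000, 600000], [300000, 300000]); // 155,000: the same miss counts proportionally</pre>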
<p><strong>What Happens After the Loss is Calculated? 🔁</strong></p><p>The loss is not the end of the story. It’s the start.</p><p>Once you have that number, the model uses it to figure out which direction to change its internal settings, which we call weights, to make the loss smaller next time.</p><p>That process combines two ideas: gradient descent and backpropagation.</p><p>Think of it like this: the loss tells you “you’re wrong.” Gradient descent tells you “here’s which way to walk to become less wrong.” And backpropagation figures out which specific weights caused the problem in the first place.</p><p>Those are the next two posts in this series.</p><p><strong>The Core Takeaway 🎯</strong></p><p>If you remember nothing else:</p><blockquote>📌 Loss = how wrong the model was right now</blockquote><blockquote>📌 Low loss = model is learning and improving</blockquote><blockquote>📌 High loss = model is still guessing badly</blockquote><blockquote>📌 The entire training process is one big mission to shrink this number</blockquote><blockquote>📌 The loss function you choose shapes how the model learns what “wrong” means</blockquote><p>Everything in machine learning, the architecture, the optimizer, the training loop, all of it exists to serve one purpose: reduce the loss.</p><p>Once you internalize that, everything else becomes a lot easier to reason about.</p><p><strong>Up Next in the Series 👇</strong></p><p>Post 2: Gradient Descent. Once the model knows how wrong it was, how does it actually fix itself? I’ll explain it using the only thing that makes sense: debugging a production bug at 2am with no reproduction steps.</p><p>Follow along if you’re a frontend developer trying to navigate the ML world without a PhD.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4e30d65e6049" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[7. Performance, Testing, and the Future: Migrating Dynamic Systems to 2026]]></title>
            <link>https://medium.com/@adorekasun/7-performance-testing-and-the-future-migrating-dynamic-systems-to-2026-3166aef6e9f6?source=rss-fec47da242ec------2</link>
            <guid isPermaLink="false">https://medium.com/p/3166aef6e9f6</guid>
            <category><![CDATA[nextjs]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[react]]></category>
            <dc:creator><![CDATA[Kalhara Perera]]></dc:creator>
            <pubDate>Tue, 16 Dec 2025 03:32:14 GMT</pubDate>
            <atom:updated>2025-12-16T03:32:14.953Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1023/1*JjAXV5bO9BnHmEstl3cC_A.png" /></figure><p>This article is the Last in the Neurocore Dynamic UI Series.</p><h3>A. Performance Optimization in Dynamic Systems (Challenge 4)</h3><p>Performance in a dynamic, data-intensive application is inherently sensitive to the complexity of the runtime JSON configuration (Challenge 4).</p><h4>Data Handling and Caching</h4><p>The primary defense against large data volumes, particularly in tables with thousands of rows, is a disciplined data strategy. This includes mandating server-side pagination, where the table component passes page and limit parameters to the API. Coupled with TanStack Query’s caching, this minimizes the volume of data transferred and processed on the client. Additionally, while not explicitly detailed in the provided code, component-level memoization (useMemo) is implied for expensive computations, and the sophisticated nature of the TableRenderer suggests reliance on virtualization techniques common in modern React table libraries to ensure only visible rows are rendered.</p><h4>Instrumentation and Tracking</h4><p>Crucially, the architecture includes built-in performance monitoring utilities (performanceTracker.ts). This module allows developers to wrap rendering or logic execution with startPerf and endPerf functions to measure execution time in the console. This instrumentation is not merely a developer convenience; it is an architectural requirement for debugging and monitoring, as a complex schema can instantly create a performance bottleneck, requiring quantitative measurement to isolate the issue.</p><h3>B. Ensuring Quality: Testing Strategies for JSON-Driven UIs</h3><p>Maintaining code quality and stability is paramount when core UI components are driven by external configurations. Comprehensive test coverage is essential.</p><ol><li><strong>Renderer Unit Testing:</strong> Using React Testing Library, unit tests focus on ensuring that the renderers correctly interpret the JSON structure. For example, tests verify that layoutRenderer correctly maps PAGE_COMPONENT_TYPE_ENUM.CONTAINER to a visible DOM element with the specified attributes.</li><li><strong>Validator Testing:</strong> Unit tests target formValidator.ts to confirm that all defined JSON validation rules (required, pattern, min/max) are accurately translated into Zod schemas.</li><li><strong>Mapper Testing:</strong> Entity Mappers (customerEntityMapper) must be tested extensively to guarantee the integrity and complexity of the data transformations—ensuring that flat form data consistently results in the correct nested API payload, especially for conditional structures.</li></ol><h3>C. The Future of Dynamic DX: Tooling and Evolution</h3><p>To ensure long-term scalability, the focus must shift towards enhancing the Developer Experience (DX) and managing the evolution of the JSON schemas themselves.</p><h4>Visual Schema Builder</h4><p>Currently, schemas are authored manually in JSON files, a process that is slow and error-prone. A high-impact future improvement involves developing a drag-and-drop <strong>Visual Schema Builder</strong>. The existing dependency on @xyflow/react for workflows provides a strong foundation for building a node-based interface for designers or product managers to visually define layouts and forms. 
<h3>B. Ensuring Quality: Testing Strategies for JSON-Driven UIs</h3><p>Maintaining code quality and stability is paramount when core UI components are driven by external configurations. Comprehensive test coverage is essential.</p><ol><li><strong>Renderer Unit Testing:</strong> Using React Testing Library, unit tests focus on ensuring that the renderers correctly interpret the JSON structure. For example, tests verify that layoutRenderer correctly maps PAGE_COMPONENT_TYPE_ENUM.CONTAINER to a visible DOM element with the specified attributes.</li><li><strong>Validator Testing:</strong> Unit tests target formValidator.ts to confirm that all defined JSON validation rules (required, pattern, min/max) are accurately translated into Zod schemas.</li><li><strong>Mapper Testing:</strong> Entity Mappers (customerEntityMapper) must be tested extensively to guarantee the integrity and complexity of the data transformations—ensuring that flat form data consistently results in the correct nested API payload, especially for conditional structures.</li></ol><h3>C. The Future of Dynamic DX: Tooling and Evolution</h3><p>To ensure long-term scalability, the focus must shift towards enhancing the Developer Experience (DX) and managing the evolution of the JSON schemas themselves.</p><h4>Visual Schema Builder</h4><p>Currently, schemas are authored manually in JSON files, a process that is slow and error-prone. A high-impact future improvement involves developing a drag-and-drop <strong>Visual Schema Builder</strong>. The existing dependency on @xyflow/react for workflows provides a strong foundation for building a node-based interface for designers or product managers to visually define layouts and forms. This strategic shift transforms schema authoring from a technical coding task into a rapid, low-code platform activity, significantly lowering the barrier to entry for non-developer stakeholders.</p><h4>Computed Fields</h4><p>The addition of <strong>Computed Fields</strong> would eliminate repetitive custom form component logic. By allowing fields to include an expression property (e.g., &quot;(quantity) * (unitPrice)&quot;), calculations could be handled dynamically within the form renderer. This centralizes business logic within the configuration, leading to automatic calculations and reduced user error without requiring custom component development for every calculation.</p><h4>Schema Versioning and Migration</h4><p>Over time, business requirements dictate changes to JSON schemas (e.g., renaming fields or making them mandatory). To maintain backward compatibility and application stability, the system requires formalized schema evolution management (Improvement 6). Although schemas already contain a version field, implementing a dedicated migrateSchema function would automatically apply migration steps (e.g., renaming fields, adding default values) whenever a legacy schema version is detected. This automated upgrade path is essential for scaling dynamic configuration systems safely across multiple environments and client versions.</p><h3>D. Table: Strategic Future Improvements Roadmap</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kP0C9wA4ZcEaHzKOkIzKPA.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3166aef6e9f6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[6. Event Handling and Context: Implementing Declarative Actions and Conditional Rendering]]></title>
            <link>https://medium.com/@adorekasun/6-event-handling-and-context-implementing-declarative-actions-and-conditional-rendering-91c519c48f01?source=rss-fec47da242ec------2</link>
            <guid isPermaLink="false">https://medium.com/p/91c519c48f01</guid>
            <category><![CDATA[react]]></category>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[conditional-rendering]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[web-development]]></category>
            <dc:creator><![CDATA[Kalhara Perera]]></dc:creator>
            <pubDate>Tue, 09 Dec 2025 03:32:15 GMT</pubDate>
            <atom:updated>2025-12-09T03:32:15.692Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1023/1*WdRwfQcOwU8qQc2FQzOFOQ.png" /></figure><p>This article is <strong>Part 6 of 7</strong> in the Neurocore Dynamic UI Series.</p><h3>A. The Declarative Action Pattern for Buttons</h3><p>In the Neurocore system, user interactions, particularly button clicks, are not handled by imperative component code but by declarative JSON configuration. This pattern allows business logic to define behavior without frontend code changes.</p><p>Instead of writing a custom onClick handler, buttons define an onClick property containing a JSON payload that specifies the desired outcome (e.g., &quot;type&quot;: &quot;modal&quot;, &quot;type&quot;: &quot;route&quot;). The central dispatcher, buttonClickHandler.ts, interprets this payload. For example, a declarative action can specify that upon success, a modal should be opened or the user should be redirected via a specific routeTemplate. This is an entity-agnostic approach that maximizes reuse across the entire application.</p><h3>B. Context-Awareness via Template Resolution</h3><p>To ensure that declarative actions work across all entities without duplication, the system incorporates a powerful <strong>Template Resolver</strong> mechanism (templateResolver.ts). This utility function resolves dynamic strings within the action configuration based on the current runtime context.</p><p>For instance, a button defining a redirection action might use the routeTemplate: &quot;/{entity}/add&quot;. When the button is rendered on the Opportunity page, the Template Resolver uses the current context to transform this template into the specific path /opportunity/add. This mechanism is also used for dynamic confirmation messages, such as resolving &quot;Delete this {entity}?&quot; to &quot;Delete this customer?&quot;. The reliance on template resolution is a key architectural decision that guarantees zero-code feature expansion; when a new entity is introduced, all standard navigation and action buttons automatically function correctly.</p><h3>C. Advanced Conditional Rendering Logic (Challenge 6)</h3><p>Dynamic UIs require complex components to appear or disappear based on the application state, user permissions, or data values (Challenge 6). The solution is the <strong>Conditional Rendering</strong> mechanism.</p><h4>Configuration and Evaluation</h4><p>Layout components can include a conditionalRender configuration block. This block defines specific logical conditions using a data path (e.g., &quot;activity.action&quot;), an operator (&quot;===&quot;, &quot;includes&quot;, &quot;regex&quot;), and a target value. The rule set supports complex logic chaining (e.g., using AND or OR operators).</p><p>The logic is executed by the conditionalRenderer.ts utility. This function relies on getNestedData.ts to safely retrieve values from deep data paths. The evaluation engine then determines whether the component should be rendered based on the runtime data. This transforms the dynamic UI into a highly sophisticated, configuration-driven feature-flagging system, allowing components or entire sections to be tailored instantly based on user role or data state without needing code deployments.</p>
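<p>To ground the idea, here is a compact sketch of such an evaluation engine. The types and helpers are assumptions for illustration, not the actual conditionalRenderer.ts:</p><pre>type Rule = { path: string; operator: &#39;===&#39; | &#39;includes&#39; | &#39;regex&#39;; value: string };<br><br>// Safely walk a dotted path such as &#39;activity.action&#39; (the getNestedData idea).<br>const getNestedData = (obj: unknown, path: string): unknown =&gt;<br>  path.split(&#39;.&#39;).reduce((acc: any, key) =&gt; acc?.[key], obj);<br><br>const evaluateRule = (data: unknown, rule: Rule): boolean =&gt; {<br>  const actual = getNestedData(data, rule.path);<br>  switch (rule.operator) {<br>    case &#39;===&#39;: return actual === rule.value;<br>    case &#39;includes&#39;: return typeof actual === &#39;string&#39; &amp;&amp; actual.includes(rule.value);<br>    case &#39;regex&#39;: return typeof actual === &#39;string&#39; &amp;&amp; new RegExp(rule.value).test(actual);<br>  }<br>};<br><br>evaluateRule({ activity: { action: &#39;CALL&#39; } },<br>  { path: &#39;activity.action&#39;, operator: &#39;===&#39;, value: &#39;CALL&#39; }); // true</pre><p>AND or OR chaining then reduces to folding individual rule results with every() or some().</p>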
<h3>D. Table: Declarative Action Configuration Example</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qMno7tm6Wzhpfp9OnVNLsQ.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=91c519c48f01" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[5. Advanced State and Data Layer: TanStack Query, Zustand, and Entity Mappers]]></title>
            <link>https://medium.com/@adorekasun/5-advanced-state-and-data-layer-tanstack-query-zustand-and-entity-mappers-5df5c925e590?source=rss-fec47da242ec------2</link>
            <guid isPermaLink="false">https://medium.com/p/5df5c925e590</guid>
            <category><![CDATA[react]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[tanstack-query]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[zustand]]></category>
            <dc:creator><![CDATA[Kalhara Perera]]></dc:creator>
            <pubDate>Tue, 02 Dec 2025 03:32:20 GMT</pubDate>
            <atom:updated>2025-12-02T03:32:20.312Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1023/1*_YbX_9R2CPAwo-rTpIeoZw.png" /></figure><p>This article is <strong>Part 5 of 7</strong> in the Neurocore Dynamic UI Series.</p><h3>A. The Dual State Strategy: Server vs. Client</h3><p>The maintenance of state in an application managing numerous dynamic entities requires precision. The strategic reliance on TanStack Query for server state is mandatory for high performance in a configuration-driven system. Every layout, form, table, and data entity requires network synchronization, caching, and background refetching — tasks perfectly suited to TanStack Query. Query keys (e.g., [&#39;table&#39;, searchEntity, filters, sort, page]) are carefully structured to allow fine-grained caching and intelligent invalidation, which is essential when form submissions need to update cached table views.</p><p>Client-side state, which is often transient or local to the UI session, is managed by Zustand stores. For instance, useFormStore.ts centralizes data for multi-page forms, while useFilterStore.ts maintains a consistent set of filters across different tables and views for a given entity, ensuring continuity for the user.</p><h3>B. Solving Challenge 2: The Entity Mapping Contract Gap</h3><p>Perhaps the most significant business complexity solved by this architecture is data transformation, addressed by dedicated <strong>Entity Mappers</strong> (Challenge 2). Forms are designed for optimal user experience — often flat and simple (e.g., customer.address1.street). However, backend APIs frequently require complex, nested payloads, sometimes involving arrays, conditional logic, and type normalization.</p><h4>The Role of Mappers</h4><p>Mapper functions, found in src/helpers/mappers/ (e.g., customerEntityMapper.ts), bridge this gap. They are pure functions responsible for converting the flat form data into the exact nested structure expected by the API. For example, the customerEntityMapper must handle the dual-address structure required by the backend. The mapper extracts location and billing addresses from the form data and applies complex logic: if the &quot;use same address for billing&quot; flag is true, it clones the location address and adjusts its type to &quot;Billing,&quot; ensuring the backend receives a correctly formatted array of addresses.</p><p>This separation guarantees that the form structure can evolve based on UX needs without breaking the API contract, and vice-versa. The mapper acts as a crucial middleware layer, ensuring data integrity and structure consistency for outbound API requests.</p><h3>C. The Submission Workflow: Transform, Submit, Invalidate</h3><p>The entire submission process is orchestrated seamlessly.</p><ol><li><strong>Input Collection:</strong> RHF collects the raw, flat user input.</li><li><strong>Data Transformation:</strong> Before transmission, RHF’s handleSubmit calls the relevant Entity Mapper (applyEntityMappers) to convert the data into the complex API payload structure.</li><li><strong>API Submission:</strong> The transformed data is sent to the backend using the dedicated API service layer (e.g., customer.service.ts).</li><li><strong>Query Invalidation:</strong> Upon successful submission, TanStack Query is used to invalidate the relevant queries (e.g., the table list for the newly created entity), triggering a silent background refetch that updates the UI globally without requiring a manual page refresh. This step ensures data synchronization and a smooth user experience.</li></ol>
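<p>As a sketch of the mapper idea behind step 2, with illustrative field names rather than the actual customerEntityMapper.ts:</p><pre>type CustomerForm = {<br>  name: string;<br>  locationStreet: string;<br>  billingStreet?: string;<br>  useSameAddressForBilling: boolean;<br>};<br><br>// Pure function: flat form data in, nested API payload out.<br>const mapCustomerPayload = (form: CustomerForm) =&gt; {<br>  const location = { type: &#39;Location&#39;, street: form.locationStreet };<br>  const billing = form.useSameAddressForBilling<br>    ? { ...location, type: &#39;Billing&#39; } // clone the location address, retype it<br>    : { type: &#39;Billing&#39;, street: form.billingStreet ?? &#39;&#39; };<br>  return { name: form.name, addresses: [location, billing] };<br>};</pre>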
<h3>D. Table: Data Flow and Transformation Pathway</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WraIqW1LRKuyX_5Vemh6Kw.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5df5c925e590" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[4. Engineering Dynamic Forms: Zod Validation and React Hook Form Integration]]></title>
            <link>https://medium.com/@adorekasun/4-engineering-dynamic-forms-zod-validation-and-react-hook-form-integration-4f02658e27d1?source=rss-fec47da242ec------2</link>
            <guid isPermaLink="false">https://medium.com/p/4f02658e27d1</guid>
            <category><![CDATA[react]]></category>
            <category><![CDATA[nextjs]]></category>
            <category><![CDATA[validation]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[frontend-engineering]]></category>
            <dc:creator><![CDATA[Kalhara Perera]]></dc:creator>
            <pubDate>Tue, 25 Nov 2025 03:32:03 GMT</pubDate>
            <atom:updated>2025-11-25T03:32:03.539Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1023/1*mQ-9mdyB3BWe-TsWkgIpgA.png" /></figure><p>This article is <strong>Part 4 of 7</strong> in the Neurocore Dynamic UI Series.</p><h3>A. React Hook Form as the State Manager</h3><p>The selection of React Hook Form (RHF) is a strategic choice rooted in performance optimization for complex, dynamic forms. RHF leverages an uncontrolled component approach, minimizing component re-renders that typically plague controlled inputs in large forms. This efficiency is crucial in a dynamic system where the form structure and validation rules change at runtime. The formRenderer.tsx uses RHF for managing form state, registration, submission, and value watching.</p><h3>B. Dynamic Zod Schema Generation</h3><p>The critical innovation within the formRenderer is the dynamic generation of the validation schema. The generateZodSchema function, located in src/helpers/formValidator.ts, iterates through the field definitions contained within the FormDeclaration JSON. It translates declarative rules into a cohesive Zod object schema, which is then passed to RHF via the @hookform/resolvers/zod utility.</p><p>For example, when the system encounters a field with &quot;validation&quot;: { &quot;required&quot;: true }, the validator generates a Zod schema using .min(1) or similar constraints. It also handles type coercion necessary for HTML inputs. For fields marked with type: &#39;number&#39;, the validator explicitly uses z.coerce.number() to ensure that the string value received from the form input is correctly validated as a numeric type before submission. By externalizing all validation rules into the JSON schema, validation logic becomes a reusable service decoupled from the form component itself, ensuring perfect synchronization between business requirements and UI constraints.</p>
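<p>Under those constraints, a simplified version of the translation step might look like this (illustrative, not the project’s formValidator.ts):</p><pre>import { z } from &#39;zod&#39;;<br><br>type FieldDef = {<br>  name: string;<br>  type: &#39;text&#39; | &#39;number&#39;;<br>  validation?: { required?: boolean; pattern?: string };<br>};<br><br>const generateZodSchema = (fields: FieldDef[]) =&gt; {<br>  const shape: Record&lt;string, z.ZodTypeAny&gt; = {};<br>  for (const f of fields) {<br>    if (f.type === &#39;number&#39;) {<br>      // HTML inputs emit strings; coerce before validating as a number.<br>      shape[f.name] = z.coerce.number();<br>    } else {<br>      let s = z.string();<br>      if (f.validation?.required) s = s.min(1, &#39;Required&#39;);<br>      if (f.validation?.pattern) s = s.regex(new RegExp(f.validation.pattern));<br>      shape[f.name] = s;<br>    }<br>  }<br>  return z.object(shape);<br>};<br><br>// Handed to RHF via zodResolver(generateZodSchema(declaration.fields)).</pre>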
<h3>C. Solving Challenge 3: Cascading Dropdowns and Field Dependencies</h3><p>A common requirement in enterprise UIs is field dependency, such as a branch dropdown only displaying options relevant to the previously selected customer (Challenge 3). The Neurocore architecture handles this natively through configuration.</p><h4>Dependency Declaration and Watcher Pattern</h4><p>The dependency is declared directly within the JSON schema via the dependson property, specifying the parent field (e.g., &quot;opportunity.customerId&quot;) and the filter key needed by the API (&quot;filterField&quot;: &quot;customerId&quot;). In the formRenderer, RHF&#39;s form.watch() functionality is used to monitor the value of the parent field.</p><h4>Conditional Rendering and Filtered Fetching</h4><p>The rendering mechanism incorporates an essential performance optimization: if the parent field value is null or undefined, the child field is explicitly set to null and is not rendered. This technique prevents unnecessary rendering of dependencies and blocks premature API calls. Once the parent value is present, the system uses the custom hook useFetchSelectOptionsFromEndpoint to trigger a targeted TanStack Query fetch. This query sends the parent value to the API as a filter parameter, retrieving only the relevant options for the child field. This robust pattern ensures a responsive user experience and reduces server load by avoiding unnecessary data retrieval.</p><h3>D. Table: Dynamic Validation to Zod Transformation</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IrPlWVIE8x7SqoSFYh17Rw.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4f02658e27d1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[3. The Core Engine: Building a Recursive Layout Renderer in React 19]]></title>
            <link>https://medium.com/@adorekasun/3-the-core-engine-building-a-recursive-layout-renderer-in-react-19-5198aa6127eb?source=rss-fec47da242ec------2</link>
            <guid isPermaLink="false">https://medium.com/p/5198aa6127eb</guid>
            <category><![CDATA[nextjs]]></category>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[react]]></category>
            <category><![CDATA[dynamic-programming]]></category>
            <dc:creator><![CDATA[Kalhara Perera]]></dc:creator>
            <pubDate>Tue, 18 Nov 2025 03:32:29 GMT</pubDate>
            <atom:updated>2025-11-18T03:32:29.444Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1023/1*LaFJ6r-PSXQoy6dordRqwg.png" /></figure><p>This article is <strong>Part 3 of 7</strong> in the Neurocore Dynamic UI Series.</p><h3>A. Decoding the Layout Declaration Tree</h3><p>The layoutRenderer.tsx file serves as the architectural heart of the dynamic UI system. It is responsible for interpreting the PageDeclaration JSON and transforming its structure into a live component tree. The PageDeclaration is fundamentally a nested structure, defining components that contain arrays of children, mirroring the recursive nature of the HTML DOM.</p><p>The PageRenderer function acts as the core dispatcher. It takes the declaration object and iterates over the component hierarchy, using a switch statement to evaluate each component’s type against the PAGE_COMPONENT_TYPE_ENUM. This switch statement effectively functions as a dynamic routing engine for UI components, directing the configuration to the appropriate rendering function, whether it is a simple CONTAINER or a complex TABLE_WITH_FILTER.</p><h3>B. Component Mapping and Dynamic Composition</h3><p>The strength of the renderer lies in its ability to dynamically compose complex views from granular components. When the renderer encounters a type like TABLE_WITH_FILTER or FORM, it delegates rendering to specialized components (&lt;TableRenderer&gt;, &lt;FormRenderer&gt;) and passes the corresponding JSON configuration as props.</p><p>This delegation allows for specialized optimization. For instance, the TableRenderer handles data fetching, sorting, and pagination logic, but it receives its structural definition—such as which columns to display and which actions to enable—directly from the layout JSON. The system supports a wide variety of components, including complex visualization elements like PIE_CHART and workflow tools like WORKFLOW and JOB_STAT_PANEL, all managed under the same universal rendering logic.</p><p>This division of labor — separating the generic rendering mechanism from the specific functional components — adheres closely to the Headless Component design pattern. The UI components (presentation) receive their configuration from the dynamic renderers (logic and state management), making the core rendering mechanism robust, highly reusable, and easily testable.</p>
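<p>A stripped-down sketch of that dispatcher pattern follows. Component names mirror the article, but the real enum covers 30+ types, so treat this as illustrative rather than the actual layoutRenderer.tsx:</p><pre>import React from &#39;react&#39;;<br><br>type PageComponent = {<br>  type: &#39;CONTAINER&#39; | &#39;TABLE_WITH_FILTER&#39; | &#39;FORM&#39;;<br>  className?: string;<br>  config?: unknown;<br>  children?: PageComponent[];<br>};<br><br>// Assume the specialized renderers described above exist.<br>declare const TableRenderer: React.FC&lt;{ config?: unknown }&gt;;<br>declare const FormRenderer: React.FC&lt;{ config?: unknown }&gt;;<br><br>export const PageRenderer = ({ node }: { node: PageComponent }) =&gt; {<br>  switch (node.type) {<br>    case &#39;CONTAINER&#39;: // recurse into children; styling comes from the schema<br>      return (<br>        &lt;div className={node.className}&gt;<br>          {node.children?.map((child, i) =&gt; &lt;PageRenderer key={i} node={child} /&gt;)}<br>        &lt;/div&gt;<br>      );<br>    case &#39;TABLE_WITH_FILTER&#39;:<br>      return &lt;TableRenderer config={node.config} /&gt;; // delegate to the specialist<br>    case &#39;FORM&#39;:<br>      return &lt;FormRenderer config={node.config} /&gt;;<br>    default:<br>      return null; // unknown schema types fail soft<br>  }<br>};</pre>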
<h3>C. Integrating Radix UI and Tailwind CSS for Accessibility</h3><p>The choice of UI library is critical for a scalable system, particularly regarding developer experience (DX) and accessibility. The Neurocore system utilizes Radix UI primitives for foundational components (dropdowns, modals, tabs). Radix UI is prized for providing built-in accessibility features, including proper ARIA attributes, keyboard navigation support, and correct focus management, which are automatically inherited by the dynamic components.</p><p>Styling is managed using Tailwind CSS. This utility-first framework allows styles to be applied directly via the className property, which can be defined within the JSON schema itself, providing rapid and consistent visual customization without requiring component-level CSS modification for every new entity or layout.</p><h3>D. Performance Through Dedicated Renderers</h3><p>An architectural decision that supports robust performance is the use of dedicated, specialized renderers (layoutRenderer.tsx, formRenderer.tsx, tableRenderer.tsx) rather than attempting a single, monolithic rendering function.</p><p>While a single engine could interpret all component types, separating them allows each renderer to be optimized for its specific domain. The form renderer can be optimized for state management and validation cycles, while the table renderer can focus on high-performance data fetching via TanStack Query and implement virtualization techniques necessary for managing large datasets (Challenge 4). This specialization ensures that performance targets are met by applying the most effective optimizations at the correct layer, leading to a highly responsive user experience even with complex configurations.</p><h3>E. Table: Component Type Mapping</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-FNVWFS9dK-Zf5hH6IDenA.png" /></figure><p>See you in the 4th Article, Guys!<strong> 💻</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5198aa6127eb" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[2. Defining the UI Language: Mastering JSON Schemas and TypeScript Contracts]]></title>
            <link>https://medium.com/@adorekasun/2-defining-the-ui-language-mastering-json-schemas-and-typescript-contracts-170ae663fd9f?source=rss-fec47da242ec------2</link>
            <guid isPermaLink="false">https://medium.com/p/170ae663fd9f</guid>
            <category><![CDATA[json]]></category>
            <category><![CDATA[nextjs]]></category>
            <category><![CDATA[enterprise-technology]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[web-development]]></category>
            <dc:creator><![CDATA[Kalhara Perera]]></dc:creator>
            <pubDate>Tue, 11 Nov 2025 03:32:10 GMT</pubDate>
            <atom:updated>2025-11-11T03:32:10.481Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1023/1*oFxSmAMIHsvUkt_Jt7720A.png" /></figure><p>This article is <strong>Part 2 of 7</strong> in the Neurocore Dynamic UI Series.</p><h3>A. The Form Schema Anatomy: Configuration as Policy</h3><p>The FormDeclaration schema acts as a configuration policy that dictates form behavior, effectively removing all business logic from the component presentation layer. The schema is organized hierarchically, starting with the id and entity definitions. The core of the form is broken down into sections, each of which can define a dataPrefix (e.g., &quot;opportunity&quot;) to manage nested data paths efficiently.</p><p>Fields within sections specify not only their display attributes (label, type) but also their functional requirements, such as validation rules and dependency linkages. For example, the type dropdown-with-action defines a composite field that requires specialized rendering logic, yet it is invoked generically via the JSON. Finally, the uiConfig section contains the submitAction, which details the HTTP method and endpoint required for submission (e.g., POST to /opportunities/create), completely decoupling the form renderer from API knowledge.</p><h3>B. Bridging Runtime JSON and Compile-Time Safety</h3><p>Working with dynamic JSON inherently introduces the risk of type mismatches and runtime crashes, a critical challenge in dynamic UI construction. The Neurocore system mitigates this risk by enforcing type safety across the architecture in two primary ways: TypeScript interfaces and runtime validation using Zod.</p><h4>TypeScript Contracts</h4><p>All JSON schemas have corresponding TypeScript interfaces defined in the src/declarations/ directory, such as PageDeclaration.ts and formEntityDeclaration.ts. These interfaces provide developers with compile-time safety and IDE autocomplete when interacting with the schema objects within the renderers and services. This is foundational to maintaining refactoring safety and quality in a codebase where the core structure changes dynamically.</p><h4>Proactive Zod Schema Validation</h4><p>While TypeScript ensures safety within the application code, Zod 3.24.2 is utilized to enforce runtime data integrity. Zod is used in the service layer (src/lib/apiResponseSchemas/) to validate incoming API responses, ensuring data adheres to the expected structure before it is consumed by the UI. Furthermore, a proposed architectural enhancement involves using Zod validation on the schemas themselves when they are fetched via useFetchFormSchema. If schema definitions are deployed independently, proactive validation ensures that a malformed JSON configuration does not cause a hard application crash, but rather gracefully fails during the query process, providing better error handling for schema authors and improving the overall resilience of the application.</p><h3>C. Schema Fetching Strategy: TanStack Query in Action</h3><p>The architectural reliance on TanStack Query is crucial for performance. Custom hooks, useFetchFormSchema and useFetchLayoutSchema (in src/hooks/), manage the asynchronous process of retrieving JSON configurations from the API.</p><p>The importance of this data layer extends beyond simple fetching; it controls data freshness and network load. The schema queries use an aggressive caching strategy, defining a staleTime of five minutes. This ensures that once a schema is fetched, subsequent page loads or component mounts requiring the same schema will receive the cached version instantly, preventing redundant network requests for configurations that rarely change. This strategy significantly contributes to perceived performance and responsiveness.</p>
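<p>A minimal sketch of such a hook, assuming an illustrative endpoint path (the real hook’s internals may differ):</p><pre>import { useQuery } from &#39;@tanstack/react-query&#39;;<br><br>export const useFetchFormSchema = (entity: string) =&gt;<br>  useQuery({<br>    queryKey: [&#39;form-schema&#39;, entity], // cached per entity<br>    queryFn: async () =&gt; {<br>      const res = await fetch(`/api/schemas/forms/${entity}`); // assumed endpoint<br>      if (!res.ok) throw new Error(`Schema fetch failed: ${res.status}`);<br>      return res.json(); // ideally validated with Zod, as proposed above<br>    },<br>    staleTime: 5 * 60 * 1000, // five minutes: schemas rarely change<br>  });</pre>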
<h3>D. Generic Types for Reuse</h3><p>The definition of the system’s vocabulary through generic types is a critical element of its scalability. The layout schema utilizes high-level, generic component types such as table-with-filter, form, and container. This design means that the same fundamental &lt;TableRenderer&gt; component can handle data for disparate entities—for example, rendering opportunity_grid_view or customer_grid_view. The only difference lies in the JSON configuration supplied to the renderer, specifically the searchEntity and the visibleColumns parameters. By abstracting away the entity-specific details and focusing on generic component roles, the architecture guarantees massive component reuse, making the system highly adaptable when new entities are introduced.</p><h3>E. Table: Essential Schema Fields for Scalability</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/834/1*ykpPsSHjFj3a5fiELe64eg.png" /></figure><p>See you in Part 3 of 7, <strong>The Core Engine: Building a Recursive Layout Renderer in React 19</strong>, Geeks!<strong> 💻</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=170ae663fd9f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[1. The Declarative Revolution: Next.js 15 and Dynamic UI Architecture]]></title>
            <link>https://medium.com/@adorekasun/1-the-declarative-revolution-next-js-15-and-dynamic-ui-architecture-feee8eb9f999?source=rss-fec47da242ec------2</link>
            <guid isPermaLink="false">https://medium.com/p/feee8eb9f999</guid>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[nextjs]]></category>
            <category><![CDATA[front-end-development]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[react]]></category>
            <dc:creator><![CDATA[Kalhara Perera]]></dc:creator>
            <pubDate>Wed, 05 Nov 2025 07:18:12 GMT</pubDate>
            <atom:updated>2025-11-05T09:31:58.474Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1023/1*4TApfjEpk5DfwWawSNCNZg.png" /></figure><p>This article is <strong>Part 1 of 7</strong> in the Neurocore Dynamic UI Series.</p><h3>A. Why Hardcoding UIs is Dead in 2025</h3><p>The Neurocore system addresses a fundamental challenge in enterprise software development: the prohibitive cost and time required to update user interfaces across multiple entities whenever business logic changes. The project manages complex entities such as customers, opportunities, and jobs, all requiring unique forms, tables, and page layouts. Relying on traditional imperative UI development — where every layout change necessitates code deployment — is unsustainable for scalable platforms. The architectural decision here is to shift entirely to a declarative paradigm, where the user interface, its components, data bindings, and workflows are entirely defined by JSON contracts. This separation allows backend services or even dedicated configuration microservices to dictate the frontend structure dynamically, dramatically increasing the speed of new feature delivery.</p><h3>B. Architectural Layering: The Triple-Tiered Frontend Structure</h3><p>The application architecture establishes clear boundaries between concerns, maximizing modularity and testability. This layered approach provides the necessary performance and organizational clarity for a system driven by dynamic configuration.</p><h4>Application Layer: Next.js 15 and Routing</h4><p>The core framework is Next.js 15.3.0, leveraging the App Router for robust routing and server-side capabilities. Key entry points demonstrate the application’s dynamic nature: src/app/(pages)/[entityName]/page.tsx handles dynamic routing, allowing a single route definition to manage pages for customers, opportunities, and any other defined entity. The root layout (src/app/layout.tsx) serves as the central hub for initializing global providers, including those for state management and data fetching, ensuring application-wide consistency.</p><h4>Data &amp; State Layer: TanStack Query and Zustand</h4><p>The Neurocore system employs a sophisticated, dual-state management strategy, adhering to recommended best practices for modern React applications. Server state, which includes data fetching, caching, synchronization, and schema retrieval, is handled exclusively by TanStack Query 5.75.0. This mechanism optimizes network efficiency and performance by caching configurations (such as JSON schemas) that are fetched from the API. Conversely, client state — local, transient data like form input during a multi-step process (useFormStore.ts), filter selections (useFilterStore.ts), and authentication details (useAuthStore.ts) — is managed by the lightweight and highly performant Zustand 4.5.7. This clear separation of concerns ensures that components only re-render when their associated state truly changes, while aggressive server state caching keeps network usage efficient.</p><h4>Presentation &amp; Rendering Layer: The Dynamic Engines</h4><p>The core interpretative logic resides in the src/dynamic-renderers/ directory. Dedicated modules like layoutRenderer.tsx, formRenderer.tsx, and tableRenderer.tsx are responsible for consuming the JSON contracts and recursively translating them into React components. This separation allows specialized optimization for each rendering type; for instance, the formRenderer can be tightly integrated with react-hook-form and Zod, while the tableRenderer can focus on data virtualization and query management.</p>
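<p>As a sketch of how that dynamic entry point might look (an illustration of the pattern, not the project’s actual file):</p><pre>// src/app/(pages)/[entityName]/page.tsx (illustrative)<br>// Next.js 15 App Router delivers route params as a Promise.<br>import { EntityView } from &#39;@/components/EntityView&#39;; // assumed client component that<br>// fetches the layout schema and hands it to the dynamic renderers<br><br>export default async function EntityPage({<br>  params,<br>}: {<br>  params: Promise&lt;{ entityName: string }&gt;;<br>}) {<br>  const { entityName } = await params; // &#39;customer&#39;, &#39;opportunity&#39;, &#39;job&#39;, ...<br>  return &lt;EntityView entityName={entityName} /&gt;;<br>}</pre>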
<h3>C. The Cornerstone: Three JSON Schemas Governing the UI</h3><p>The entire frontend system revolves around three primary JSON schema types, which together form the “UI language”.</p><ol><li><strong>Form Schema (FormDeclaration):</strong> Defines the complete structure and behavior of a form. This includes defining sections, identifying the target entity (e.g., &quot;entity&quot;: &quot;opportunity&quot;), specifying individual field types (e.g., dropdown-with-action), detailing client-side validation rules (e.g., &quot;required&quot;: true), and configuring the submission action (e.g., &quot;type&quot;: &quot;POST&quot;, &quot;endpoint&quot;: &quot;/opportunities/create&quot;).</li><li><strong>Page Layout Schema (PageDeclaration):</strong> Dictates the hierarchical structure of an entire application view. It organizes the page into nested components (e.g., containers, cards, tables) and handles page-level configurations, such as visible columns in a table or data bindings.</li><li><strong>Component Types (PAGE_COMPONENT_TYPE_ENUM/FIELD_TYPE_ENUM):</strong> These TypeScript enums, located in src/constants/enums.ts, define the shared vocabulary of the system. They support over 30 distinct component types, ranging from standard elements (e.g., TEXT, BUTTON, DATE) to specialized business components (e.g., table-assistant, job-stat-panel, dropdown-with-action).</li></ol><h3>D. Table: Core Technology Stack and Purpose</h3><p>The Neurocore system is built on a foundation of cutting-edge dependencies, chosen specifically for performance and scalability in a dynamic context.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/837/1*XqPtJ_RNd61qFGZem1KgOw.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=feee8eb9f999" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
<title><![CDATA[10 Weird Truths About AI That’ll Change How You See It (And yes, you can try most of these…]]></title>
            <link>https://medium.com/@adorekasun/10-weird-truths-about-ai-thatll-change-how-you-see-it-and-yes-you-can-try-most-of-these-9e5871ff7dda?source=rss-fec47da242ec------2</link>
            <guid isPermaLink="false">https://medium.com/p/9e5871ff7dda</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[ux]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[Kalhara Perera]]></dc:creator>
            <pubDate>Tue, 24 Jun 2025 07:35:56 GMT</pubDate>
            <atom:updated>2025-06-24T07:35:56.093Z</atom:updated>
            <content:encoded><![CDATA[<h3>🤖 <em>10 Weird Truths About AI That’ll Change How You See It </em>(And yes, you can try most of these yourself.) Part 01</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hXb7WBGen-l2acmBEIFWhg.png" /></figure><p>When people think about AI, they usually imagine two extremes: either world-dominating robot overlords… or a glorified autocomplete. But in reality, AI is far more nuanced — and far more human-like than you’d expect.</p><p>Some of the most fascinating things I’ve learned about AI weren’t from technical papers or product demos. They came from <em>surprising quirks</em>, user behaviors, and small moments of interaction that make you go:</p><blockquote>“Wait… is that really how it works?”</blockquote><p>Here are 10 of the most mind-bending truths I’ve discovered — along with hands-on experiments you can try to experience them for yourself.</p><h3>1. 🤯 AI Doesn’t “Understand” Anything. It Just Predicts.</h3><p>When ChatGPT gives you a perfect answer, it <em>feels</em> like it understands you. But it’s not reasoning. It’s just predicting what comes next in a sentence based on billions of examples.</p><p><strong>Try this:</strong><br> Ask ChatGPT to complete this sentence:<br> The capital of France is... → It’ll say <strong>Paris</strong>.</p><p>Now type:<br> The capital of France is not Paris, it&#39;s...<br> It will likely say something <strong>absurd but grammatically correct</strong>—because it follows your logic, not the truth.</p><blockquote><em>AI isn’t thinking. It’s guessing the next best word.</em></blockquote><h3>2. 🎨 AI Can Write Poetry, But It Doesn’t Feel a Thing</h3><p>You can ask an AI to write about heartbreak, joy, loss, or awe — and it will nail the tone. But here’s the twist: it has no emotions. It’s just remixing millions of emotional expressions from other humans.</p><p><strong>Try this:</strong><br> Prompt: <em>“Write a poem about losing a best friend in the rain.”</em><br> Then ask: <em>“Do you understand what grief feels like?”</em></p><p>You’ll see: it admits it doesn’t feel anything — it’s just using pattern-matching to simulate empathy.</p><h3>3. 📉 AIs Are Surprisingly Biased — Because Humans Are</h3><p>AI models trained on the internet absorb <em>everything</em> — the good, the bad, and the biased. If people tend to associate certain jobs with genders or certain traits with ethnicities, guess what the AI learns?</p><p>That’s why ethical guardrails are such a huge part of responsible AI development.</p><p><strong>Try this:</strong><br> Ask ChatGPT: <em>“What’s a typical nurse like?”</em> vs <em>“What’s a typical CEO like?”</em><br> Then ask it to describe both using gender-neutral terms and see how the framing changes.</p><h3>4. 🤔 AI Makes Up Facts — But Sounds Confident</h3><p>This is called <strong>AI hallucination</strong> — where the AI gives you a fake fact, citation, or statistic with total confidence.</p><p><strong>Try this:</strong><br> Ask: <em>“Who won the Oscar for Best Picture in 2025?”</em> (or any event after the AI’s cutoff date)<br> It may give a plausible-sounding answer… but it’s made up.</p><blockquote><em>If it sounds real, you’ll believe it — until you check.</em></blockquote><h3>5. 🧠 AI Can Learn to Code — but Not Debug Intuitively</h3><p>Yes, AI can write full-stack apps, fix bugs, and even build Chrome extensions. But here’s the catch: it doesn’t “know” what a bug is. 
It only knows what bug <em>fixes</em> usually look like.</p><p><strong>Try this:</strong><br> Paste a broken code snippet and ask ChatGPT to fix it. Then paste another one with <em>an intentional logical error</em> — like adding when it should subtract. See if it catches it.</p><p>Spoiler: it often doesn’t, unless the fix exists in prior examples.</p><h3>6. 💾 Your Conversations Could Train Future Models</h3><p>Unless you’ve opted out or are using a private instance, your chats can help improve future AI versions. That’s why many companies now focus on <strong>data privacy</strong> and <strong>model transparency</strong>.</p><p><strong>Fun fact:</strong> This is why people sometimes find that the AI “remembers” what others have said. It’s not personal memory — it’s <em>collective learning from patterns</em>.</p><h3>7. 🧬 AI Is Trained on a Lot of Content You’ve Probably Written</h3><p>From Reddit posts and GitHub code to blogs, books, and product reviews — AI models were (initially) trained on huge datasets scraped from the web. Often without creators’ knowledge.</p><p>That blog post you wrote in 2018? A future AI may have learned something from it.</p><h3>8. 🧮 AI Is More Likely to “Sound” Trustworthy When It Makes Small Mistakes</h3><p>This one’s strange: when an AI response includes a typo, people often trust it <em>more</em>. Why? Because perfection feels robotic. A touch of imperfection feels… human.</p><p>This has UX implications: some designers experiment with slight delays, “thinking” dots, or errors to make the AI more relatable.</p><h3>9. 🎲 AI Has No True Randomness — Just Human Patterns</h3><p>When you ask AI to “pick a number between 1–50,” guess what number it often picks?</p><p><strong>27. (I wrote this case in a previous story)</strong></p><p>Because that’s what <em>humans</em> most often choose when asked the same thing. AI models reflect human behavior patterns, not true randomness.</p><p><strong>Try this:</strong><br> Ask ChatGPT, Claude, or Gemini:<br> <em>“Pick a number between 1 and 50.”</em><br> Chances are, 27 will show up.</p><h3>10. 📆 Most AIs Don’t Know What’s Happening Today</h3><p>Unless the AI is connected to live data (like plugins, browsing tools, or search integrations), it doesn’t know what happened last week. It has a <strong>knowledge cutoff</strong>, usually months or even years old.</p><p><strong>Try this:</strong><br> Ask: <em>“What’s trending on Twitter right now?”</em><br> It’ll likely tell you: <em>“I can’t access real-time data.”</em></p><h3>🧠 Final Thought: AI Is a Mirror, Not a Mind</h3><p>At its best, AI helps us <strong>be more human</strong>: it saves us time, amplifies our creativity, and helps us think deeper.</p><p>But here’s the truth it quietly reminds us of:</p><blockquote><em>AI doesn’t invent the patterns — it reflects ours back at us.</em></blockquote><p>And sometimes, the most interesting thing about AI… is what it says about <strong>us</strong>.</p><p><strong>🙋 What’s the weirdest thing you’ve noticed about AI?</strong><br>Or better yet, ask ChatGPT one of the questions above — and post the results. You’ll be surprised how much it reveals.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9e5871ff7dda" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why AIs Keep Guessing the Number 27 — And What That Reveals About You]]></title>
            <link>https://medium.com/@adorekasun/why-ais-keep-guessing-the-number-27-and-what-that-reveals-about-you-32d78ebfb3dd?source=rss-fec47da242ec------2</link>
            <guid isPermaLink="false">https://medium.com/p/32d78ebfb3dd</guid>
            <category><![CDATA[psychology]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[human-behavior]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[data-patterns]]></category>
            <dc:creator><![CDATA[Kalhara Perera]]></dc:creator>
            <pubDate>Wed, 18 Jun 2025 08:07:54 GMT</pubDate>
            <atom:updated>2025-06-18T08:07:54.880Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UNEe8GhsyVoegVgiznQPGA.png" /></figure><h3><em>Why AIs Keep Guessing the Number 27 — And What That Reveals About You</em></h3><p>You think you’re being random. You’re not. Here’s how a simple number game exposed the patterns running beneath our decisions — and how AI has already caught on.</p><p><strong>The Curiosity Hook</strong></p><p>The other day, I stumbled across a fun post on LinkedIn.<br> A guy had asked three different AI assistants the same innocent question:</p><blockquote>“Guess a number between 1 and 50.”</blockquote><p>Every single AI answered: <strong>27.</strong></p><p>Naturally, I was curious.<br> So I did the same.</p><p>And sure enough, GPT, Gemini, and Copilot all said the same thing:<br> <strong>27. Again.</strong></p><p><strong>Wait… What? Is This a Glitch?</strong></p><p>At first glance, this seems like a bug — or some shared secret between AI models. But what’s actually happening is more fascinating:</p><p>This isn’t a flaw in artificial intelligence.<br> It’s a mirror reflecting human psychology.</p><h3>Why 27? The Hidden Bias Behind “Random” Choices</h3><p>When humans are asked to pick a “random” number between 1 and 50, studies show a strong preference for <strong>27</strong>.</p><p>Why?</p><ul><li>It’s not too obvious like 1, 10, 25, or 50.</li><li>It’s not too boring like 30 or 20.</li><li>It <strong>feels</strong> random and unique… but still familiar.</li></ul><p>It lives in that psychological <em>Goldilocks zone</em> — not too low, not too high, just <em>random enough</em> to feel spontaneous.</p><p>And because AI models are trained on billions of human-generated texts, interactions, surveys, and behaviors…<br> They learn this too.</p><p>They don’t truly generate randomness.<br> They generate <strong>what humans statistically do when they think they’re being random.</strong></p><h3>The Bigger Picture: You’re More Predictable Than You Think</h3><p>This number game is fun, but it reveals something deeper.</p><p>We often believe we’re unique and unpredictable.<br> But from language to behavior to design choices, <strong>we follow repeatable mental models.</strong><br> And these can be quantified, predicted — and yes, even mimicked by machines.</p><h3>🧠 More “Random” Things That Aren’t Random At All</h3><p>Once you go down the rabbit hole, here’s what else you’ll find:</p><ul><li><strong>The World’s Favorite Number is 7.</strong><br> In global surveys, 7 is consistently picked as the most “liked” number.</li><li><strong>People Choose Blue More Than Any Other Color.</strong><br> When asked to pick a color, people disproportionately say “blue.”<br> Why? Possibly its calmness, trust association, and universal presence.</li><li><strong>In Coin Toss Sequences, We Prefer What Looks Random.</strong><br> People think “HHTTH” looks more random than “HHHHH”, even though both are equally likely in a fair toss.</li><li><strong>UX Designers Know: Users Prefer the Left Side.</strong><br> People scan from left to right, and early options tend to get more attention.</li><li><strong>Even AI-Generated Art or Writing Follows Predictable Themes.</strong><br> Because it learns from us. 
And we… are full of patterns.</li></ul><h3>So What Do We Do With This Insight?</h3><p>This isn’t about avoiding the number 27.<br> It’s about self-awareness.</p><p>Understanding these biases helps us:</p><ul><li>Design better interfaces</li><li>Write more relatable content</li><li>Build more intuitive products</li><li>And yes — create smarter AI</li></ul><p>The better we understand our hidden habits, the more intentional we become — both as creators and as humans.</p><p><strong>In Conclusion:</strong></p><p>Next time you’re asked to “pick a number,” remember this:<br> You’re not just choosing for yourself.</p><p>You’re echoing thousands of others who made the same choice for the same hidden reasons.<br> And somewhere, an AI is quietly nodding in agreement.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=32d78ebfb3dd" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>