<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Engineering in The Craft on Medium]]></title>
        <description><![CDATA[Latest stories tagged with Engineering in The Craft on Medium]]></description>
        <link>https://craft.faire.com/tagged/engineering?source=rss----4af981bb79f--engineering</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Engineering in The Craft on Medium</title>
            <link>https://craft.faire.com/tagged/engineering?source=rss----4af981bb79f--engineering</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 06 May 2026 12:38:17 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/faire-the-craft/tagged/engineering" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <item>
            <title><![CDATA[Scalable test coverage: How Faire selects which tests (not) to run]]></title>
            <link>https://craft.faire.com/scalable-test-coverage-how-faire-selects-which-tests-not-to-run-1bc0cf1484d6?source=rss----4af981bb79f--engineering</link>
            <guid isPermaLink="false">https://medium.com/p/1bc0cf1484d6</guid>
            <category><![CDATA[faire]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[software-testing]]></category>
            <category><![CDATA[continuous-integration]]></category>
            <category><![CDATA[testing]]></category>
            <dc:creator><![CDATA[Mike Boos]]></dc:creator>
            <pubDate>Thu, 19 Feb 2026 17:35:57 GMT</pubDate>
            <atom:updated>2026-02-19T17:35:56.203Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="Scalable test coverage: How Faire selects which tests (not) to run" src="https://cdn-images-1.medium.com/max/1024/1*OtPKFr_6P4aNHS1zOVa6kg.jpeg" /></figure><p>At <a href="https://www.faire.com/">Faire</a>, we have a problem that many test engineers would envy: we have over 71,000 automated tests in our backend repository. But here’s the challenge — running all those tests on every pull request would bring development to a crawl. So we don’t. Instead, we skip about two-thirds of them.</p><p>In order to run the right tests at the right time, we use sets of deliberately simple strategies, each tailored to a specific testing context. (Internally, we talk about “test avoidance” rather than “test selection” — the goal isn’t to pick a few tests to run, but to confidently skip the ones we don’t need.)</p><p>Here’s how we approach test selection at Faire, and what we’ve learned along the way.</p><h3>The friction point</h3><p>Before diving into solutions, it’s worth understanding why this matters. Every pull request goes through several gates: local development, code review, merging to main, and eventually release to production. Each gate asks a question:</p><ul><li><strong>Local development:</strong> Are my changes good enough to put up for review?</li><li><strong>Pull request:</strong> Are my changes good enough to merge?</li><li><strong>Main branch:</strong> Are my changes good enough to release?</li></ul><p>The earlier we can answer these questions, the faster developers get feedback. But the more tests we run, the more friction and delay. 
Test selection helps us balance speed with confidence.</p><h3>Faire’s current state</h3><p>Let me ground this with some numbers from our backend monorepo:</p><ul><li><strong>71,000+ unique test cases</strong> (and growing)</li><li><strong>67% test avoidance rate</strong> on pull requests (we define our test avoidance rate as the percentage of tests we can skip)</li><li><strong>300+ engineers</strong> contributing to the codebase across all repos, either directly or through AI coding agents</li></ul><p>The use of coding agents is accelerating our pace of development, and with it the rate of new pull requests. We need good test coverage to check that AI isn’t breaking anything unintentionally, but we also need to control testing costs and avoid bottlenecking development with limited compute resources.</p><p>We’re striving for the ideal test pyramid — lots of fast unit tests, fewer integration tests, minimal end-to-end tests. We’re not there yet, which makes test selection even more critical in the meantime.</p><h3>How we choose which tests to run</h3><p>Test selection isn’t one-size-fits-all. At Faire, our approach depends heavily on the context — what kind of tests we’re running and what constraints we’re working within. Here’s how we handle three different scenarios.</p><h3>Backend integration tests: configuration-based avoidance</h3><p>Our backend repository includes build graph analysis out of the box — Gradle understands which modules depend on each other. In theory, this should tell us exactly which tests to run when code changes.</p><p><strong>The problem:</strong> Our largest backend service has integration tests that use real objects as much as possible, creating a web of interdependencies. Leveraging the build graph alone only allows us to skip about 30% of our tests on average, because nearly everything is connected to everything else.</p><figure><img alt="A build graph, where project A depends on projects B, C, and D, and B depends on E. 
B and D are highlighted in green to show they are allowed triggers for A. C and E are shaded red to show they are not allowed triggers for A. Ex 1 shows a code change applied to B, with a red arrow demonstrating that changes to B trigger tests in A. Ex 2 shows changes applied to C and E. There are dashed red lines from C and E to A to show that these changes do not propagate to trigger tests in A." src="https://cdn-images-1.medium.com/max/1024/1*WYBbCepeFbeim3BVYxtf3g.png" /><figcaption>On the left, build graph with project A’s allowed triggers marked in green. On the right, examples of these trigger rules in action. Ex 1: If project B is changed, trigger tests for project A. Ex 2: If only C or E are changed, A’s tests are not triggered, even though they form part of its build graph.</figcaption></figure><p><strong>Our solution:</strong> We layer a configuration-based approach on top of the build graph to filter dependency chains when choosing which tests to run. Engineers can define rules in their build.gradle.kts files that specify:</p><ul><li><strong>Allowed triggers:</strong> “Only run tests for Project A if changes touch Projects A, B, or D”</li><li><strong>Allowed tests:</strong> Changes to a particular project can only be used to trigger a certain subset of projects’ tests. Useful for Protobuf projects, where most issues are caught through compilation errors instead of tests.</li><li><strong>Force run:</strong> “Always run tests for Project A when Project B changes” is useful when we’ve seen escaped regressions. 
Force run always overrides other test avoidance rules that may originate from upstream build files.</li></ul><p>For example:</p><pre>testAvoidance {<br>    skipTests {<br>        // Allowed triggers:<br>        unlessTriggeredByProject(&quot;:service-A:subproject-B&quot;) { includeSubprojects = true }<br>        unlessTriggeredByProject(&quot;:core:subproject-C&quot;) { includeSubprojects = true }<br><br>        // Also break dependency chains for projects that depend on this one unless<br>        // one or more allowed triggering projects change.<br>        avoidTriggeringDownstreamProjects = true<br>    }<br><br>    runTests {<br>        // Force run this project&#39;s tests when subproject-D has changes, regardless<br>        // of test avoidance rules in other projects:<br>        whenTriggeredByProject(&quot;:service-A:subproject-D&quot;)<br>    }<br>}</pre><p>In practice, this means tests for this module are skipped unless changes touch one of the explicitly listed upstream projects.</p><p>The filtering rules we have in place today allow us to skip over <strong>twice</strong> as many tests as we would using build graphs alone.</p><p><strong>What makes this work:</strong> The configuration lives in code, version-controlled alongside the application. When engineers change a module, they update its test avoidance rules in the same PR. This leaves less potential for drift, as developers are accountable for maintaining the rules alongside their build files.</p><h3>End-to-end tests with backend changes: product area tagging</h3><p>For E2E tests, we ran into different constraints — every test depends on the system as a whole, not some smaller subset, making build-graph approaches impossible. We also run E2E tests in parallel using a shared backend sandbox environment. 
This meant we couldn’t use code coverage analysis for backend changes — profiling would interfere with parallel test execution.</p><p><strong>Our solution:</strong> Both backend code and E2E tests have already been <a href="https://craft.faire.com/raising-code-quality-for-faires-kotlin-codebase-f61420b3e5e6#c511">tagged with product areas</a> (Payments, Search, Orders, etc.). We also created mappings between related product areas — changes to Payments might affect Checkout tests, for example.</p><p>When a PR includes backend changes, we:</p><ol><li>Identify which product areas the changed files belong to</li><li>Look up related product areas through our mappings</li><li>Run E2E tests tagged with those product areas</li></ol><figure><img alt="Flow diagram for how backend changes are mapped to E2E tests. On the left are the backend changes with an arrow to corresponding product areas. From the product areas, another arrow shows mappings to related product areas. Finally, an arrow connects these related product areas to related frontend E2E tests on the far right." src="https://cdn-images-1.medium.com/max/1024/1*UK0x6Ume90ppynC3lI2xrg.png" /><figcaption>How Faire maps backend code changes to frontend E2E tests using product areas. This allows us to link areas of the code across repos that can’t be filtered via build graphs.</figcaption></figure><p>If a skipped test subsequently regresses on the main branch due to the changes, developers are encouraged to update the mappings as part of the fix after reverting.</p><p><strong>The tradeoff:</strong> This requires discipline to keep tags and mappings accurate. But it gave us test selection for E2E tests when code coverage wasn’t an option.</p><h3>End-to-end tests with frontend changes: from coverage to opt-in</h3><p>For frontend changes, we initially used a code coverage-based approach. 
We’d profile which code each E2E test exercised, then run only the tests that covered the changed frontend code.</p><figure><img alt="Flow diagram for generating and applying relevancy maps in frontend E2E tests. A main branch commit is built into a regular main build and an instrumented build. Instrumented build is connected to an E2E test run. Coverage results flow from main branch E2E run to relevancy map. A subgraph for pull requests contains a pull request E2E build that relies on the relevancy map to filter tests to be run." src="https://cdn-images-1.medium.com/max/1024/1*aC4A4iSaEO8PQcyLP750Jg.png" /><figcaption>Instrumented main branch builds are used to generate “relevancy maps” from test coverage, helping to identify relevant tests to run against pull requests based on file changes.</figcaption></figure><p><strong>What changed:</strong> When we began <a href="https://craft.faire.com/boosting-performance-faires-transition-to-nextjs-3967f092caaf">using React Server Components with Next.js</a>, we lost the ability to profile our tests effectively. The server-side rendering architecture made it impossible to get accurate coverage data.</p><p><strong>Our current approach:</strong> E2E tests are now opt-in for pull requests. Engineers explicitly choose to run E2E tests when they believe their changes warrant it by adding a label to their pull requests. This isn’t ideal, but it’s honest about our current constraints while we explore better solutions.</p><p>This shifts responsibility onto engineers, which is a real risk. We mitigate it through strong code review norms, production monitoring, and by still running the full E2E suite on main. It’s not perfect — but it’s better than pretending we have signal when we don’t.</p><h3>What we considered (but didn’t implement)</h3><p>We didn’t arrive at our current solution without some detours along the way. 
Here are the other approaches we evaluated:</p><p><strong>Predictive test selection with ML:</strong> Train a model (like XGBoost) on historical data — which files changed, which tests failed, code metrics, etc. — to predict which tests are likely to fail. In theory, powerful. In practice, when applied to our E2E tests, we couldn’t achieve a low test selection rate without also having a high escaped failure rate. Training was vulnerable to flaky failures, and we lacked strong “cross-features” to confidently measure the relationship between tests and changed code.</p><p><strong>LLM-based selection:</strong> Could we use large language models to decide which tests to run? The non-determinism is a dealbreaker for CI. We need predictable, consistent behaviour. However, developers may still wish to ask coding agents to identify and run appropriate tests locally.</p><p><strong>API endpoint mapping:</strong> Map tests to the API endpoints they exercise. Works great if your code is well-organized around API boundaries, but shared code and asynchronous job code will be more difficult to map in an automated fashion.</p><h3>Putting test selection into practice at Faire</h3><p>Having a technique is one thing. Making it work in production is another. Here’s Faire’s process:</p><h4>Step 1: Define success metrics</h4><p>Before implementing anything, we defined what “good” looks like:</p><ul><li><strong>Test avoidance rate:</strong> What percentage of tests can we skip?</li><li><strong>Escape rate:</strong> What percentage of failures do we miss?</li><li><strong>Time savings:</strong> How much faster do PRs get through CI?</li></ul><p>We set targets based on our risk tolerance. 
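</p><p>To ground these definitions, the first two metrics fall out of simple CI counts. Below is a minimal, hypothetical sketch (the function names and the per-PR selection number are illustrative, not Faire’s actual tooling), using the suite size and avoidance rate quoted earlier:</p>

```python
def avoidance_rate(selected: int, total: int) -> float:
    """Percentage of the suite skipped by test selection on a PR."""
    return 100.0 * (total - selected) / total

def escape_rate(missed_failures: int, total_failures: int) -> float:
    """Percentage of real failures that test selection failed to catch."""
    if total_failures == 0:
        return 0.0
    return 100.0 * missed_failures / total_failures

# Roughly 23,430 of 71,000 tests selected on a PR gives ~67% avoidance.
print(round(avoidance_rate(selected=23_430, total=71_000)))
```

<p>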
Missing a bug is costly, but so is slow developer feedback.</p><h4>Step 2: Choose and configure the tool</h4><p>For our backend monolith, we started with build graph analysis and source module mapping — proven approaches that matched our architecture.</p><p>Configuration happens in our build files, making it code-reviewed and version-controlled alongside the application code.</p><h4>Step 3: Run in dry-run mode</h4><p>This is critical: we didn’t just turn test selection on. First, we collected statistics:</p><ul><li>Which tests <em>would</em> have been skipped?</li><li>Which tests <em>would</em> have run?</li><li>Would we have missed any failures?</li></ul><p>We compared dry-run results against our success metrics and iterated until we were confident.</p><h4>Step 4: Turn it on (carefully)</h4><p>Once dry-run metrics met our targets, we enabled test selection for real. But the work didn’t stop there.</p><h4>Step 5: Monitor continuously</h4><p>Your codebase changes every day. Test selection performance can drift because:</p><ul><li>New features may not fit existing patterns</li><li>Refactoring can change dependency structures</li><li>Technical debt accumulates</li></ul><p>At Faire, we saw this firsthand when we introduced server-side rendering. Suddenly, a growing number of source files could no longer be profiled and mapped to tests. We had to adjust our configurations to adapt to the new architecture.</p><p><strong>The key lesson:</strong> Test selection isn’t a set-it-and-forget-it solution. It requires ongoing monitoring and maintenance.</p><h3>Results and lessons</h3><p>Skipping 67% of tests on every PR has given our engineers faster feedback and reduced CI costs. But more importantly, we’ve learned:</p><ol><li><strong>Match the approach to your architecture.</strong> Build graphs work great for well-modularized code. API mappings work for service-oriented architectures. 
There’s no one-size-fits-all.</li><li><strong>Simpler is often better.</strong> Predictive ML sounds exciting, but configuration-based approaches delivered good value without the complexity.</li><li><strong>Configuration must live in code.</strong> When test selection configs are external, they drift. When they’re in version control and code-reviewed, they stay more accurate.</li><li><strong>Always dry-run first.</strong> The cost of escaped failures is high. Measure twice, cut once.</li><li><strong>Treat it as a living system.</strong> Your codebase evolves. Your test selection needs to evolve with it.</li></ol><h3>What’s next for test selection at Faire</h3><p>We’re not done. As we continue breaking down our monolithic backend service into smaller services and modularizing our frontend code, we’re exploring:</p><ul><li><strong>Service-level smoke tests:</strong> As we extract services and frontends, which smoke tests should run?</li><li><strong>Finer-grained selection:</strong> Can we get more specific than module-level?</li><li><strong>Developer- and coding-agent-facing test recommendations:</strong> Which tests should be run locally before opening a pull request?</li></ul><p>The goal remains the same: run the right tests at the right time, giving engineers fast, reliable feedback without sacrificing quality.</p><hr><p><a href="https://craft.faire.com/scalable-test-coverage-how-faire-selects-which-tests-not-to-run-1bc0cf1484d6">Scalable test coverage: How Faire selects which tests (not) to run</a> was originally published in <a href="https://craft.faire.com">The Craft</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Conductor: Faire’s platform for orchestrating in-app messages]]></title>
            <link>https://craft.faire.com/conductor-faires-platform-for-orchestrating-in-app-messages-ceb5197972f9?source=rss----4af981bb79f--engineering</link>
            <guid isPermaLink="false">https://medium.com/p/ceb5197972f9</guid>
            <category><![CDATA[design-systems]]></category>
            <category><![CDATA[sdui]]></category>
            <category><![CDATA[in-app-messaging]]></category>
            <category><![CDATA[mobile-design]]></category>
            <category><![CDATA[engineering]]></category>
            <dc:creator><![CDATA[Faire Data and Engineering Team]]></dc:creator>
            <pubDate>Tue, 20 Jan 2026 20:34:26 GMT</pubDate>
            <atom:updated>2026-01-20T20:34:25.160Z</atom:updated>
            <content:encoded><![CDATA[<h4>How we show the right message to the right user at the right time</h4><p>Written by <a href="https://www.linkedin.com/in/divyasudhakar/">Divya Sudhakar</a> and <a href="https://www.linkedin.com/in/julian-s-pettit/">Julian Pettit</a></p><figure><img alt="An abstract illustration of a laptop displaying interconnected UI components, representing Conductor orchestrating in-app messages." src="https://cdn-images-1.medium.com/max/1024/1*jZ-oGMnB6iCULsEaDi2F5A.png" /></figure><p>How do you communicate the right message to the right users at the right time? At <a href="https://www.faire.com/">Faire</a>, the Engineering team grappled with this challenge as our product and company grew. Different teams were adding banners, nudges, modals, and alerts throughout the app to promote and upsell their various programs. Without a unified system, the result was often fragmented with duplicated logic and inconsistent user experiences. On some of our highest impact pages like the checkout page, a retailer might have encountered up to five different banners at once. These were controlled by disparate client code paths and often looked entirely different, with no common design language. In short, we were <a href="https://en.wikipedia.org/wiki/Conway%27s_law">shipping our org chart</a>.</p><p>In addition to our product looking chaotic, this made it arduous for engineers to add or modify messages since the display rules were scattered across codebases. We needed a better way to coordinate customer-facing messages that would streamline development and ensure a polished experience for our users.</p><p>Our answer was <strong>Conductor</strong>, an internal platform that now serves as a one-stop shop for creating and managing any in-app message we show to retailers. Conductor lets us define what message to show and when to show it using a low-code UI, and takes care of delivering that message to all eligible users across web and mobile platforms. 
Having all our promotional messages flow through Conductor also lets us prioritize which messages to show and create hyper-personalized content to ensure that users are seeing the most relevant and engaging messages.</p><figure><img alt="A diagram showing Conductor coordinating messages from Faire’s backend services and delivering targeted in-app message components to web and mobile clients." src="https://cdn-images-1.medium.com/max/1024/1*yTrowG5Zz4aRRisxpZZmnQ.png" /><figcaption><em>Conductor sits in between the messages our Faire teams want to show and our users.</em></figcaption></figure><p>In this post, we’ll share how Conductor works, why we built it, and how it successfully powered a major refactor of our checkout page banners. We’ll discuss the technical approach and use of server-driven UI components, and the impact we’ve seen: dramatic drops in bugs and a boost in developer velocity.</p><figure><img alt="A screenshot showing Conductor’s self-serve low-code tool displaying a list of campaigns." src="https://cdn-images-1.medium.com/max/1024/1*i0ot3-HnR5piCcM25N8zYA.png" /><figcaption><em>A list of Conductor “campaigns” in our self-serve, low code tool.</em></figcaption></figure><h3>The challenge and our approach</h3><p>By late 2024, our approach to in-product messaging was hitting its limits. Each product team would implement their own banners or modals, often hard-coded into the frontend (more on this later). These banners were not consistently organized; an audit of the checkout surface found this code spread across multiple frontend packages, with no single source of truth for when each should appear. This led to several pain points:</p><ul><li><strong>Engineering complexity</strong>: Adding or updating a banner required digging through various code paths to find the right place to insert logic. 
Removing an outdated banner was equally slow, since it was not immediately clear what other components might interact with it.</li><li><strong>An outdated mobile experience:</strong> This client-centric engineering approach meant teams required one or more frontend, Android, and iOS engineers to build any given feature. In practice, if all the client engineers were not available simultaneously, the feature would launch on the web first, and mobile updates would often be delayed. The mobile experience tended to lag behind the web experience, resulting in fragmentation and a lack of cohesion for users who move between the apps and the web.</li><li><strong>No global coordination</strong>: Because each banner’s logic was isolated, nothing prevented five different teams’ banners from showing all at once. We saw instances of many banners stacking on the same page, cluttering the UI and potentially confusing users.</li><li><strong>Inconsistent design and UX</strong>: With each team building banners independently, styling and placement varied. Some banners used templated designs, others were one-offs, leading to a jarring experience and more work for our design and engineering teams to review each time we implemented them.</li></ul><figure><img alt="A screenshot showing a wizard in Conductor’s self-serve admin tool for creating a modal in a campaign." 
src="https://cdn-images-1.medium.com/max/1024/1*YZ0HDPOevXy8tSN_7Z5pBQ.png" /><figcaption><em>The form for creating a new modal in the self-serve tool.</em></figcaption></figure><p>We built Conductor to address these challenges with a focus on:</p><ul><li><strong>Self-serve configuration</strong>: Messages are configured via a low-code admin UI, with no code deployment needed for new or modified messages.</li><li><strong>Standardized and opinionated presentation</strong>: All messages use pre-built server-driven UI (SDUI) components from our internal design system.</li><li><strong>Centralized coordination</strong>: With full knowledge of all the messages that can be shown to a user, Conductor determines what message to show. Indeed, this aspect of “conducting” the messages was so important that we named the platform after this capability.</li></ul><p>This turned our previously scattered banner logic into a coordinated, declarative system that improved both developer velocity and user experience.</p><h3>Architecture and terminology overview</h3><figure><img alt="Table defining key Conductor terms: Campaign as a collection of related messaging components, Component as a server-defined UI element such as a banner or modal, and Trigger as an event or condition that shows a campaign’s components to a user." src="https://cdn-images-1.medium.com/max/1024/1*Msx1LmHW2MZ2MUSJsA8M5w.png" /><figcaption><em>An overview of Conductor terminology.</em></figcaption></figure><p>Conductor runs as a backend service that delivers <a href="https://craft.faire.com/transitioning-to-server-driven-ui-a76b216ed408">server-driven UI (SDUI)</a> <strong>components</strong> (for example: banners and modals) to eligible users.</p><p>A <strong>campaign</strong> defines a messaging theme or goal — such as a promotional push or a failed payment alert — and can include multiple <strong>components</strong> across different parts of the app. Each campaign encapsulates targeting rules and priority. 
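</p><p>To make the terminology concrete, here is a rough sketch of how campaigns, components, and triggers could fit together, along with the surface-level prioritization applied at request time. This is a hypothetical model for illustration, not Conductor’s actual schema:</p>

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A server-defined UI element, such as a banner or modal."""
    kind: str      # e.g. "BANNER" or "MODAL"
    surface: str   # where it renders, e.g. "CHECKOUT_PAGE_TOP"
    template: str  # design-system template to use

@dataclass
class Campaign:
    """A messaging theme or goal, with targeting and a priority."""
    name: str
    priority: int                                     # higher wins
    audience: set[str] = field(default_factory=set)   # enrolled users
    components: list[Component] = field(default_factory=list)

def on_trigger(campaign: Campaign, user_id: str) -> None:
    """A trigger event enrolls a user in the matching campaign."""
    campaign.audience.add(user_id)

def messages_for(campaigns: list[Campaign], user_id: str,
                 surface: str, limit: int = 1) -> list[Component]:
    """Return the highest-priority components the user may see here."""
    eligible = sorted((c for c in campaigns if user_id in c.audience),
                      key=lambda c: c.priority, reverse=True)
    picked = [comp for camp in eligible for comp in camp.components
              if comp.surface == surface]
    return picked[:limit]
```

<p>With a limit of one per surface, a higher-priority failed-payment campaign would win the checkout banner slot over a lower-priority promotion, mirroring the prioritization described in the runtime flow.</p><p>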
Components use standard UI templates from our design system, Slate.</p><p>Users are enrolled in these campaigns through async events we call <strong>triggers</strong>. For example, a trigger might fire when a user receives a promotion, enrolling them in the corresponding Conductor campaign to display relevant messages. Campaigns can also target all users globally, as we often do during our biannual Market events.</p><figure><img alt="System architecture diagram showing Conductor as a backend service that receives client requests, evaluates campaign eligibility and priority, and delivers server-driven UI components to web and mobile clients while logging analytics." src="https://cdn-images-1.medium.com/max/1024/1*KscahRSDQKXq5GtoEwy3YA.jpeg" /><figcaption><em>The Conductor System Architecture.</em></figcaption></figure><h4><strong>Runtime flow</strong></h4><ul><li><strong>Client request</strong>: When a page loads (e.g., checkout), it calls Conductor with user context and surface (e.g., CHECKOUT_PAGE_TOP).</li><li><strong>Eligibility evaluation</strong>: Conductor filters active campaigns by audience, triggers, and enrollment status to determine whether a user is eligible to receive messaging.</li><li><strong>Prioritization</strong>: If multiple campaigns qualify, it returns only the highest-priority ones within configured limits. Prioritization ensures that only a specified number of campaigns display per surface.</li><li><strong>Rendering</strong>: The client renders the returned messages using our SDUI framework, displaying them directly as provided.</li><li><strong>Analytics</strong>: Conductor natively logs every impression and interaction with components. This gives us high-quality data on which messages users are receiving and which ones are resonating with them, which we can use to debug issues and train our model. 
These mechanisms allow product teams to experiment and iterate quickly without compromising user experience.</li></ul><p>This system keeps client code simple. Campaign logic, targeting, and presentation live on the backend, while clients just render the result.</p><h3>Showing the right message at the right time</h3><p>While Phase 1 of Conductor focused on improving developer velocity and design cohesion, Phase 2 centered on “intelligence.” As more campaigns migrated to Conductor, <strong>prioritization</strong> became one of its most valuable features. We also saw growing concern about “banner blindness,” prompting requests for <strong>guardrails</strong> to prevent showing the same message repeatedly to users who weren’t engaging with them. A central repository for messages — with a system to govern it — enabled us to build intelligence and capabilities that individual teams couldn’t achieve on their own, especially when banner development was just incidental work for them.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kQYxCBUO-4FaxhnF0KEwOQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IGQGZK63W-Dbbe3SKuaYLA.png" /><figcaption><em>Page “Hero” Banners advertising our Insider program for new users vs paused users. We rely on Conductor to pick the appropriate button text.</em></figcaption></figure><p>Closely related is <strong>personalization</strong>. We’ve always recognized the value of tailoring highly personalized experiences for our users. But having one central system made it easier to plug in all the data we have on our retailers and use it to pick the right copy and creative to meet retailers where they are.</p><figure><img alt="Examples of promotional modals with different background images and copy, demonstrating how Conductor personalizes content based on retailer store type." 
src="https://cdn-images-1.medium.com/max/1024/1*uNzQKdymxGjH-xCaVa_-yQ.png" /><figcaption><em>Another example of Conductor-driven personalization. Our new retailers see one of 16 different modals with different background images and text depending on their store type.</em></figcaption></figure><p>Together, prioritization and personalization transform Conductor from a delivery mechanism into an intelligence layer for in‑product messaging, ensuring that retailers see fewer — but more relevant and engaging — messages.</p><h3>Case study: migrating checkout banners with Conductor</h3><p>Checkout was a prime candidate for Conductor: a critical surface filled with conflicting, hard-coded banners. We migrated these banners to Conductor with the following steps:</p><ul><li><strong>Audit and standardization</strong>: We consolidated 11 banner types into only two standardized components, improving our visual consistency and adherence to Faire’s design principles.</li><li><strong>Triggering logic</strong>: Legacy conditions governing when and where components should be displayed were moved into Conductor campaigns. Triggering events automatically enroll users in the right campaigns when the conditions for receiving messaging are met.</li><li><strong>Determining priority</strong>: We limited checkout to show at most one banner at a time. Each campaign was assigned a priority, which Conductor enforces at runtime so that the highest-priority message is always the one shown to retailers.</li><li><strong>Integration and rollout</strong>: We phased out our client-side banners, replacing them with calls by checkout services to Conductor as part of page generation. Once the rollout was complete, we were able to clean up and greatly simplify the frontend checkout messaging code.</li></ul><figure><img alt="Stack of banner examples from the Orders page with different headlines and calls to action, representing variants selected by Conductor using contextual bandits." 
src="https://cdn-images-1.medium.com/max/996/1*kmY5BvAVtyiSyFxXiulRwQ.png" /><figcaption><em>The banner variants you might see on the Orders page. These banners are driven by Conductor using contextual bandits.</em></figcaption></figure><h4><strong>Results</strong></h4><p>Conductor delivered strong results across multiple dimensions:</p><ul><li><strong>Velocity</strong>: Campaigns that previously took days to design, implement, and deploy to production now launch in minutes, saving days of effort and delay. The self-serve model and low-code tooling enabled faster feedback loops during the checkout banner migration, letting engineers and QA adjust configurations without writing new code. It’s also reduced engineering overhead by an estimated 30–40 hours per month.</li><li><strong>Quality</strong>: Checkout banner bugs dropped 43% the quarter after the migration, reducing engineering maintenance effort. The largest class of defects, related to incorrect web banner interactions, was eliminated altogether.</li><li><strong>User experience:</strong> Users see fewer, more relevant messages. Banners no longer stack or compete unintentionally, and design work is no longer complicated on surfaces that already feature many potential messages.</li><li><strong>Performance</strong>: Conductor API calls are fast (~25ms) and run concurrently with page loads, keeping the user experience snappy and responsive.</li></ul><figure><img alt="Checkout page showing a blocking banner warning about overdue invoices, placed above order and payment details and served through Conductor." src="https://cdn-images-1.medium.com/max/1024/1*96QrwDSh5fcpDuVbYfVTlA.png" /><figcaption><em>A checkout-blocking banner served using Conductor.</em></figcaption></figure><h3>Scaling a platform: adoption lessons</h3><p>Driving Conductor adoption has looked very different at different stages.</p><h4><strong>Early (provisional) stage</strong></h4><p>Adoption was relatively straightforward. 
We treated Conductor like a product and approached adoption as a search for <a href="https://www.firstround.com/levels">product-market fit</a>. We partnered closely with a small set of sophisticated application teams, designing the platform alongside them and iterating based on candid feedback. These teams were willing to invest in us, and our role was to stay just ahead of their roadmaps — delivering clear value without slowing them down.</p><h4><strong>Operational stage</strong></h4><p>As the platform matured and adoption shifted from <em>Erratic</em> to <em>Extrinsic Push</em> (from the <a href="https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/">Platform Engineering Maturity Model</a>), the nature of the problem changed. We were no longer working with a handful of aligned early adopters. Instead, we were asking many teams across the company to change how they built, shipped, and operated messaging. Each team had different incentives, constraints, and success metrics.</p><p>Driving adoption at this stage was messy and required a multi‑pronged approach. We had to rethink both the problem we were solving and the promise we were making, meeting teams where they were while still guiding the organization toward a shared platform. A single mandate or narrative would have been ineffective and, in some cases, counterproductive. Conductor’s strength was that it offered value along multiple dimensions — velocity, design consistency, prioritization, personalization, and long‑term maintainability — which gave us flexibility in how we framed that value. In parallel, we invested in identifying advocates who could speak to Conductor’s impact from different perspectives and act as internal partners, creating pockets of high adoption.</p><p>These advocates weren’t just engineers. 
Designers were especially helpful in surfacing gaps and unblocking adoption early, motivated by a shared design language and cohesive user experience.</p><p>Ultimately, platform adoption isn’t just a technical problem. It’s a socio‑technical one — sitting at the intersection of product strategy, incentives, workflows, and culture — and it requires patience, empathy, and a shared commitment to craft.</p><h3>What’s next</h3><p>Looking ahead, we see Conductor as the execution layer in a larger ecosystem.</p><p>By integrating Conductor with our customer data platform — powered by a knowledge graph — we’re working to unlock deeper personalization and smarter prioritization. This sets the stage for messaging that is increasingly contextual, adaptive, and relevant to each retailer.</p><p>As Conductor continues to lower the cost of launching campaigns, we’re investing in stronger guardrails and cooldowns to ensure retailers aren’t bombarded with messages. In parallel, we’re continuing to improve Conductor’s tooling as adoption expands and we undertake what is effectively a large rewrite of messaging across our site and apps.</p><p>Conductor started as a way to ship faster. 
Now, it’s becoming a foundation for consistent design, intelligent messaging, and scalable growth across Faire.</p><h3>Acknowledgments</h3><p>Thanks to <a href="https://www.linkedin.com/in/jake-buller-631b8432/">Jake Buller</a>, <a href="https://www.linkedin.com/in/jli0423/">Justin Li</a>, <a href="https://www.linkedin.com/in/praviin-premsankar/">Praviin Premsankar</a>, <a href="https://www.linkedin.com/in/emilywcai/">Emily Cai</a>, <a href="https://www.linkedin.com/in/dingxizheng/">Eric Ding</a>, <a href="https://www.linkedin.com/in/zacharysweigart/">Zachary Sweigart</a>, <a href="https://www.linkedin.com/in/zacharyjradford/">Zachary Radford</a>, <a href="https://www.linkedin.com/in/pedro-almeida-bba747197/">Pedro Almeida</a>, <a href="https://www.linkedin.com/in/gordon-winch/">Gordon Winch</a>, <a href="https://www.linkedin.com/in/birolsenturk/">Birol Senturk</a>, and <a href="https://www.linkedin.com/in/emily-thompson-576634122/">Emily Thompson</a> for their contributions in building Conductor.</p><p>Thanks to <a href="https://www.linkedin.com/in/emilycoleary/">Emily O’Leary</a>, <a href="https://www.linkedin.com/in/sumit-somani-/">Sumit Somani</a>, <a href="https://www.linkedin.com/in/madcido/">Fabio Carmo</a>, <a href="https://www.linkedin.com/in/brandon-dang-899553221/">Brandon Dang</a>, <a href="https://www.linkedin.com/in/yao-w-11772a163/">Yao Wang</a>, and <a href="https://www.linkedin.com/in/degleeson/">Derek Gleeson</a> for their contributions migrating the Checkout page banners onto Conductor, and also to our many client teams for their work and patience.</p><p>Thanks to <a href="https://www.linkedin.com/in/luthfur-chowdhury-27651b6/">Luthfur Chowdhury</a>, <a href="https://www.linkedin.com/in/alexa-weiser-b8532283/">Alexa Weiser</a>, <a href="https://www.linkedin.com/in/jeffhodnett/">Jeff Hodnett</a>, <a href="https://www.linkedin.com/in/jessicakarle/">Jessica Karle</a>, <a href="https://www.linkedin.com/in/amytalus/">Amy Talus</a>, and <a 
href="https://www.linkedin.com/in/taichi-hoshino/">Taichi Hoshino</a> for their leadership and support throughout the last year of building Conductor and driving adoption.</p><p><strong><em>Interested in the kinds of problems we tackle at Faire?</em></strong><em> We’re hiring engineers and data scientists who want to solve unique marketplace challenges and combine cutting-edge tech with real-world impact.</em> <a href="https://www.faire.com/careers">Join us</a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ceb5197972f9" width="1" height="1" alt=""><hr><p><a href="https://craft.faire.com/conductor-faires-platform-for-orchestrating-in-app-messages-ceb5197972f9">Conductor: Faire’s platform for orchestrating in-app messages</a> was originally published in <a href="https://craft.faire.com">The Craft</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building beyond code: Faire’s product mindset in engineering]]></title>
            <link>https://craft.faire.com/building-beyond-code-faires-product-mindset-in-engineering-cd193bee061b?source=rss----4af981bb79f--engineering</link>
            <guid isPermaLink="false">https://medium.com/p/cd193bee061b</guid>
            <category><![CDATA[product-development]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[experimentation]]></category>
            <category><![CDATA[tech-talk]]></category>
            <category><![CDATA[scalability]]></category>
            <dc:creator><![CDATA[Cheuk-man]]></dc:creator>
            <pubDate>Thu, 18 Dec 2025 20:41:00 GMT</pubDate>
            <atom:updated>2025-12-18T20:46:37.631Z</atom:updated>
            <content:encoded><![CDATA[<h4>Highlights from our November Toronto Tech Talk — showcasing stories of Faire’s engineers delivering value for the retailers and brands we serve</h4><figure><img alt="Audience seated in Faire’s Toronto office listening to speaker Alier Hu, who stands at a podium presenting slides about experimentation tools and a Figma plugin on large screens around the room." src="https://cdn-images-1.medium.com/max/1024/1*L4FuEB5ZHnSTlaT6y_Ze-g.jpeg" /><figcaption><a href="https://www.linkedin.com/in/jingyuan-hu-9352277b/"><em>Alier Hu</em></a><em>, Front End Engineer in Acquisition pod, presenting how Faire is transforming experimentation through faster iteration and continuous learning at our recent Toronto Tech Talk.</em></figcaption></figure><p>On November 27, 2025, we welcomed Toronto’s tech community into our office for a <a href="https://faire.tech/events">Tech Talk</a> dedicated to how <a href="https://www.faire.com/">Faire</a> engineers build. Across six talks, from discovery and fulfillment to mobile, CRM, and experimentation, our speakers shared how Faire engineers have built out solutions shaped by interviews, experiments, and customer behavior, delivering real value for the brands and retailers we serve.</p><p>You can watch <a href="https://youtu.be/mKOrz4fHaNA?si=wrCqz5lhyKvQAWSI">a video recording</a> of the event. 
Here are some of the highlights!</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FmKOrz4fHaNA%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DmKOrz4fHaNA&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FmKOrz4fHaNA%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/24343fd9682c3a09c7eebd5096172420/href">https://medium.com/media/24343fd9682c3a09c7eebd5096172420/href</a></iframe><h3>Powering discovery on the Faire homepage</h3><p>The evening opened with <a href="https://www.linkedin.com/in/junozhu/">Juno Zhu</a> and <a href="https://www.linkedin.com/in/nakulpathak/">Nakul Pathak</a> from the Inspire pod, who shared the multi-iteration journey of building a real-time homepage feed. Retailers had repeatedly told us the page felt stale, with recommendations that refreshed too slowly. The team needed to rethink the experience without rebuilding the entire discovery stack.</p><p>Their first iteration reused the visually similar product engine already powering product-page recommendations. With a vertical grid and live ranking, they shipped an MVP in a week. Homepage views rose 1.5×, and retailers immediately noticed how responsive it felt. Building on the success, the team broadened the signals powering the feed, from cart activity and long-term embeddings to complementary categories and bestsellers. 
This led to a 110% increase in products added to cart compared to the old carousel model.</p><figure><img alt="Presentation slide titled “Retailer feedback” showing quotes from retailers about the homepage feed’s responsiveness, repetitive content, and desire for more inspirational recommendations" src="https://cdn-images-1.medium.com/max/1024/1*lKIImBnPUVKPvSA3F4MGcQ.png" /><figcaption><em>Samples of retailer feedback that shaped the next iteration of Faire’s homepage feed.</em></figcaption></figure><p>As usage grew, they tackled content fatigue by filtering out products already viewed, deprioritizing brands a retailer never orders from, and avoiding repeats across surfaces.</p><p>Looking ahead, the team is exploring tailored experiences for new retailers with sparse signals and new layout types designed for inspiration rather than pure relevance.</p><h3>Building Fulfilled by Faire as a lean pilot</h3><p><a href="https://www.linkedin.com/in/chinmayanathany/">Chinmaya Nathany</a> from the Fulfilled by Faire pod shared how we launched a centralized fulfillment service in just five months. Fulfillment typically accounts for 10–15 percent of a brand’s order value, and retailers face fragmented experiences when each brand ships separately.</p><p>We saw an opportunity to consolidate shipments by storing participating brands’ inventory in a Faire-operated warehouse and fulfilling orders on their behalf. Retailers benefit from a single minimum and consistent delivery, while brands offload the operational burden.</p><p>Chinmaya described building the pilot while also playing the roles of product and design. The team visited warehouses, prioritized requirements into clear buckets, partnered with a tech-forward 3PL, and built the integrations needed to sync products, orders, shipments, and inventory. 
They rolled out gradually to a small set of invited brands.</p><figure><img alt="Presentation slide titled “In-product changes,” listing three updates to the seller experience, simplified order fulfillment, reliable shipment tracking, and expanded free-shipping eligibility, shown alongside a screenshot of the Faire Orders dashboard with fulfillment actions and order details." src="https://cdn-images-1.medium.com/max/1024/1*4aqSB7vyO5sH-yiglGi_Tg.png" /><figcaption><em>In-product updates that supported the Fulfilled by Faire pilot, including simplified fulfillment, improved shipment tracking, and expanded free-shipping coverage.</em></figcaption></figure><p>Within six months, the pilot processed more than 15,000 orders for 41 brands. Early partners reported major time savings during peak seasons, and the initiative has now grown into two full engineering pods focused on scaling the service and improving the multi-brand checkout experience.</p><h3>Bringing Faire to iPad with a small but impactful redesign</h3><p>Next, <a href="https://www.linkedin.com/in/spencer-edgecombe/">Spencer Edgecombe</a>, an iOS engineer from the Brand team, shared how reframing a single metric revealed an overlooked opportunity. iPad users appeared to drive only 2% of order volume, but a deeper look showed that the 6% of retailers who used iPad generated 11% of total order volume and spent more than twice as much as non-iPad users. It was a small but highly valuable segment we weren’t serving well. Spencer’s investigation ultimately led to the greenlight for the iPad project.</p><p>To accelerate delivery, the team avoided building a separate app and instead introduced a split-view navigation model that reused most iPhone layouts while giving root screens a wider, more intuitive iPad experience. 
They also produced new App Store visuals, localized them across nine languages, and partnered with teams to ship the redesign.</p><figure><img alt="Presentation slide from the iPad redesign talk." src="https://cdn-images-1.medium.com/max/1024/1*44rhJvUNQlXF3Szmz8fknA.png" /></figure><p>Following the launch, order volume from iPad retailers increased 11%, iPad’s contribution to total volume rose by 1.5 percentage points, and over 4,000 more retailers adopted the native app. The project also helped accelerate the team’s broader shift toward SwiftUI and modern concurrency.</p><h3>Rebuilding Brand CRM for speed and scale</h3><p><a href="https://www.linkedin.com/in/egeberkakkaya/">Ege Akkaya</a> from the Relationship Growth pod shared how Faire migrated Brand CRM to Elasticsearch to resolve major performance challenges.</p><p>CRM powers the targeted campaigns brands send through Faire: around 2.5 million emails per day. Queries for large brands had become increasingly slow or prone to timeouts as datasets grew, and our relational setup had reached its limits. The customer table held more than 135 million rows, and maintaining indexes for every filter combination was no longer feasible. Alternatives like distributed SQL or DynamoDB carried their own constraints. The team decided to adopt Elasticsearch after a detailed technical investigation. 
With its inverted index and horizontal scalability, Elasticsearch was a great fit for CRM’s flexible filtering needs.</p><p>The team migrated incrementally, starting with a proof of concept, then enabling read and write paths, and finally validating correctness through shadow reads in production. They separated order-history into its own index, tuned refresh intervals, and used partial updates to keep documents lightweight and fast.</p><figure><img alt="Presentation slide titled “Writes — break it down to partial updates,” showing a complex flow diagram. On the left, multiple event types feed into a central pipeline; in the middle, green and beige boxes represent update calculators and processors; on the right, a blue Elasticsearch cluster diagram shows customer and order-history indexes. The diagram visualizes how each event triggers only the necessary partial updates instead of rewriting entire documents." src="https://cdn-images-1.medium.com/max/1024/1*SJIJ7-BYZJIkHTh9lJZ8bw.png" /><figcaption><em>The team analyzed the data access pattern and opted for partial updates to the documents. Each update was determined by a specific calculator that is bound to the event scope.</em></figcaption></figure><p>By October, CRM traffic was fully migrated. Queries that previously took seconds now completed in under 100 milliseconds, and P99 latencies dropped more than eightfold, transforming the experience for some of our highest-volume brands.</p><figure><img alt="Two line graphs showing latency improvements after the CRM Elasticsearch migration. The left graph charts customer-page load times, with P99.9 dropping from several seconds to around 520 ms. The right graph charts marketing campaign send times, with P99 falling from multiple minutes to under 6 seconds. Labels highlight the dramatic &gt;8× reduction in both metrics." 
src="https://cdn-images-1.medium.com/max/1024/1*RMhzbDbu3RU6AdPHHVYHHw.png" /><figcaption><em>Latency dropped more than eightfold for our heavy customers</em></figcaption></figure><h3>Evolving experimentation for faster learning</h3><p><a href="https://www.linkedin.com/in/jingyuan-hu-9352277b/">Alier Hu</a> closed the engineering talks by describing how we accelerated both experiment setup and experiment learning across Faire. Previously, even simple A/B tests could require weeks of configuration and engineering support. To improve speed, the team introduced <a href="https://www.builder.io/">Builder.io</a>, a codeless CMS (Content Management System) integrated with Faire’s component system.</p><figure><img alt="Presentation slide titled “Builder.io,” featuring a screenshot of Builder’s editor. The interface shows Faire’s homepage content with editable blocks, a component layer on the left, layout controls in the center, and publishing options on the right. Labels highlight capabilities such as instant publish, updating layouts, and live preview." src="https://cdn-images-1.medium.com/max/1024/1*iVM4w-dkFu3iF9gTarUQRg.png" /><figcaption><a href="http://builder.io/"><em>Builder.io</em></a><em> is a visual, low-code development platform that provides a Figma-like interface for drag-and-drop UI creation.</em></figcaption></figure><p>Designers and PMs can now make layout and content adjustments directly, supported by a Figma plugin that converts design components into production-ready structures. Since adoption, 12 teams have shipped 37 experiments and features through Builder without writing bespoke frontend code.</p><p>To improve the pace of learning, the team implemented multi-armed bandits alongside traditional experimentation. 
Instead of splitting traffic evenly until significance, bandits gradually route more users to higher-performing variants, reducing the time it takes to converge on strong ideas and minimizing user exposure to weaker ones.</p><figure><img alt="Presentation slide titled “Comparison w/ traditional A/B tests.” On the left, a static A/B testing diagram shows equal, fixed allocation across Bundles 1–4. On the right, a multi-armed bandit diagram shows traffic gradually shifting over time toward higher-reward variants, with low-, medium-, and high-" src="https://cdn-images-1.medium.com/max/1024/1*f8JxUp9Z9yQA9btNurLYPg.png" /><figcaption><em>In a multi-armed bandit experiment, more users are sent to the higher-reward variant, and fewer to the weaker ones.</em></figcaption></figure><p>Looking ahead, the team is exploring AI-assisted variant generation and more automated experimentation loops.</p><h3>What’s next</h3><p>Conversations continued well into the networking session as guests connected with engineers, hiring managers, and our talent team. The talks only scratched the surface of the work happening across Engineering to deliver value to the brands and retailers who rely on Faire. If you’re interested in making an impact on millions of small businesses around the world, we’d love to meet you. We’re hiring across backend, frontend, mobile, data, and engineering management; explore open roles at <a href="https://www.faire.com/careers"><strong>faire.com/careers</strong></a>.</p><p>You can also keep an eye on <a href="https://faire.tech/events"><strong>faire.tech/events</strong></a> for upcoming tech talks. We look forward to seeing you at the next one.</p><figure><img alt="Photograph of group of people in an office smiling at the camera and waving or making peace signs, all wearing Faire branded shirts." 
src="https://cdn-images-1.medium.com/max/1024/1*PBICSWdqWZG6vqtqg6N9rg.jpeg" /><figcaption><em>The 2025 Q4 Toronto tech talk crew</em></figcaption></figure><p>A big thank-you to our speakers — <a href="https://www.linkedin.com/in/junozhu/">Juno Zhu</a>, <a href="https://www.linkedin.com/in/nakulpathak/">Nakul Pathak</a>, <a href="https://www.linkedin.com/in/chinmayanathany/">Chinmaya Nathany</a>, <a href="https://www.linkedin.com/in/spencer-edgecombe/">Spencer Edgecombe</a>, <a href="https://www.linkedin.com/in/egeberkakkaya/">Ege Akkaya</a>, and <a href="https://www.linkedin.com/in/jingyuan-hu-9352277b/">Alier Hu</a> — and to the organizers <a href="https://www.linkedin.com/in/jeffhodnett/">Jeff Hodnett</a>, <a href="https://www.linkedin.com/in/sandragc/">Sandra Campos</a>, <a href="https://www.linkedin.com/in/trevoromoto/">Trevor Omoto</a>, <a href="https://www.linkedin.com/in/jasminefernandes/">Jasmine Fernandes</a>, <a href="https://www.linkedin.com/in/becky-laufer-2b2b4699/">Becky Laufer</a>, <a href="https://www.linkedin.com/in/angela-trieu/">Angela Trieu</a>, <a href="https://www.linkedin.com/in/ava-quinn-226109255/">Ava Quinn</a>, <a href="https://www.linkedin.com/in/carolynnolan/">Carolyn Nolan</a>, <a href="https://www.linkedin.com/in/leandre-martinez/">Leandre Martinez</a>, <a href="https://www.linkedin.com/in/sabrina-fong/">Sabrina Fong</a>, <a href="https://www.linkedin.com/in/brockjenken/">Brock Jenken</a>, <a href="https://www.linkedin.com/in/edyakabosky/">Ed Yakabosky</a> and <a href="https://www.linkedin.com/in/cheukman/">Cheuk-man Kong</a> for making the night possible.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cd193bee061b" width="1" height="1" alt=""><hr><p><a href="https://craft.faire.com/building-beyond-code-faires-product-mindset-in-engineering-cd193bee061b">Building beyond code: Faire’s product mindset in engineering</a> was originally published in <a href="https://craft.faire.com">The 
Craft</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Unbound belonging: Faire’s experience at the 2025 Grace Hopper Celebration]]></title>
            <link>https://craft.faire.com/unbound-belonging-faires-experience-at-the-2025-grace-hopper-celebration-22432ee7eb0c?source=rss----4af981bb79f--engineering</link>
            <guid isPermaLink="false">https://medium.com/p/22432ee7eb0c</guid>
            <category><![CDATA[events]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[faire]]></category>
            <category><![CDATA[grace-hopper-conference]]></category>
            <category><![CDATA[grace-hopper]]></category>
            <dc:creator><![CDATA[Victoria Schuster]]></dc:creator>
            <pubDate>Tue, 16 Dec 2025 17:30:33 GMT</pubDate>
            <atom:updated>2025-12-16T17:30:31.893Z</atom:updated>
<content:encoded><![CDATA[<h4>Reflections on connections, community, and impact</h4><figure><img alt="The author, Victoria, a woman with long brown hair wearing a cardigan and jeans, stands in front of a mural that says “Grace Hopper Celebration 2025.”" src="https://cdn-images-1.medium.com/max/1024/1*N7Q-WKTAWDHkvPkaUVfDUw.png" /></figure><p>During a cool early-November week in Chicago, Illinois, over 25,000 people gathered to champion diversity in tech at the <a href="https://ghc.anitab.org/">Grace Hopper Celebration</a> (GHC). For our small group of data scientists, engineers, product managers, recruiters, and communications and marketing team members from <a href="https://www.faire.com/">Faire</a>, all first-time attendees, the experience was nothing short of remarkable. We enjoyed meeting attendees at the Faire booth, learning from speakers on a plethora of topics, and getting to connect with a diverse group of largely underrepresented talent within tech and hear about their experiences. What made it even more special was being able to speak so proudly about Faire’s mission. At Faire, we’re focused on empowering brands and retailers to succeed through our marketplace and operations tools, which has a major impact on helping women-owned businesses launch and thrive. Overall, it was an awe-inspiring experience. In this blog post, we’ll share some of the most interesting and impactful moments from our time at GHC this year.</p><figure><img alt="Eleven Faire employees standing in front of Faire’s booth at the Grace Hopper Celebration. The booth features the Faire logo and the words “shop local” on the front of the table, and the backdrop shows photos of Faire retailers and brands." 
src="https://cdn-images-1.medium.com/max/1024/1*9zIxfyIET1Hk_PDpwfF2Gg.jpeg" /><figcaption>The GHC 2025 Faire team.</figcaption></figure><h3>Making connections at the booth</h3><p>The Grace Hopper experience was especially meaningful this year as Faire hosted the conference’s first-ever Small Business Marketplace, featuring eight women-owned brands local to Chicago. It was an incredible chance to showcase our mission and how Faire supports independent entrepreneurs. At our booth, we had a constant flow of individuals interested in hearing more about what we do. As a B2B company, we often field questions about our platform, mission, and impact on customers; however, we found that a lot of the people we talked with readily connected with what Faire stands for: supporting local.</p><p>One of the highlights was getting to share projects we’ve worked on. (For example, I was able to proudly show off a homepage upgrade that my team built!) Our group was able to form connections with those who have utilized platforms similar to Faire.</p><p>We were also approached by many individuals who had used Faire for themselves, or had witnessed Faire’s impact on friends or family. It was incredible getting to hear directly from brands about how our work has impacted their sales, their business, and their lives as a whole.</p><figure><img alt="Eight female brand owners from the Faire GHC Marketplace posing in front of a Faire-branded banner poster displaying text “Faire: Where your favourite shops love to shop.”" src="https://cdn-images-1.medium.com/max/1024/1*A5_O9MQkz5ENxRCUObBgdg.jpeg" /><figcaption>Brands from the GHC Faire Marketplace.</figcaption></figure><h3>The power of connection</h3><p>One of the best parts of attending GHC was the people we met — both at our booth and on the floor. The audience was diverse, both in their roles and career stages. 
We met early-career engineers and provided advice on landing that first internship; we chatted with senior staff designers about doing work that leaves a positive impact on the world; and we also connected with product managers looking to expand out of their current field. From the technical side, we connected on stacks and infrastructure, for example, discussing server-driven user interfaces and their limitations, or comparing approaches on when certain AWS databases are a better fit than others.</p><p>Although we all came from different cities, industries, roles, or even countries, we shared common experiences and aligned in ways one might not expect: a yearning to create a positive impact through our work, a passion for the evolution of diversity in technology, and, unfortunately, bruised hearts from years of micro-aggressions faced in grade school, university, and many of our workplaces.</p><p>That said, hearing our collective experiences reinforced the gratitude I have for the coworkers and community that I’m a part of at Faire. I was able to share that even in my first few months as a (fairly lost) new grad engineer, I was never belittled or made to feel less than for asking a question of anyone at Faire. Though it may seem like a low bar, that was almost shocking for many of the individuals I spoke with. Moreover, I shared that my team is majority women: three of four backend engineers and two of four frontend/client engineers are women, a rarity in tech. I’ve also had the privilege of being on leadership for both our Femmes (women+ focused) employee resource group, and our Femmes Code (women+ in technical roles) subchapter, where we organize events and opportunities to both uplift women in their careers and lives, and educate allies on how to better support, sponsor, and elevate those around them.</p><p>Although Faire’s culture stands out, equitable treatment is something everyone should experience, no matter our background. 
Events like GHC help us share our experiences, understand what’s possible and what we should be aiming for, and feel more united in asking for what we deserve.</p><h3>Learning from other leaders</h3><p>We were able to learn from the vast selection of talks organized by GHC. A favorite was a presentation titled <em>The Fire You Carry: What Female Rage Can Teach Tech</em>, which shared how women can harness their rage to their advantage in the workplace. Anger is a call for long-term action, and not something to dismiss as being “too emotional” or “too sensitive” as we often do.</p><p>Other Faire attendees sat in on talks like <em>The Manager’s Edge: Unlock Your Mentoring Superpowers</em>, which shared how to empower people by matching the intervention to the need — teach to close skill gaps, coach to shift mindset, and advise selectively when guidance is explicitly needed. Notable talks also included <em>Emotional Intelligence For Success and Well-Being</em>, which discussed how emotional intelligence significantly impacts performance by improving self-awareness, communication, and stress management, leading to better decision-making and higher productivity.</p><p>Throughout these talks, we were able to learn from a primarily female lens, rather than translating these topics through our own set of glasses, as we often do. That alone was an impactful shift compared to what we are used to and emphasized the importance of different voices in our industry.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1R93mJTmtzQ1yEAB968o7g.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FqlsryGE2pDDvdU14qZ7Dw.jpeg" /><figcaption>Some of our favorite talks from the event.</figcaption></figure><h3>Feeling a sense of belonging</h3><p>Outside of the Faire booth, we were able to connect and explore beyond even the bounds of the conference. 
From the minute I stepped off the plane until the minute I arrived back on Canadian soil, I felt like the tech landscape had flipped on its head. As I stepped into the elevator in Chicago, headed to the taxi area, I saw women’s luggage tags and backpacks labelled with logos from many large tech companies. Even the evening before the conference, sitting at a restaurant bar, I asked the woman next to me what brought her to Chicago. I’m sure the smile on my face proved just how excited I was that she was also there for GHC. We chatted about what we do, our experiences, and what we were most looking forward to. Even on my flight home, my entire row was filled with women in tech.</p><p>Though it may seem small, having an intersection of my identity be the majority not only within a room or conference, but what felt like the entire city, is an experience I will never forget. Despite having had largely positive experiences as a woman in tech, it was the first time that I felt like I belonged without caveats or qualifiers: nothing to adjust, prove or soften, just me.</p><h3>Reflecting on the experience</h3><p>Arriving home, though exhausted, we felt inspired by all of the conversations, presentations and moments surrounded by and celebrating minorities in tech. Hearing about others’ experiences in their workplaces both inspired us and emphasized just how lucky we are to do what we do. Being reminded about the impact we have every day at Faire (through the brands that had sold out of many of their products during the conference) emphasized the power we have to help small businesses thrive even more.</p><p>The theme of GHC 2025 was “Unbound,” meant to represent releasing preconceived notions of what we can or can’t do, releasing limitations we’ve placed on ourselves and leading as we are. Both personally, and for Faire as a whole, I think we really took the theme to heart. 
At a time when many companies feel the need to fit into a mold, by embracing diversity and doing all we can to build a team that best represents our customers, Faire is becoming unbound from the traditional tech landscape. Our success has occurred not in spite of our diverse team, but because of it. Although we, like most companies, still have a ways to go, I walked away feeling very proud to say I work at Faire, a company that truly cares.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=22432ee7eb0c" width="1" height="1" alt=""><hr><p><a href="https://craft.faire.com/unbound-belonging-faires-experience-at-the-2025-grace-hopper-celebration-22432ee7eb0c">Unbound belonging: Faire’s experience at the 2025 Grace Hopper Celebration</a> was originally published in <a href="https://craft.faire.com">The Craft</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How we tamed Hibernate ORM in Kotlin with Project Yawn]]></title>
            <link>https://craft.faire.com/how-we-tamed-hibernate-orm-in-kotlin-with-project-yawn-17692cbbad0e?source=rss----4af981bb79f--engineering</link>
            <guid isPermaLink="false">https://medium.com/p/17692cbbad0e</guid>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[query]]></category>
            <category><![CDATA[orm]]></category>
            <category><![CDATA[kotlin]]></category>
            <category><![CDATA[hibernate]]></category>
            <dc:creator><![CDATA[Luan Nico]]></dc:creator>
            <pubDate>Thu, 11 Dec 2025 19:35:59 GMT</pubDate>
            <atom:updated>2025-12-19T18:47:31.256Z</atom:updated>
            <content:encoded><![CDATA[<h4>Plus, a big announcement about Faire’s open source initiatives</h4><figure><img alt="Abstract vector illustration including database, code and gear iconography, serving as cover image for the article." src="https://cdn-images-1.medium.com/max/1024/1*ZqYLmzC9EmrSQHQY_I0nTQ.jpeg" /></figure><p>At <a href="https://www.faire.com/">Faire</a>, we have a very robust <a href="https://craft.faire.com/how-faire-uses-kotlin-to-power-our-small-business-marketplace-3c3c7cafe4ec">Kotlin backend service infrastructure</a> that we’ve carefully honed over the years, powered by a collection of established libraries and frameworks used by the broader open-source community, all coupled with some special Faire glue to make it all work. And like many others in the JVM world, for ease of database access without writing and mapping SQL queries to models by hand, we use <a href="https://hibernate.org/">Hibernate</a>. The Hibernate ORM is probably the most famous (infamous?) ORM framework of all time, and it has set the standard — and many of the pain points — for ORMs in general for several decades (yep, I know — feeling old yet?). It’s a polarizing topic. The discourse is often that people hate it but still use it, which leads me to compare it with that old democracy adage: it’s the worst way to do things, except for all of those other ways that have been tried from time to time.</p><p>It’s undeniable how easy Hibernate can make setting up basic database access, especially for smaller CRUD applications, by directly mapping tables to your POJOs, requiring very little boilerplate and glue. But it can also cause many of the issues that lead people to choose alternatives. Still, we’re not here today to talk about those, or to discuss how to do ORMs or DB access in general (I leave that for the philosophers in the comments). 
Instead, I wanted to share a small way in which we made our usage of Hibernate, specifically the legacy Criteria API that we still use, a little bit better at Faire — taming one of its pain points with a creative solution. <em>Hint: it will involve type-safety and KSP ;)</em></p><h3>The Criteria API</h3><p>The Criteria API is a legacy way of building queries. More modern alternatives exist in newer Hibernate versions, including some that aim to fix the exact pain point we’re going to talk about. But if you’re in a large codebase that has evolved and adapted around the Criteria API, you might find the newer alternatives can actually be more verbose, or require a bigger paradigm shift or migration than your team is ready or willing to accept. In that case, your query-building code probably looks something like this:</p><pre>criteria.add(Restrictions.eq(&quot;column&quot;, value))</pre><p>If that looks at all familiar — this article is for you. Now, at Faire, we had already had a wrapper over this for quite a while to make it nicer to use from Kotlin:</p><pre>session.query&lt;Book&gt;<br>	.addEq(&quot;name&quot;, &quot;Lord of the Rings&quot;)<br>	.list() // returns a List&lt;Book&gt;</pre><p>That is a very thin wrapper we’ve been using and maintaining internally for years. However, it always bothered us that while we get back a fully typed List&lt;Book&gt;, consistent with the full type-safety we know and expect in Kotlin, the actual arguments to the query were, let’s say, not ideal.</p><p>The column names are just String, and, even worse, the values are Any (that is from our Kotlin wrapper; the underlying Java API is just Object of course). If you make an incorrect assumption about your table, the best-case scenario is only catching that later on with unit tests (and that is the <em>best</em> scenario). 
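</p><p>To make the pain point concrete, here is a deliberately simplified, self-contained sketch of a string-keyed query builder (hypothetical code for illustration, not Faire’s actual wrapper, and with no Hibernate involved). A typo’d column name compiles without complaint and only blows up when the query runs:</p>

```kotlin
// Hypothetical, simplified stand-in for a string-based criteria wrapper:
// column names are plain Strings and values are Any, so mistakes surface
// only at runtime.
data class Book(val name: String, val pages: Int)

class StringlyQuery(private val rows: List<Book>) {
    private val filters = mutableListOf<(Book) -> Boolean>()

    fun addEq(column: String, value: Any): StringlyQuery {
        filters += { book ->
            when (column) {
                "name" -> book.name == value
                "pages" -> book.pages == value
                // A misspelled column compiles fine and only fails here.
                else -> throw IllegalArgumentException("unknown column: $column")
            }
        }
        return this
    }

    fun list(): List<Book> = rows.filter { book -> filters.all { it(book) } }
}

fun main() {
    val books = listOf(Book("Lord of the Rings", 1178), Book("The Hobbit", 310))
    val typoQuery = StringlyQuery(books).addEq("nmae", "Lord of the Rings") // compiles!
    println(runCatching { typoQuery.list() }.isFailure) // the typo is only caught at runtime
}
```

<p>Real Hibernate criteria fail the same way: nothing stops the misspelled string at compile time, so the mistake surfaces at runtime, hopefully in a test and at worst in production.</p><p>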
After seeing bugs and developer-productivity hits again and again, we dreamed of making this better.</p><p>But, as you can imagine, it was never a priority on top of the other much-needed infrastructure improvements we’re always making. Well, that all changed during one of our glorious <a href="https://craft.faire.com/crafting-a-hack-week-that-people-love-b0c2afe2e639">Hack Weeks</a> (an internal annual hackathon where everyone at Faire can participate and form teams to work on whatever projects their heart desires). And you can bet just what our heart desired.</p><p>So, during that pivotal Hack Week, we built a functional prototype of what would eventually replace our wrapper: a brand-new Hibernate Criteria API wrapper, with basically the same syntax we already knew and loved, minimally amended to provide one key benefit: <strong>full type-safety</strong>.</p><p>Over the course of the following two years (!!), whenever we had some spare time, we slowly migrated 61% of all queries across the entire codebase to the new infrastructure. As of now, we’re happy to say more than 11 of our production services are fully migrated. Given that, alongside the fact that all new queries are type-safe, we were able to reduce the number of magic-string-induced incidents (and the associated developer frustration of bugs only caught during tests) to zero.</p><p>As we rolled out this migration, we added support for more types of queries, refined the code generation powering it, fixed bugs, and listened to internal feedback, homing in on what we now call, and are happy to introduce: <strong>Project Yawn</strong>.</p><h3>Introducing: Project Yawn</h3><p>This is what a Yawn query looks like:</p><pre>session.query(BookTable) { books -&gt;<br>	addEq(books.name, &quot;Lord of the Rings&quot;)<br>}</pre><p>That’s right! You get an object representing your Hibernate entity (BookTable), with all its fields. 
That means you get auto-complete, <em>intellisense,</em> and compile-time checks. But that’s not all — Yawn also knows the types of your columns, so it makes sure that the name column on the books table expects a String, and nothing else.</p><p>“But that is not going to cut it,” you might say, “I need complex queries!” Yawn has you covered — basically any query that can be written with Hibernate’s Criteria API can be written with Yawn, including complex and nested joins, projections, etc.</p><p>For example, here are the e-mails of all authors in the database whose favourite book is their own writing:</p><pre>val emails = session.project(PersonTable) { people -&gt;<br>	val favoriteBooks = join(people.favoriteBook)<br>	val favoriteBooksAuthors = join(favoriteBooks.author)<br>	addEq(people.name, favoriteBooksAuthors.name)<br><br>	project(people.email)<br>}.list()</pre><p>And we support much more: projection to data classes, sub-queries (detached and correlated), join references, and beyond.</p><p>And that was all thanks to…</p><h3>The magic of KSP</h3><p>To generate the meta-model representations of our tables from our Hibernate entities, we use the power of <a href="https://kotlinlang.org/docs/ksp-overview.html">Kotlin Symbol Processing</a>, the official meta-programming framework in Kotlin. That means we hook compilation steps into the compiler itself — no external tools, no scripts. It is pure Kotlin code that is automatically run when you add the Yawn dependency, and it integrates well with IntelliJ or your preferred editor (references and go-to-definition all work out-of-the-box).</p><p>We have generators that scan through any entities Foo annotated with @YawnEntity in our Gradle modules and generate a FooTable definition to be used in queries, maintaining the same visibility modifiers as the original class. 
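</p><p>To give a feel for the approach, here is a hypothetical sketch of the kind of meta-model such a generator could emit (names and shapes are illustrative only, not Yawn’s actual generated code). The key idea is that each column carries its Kotlin type, so a mismatched value fails to compile:</p>

```kotlin
// Illustrative sketch of a generated meta-model (not Yawn's real output).
// Each column records its Kotlin type via a type parameter.
class Column<T>(val name: String)

// What a generator might emit for an entity `Book` annotated with @YawnEntity:
object BookTable {
    val name = Column<String>("name")
    val pages = Column<Int>("pages")
}

// A type-safe restriction: the value's type must match the column's type.
data class Eq<T>(val column: Column<T>, val value: T)

fun <T> addEq(column: Column<T>, value: T): Eq<T> = Eq(column, value)

fun main() {
    val restriction = addEq(BookTable.name, "Lord of the Rings") // compiles
    // addEq(BookTable.name, 42) // would be a compile-time error
    println(restriction.column.name)
}
```

<p>Because the compiler checks the column’s type parameter, a call like addEq(BookTable.name, 42) is rejected before the code ever runs, which is exactly the class of bug the String/Any API could not catch.</p><p>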
It has references to columns and relationships, allowing for type-safe joins, and works with all the Hibernate use-cases we’ve encountered so far in our vast codebase (the hardest technical challenges were settling the exact shape and design of our APIs to support edge cases such as embedded entities, composite keys, references with different foreign keys, etc.).</p><p>After a period of tweaking the interfaces to be more ergonomic, fighting with the underlying Hibernate implementations, and wrangling some complex generics, the generators are now just plain Kotlin code, so they’re easy for the entire team to amend if any feature is missing.</p><p>We’ve tinkered with and refined it as we took on more usages and more complex use cases. But we wanted more — we wanted to share what we did with the community, in case other projects using Hibernate could benefit from (and contribute to!) Yawn. So… we did.</p><h3>OSS</h3><p>We are thrilled to announce that we are officially <a href="https://github.com/Faire/yawn/">fully open-sourcing Project Yawn under the MIT license</a>! We believe in the power and community of open source, and while we use many tools and libraries, we want to contribute back with something we could share from our work.</p><p>You can check out the repository at <a href="http://github.com/faire/yawn">github.com/faire/yawn</a> for instructions on how to get started. We also welcome contributions and constructive feedback, and would love it if you gave us a star!</p><p>And that’s not all — this is just one of a few pieces we’re happy to announce as part of a broader company-wide commitment to open source and the developer community. 
We’ve started to build a dedicated public-facing OSS page at <a href="http://faire.tech/open-source">faire.tech/open-source</a> where we aim to collect and catalogue projects we have published (or have yet to publish), as well as other contributions we’ve made over the years to existing and established libraries and tools we use every day.</p><p>If you’re interested in our other projects, I’d recommend checking out our <a href="https://github.com/Faire/faire-detekt-rules/">faire-detekt-rules</a>, a curated and opinionated collection of custom Detekt rules and configs that we use on our Kotlin modules. You can opt in to many of the rules that help us catch bugs early on, standardize best practices, and just keep our code looking pristine.</p><p>If you’re looking to write more complex queries that Hibernate and Yawn can’t support, we highly recommend <a href="https://github.com/sqldelight/sqldelight">sqldelight</a> (which, fun fact, also uses KSP under the hood), in which case you might want to check out the <a href="https://github.com/faire/sqldelight-cockroachdb-dialect">CRDB connector we open-sourced a while ago.</a></p><p>This is just the beginning of how we think Faire can contribute to the broader OSS community — stay tuned for new additions very soon as we work to extract other pieces of our codebase and infrastructure.</p><p><em>Many thanks to everyone at Faire and elsewhere who has helped us along the way, including, but not limited to, </em><a href="https://www.linkedin.com/in/adrielmartinez/"><em>Adriel Martinez</em></a><em>, </em><a href="https://www.linkedin.com/in/emilycoleary/"><em>Emily O’Leary</em></a><em>, </em><a href="https://www.linkedin.com/in/jeanyang0/"><em>Jean Yang</em></a><em>, </em><a href="https://www.linkedin.com/in/kevinbrightwell/"><em>Kevin Brightwell</em></a><em>, </em><a href="https://www.linkedin.com/in/micahbeech/"><em>Micah Beech</em></a><em>, </em><a href="https://www.linkedin.com/in/oren-kislev-99506a77/"><em>Oren Kislev</em></a><em>, 
</em><a href="https://www.linkedin.com/in/quinn-budan/"><em>Quinn Budan</em></a><em>, </em><a href="https://www.linkedin.com/in/stanislav-novosad-04861b161/"><em>Stas Novosad</em></a><em>, and </em><a href="https://www.linkedin.com/in/zhipingcai/"><em>Zhiping Cai</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=17692cbbad0e" width="1" height="1" alt=""><hr><p><a href="https://craft.faire.com/how-we-tamed-hibernate-orm-in-kotlin-with-project-yawn-17692cbbad0e">How we tamed Hibernate ORM in Kotlin with Project Yawn</a> was originally published in <a href="https://craft.faire.com">The Craft</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building Faire’s Ads System from scratch]]></title>
            <link>https://craft.faire.com/building-faires-ads-system-from-scratch-5c24fc916995?source=rss----4af981bb79f--engineering</link>
            <guid isPermaLink="false">https://medium.com/p/5c24fc916995</guid>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[ads]]></category>
            <category><![CDATA[faire]]></category>
            <dc:creator><![CDATA[Le Nguyen]]></dc:creator>
            <pubDate>Tue, 09 Dec 2025 17:03:31 GMT</pubDate>
            <atom:updated>2025-12-10T02:14:06.919Z</atom:updated>
            <content:encoded><![CDATA[<h3>Building Faire’s Ads System from Scratch</h3><h4>Going from zero to one in six months</h4><figure><img alt="Abstract illustration of an ads system" src="https://cdn-images-1.medium.com/max/1024/1*JnJOPS3vCsO_6QE_23bdfQ.png" /></figure><p>At <a href="https://medium.com/faire-the-craft">Faire</a>, we’re building a marketplace that connects independent brands with retailers around the world. Two years ago, we saw an opportunity to help brands grow even faster by building an in-house Ads System — one that would give them effective, targeted ways to reach the right retailers at the right time. The result? Our Ads System has become one of the key drivers of new brand-retailer connections on the platform. In this post, we’ll share how we built Faire’s Ads System from the ground up and our key learnings along the way.</p><h3>The challenge and our approach</h3><p>Building the Ads System for Faire’s wholesale marketplace presented unique challenges shaped by how brands and retailers do business together: small businesses needing simplicity, brands seeking lasting retailer relationships, and extended purchase cycles. Here’s how we addressed each challenge:</p><ul><li><strong>Serve small and medium businesses with simplicity:</strong> Many of our brands are small businesses without dedicated marketing teams or advertising expertise. Rather than requiring brands to manage complex campaigns, we built auto-targeting and auto-bidding systems that handle optimization automatically. Brands simply set a monthly budget, and our system takes care of the rest — identifying relevant retailers, adjusting bids based on conversion probability, and pacing spend throughout the month to maximize results.</li><li><strong>Optimize for long-term brand-retailer relationships: </strong>In wholesale, acquiring a new customer means establishing a relationship that can last years with high lifetime value. 
We built prediction models to identify the most relevant brand-retailer matches, optimizing for actual orders rather than just clicks, accounting for the extended timelines where orders can take days or weeks as retailers carefully evaluate products. To quantify whether ads actually drive incremental value in these long-term relationships, we implemented holdout groups that compare outcomes with and without ads over extended time horizons, capturing the full impact on retailer behavior that may take months to materialize. Our reporting provides an estimated one-year ROAS (return on ad spend), giving brands the long-term perspective needed to make informed budget allocation decisions.</li><li><strong>Ship an MVP in six months:</strong> Building advertising for wholesale marketplaces meant venturing into uncharted territory — we didn’t know what we didn’t know. The best way to learn was to get a working product in front of real users as quickly as possible. We set an aggressive six-month timeline to go from zero to a working pilot, allowing us to rapidly validate hypotheses with real brands and retailers. 
We delivered by ruthlessly prioritizing and leveraging existing infrastructure — piggybacking on our organic search system rather than building from scratch, which saved development time and benefited from battle-tested systems.</li></ul><p>These challenges shaped our technical requirements and informed the architecture we built.</p><h3>Faire’s Ads System</h3><p>Our Ads System consists of three major components:</p><ol><li>The brand-facing Ads Manager tools, which allow brands to create campaigns and view results.</li><li>The retailer-facing Ads Delivery flow, which determines which ads to show for a given retailer action (like a search query).</li><li>The foundational data systems that log events, attribute outcomes, and power the entire platform.</li></ol><figure><img alt="Diagram of Faire’s Ads System with three components: Ads Manager handles forecasting, analytics, billing, and campaign management; Ads Foundation manages metrics, budget planning, attribution, and data logging; Ads Delivery handles slotting, bidding, prediction, and targeting." src="https://cdn-images-1.medium.com/max/1024/1*_a7VkMuAR0H1bNaBEbz-MQ.png" /><figcaption><em>Ads System consists of Ads Manager, Ads Foundation, and Ads Delivery components.</em></figcaption></figure><h4>Ads Delivery</h4><p>When a retailer performs a search on Faire, a request is sent to our Ads System to retrieve promoted results. The system processes this request through several stages to determine which ads to show and in what order:</p><ul><li><strong>Retrieval and targeting:</strong> We retrieve an initial pool of eligible ad candidates based on the incoming request, applying ad-specific filters for campaign budget and targeting rules. 
We initially reused our<a href="https://craft.faire.com/embedding-based-retrieval-our-journey-and-learnings-around-semantic-search-at-faire-2aa44f969994"> existing search framework</a> for this step — using a mix of text- and embedding-based methods — a pragmatic choice that let us ship faster while leveraging battle-tested infrastructure.</li><li><strong>Prediction and ranking:</strong> For each candidate ad, we use machine learning models to predict pCTR (predicted click-through rate) and pCTO (predicted click-to-order rate). We calibrate these probabilities using historical data to ensure fair billing. Ads are then ranked by combining their predicted click-through rate with their bid, prioritizing ads with the highest expected value. This means a highly relevant ad with a moderate bid can outrank a higher bid with lower relevance, keeping results useful for retailers.</li><li><strong>Auto-bidding and budget pacing:</strong> While brands pay per click, our system automatically optimizes for actual orders by using the predicted click-to-order rate to dynamically set appropriate click bids. The system adjusts bids based on each campaign’s performance — increasing them when conversions are efficient, reducing them when costs run high. Meanwhile, budget tracking aggregates spend for each campaign, pacing delivery throughout the month to respect budget constraints.</li><li><strong>Quality controls and display:</strong> We perform final checks to ensure ads meet minimum relevance thresholds before display, maintaining the high-quality browsing experience our retailers expect. We then log impressions and track any clicks or orders for analytics and model training.</li></ul><figure><img alt="Screenshot of Faire’s product grid showing products with ads highlighted by purple borders among regular results." 
src="https://cdn-images-1.medium.com/max/1024/1*J3y_R00TsQAfbbcuf5iCZg.png" /><figcaption><em>Ads appear as promoted product listings in search results and browse pages.</em></figcaption></figure><h4>Ads Manager</h4><p>We built a self-serve Ads Manager interface where brands can create and manage campaigns with minimal complexity:</p><ul><li><strong>Campaign setup and monitoring:</strong> Brands create campaigns by setting monthly budgets and basic targeting criteria. Performance dashboards display key metrics — impressions, clicks, orders, click-through rate, and spend. To help brands understand the long-term value of new relationships, we provide an estimated one-year ROAS that accounts for the extended value of wholesale customer acquisition.</li><li><strong>Billing and payments:</strong> We integrated with Faire’s existing billing infrastructure. Brands are billed monthly and only pay when retailers click on their ads. Our prediction calibration ensures fair billing — if our model overestimates conversion likelihood, advertisers could overpay. Accuracy and fairness in billing were top priorities.</li><li><strong>Forecasting and recommendations:</strong> We provide personalized monthly spending recommendations based on each brand’s past sales performance. These recommendations appear during campaign setup and when adjusting budgets. Active advertisers also receive prompts to increase budgets when they consistently approach their spending limit, helping brands understand when to scale their campaigns.</li></ul><figure><img alt="Screenshot of Faire’s Ads Manager interface displaying a dashboard with performance metrics, including total sales, costs, and spend, along with tables showing current and previous campaign activity with impressions, clicks, and conversion data." 
src="https://cdn-images-1.medium.com/max/1024/1*97JS4C-ibTqEgjRZG2zNVQ.png" /><figcaption><em>Ads Manager showing actionable campaign performance and spending metrics.</em></figcaption></figure><h4>Ads Foundation</h4><p>Building the real-time delivery flow and user-facing features was only part of the story. We also invested heavily in the data foundation supporting the Ads System:</p><ul><li><strong>Unified event logging:</strong> We established rigorous logging for all ads events — impressions, clicks, and conversions — standardizing definitions across web, iOS, and Android. This company-wide alignment ensured consistent measurement and reliable data for model training.</li><li><strong>Click and order attribution:</strong> We built a click attribution service that connects retailer clicks to eventual orders across the extended timelines typical in wholesale. Accurate attribution was critical for fair billing and measuring campaign effectiveness.</li><li><strong>Experimentation framework:</strong> Holdout groups without ads let us compare outcomes and quantify incremental value. We also designed a budget-balanced experimentation framework to run A/B tests without effects leaking between control and treatment groups through shared budgets, ensuring we could iterate and improve the system with confidence.</li></ul><h3>Key learnings from our journey</h3><p>Building Faire’s Ads System from scratch taught us lessons that shaped our entire approach to product development:</p><ul><li><strong>Set ambitious goals, then execute pragmatically:</strong> We gave ourselves a bold six-month target to launch a functional pilot, creating urgency and focus. This forced pragmatic choices — reusing existing search infrastructure, simplified auction mechanics, static ad load — allowing us to hit the deadline without getting bogged down in perfection. Ambitious timelines drive creative problem-solving when you’re willing to start simple. 
There’s always time to add complexity later.</li><li><strong>Define success early and invest in foundations:</strong> We defined clear success metrics from the outset — N brands actively using ads, minimal impact on retailer experience (conversion rates couldn’t drop more than X%), and system scalability. But we also knew that without a solid data foundation, none of those metrics would matter. We standardized tracking definitions across platforms, built robust attribution systems, and implemented comprehensive monitoring. This dual investment in both clear targets and reliable measurement paid off, allowing us to catch issues early and giving us confidence in our results.</li><li><strong>Protect the experience with guardrails:</strong> Introducing ads carries risks to both user experience and brand satisfaction. We implemented safeguards at multiple levels: capping ads per page, de-duplicating repeat clicks, and setting minimum predicted conversion rate thresholds below which ads won’t display, regardless of bid amount. These guardrails ensured positive outcomes for all parties and maintained the marketplace quality our community expects.</li></ul><h3>Impact</h3><p>Since launch, we’ve seen strong adoption and results. Brands using ads have seen, on average, a <a href="https://www.faire.com/news/2024-09-12-introducing-promoted-listings-faires-first-wholesale-advertising-tool-for-brands">25%</a> increase in new customers compared to what they acquired organically, with some brands like Cheese Brothers seeing a <a href="https://www.faire.com/news/2024-09-12-how-brands-are-using-promoted-listings">49%</a> lift in customer acquisition. “Promoted Listings is about as easy as it can get,” says Eric Ludy from Cheese Brothers. “It’s literally just setting a budget. With Faire’s Promoted Listings, our customer base keeps growing, and they’re good customers. They keep coming back. 
In terms of customer acquisition costs versus lifetime revenue, it blows everything else out of the water.”</p><h3>Looking ahead</h3><p>Building Faire’s Ads System has been challenging but rewarding. We started with a clear mission: help brands connect with more retailers in a way that empowers small businesses. By focusing on simplicity, robust data foundations, and protecting the experience for both sides of our marketplace, we went from zero to a fully operational ads platform in six months — one that’s now driving significant incremental value for our community.</p><p>This is just the beginning. Whether it’s expanding to new surfaces and geographies, iterating on our still-nascent models and optimization strategies, or rethinking system components such as auction design and pacing, the opportunity for impact has never been bigger. If you’re curious about how ads fit into our larger vision for Discovery at Faire, we recently hosted a tech talk covering our Discovery &amp; Intelligence platform (see the <a href="https://craft.faire.com/discovery-intelligence-at-faire-highlights-from-our-san-francisco-tech-talk-66dfd6210698">blog post</a> and <a href="https://youtu.be/c3gz-lqCgeY?si=Hb1XJpwLdJBqNfp8&amp;t=1935">video</a>). 
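</p><p>As a side note, the expected-value ranking described in the delivery flow above can be sketched in a few lines (a toy illustration with made-up names and numbers, not our production scoring code):</p>

```kotlin
// Toy sketch of expected-value ad ranking: score each candidate by
// bid * pCTR, after filtering out candidates below a relevance threshold.
data class AdCandidate(val id: String, val bidPerClick: Double, val pCtr: Double)

fun rankAds(candidates: List<AdCandidate>, minPCtr: Double): List<AdCandidate> =
    candidates
        .filter { it.pCtr >= minPCtr }                   // quality guardrail
        .sortedByDescending { it.bidPerClick * it.pCtr } // expected value per impression

fun main() {
    val ranked = rankAds(
        listOf(
            AdCandidate("high-bid-low-relevance", bidPerClick = 2.0, pCtr = 0.01),
            AdCandidate("moderate-bid-high-relevance", bidPerClick = 1.0, pCtr = 0.05),
            AdCandidate("below-threshold", bidPerClick = 5.0, pCtr = 0.001),
        ),
        minPCtr = 0.005,
    )
    // The relevant ad outranks the higher bid; the low-relevance one is filtered out.
    println(ranked.map { it.id })
}
```

<p>Here the more relevant ad wins because bid times pCTR, not bid alone, determines position, and the below-threshold candidate never displays regardless of its bid.</p><p>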
If you enjoy tackling problems at the intersection of machine learning and marketplace dynamics, we’d love to hear from you.</p><p><em>This work would not have been possible without the dedication and contributions of </em><a href="https://www.linkedin.com/in/adijp/"><em>Aditya Jayaprakash</em></a><em>, </em><a href="https://www.linkedin.com/in/andrew-c-voorhees/"><em>Andrew Voorhees</em></a><em>, </em><a href="https://www.linkedin.com/in/aniruddha-borah-08736a33/?originalSubdomain=ca"><em>Ani Borah</em></a><em>, </em><a href="https://www.linkedin.com/in/arthurche/"><em>Arthur Che</em></a><em>, </em><a href="https://www.linkedin.com/in/briandeluna/"><em>Brian de Luna</em></a><em>, </em><a href="https://www.linkedin.com/in/bathompso/"><em>Ben Thompson</em></a><em>, </em><a href="https://www.linkedin.com/in/chrisfarrell2/"><em>Chris Farrell</em></a><em>, </em><a href="https://ca.linkedin.com/in/danyaowang"><em>Danyao Wang</em></a><em>, </em><a href="https://www.linkedin.com/in/eliza-worcester-74360597/"><em>Eliza Worcester</em></a><em>, </em><a href="https://www.linkedin.com/in/elamoureux/"><em>Etienne Lamoureux</em></a><em>, </em><a href="https://www.linkedin.com/in/geoff-beresford-13727b125/"><em>Geoff Beresford</em></a><em>, </em><a href="https://www.linkedin.com/in/ianspear"><em>Ian Spear</em></a><em>, </em><a href="https://www.linkedin.com/in/jai-mankoo-b6024182/"><em>Jai Mankoo</em></a><em>, </em><a href="https://www.linkedin.com/in/hurjane/"><em>Jane Hur</em></a><em>, </em><a href="https://www.linkedin.com/in/jeanyang0/"><em>Jean Yang</em></a><em>,</em><a href="https://www.linkedin.com/in/jessicagschwarz/"><em> Jessica Schwarz</em></a><em>, </em><a href="https://www.linkedin.com/in/kevinaloisi/"><em>Kevin Aloisi</em></a><em>, </em><a href="https://www.linkedin.com/in/kevinbenscheidt/"><em>Kevin Benscheidt</em></a><em>, </em><a href="https://www.linkedin.com/in/koroshahangar/"><em>Korosh Ahangar</em></a><em>, </em><a 
href="https://www.linkedin.com/in/kyle-mcandrews/"><em>Kyle McAndrews</em></a><em>, </em><a href="https://www.linkedin.com/in/ntleeu/"><em>Le Nguyen</em></a><em>, </em><a href="https://www.linkedin.com/in/luan-nico/"><em>Luan Nico</em></a><em>, </em><a href="https://www.linkedin.com/in/marckelechava/"><em>Marc Kelechava</em></a><em>, </em><a href="https://www.linkedin.com/in/michael-yen/"><em>Michael Yen</em></a><em>, </em><a href="https://www.linkedin.com/in/qinyuwang/"><em>Qinyu Wang</em></a><em>,</em><a href="https://ca.linkedin.com/in/paul-duchesne-87315b51"><em> Paul Duchesne</em></a><em>, </em><a href="https://www.linkedin.com/in/nunezpaul/"><em>Paul Nunez</em></a><em>, </em><a href="https://www.linkedin.com/in/pavel-konkin/"><em>Pavel Konkin</em></a><em>, </em><a href="https://www.linkedin.com/in/pedrodebruin/"><em>Pedro Sales De Bruin</em></a><em>, </em><a href="https://www.linkedin.com/in/jeshua-perth-silvers/"><em>Perth Silvers</em></a><em>, </em><a href="https://www.linkedin.com/in/rafiu-hossain/"><em>Rafiu Hossain</em></a><em>, </em><a href="https://www.linkedin.com/in/shannon-broekhoven-39637349/?originalSubdomain=ca"><em>Shannon Broekhoven</em></a><em>, </em><a href="https://www.linkedin.com/in/sophie-jorasch-50781668/"><em>Sophie Jorasch</em></a><em>, </em><a href="https://www.linkedin.com/in/ziqi-zhang-093b3b128/"><em>Ziqi Zhang</em></a>,<em> and many talented individuals across Faire.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5c24fc916995" width="1" height="1" alt=""><hr><p><a href="https://craft.faire.com/building-faires-ads-system-from-scratch-5c24fc916995">Building Faire’s Ads System from scratch</a> was originally published in <a href="https://craft.faire.com">The Craft</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Accelerating mobile releases at Faire with 80% faster deployment lead times]]></title>
            <link>https://craft.faire.com/accelerating-mobile-releases-at-faire-with-80-faster-deployment-lead-times-27ff0de99f59?source=rss----4af981bb79f--engineering</link>
            <guid isPermaLink="false">https://medium.com/p/27ff0de99f59</guid>
            <category><![CDATA[faire]]></category>
            <category><![CDATA[mobile-engineering]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[runway]]></category>
            <category><![CDATA[mobile-release-cycle]]></category>
            <dc:creator><![CDATA[Zach Radford]]></dc:creator>
            <pubDate>Thu, 04 Dec 2025 17:02:48 GMT</pubDate>
            <atom:updated>2025-12-04T17:02:47.061Z</atom:updated>
            <content:encoded><![CDATA[<h4>Shipping faster, safer, and smarter</h4><figure><img alt="Illustration of a rocket launching next to a phone showing apps on screen." src="https://cdn-images-1.medium.com/max/1024/1*8yzO955kub8KvPvPMkQQNw.jpeg" /></figure><p>At <a href="https://medium.com/faire-the-craft">Faire</a>, we’re always asking: <strong><em>How can we deliver value to our customers faster?</em></strong></p><p>Our <a href="https://www.faire.com/retailers/app">apps</a> are at the heart of how small businesses discover products and build the unique character of their communities. Every release includes new features, bug fixes, and performance improvements that directly shape our customers’ experiences — and our ability to learn quickly from experiments and customer feedback.</p><p>Until recently, the path from “code merged” to “in customer hands” regularly took <strong>more than two weeks</strong>. That delay slowed down our feedback loops and our pace of iteration. The Engineering team set out to fix that without compromising our stability or customer trust.</p><p>By rethinking our mobile release process and leveraging <a href="https://www.runway.team/">Runway</a>, we cut our release cycle from 14 days to 3 days — an 80% improvement — while maintaining our 99.99% crash-free rate.</p><h3>The challenge: a two-week journey for every update</h3><p>Before these changes, the release process for Faire’s mobile apps looked like this:</p><ul><li><strong>Friday midnight:</strong> Cut the release branch.</li><li><strong>Monday–Tuesday:</strong> QA and App Store submission.</li><li><strong>Wednesday–Thursday:</strong> App review and approval.</li><li><strong>Next week:</strong> Phased rollout (1% → 2% → 5% → 10% → 20% → 50% → 100%).</li><li><strong>A few days later:</strong> 80% of customers had the new version.</li></ul><p>Altogether, it took about 14 days from branch cut to full adoption!</p><p>Each step prioritized caution over speed.
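</p><p>To make that timeline concrete, the schedule above reduces to simple arithmetic. The sketch below is illustrative only: the phase percentages come from the rollout described above, while the helper function is hypothetical, not Faire’s actual tooling.</p>

```python
# Back-of-the-envelope model of the old release timeline (illustrative only).
PHASES = [1, 2, 5, 10, 20, 50, 100]  # one phased-rollout step per day (% of users)

def days_from_cut_to_full_rollout(days_before_rollout: int, phases: list[int]) -> int:
    """Branch cut, QA, submission, and review, then one rollout phase per day."""
    return days_before_rollout + len(phases)

# Roughly six days from the Friday-midnight cut through QA and App Review,
# then a week of daily phases; customers' update lag adds a few more days.
print(days_from_cut_to_full_rollout(6, PHASES))  # 13
```

<p>With customer update lag on top of those 13 days, “about 14 days from branch cut to full adoption” follows directly.</p><p>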
These defaults had served us well for years, but our pace and scale had outgrown them. Customers might wait two weeks to see a new feature. By the time we learned from an experiment, we were already planning the next release — slowing our ability to act on what we’d learned.</p><h3>The insight: risk wasn’t the bottleneck</h3><p>When we looked at our metrics, a few patterns stood out:</p><ul><li>Crash-free rate had held steady above 99% for six months.</li><li>No hotfixes in 18 months.</li><li>Most risk was already mitigated through feature flags and experiment rollouts.</li></ul><p>In short: our process was <em>over-optimized</em> for safety. Our real bottleneck was delivery speed, not reliability. So we asked: What if we could move faster and keep the same level of confidence?</p><h3>The solution: process + platform + tooling</h3><p>Before diving into the changes, here are the tools that power our pipeline:</p><ul><li><a href="https://www.runway.team/">Runway</a>: Orchestrates every stage of our mobile release — from branch cut and smoke testing to submission, approval, and rollout — in a single automated flow.</li><li><a href="https://embrace.io/">Embrace</a>: Our mobile observability platform, integrated with Runway to monitor crash-free sessions, error rates, and rollout stability in real time.</li></ul><p>Together, these tools give us the visibility and guardrails to move faster without sacrificing reliability.</p><h4>Step 1: changing when we cut</h4><p>We moved our release branch cut from Friday midnight to Thursday afternoon. That small shift removed two wasted weekend days and tightened our feedback loop.</p><figure><img alt="Table comparing each stage of the mobile release process before and after the workflow improvements."
src="https://cdn-images-1.medium.com/max/1024/1*f5Tjd3UQisjij6MzvWnnAw.png" /></figure><p>This change also let us compress QA from two days to one — a major win for our remote QA team across time zones.</p><p>Many teams hesitate to launch over weekends, but Runway’s safeguards gave us confidence. If crash-free rates drop below a threshold, rollouts pause automatically, no weekend monitoring required.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bN5SRWr_1ciX9SAc8Bai5w.png" /><figcaption>Screenshot of Runway showing an example release timeline.</figcaption></figure><h4>Step 2: replacing phased rollouts with experiment controls</h4><p>Next, we rethought how we roll out to customers. Rather than disabling phased rollouts entirely, we now use them adaptively. With real-time stability data from Embrace, we ramp to 100% after three days above 95% crash-free sessions.</p><p>For most new features, our experiment and feature-flag framework controls exposure:</p><ul><li>5% → 10% → 50% → 100% experiment rollout sequence.</li><li>Instant rollback for problematic features.</li><li>P0 hotfix protocol if a client-side fix is required.</li></ul><p>For app-wide or non-flagged changes, Runway and Embrace manage rollout pacing automatically. Since nearly all customers have automatic updates enabled, we now reach <strong>80% adoption within two days</strong> of full rollout, which means faster insights with no added risk.</p><h4>Step 3: automating the flow with Runway</h4><p>Runway unifies our mobile release workflows and provides real-time visibility across teams. 
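</p><p>The stability gate from Step 2 boils down to a small decision rule. Here is a minimal sketch: the 95% threshold and three-day window come from the process described above, but the function and its inputs are hypothetical, not Runway’s or Embrace’s actual API.</p>

```python
# Sketch of the adaptive rollout gate (illustrative, not Runway's real API).
PAUSE_THRESHOLD = 0.95   # pause if crash-free sessions dip below 95%
STABLE_DAYS = 3          # ramp to 100% after three days above threshold

def rollout_decision(daily_crash_free: list[float]) -> str:
    """daily_crash_free: crash-free session rates (0-1), oldest first."""
    if daily_crash_free and daily_crash_free[-1] < PAUSE_THRESHOLD:
        return "pause"        # halt the rollout and alert; no weekend babysitting
    recent = daily_crash_free[-STABLE_DAYS:]
    if len(recent) == STABLE_DAYS and all(r >= PAUSE_THRESHOLD for r in recent):
        return "ramp_to_100"  # three stable days: go wide
    return "hold"             # keep the current phase and keep watching

print(rollout_decision([0.999, 0.998, 0.999]))  # ramp_to_100
print(rollout_decision([0.999, 0.93]))          # pause
```

<p>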
It lets us:</p><ul><li><strong>Automate the entire release timeline:</strong> Branch cuts, smoke tests, screenshot approvals, and App Store submissions all happen in a single coordinated flow.</li><li><strong>Monitor release health in real time:</strong> Runway pauses or resumes rollouts automatically if metrics cross a threshold.</li><li><strong>Enable safe weekend automation:</strong> Runway halts rollouts if crash-free sessions dip, giving engineers time to diagnose before more customers are affected.</li></ul><p>Before Runway, we relied on checklists, Slack messages, and manual handoffs to manage our releases. Now everything is accessible in a shared dashboard, which serves as the one source of truth for everyone from QA to on-call. Releases are now predictable and nearly frictionless, freeing up valuable engineering hours.</p><figure><img alt="Screenshot of Runway’s automated rollout tooling." src="https://cdn-images-1.medium.com/max/1024/1*Zze1p62VlLOVtGlUwj6haQ.png" /></figure><h3>The impact on our customers and our team</h3><p>Now, the time from <strong>branch cut to app store availability</strong> is down to about 3 days. Reaching full adoption still depends on update behaviour, but with adaptive rollout monitoring, most customers (&gt;80%) are now on the latest version within about 5 days, down from 10–14.</p><figure><img alt="Table summarizing the impact of the mobile release improvements." 
src="https://cdn-images-1.medium.com/max/1024/1*MrwlCdYFyKwf3cDcHNUpDw.png" /></figure><p>That 3x speed-up means:</p><ul><li><strong>Better customer experience:</strong> Fixes and improvements reach customers in days, not weeks.</li><li><strong>Faster experimentation:</strong> We can release and learn weekly instead of biweekly.</li><li><strong>More engaged engineers:</strong> Releases are predictable, visible, and stress-free.</li></ul><h3>What’s next: continuous acceleration</h3><p>With faster version adoption, we’re now exploring:</p><ul><li>Reducing the 12-week support window through automated deprecations.</li><li>Removing the need to foreground the app for experiment or settings updates via instant configuration delivery.</li><li>Keeping builds small to maximize over-the-air (OTA) update success and speed.</li><li>Enabling continuous delivery, where every successful build could automatically become a release candidate.</li></ul><h3>Lessons learned</h3><p>Looking to speed up your own mobile release cycle? Here are some of our biggest lessons from the process:</p><ol><li><strong>Measure before you optimize:</strong> Data showed we were stable enough to move faster.</li><li><strong>Automate confidence, not caution:</strong> Tooling like Runway replaces human hesitation with visibility and guardrails.</li><li><strong>Treat the release process like a product:</strong> We iterated, tested, and improved it — just like any customer-facing feature.</li><li><strong>The wins are compounding:</strong> With the right culture, process, and tools, you can move fast and stay safe — and ultimately serve your customers better.</li></ol><p>Special thanks to the Mobile Growth and Mobile Platform teams for their partnership, and our friends at Runway for making this possible.</p><p><em>Interested in building systems that help small businesses move faster?
Check out our </em><a href="https://www.faire.com/careers"><em>open roles</em></a><em> at Faire.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=27ff0de99f59" width="1" height="1" alt=""><hr><p><a href="https://craft.faire.com/accelerating-mobile-releases-at-faire-with-80-faster-deployment-lead-times-27ff0de99f59">Accelerating mobile releases at Faire with 80% faster deployment lead times</a> was originally published in <a href="https://craft.faire.com">The Craft</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Discovery intelligence at Faire: highlights from our San Francisco Tech Talk]]></title>
            <link>https://craft.faire.com/discovery-intelligence-at-faire-highlights-from-our-san-francisco-tech-talk-66dfd6210698?source=rss----4af981bb79f--engineering</link>
            <guid isPermaLink="false">https://medium.com/p/66dfd6210698</guid>
            <category><![CDATA[discovery]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[tech-talk]]></category>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[Eric Fan]]></dc:creator>
            <pubDate>Mon, 20 Oct 2025 19:41:40 GMT</pubDate>
            <atom:updated>2025-10-20T19:41:07.043Z</atom:updated>
            <content:encoded><![CDATA[<h4>Inside Faire’s next generation search, personalization, ads, feed, and machine learning systems</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bBfW1QEF2IbKrDi7mB6iWw.png" /><figcaption><a href="https://www.google.com/url?q=https://www.linkedin.com/in/yuanchi-ning-a6130323/&amp;sa=D&amp;source=editors&amp;ust=1760575841512881&amp;usg=AOvVaw3YiroCwqFz_etKrzUjXxjZ">Yuanchi</a><a href="https://www.google.com/url?q=https://www.linkedin.com/in/yuanchi-ning-a6130323/&amp;sa=D&amp;source=editors&amp;ust=1760575841512960&amp;usg=AOvVaw2d6V51A8GLm_7KLQGzalsJ"> Ning</a>, Director of Personalization, started the event by sharing Faire’s story and mission, as well as the Discovery focus at Faire, with a full room.</figcaption></figure><p>At our recent <a href="https://www.google.com/url?q=https://faire.tech/events&amp;sa=D&amp;source=editors&amp;ust=1760575841512169&amp;usg=AOvVaw3ErY7m_4KeB6c9TpOozDRh">San Francisco Tech Talk</a>, AI scientists, machine learning engineers, and infra engineers gathered to share how AI and machine learning are revolutionizing product discovery across our marketplace. From semantic search to graph neural networks, from large language models (LLMs) to next-generation infrastructure, the evening showcased how Faire is building intelligent systems that empower brands and retailers worldwide.</p><p>You can <a href="https://www.youtube.com/watch?v=c3gz-lqCgeY">watch a video of the event</a> or read the highlights below!</p><h3><strong>The evolution of discovery at Faire</strong></h3><p>More than 60% of Faire’s order volume is driven by search. 
As <a href="https://www.google.com/url?q=https://www.linkedin.com/in/jason-xu-a3450068/&amp;sa=D&amp;source=editors&amp;ust=1760575841514250&amp;usg=AOvVaw06AcxNa-H1zMEvDfHAsn6b"><strong>Jason Xu</strong></a>, Senior Manager of Search, explained, discovery at Faire has evolved dramatically:</p><ul><li><strong>From keywords to semantic retrieval</strong>: We’re moving beyond literal word matches to embeddings that capture meaning.</li><li><strong>From text to multimodal understanding</strong>: We’re incorporating product images and visual similarity for richer results.</li><li><strong>From isolated queries to graph-based personalization</strong>: We’re modeling relationships between retailers, brands, and categories to surface better matches.</li></ul><p>Looking ahead, Jason highlighted the promise of <strong>generative search</strong>, where retrieval and ranking merge into a single intelligent model that aligns more closely with user intent.</p><figure><img alt="A presentation slide titled “Future frontiers: gen search &amp; scaling laws” describes how retrieval and ranking are merging into a single generative model, with notes on scaling laws and user alignment. On the right, two visuals appear: a complex diagram from Deng et al. 2025 illustrating the OneRec architecture, and a line graph from Zhai et al. 2024 comparing traditional recommendation models with generative recommenders." 
src="https://cdn-images-1.medium.com/max/1024/1*KbB7-M59G9eXDnIgP-l-bw.png" /><figcaption>Future frontiers in generative search and scaling laws, featuring diagrams from recent academic work on unified retrieval–ranking models and scaling trends for generative recommenders.</figcaption></figure><h3><strong>Tackling cold-start and personalization with LLMs</strong></h3><p><a href="https://www.google.com/url?q=https://www.linkedin.com/in/hoshun-yang-5307433a/&amp;sa=D&amp;source=editors&amp;ust=1760575841516026&amp;usg=AOvVaw1H5BDPtm6tojxfRUDemT44"><strong>Hoshun Yang</strong></a>, Principal Data Scientist, explored how LLMs are unlocking new ways to address the retailer <strong>cold-start problem</strong> — when new users arrive without a history of engagement. By leveraging rich retailer content (like websites, POS data, and social presence), Faire can:</p><ul><li><strong>Bootstrap recommendations</strong> using clustering and entity resolution</li><li><strong>Consolidate brands</strong> across POS and Faire catalogs</li><li>Introduce an <strong>AI business coach</strong>, turning complex data into actionable insights for retailers</li></ul><p>The goal is not just to recommend products, but to help retailers up-level their businesses as soon as they land on Faire.</p><figure><img alt="A presentation slide titled “Targeted recommendation” shows two visuals. On the left, a grid of product images illustrates personalized retailer recommendations on Faire, including pet accessories, skincare, stationery, and food items. On the right, a flowchart labeled “Recommendation System Workflow” outlines the process: engagement history and store information feed into data cleaning and integration, which then powers LLM-based theme generation to create tailored retailer themes." 
src="https://cdn-images-1.medium.com/max/1024/1*_HEJ6KLfXPKoQF5KkehVgw.png" /><figcaption>Faire’s targeted recommendation system, showing how engagement data, retailer information, and large language models combine to generate personalized product themes for each retailer.</figcaption></figure><h3><strong>Behind the homepage feed: unified retrieval and ranking</strong></h3><p>Personalization isn’t limited to search. <a href="https://www.google.com/url?q=https://www.linkedin.com/in/junozhu/&amp;sa=D&amp;source=editors&amp;ust=1760575841517865&amp;usg=AOvVaw2vebjP3DtOwQb8xqjeWlSd"><strong>Juno Zhu</strong></a>, Engineer on Inspire, and <a href="https://www.google.com/url?q=https://www.linkedin.com/in/kevinaloisi/&amp;sa=D&amp;source=editors&amp;ust=1760575841517989&amp;usg=AOvVaw1F5MxfCOHHTCvEjvd3CMYO"><strong>Kevin Aloisi</strong></a>, Staff Engineer on Discovery Infrastructure, shared how we’ve transformed the Faire homepage feed into a real-time, scrollable grid.</p><p>Previously, homepage content relied on static carousels generated offline — a system with latency and accuracy trade-offs. Today, Faire’s <strong>unified retrieval and ranking platform</strong> powers a dynamic feed that blends multiple retrieval types, integrates ranking layers, and allows product teams to A/B test experiences through a declarative domain-specific language (DSL).</p><figure><img alt="A presentation slide titled “Unified retrieval and ranking stack” depicts a three-layer workflow. The top layer, “Declarative Language,” used by product teams, defines retrieval sources, ranking, and blending algorithms. The middle layer, “Execution (DAG),” managed by infrastructure teams, handles parallelization, tracking, and observability. The bottom layer, “Algorithmic APIs,” owned by ML and ads teams, powers retrieval sources, ranking models, and ad auction logic. 
The diagram visualizes how" src="https://cdn-images-1.medium.com/max/1024/1*6zJEJ8Pd_w8imFECGudpmw.png" /><figcaption>Faire’s unified retrieval and ranking stack, showing how product teams, infrastructure engineers, and ML experts collaborate through a shared system that connects declarative configuration, execution, and algorithmic APIs.</figcaption></figure><figure><img alt="Two smartphone screenshots of the Faire app display personalized product discovery. The left screen shows a homepage with a search bar, product categories like “Brand updates” and “Home accents,” and sections labeled “Recently viewed” and “Ideas for you” featuring products such as candles, greeting cards, and apparel. The right screen shows a product feed with cards, hats, and stationery items, highlighting discounts and star ratings. The interface demonstrates how Faire uses AI to personalize s" src="https://cdn-images-1.medium.com/max/342/1*taGKbzgSbsIP_k1JVQoeQw.png" /><figcaption>Faire’s mobile app showcasing personalized discovery experiences, including tailored product recommendations, recently viewed items, and curated “Ideas for you” sections powered by AI-driven ranking and retrieval.</figcaption></figure><p>This evolution has already driven meaningful lifts in engagement and orders, while laying the groundwork for more adaptive, LLM-powered personalization.</p><h3><strong>Graph neural networks for smarter recommendations</strong></h3><p>Finding the right product among 20M+ SKUs is a massive challenge. 
Data Scientist <a href="https://www.google.com/url?q=https://www.linkedin.com/in/li-tian-2b8b72b1/&amp;sa=D&amp;source=editors&amp;ust=1760575841521098&amp;usg=AOvVaw37mvHS6VZUIvRWV3ye77wx"><strong>Li Tian</strong></a> introduced how <strong>graph neural networks</strong> (GNNs) help Faire model the deep relationships between retailers, products, and their interactions.</p><p>In our system, <strong>retailers and products are represented as nodes</strong> connected by weighted engagement edges (clicks, carts, and orders). Retailer nodes include store type and metadata, while product nodes capture descriptions, brands, categories, and other attributes. This transforms raw engagement logs into <strong>structured relational graphs</strong>, where each node learns not only from its own features but also from its neighbors.</p><figure><img alt="A slide titled “Turning noisy logs into structured signals” shows how raw retailer–product interactions become a structured graph. On the left, circles labeled “GiftShop,” “HomeStore,” and “BookStore” connect via arrows to “Book,” “Candle,” and “Wine,” representing engagement links. On the right, the data is reorganized into two smaller graphs with labeled boxes — “retailer information, store metadata” and “product description, attributes” — illustrating how Faire converts behavioral data into s" src="https://cdn-images-1.medium.com/max/1024/1*dEt0H8AQpchyuXaXijYk8g.png" /><figcaption>Faire transforms noisy engagement logs into structured graph signals by connecting retailers and products through weighted interactions.</figcaption></figure><p>We train these representations using a two-tower GNN architecture: one tower for retailers and one for products. Each tower starts with pre-trained text and ID embeddings, refined through a <strong>Graph Attention Network</strong> (GATConv) layer that aggregates signals from related nodes.
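</p><p>In spirit, each tower refines a node’s embedding by attention-weighted aggregation over its neighbors, and affinity is then scored in a shared space. A toy NumPy sketch of that idea follows; the dimensions, data, and blending scheme are invented for illustration, and the production model uses trained graph-attention layers rather than this hand-rolled softmax.</p>

```python
import numpy as np

# Toy sketch of the two-tower idea: refine a node's embedding with
# attention-weighted aggregation over its neighbors, then score
# retailer-product affinity with a dot product. All numbers are invented.

rng = np.random.default_rng(0)
DIM = 8

def aggregate(node_emb: np.ndarray, neighbor_embs: np.ndarray) -> np.ndarray:
    """Softmax attention over neighbors, blended with the node's own features."""
    scores = neighbor_embs @ node_emb            # one relevance score per neighbor
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    weights /= weights.sum()
    return 0.5 * node_emb + 0.5 * (weights @ neighbor_embs)

retailer = rng.normal(size=DIM)        # e.g. a gift shop's embedding
products = rng.normal(size=(3, DIM))   # products it clicked, carted, or ordered

refined_retailer = aggregate(retailer, products)
score = float(refined_retailer @ products[0])  # affinity with one product
print(round(score, 3))
```

<p>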
The outputs are then passed through MLP layers to produce final embeddings in a shared space.</p><figure><img alt="A slide titled “Two-tower design for retailer &amp; product” shows two mirrored model pipelines — one for retailers and one for products. Each tower combines text and ID embeddings, refines them through a Graph Attention Network (GATConv) layer, and processes them via MLP layers. The outputs are compared using a dot product to measure similarity. The diagram illustrates how Faire learns separate yet aligned representations for retailers and products to improve recommendation accuracy." src="https://cdn-images-1.medium.com/max/1024/1*7MA6BiNSWMXTsRPiz7Bh1w.png" /><figcaption>Faire’s two-tower model architecture for retailers and products, where separate embedding towers are refined through Graph Attention Network (GATConv) and MLP layers.</figcaption></figure><p>This design allows Faire to capture both direct and indirect affinities across our marketplace, powering more personalized and context-aware recommendations.</p><h3><strong>From ranking to revenue: Faire’s ads engine</strong></h3><p>Discovery drives commerce, and Faire’s <strong>ads platform</strong> connects retailer intent with brand visibility. As <a href="https://www.google.com/url?q=https://www.linkedin.com/in/briandeluna/&amp;sa=D&amp;source=editors&amp;ust=1760575841524524&amp;usg=AOvVaw3vLOsHl7NgtJwftlpDNZHN"><strong>Brian de Luna</strong></a>, Staff Data Scientist &amp; Tech Lead Manager for Ads, explained, promoted listings appear within Faire’s search results and run on a cost-per-click model. Brands bid to reach relevant retailers where they might not appear organically, optimizing toward a “target cost-per-action” (tCPA) that links ad spend directly to conversions.</p><p>Behind the scenes, Faire’s ad system blends <strong>search-style retrieval and ranking</strong> with <strong>auction design and pacing optimization</strong>. 
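</p><p>The economics of a tCPA campaign running on a cost-per-click auction can be sketched with textbook value-based bidding. This is an illustration of the general idea only, not Faire’s actual auction logic, and the function name and numbers are invented.</p>

```python
# Textbook value-based bidding for a tCPA campaign on a CPC auction
# (an illustrative sketch, not Faire's actual auction logic). Bidding
# p(order | click) * tCPA makes the expected cost per order land near
# the target; a pacing multiplier nudges bids as spend runs ahead of
# or behind plan.

def cpc_bid(p_click_to_order: float, tcpa: float, pacing: float = 1.0) -> float:
    """Expected cost per order = bid / p_click_to_order = tcpa * pacing."""
    return p_click_to_order * tcpa * pacing

# A brand targets $20 per order; the model predicts a 2% click-to-order rate.
bid = cpc_bid(p_click_to_order=0.02, tcpa=20.0)
print(round(bid, 2))         # 0.4 -> bid $0.40 per click
print(round(bid / 0.02, 2))  # 20.0 -> expected cost per order hits the target

# Under-delivery: pacing > 1 raises bids to catch up; over-delivery lowers them.
print(round(cpc_bid(0.02, 20.0, pacing=1.25), 2))  # 0.5
```

<p>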
Models like “pCTO” (probability of click-to-order) predict conversion likelihood, while tCPA acts as a control lever to balance efficiency and volume. Budgets are paced through a feedback loop: bids rise or fall as spend and conversion rates shift, ensuring budget utilization at the best possible price for brands.</p><figure><img alt="A slide titled “Using the tCPA lever” shows two charts. The left chart is a line graph labeled “Traffic (index)” over “Day of September,” illustrating monthly traffic fluctuations used to forecast ad spend. The right chart plots multiple colored curves demonstrating a PID-like controller response, showing how Faire adjusts bids dynamically — reducing tCPA for high-converting advertisers and increasing it when pacing lags — to maintain efficient budget utilization." src="https://cdn-images-1.medium.com/max/1024/1*-2ZFidFX0Euq8YDzX2b3OA.png" /><figcaption>Faire’s tCPA (target cost per action) control system, showing how forecasting and feedback loops help optimize ad spend across time and conversion performance.</figcaption></figure><p>Under the hood, Faire’s advertising ecosystem spans three integrated layers that work together.</p><ul><li><strong>Ads Manager</strong>: brand-facing tools to launch, monitor, and configure campaigns.</li><li><strong>Ads Foundation</strong>: shared ML and data infrastructure powering measurement and optimization.</li><li><strong>Ads Delivery</strong>: the real-time serving layer determining which ads appear to which retailers.</li></ul><figure><img alt="A diagram titled “Ads Manager, Ads Foundation, Ads Delivery” visualizes Faire’s ads platform architecture. On the left, Ads Manager (in yellow) represents brand-facing components — Campaign Management, Billing, Analytics &amp; Reporting, and Forecasting &amp; Suggestions — that help brands plan and measure ad spend. 
In the center, Ads Foundation (in blue) connects systems through shared infrastructure for advertiser data, logged data, click and order attribution, budget tracking, metrics, and measuremen" src="https://cdn-images-1.medium.com/max/1024/1*597GvyR3yaeMJDjE_zuFvw.png" /><figcaption>The three integrated layers that work together to power Faire’s Ads ecosystem.</figcaption></figure><p>Only two years in, Faire’s ads program already drives significant material value for brands — and it’s expanding fast. Upcoming priorities, in addition to further innovation on all modeling components, include <strong>new placements like the homepage feed, international launches in Europe and Canada, and “off-site” ads</strong> that use Faire’s first-party conversion data to help brands optimize campaigns across external advertising platforms.</p><h3><strong>The hidden engine: scaling ML infrastructure</strong></h3><p>Behind every discovery model lies the infrastructure that powers it. <a href="https://www.google.com/url?q=https://www.linkedin.com/in/mannkrishnan/&amp;sa=D&amp;source=editors&amp;ust=1760575841530284&amp;usg=AOvVaw176qqoRnj-rHFKOzx3n3vm"><strong>Manoj Krishnan</strong></a>, Sr. Staff ML Platform Engineer, shared how Faire’s AI Platform team builds the foundation that keeps our machine learning systems fast, reliable, and scalable.</p><figure><img alt="A diagram depicts Faire’s ML pipeline linking online and offline systems. Requests flow through a search service into an inference layer containing a DSL component and feature store, integrated with Amazon SageMaker for model training. Offline, Snowflake generates training data through UDFs and synchronizes with Amazon Kinesis logging. The illustration emphasizes the closed loop between real-time inference and offline retraining." 
src="https://cdn-images-1.medium.com/max/1024/1*lPE0gsxVsoeDvxO8k1NBIw.png" /><figcaption>Faire’s unified ML infrastructure connecting Snowflake, SageMaker, and Kinesis — showing how online requests, offline training, and inference remain synchronized through a shared feature store.</figcaption></figure><p>To handle <strong>2.2 trillion rows of data annually</strong>, the team redesigned its pipelines to log only lightweight identifiers and timestamps, <strong>generating features offline </strong>instead of online. This shift cut serving costs dramatically while allowing new features to be added without production overhead.</p><figure><img alt="A slide titled “Offline feature generation” compares two JSON logging payloads. On the left, the “before” example lists multiple product and query feature fields; on the right, the “after” example includes only product and query identifiers with timestamps. A note at the bottom reads, “Use identifiers + timestamps to generate feature values offline and lower kinesis costs.” The image illustrates how Faire simplified online logging to improve scalability." src="https://cdn-images-1.medium.com/max/1024/1*Mcf4vYS8nlT6E-iYjT3Icw.png" /><figcaption>Faire reduced online serving costs by logging only identifiers and timestamps instead of full feature payloads, enabling offline feature reconstruction at scale.</figcaption></figure><p>To maintain feature consistency, Faire repurposed TensorFlow’s static DAG engine for versioning, encoding each feature as an immutable protobuf stored in S3 — creating a single, reliable interface between Python-based offline workflows and Kotlin-based online inference.</p><figure><img alt="A diagram titled “TensorFlow DSL stack” outlines the flow of data between development, offline training, and online inference. Developers commit code to GitHub, which pushes TensorFlow graphs to Amazon S3. 
Offline, Snowflake and Spark generate training data; online, an inference service retrieves TensorFlow graphs and features for model serving. The diagram highlights how TensorFlow’s DAG framework provides cross-environment consistency." src="https://cdn-images-1.medium.com/max/1024/1*E70R0s0XKrK_8feqWBZoZg.png" /><figcaption>A diagram of Faire’s TensorFlow-based feature versioning workflow, showing how static DAGs and protobufs ensure consistent features across online and offline environments.</figcaption></figure><p>Finally, by migrating GPU workloads to Kubernetes with RunAI, the team gained containerization, elasticity, and isolation — eliminating instability from the earlier shared cluster.</p><p>Looking ahead, the platform team is investing in agentic infrastructure, a unified LLM gateway, and deeper GPU-level optimization to accelerate training and inference across Faire’s next generation of AI systems.</p><h3><strong>Fine-tuning LLMs at Faire</strong></h3><p>Large language models (LLMs) unlock new ways to understand product meaning, match intent, and streamline operations at Faire. <a href="https://www.google.com/url?q=https://www.linkedin.com/in/yanwei-wayne-zhang-314597ab/&amp;sa=D&amp;source=editors&amp;ust=1760575841536193&amp;usg=AOvVaw0kJ8wcikg3GROmoZJ4a32N"><strong>Wayne Zhang</strong></a>, Principal Data Scientist, and <a href="https://www.google.com/url?q=https://www.linkedin.com/in/qqhsu/&amp;sa=D&amp;source=editors&amp;ust=1760575841536383&amp;usg=AOvVaw3fiGXL_4Nxz0itbgPQWqY8"><strong>Quentin Hsu</strong></a>, Senior Data Scientist on Search, shared how the team uses fine-tuned domain-specific models to deliver real business impact.</p><figure><img alt="A slide titled “Example Faire tasks” displays a three-column table with Text Input, Image Input, and Label columns. 
The examples include: determining if “Live Moss Terrarium Supply Kit” is relevant to the query “fairy lights for tent” (labeled Irrelevant); identifying a product type for green drawstring pants (labeled Lounge Sweatpants/Joggers — Women’s); and generating a short title for a greeting card image (labeled Heart in Your Hand Love Card). The table demonstrates the types of labeled dat" src="https://cdn-images-1.medium.com/max/1024/1*45ufB5VwRMGN3_brpyc4sw.png" /><figcaption>Example fine-tuning tasks at Faire — illustrating how labeled text and image data train domain-specific LLMs.</figcaption></figure><p>For many domain-specific use cases, zero-shot or few-shot prompting of chat-based LLMs is not enough to achieve usable performance. Fine-tuning is necessary to teach LLMs Faire-specific context before they can be used in our applications.</p><p>At Faire, we built a shared <strong>fine-tuning infrastructure</strong> on top of our dedicated GPU cluster. We created a custom faire_llm package that abstracts different fine-tuning optimizations like LoRA and quantization to enable rapid iteration and reliable production serving.</p><figure><img alt="A slide titled “Faire’s fine-tuning infra” displays a diagram and bullet points describing the setup. On the left, text explains that Faire uses a dedicated GPU cluster for training and batch inference, a custom Python package (faire_llm) to automate training, and a third-party service for real-time inference. On the right, a flow diagram shows an Orchestration Platform managing Training Jobs and Inference Jobs that run on a GPU Cluster composed of multiple nodes.
Arrows between components illustrate how jobs flow between the orchestration platform and the cluster." src="https://cdn-images-1.medium.com/max/1024/1*DyFtJbSRc27oTEoXtcwzsA.png" /><figcaption>Faire’s fine-tuning infrastructure showing how dedicated GPU clusters, orchestration tools, and a custom faire_llm package automate model training and inference.</figcaption></figure><p>We shared two production use cases of these fine-tuned models at Faire:</p><ul><li><strong>Search relevance:</strong> We fine-tuned a <strong>Llama 3 8B “teacher” model</strong> to understand our definition of query-product relevance, classifying each pair as exact, substitute, complement, or irrelevant. To serve in real time within Faire’s search stack, we distilled the relevance model into a lightweight <strong>two-tower “student” model</strong>. Online experiments showed a significant reduction in egregious irrelevance and a <strong>10%</strong> increase in new retailer orders.</li><li><strong>Taxonomy classification:</strong> We trained a <strong>multimodal model</strong> on text and images to assign new products to one of over <strong>3,000 Faire-specific taxonomy categories</strong>. We reached <strong>95% accuracy</strong>, which outperforms human tagging and dramatically cuts down onboarding time and manual corrections.</li></ul><p>Together, these systems bring more semantic understanding and precision to Faire’s discovery experience, helping retailers find the right products faster and brands reach the right customers with ease.</p><h3><strong>Fireside chat with Faire’s Discovery leadership</strong></h3><figure><img alt="A photo of three speakers seated on stage during a fireside chat at Faire’s San Francisco Tech Talk. Two men and one woman are holding microphones and speaking to the audience, with a large projection screen behind them showing a close-up of the discussion. The setting features soft lighting, plants, and beige curtains, creating a warm, conversational atmosphere." 
src="https://cdn-images-1.medium.com/max/1024/1*FmCDabaebeo0XrHyw07z8w.png" /><figcaption>Faire CTO <a href="https://www.google.com/url?q=https://www.linkedin.com/in/thuanqpham/&amp;sa=D&amp;source=editors&amp;ust=1760575841541300&amp;usg=AOvVaw1dXd-YmMHm3BDyA1GIqgrt">Thuan Pham</a> shares insights on discovery challenges at Faire.</figcaption></figure><p><a href="https://www.google.com/url?q=https://www.linkedin.com/in/yvonneluo/&amp;sa=D&amp;source=editors&amp;ust=1760575841541840&amp;usg=AOvVaw3cuUbxvLAMgmGperOeupoJ"><strong>Yvonne Luo</strong></a>, Head of Discovery Engineering, hosted a fireside chat with CTO <a href="https://www.google.com/url?q=https://www.linkedin.com/in/thuanqpham/&amp;sa=D&amp;source=editors&amp;ust=1760575841542002&amp;usg=AOvVaw2031HPHCcYKEtvqRvikKZJ"><strong>Thuan Pham</strong></a> and Head of Discovery ML <a href="https://www.google.com/url?q=https://www.linkedin.com/in/owenwenhao/&amp;sa=D&amp;source=editors&amp;ust=1760575841542087&amp;usg=AOvVaw2su-AIbrc4HkhZ1DAfljdQ"><strong>Wenhao Liu</strong></a>, offering a behind-the-scenes look at how AI is reshaping the way we work at Faire.</p><p>Thuan shared how Faire’s marketplace impact and exceptional people drew him back into leadership, while Wenhao described the company as a place where curiosity meets purpose. Both emphasized that discovery is the “heart of the marketplace,” with AI enabling more intelligent, personalized retail experiences. They discussed challenges like low data density and multi-objective optimization, and how these drive innovation in fine-tuned LLMs, hybrid models, and agentic systems. 
Closing on an optimistic note, Thuan urged teams to embrace the AI wave with adaptability and action — echoed by Wenhao’s reminder that experimentation and learning are the surest paths to progress.</p><h3><strong>Join us to build the future of wholesale!</strong></h3><p>From embeddings to GNNs, from LLM-powered personalization to resilient ML infrastructure, Faire is building the systems that help wholesale retailers and brands grow.</p><p>And we’re just getting started. Interested in joining us? Explore open roles at <a href="https://www.google.com/url?q=https://faire.com/careers/openings&amp;sa=D&amp;source=editors&amp;ust=1760575841543362&amp;usg=AOvVaw0TFa4VZtz33oxPgsRVvRcG">faire.com/careers</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=66dfd6210698" width="1" height="1" alt=""><hr><p><a href="https://craft.faire.com/discovery-intelligence-at-faire-highlights-from-our-san-francisco-tech-talk-66dfd6210698">Discovery intelligence at Faire: highlights from our San Francisco Tech Talk</a> was originally published in <a href="https://craft.faire.com">The Craft</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Faire is now ISO 27001 certified]]></title>
            <link>https://craft.faire.com/faire-is-now-iso-27001-certified-d3474d4167fe?source=rss----4af981bb79f--engineering</link>
            <guid isPermaLink="false">https://medium.com/p/d3474d4167fe</guid>
            <category><![CDATA[iso-27001]]></category>
            <category><![CDATA[it]]></category>
            <category><![CDATA[compliance]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[engineering]]></category>
            <dc:creator><![CDATA[Waylon Janowiak]]></dc:creator>
            <pubDate>Thu, 02 Oct 2025 16:27:50 GMT</pubDate>
            <atom:updated>2025-10-02T16:27:45.612Z</atom:updated>
<content:encoded><![CDATA[<h4>Protecting customer data is one of our highest priorities</h4><figure><img alt="Illustration of random objects showcasing ISO 27001 certification" src="https://cdn-images-1.medium.com/max/838/1*PC27wy2OXrPEzz80siOAWA.png" /></figure><p>Protecting customer data is one of our highest priorities. Today, we’re proud to share a milestone in that journey: <a href="https://www.faire.com/">Faire</a> is now <a href="https://www.iso.org/standard/27001">ISO 27001</a> certified.</p><h3>What is ISO 27001?</h3><p>ISO 27001 is the globally recognized standard for building and governing an information security management system (ISMS). It promotes a holistic, risk-based approach to keeping information secure, considering people, policies, and technology. In short, ISO 27001 is about how a company designs, operates, and continuously improves the system that protects data. For customers, this means greater assurance that their data is protected by a globally recognized framework.</p><p>Our certification was issued by an independent, accredited auditor after a rigorous review of our security management system and supporting controls. 
The result gives customers a clear and comparable bar for security maturity across vendors.</p><h4>Why this matters for our customers</h4><ul><li>Greater assurance that Faire’s security controls are designed and governed according to an international standard for information security management.</li><li>Our program is built on risk management across people, process, and technology, not only on tools.</li><li>We are committed to transparency and third-party validation, a practice followed by many leaders in our industry when they announce similar milestones.</li><li>This certification isn’t just about checking boxes; it helps ensure safer transactions, trust, and reliability for Faire customers.</li></ul><h3>How Faire achieved certification</h3><p>Over the past year, we’ve built, strengthened, and audited the key building blocks of our ISMS. This included asset and access management, secure software development practices, vulnerability management, vendor risk management, incident response, and business continuity. These controls are documented, measured, and improved on an ongoing basis so that our security posture keeps getting stronger.</p><h3>Our commitment going forward</h3><p>We will continue to invest in our security program and to evolve our controls as standards and customer expectations change. ISO 27001 emphasizes continual improvement, and we take that obligation seriously. 
This continued investment means a safer, more reliable Faire experience for brands and retailers, and greater confidence that everyone’s data is protected at every step.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d3474d4167fe" width="1" height="1" alt=""><hr><p><a href="https://craft.faire.com/faire-is-now-iso-27001-certified-d3474d4167fe">Faire is now ISO 27001 certified</a> was originally published in <a href="https://craft.faire.com">The Craft</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Transforming wholesale with AI: the sequel (now with more agents)]]></title>
            <link>https://craft.faire.com/transforming-wholesale-with-ai-the-sequel-now-with-more-agents-9542f257dd45?source=rss----4af981bb79f--engineering</link>
            <guid isPermaLink="false">https://medium.com/p/9542f257dd45</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[events]]></category>
            <dc:creator><![CDATA[Dave Collie]]></dc:creator>
            <pubDate>Mon, 29 Sep 2025 16:21:02 GMT</pubDate>
            <atom:updated>2025-09-29T16:20:53.269Z</atom:updated>
            <content:encoded><![CDATA[<h4>How we’re advancing search and discovery for our customers, and more</h4><figure><img alt="A large, diverse group sits at long tables in a bright office cafeteria, focused on an off‑camera speaker; laptops, drinks, and notebooks are on the tables." src="https://cdn-images-1.medium.com/max/1024/1*LdEBGEH5WbCzsoAfcOytBw.png" /></figure><p>On September 9, 2025, we opened our doors at Faire Waterloo for a two-part evening during <a href="https://waterlootechweek.ca/">Waterloo Tech Week</a>: a fireside chat with two of our senior engineering leaders, followed by a series of lightning tech talks from the teams building AI into our marketplace.</p><p>You can watch a <a href="https://www.youtube.com/watch?v=QOMYC425_bM">recording</a>, and here are the highlights!</p><h3>AI north star: solving real customer challenges</h3><p><strong>Faire’s mission is to empower brands and retailers to strengthen the unique character of local communities.</strong> That mission comes alive when we solve real, tractable problems our customers face. AI is one tool in our toolkit. 
Some key ways it can help are:</p><ul><li><strong>Helping retailers find unique products:</strong> Buyers want to discover differentiated brands that fit their shop’s vibe, without needing to guess the right keywords.</li><li><strong>Providing personalized recommendations:</strong> Retailers expect curated assortments and insights tailored to their business.</li><li><strong>Delivering more value to all of our customers:</strong> Wholesale often depends on manual workflows and multiple tools — there’s an opportunity to simplify and create a smoother experience.</li></ul><p>These needs shape our roadmap and where we apply AI — both externally for our customers, and internally to improve the systems and processes that ultimately serve them.</p><h3>Supercharging developer productivity with AI tooling</h3><p>If you’ve read our <a href="https://www.faire.com/news/2025-07-02-Toronto-Tech-Week-Recap">Toronto Tech Week recap</a>, you know we believe <strong>AI augments engineers</strong>. In Waterloo, our speakers dug into what that looks like in practice:</p><h4>Agentic development</h4><p><a href="https://www.linkedin.com/in/luke-bjerring/"><strong>Luke Bjerring</strong></a> shared how background coding agents take on well-scoped tasks (creating pull requests, writing tests, refactoring), so engineers can focus on harder problems. We walked through how we <strong>invoke GitHub Copilot programmatically</strong>, tailor instructions and setup, and wire MCP servers into internal systems.</p><figure><img alt="Diagram titled “Example: setting cleanup.” A Slack channel prompts a Setting Cleanup Assistant agent, which coordinates with an Expired Setting Finder agent and a Readiness agent. The flow creates GitHub issues to remove usages and delete obsolete settings in frontend and backend code, and sends notifications back to Slack." 
src="https://cdn-images-1.medium.com/max/1024/1*KGWkMWNP2PmY0JirJj0uzQ.png" /><figcaption>Automated “setting cleanup” workflow at Faire: a Slack bot triggers agents to find expired flags, verify readiness, and open GitHub issues to remove dead code, with notifications along the way.</figcaption></figure><h4>Project Cyberpunk</h4><p><a href="https://www.linkedin.com/in/georgepjacob/"><strong>George Jacob</strong></a> presented our <strong>in‑house agentic development stack</strong>, aka Project Cyberpunk: pre‑warmed workspaces, a lightweight orchestrator that breaks work into reliable prompts, and a library of focused sub‑agents (e.g., settings cleanup, test authoring). Early usage patterns show steady adoption and meaningful offloading of repetitive work.</p><figure><img alt="Diagram of Project Cyberpunk: ai-executor orchestrates code-quality agents with Redis and an external service, then outputs results." src="https://cdn-images-1.medium.com/max/1024/1*60fRDwcTIY6sgJe18SHezQ.png" /><figcaption>Project Cyberpunk agentic workspace: a core faire/ai-executor coordinates specialized code-quality agents, integrates with external services, uses Redis-backed containers for state, and emits validated outputs.</figcaption></figure><h4>Practical tips for mobile and more</h4><p><a href="https://www.linkedin.com/in/megan-bosiljevac-15a725183/"><strong>Megan Bosiljevac</strong></a> presented real cases, from an Android UI color accessibility fix to scaffolding an 800‑line Server‑Driven UI screen in hours instead of days, along with a reminder that <em>prompting is a skill</em>, experimentation compounds, and humans remain owners of the code.</p><figure><img alt="Composite slide showing three Android screen mockups on the left (searchable home, a candies category, and a code editor with instructions to create a SeeMore module), an arrow with a cube icon in the center, and on the right the finished “Recently viewed” page with a 2xN product grid displaying thumbnails, badges like 
“Bestseller,” CAD prices, ratings, shipping notes, and add buttons." src="https://cdn-images-1.medium.com/max/1024/1*_zkyHmh2fsPDsDr_54u57Q.png" /><figcaption>From Home and Category screens to a “Recently viewed” See More page — spec and agent prompt generate a new Android module and the resulting grid view.</figcaption></figure><h3>Transforming discovery for retailers</h3><p>There are several ways we’re applying AI to transform discovery for retailers, including:</p><p><strong>Natural‑language search: </strong><a href="https://www.linkedin.com/in/tommammc/"><strong>Tom Ma</strong></a> explained that buyers should be able to ask for what they want on our platform, similar to how they would speak to a rep: “<em>Find me dresses made in Paris, under $100, and not sold on Amazon.</em>” Our system parses the phrase into structured filters under the hood and returns relevant products without the filter‑guessing game.</p><p><strong>Image search:</strong> Inspiration often starts with a photo. Upload an image and see visually similar products on Faire, turning a vibe into a shoppable result, especially on mobile.</p><p>Together, these capabilities reduce the distance between intent and discovery for retailers, saving them time and helping them stock their stores.</p><figure><img alt="Hand‑drawn system diagram showing an image of the character Snorlax sent to an AI model to extract the search term “Snorlax.” The client calls SearchProducts with that query to a Search Service, which in turn issues a request to a Text Search component and receives a list of products, then returns the search response back to the client. Labels indicate “AI to analyze the image → search query” and “Use AI processed search request.”" src="https://cdn-images-1.medium.com/max/1024/1*WjgmeHiHWnrJrqEHaBEMIw.png" /></figure><figure><img alt="An animated example of providing a reference “Snorlax” image to search for related products using AI-powered image-to-text search flow." 
src="https://cdn-images-1.medium.com/max/884/1*GgO9OTY2UpSfNJKKHzu9EA.gif" /><figcaption><strong>Left:</strong> Image-to-text product search flow using AI. A client sends an image of Snorlax, AI turns it into the query “Snorlax,” the search service forwards that query to text search, and returns a product list to the client. <strong>Right:</strong> An animated example of the same flow, where a reference Snorlax image is used to find related products on Faire.</figcaption></figure><h3>Service architecture evolution to support AI agents</h3><p><a href="https://www.linkedin.com/in/luan-nico/"><strong>Luan Nico</strong></a> shared <strong>Project Burrito</strong>: a hybrid architecture that pairs <strong>Python‑powered AI agents</strong> with our <strong>Kotlin macroservice “wrapper”</strong> to plan multi‑step buying flows for account managers (and many other agentic AI flows to serve our brands and retailers in the future). Instead of a one‑off search, an agent composes a coherent, purchasable assortment that fits a retailer’s profile, then iterates.</p><figure><img alt="Diagram showing FE → nginx routing to monolith and services A, B, C, plus a “burrito service” with its own DB and Python sidecar connected into the service mesh." src="https://cdn-images-1.medium.com/max/1024/1*ccEJyKKxVLxeSvCOvlH8kA.png" /><figcaption>Project Burrito hybrid architecture integrating a Kotlin service and Python sidecar into the existing services via nginx.</figcaption></figure><h3>Fireside chat with Marcelo Cortes and Yvonne Luo</h3><figure><img alt="Three people sit on stage in armchairs holding microphones during a fireside chat. A projector screen behind them shows a slide titled “Fireside chat” with headshots and names of the speakers. The room has large windows and plants to the right." 
src="https://cdn-images-1.medium.com/max/1024/1*oknsHcFOrB6YSIH0ej3Ejw.jpeg" /><figcaption>Fireside chat at Faire’s Waterloo office with Yvonne Luo (Head of Engineering, Discovery), Dave Collie (Director of Engineering, Value) and Marcelo Cortes (Co-founder &amp; Chief Architect) discussing AI, discovery, and engineering culture.</figcaption></figure><p>Similar to our Toronto Tech Week event, we ended with a fireside chat. This time, Co‑Founder &amp; Chief Architect <a href="https://www.linkedin.com/in/marcelo-cortes-b34317/">Marcelo Cortes</a> was joined by Head of Engineering, Discovery, <a href="https://www.linkedin.com/in/yvonneluo/">Yvonne Luo</a>. The conversation took the following shape:</p><ul><li><strong>AI as a new abstraction layer:</strong> Like cloud and mobile before it, AI changes what it means to be a product or infra team: it’s about interfaces that reason, not just route requests.</li><li><strong>Humans‑in‑the‑loop by design:</strong> Guardrails, observability, and explicit failure modes matter, especially when agents touch customer‑facing systems.</li><li><strong>Audience Q&amp;A:</strong> We took questions from the crowd about Faire’s security stance, growth strategy, and culture.</li></ul><h3>Thank you, Waterloo 💙</h3><p>We loved hosting builders, students, and community partners at our Waterloo office. A huge thanks to everyone who helped make the evening happen, and to our guests for the thoughtful questions during Q&amp;A. If you didn’t catch us this time, we hope to see you at a <a href="https://faire.tech/events">future event</a> at our Waterloo, Toronto, or San Francisco office!</p><figure><img alt="Exterior entrance of a modern office with a black, white, and gold balloon arch framing glass doors. An A‑frame sign reads “FAIRE Tech Talks {KW} Welcome.” A person stands by the open door, smiling and gesturing inside. Stone tile floor with large inlaid numbers visible in the foreground." 
src="https://cdn-images-1.medium.com/max/1024/1*ChkOPmhpltrQf9Vg99pn0w.png" /><figcaption>Kitchener‑Waterloo Tech Talks entrance at Faire — thank you for joining us!</figcaption></figure><p><strong>Want to build with us?</strong> Explore roles at <a href="http://faire.com/careers">faire.com/careers</a> and follow along on <a href="https://craft.faire.com/"><strong>The Craft</strong></a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9542f257dd45" width="1" height="1" alt=""><hr><p><a href="https://craft.faire.com/transforming-wholesale-with-ai-the-sequel-now-with-more-agents-9542f257dd45">Transforming wholesale with AI: the sequel (now with more agents)</a> was originally published in <a href="https://craft.faire.com">The Craft</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>