<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Phlo Engineering on Medium]]></title>
        <description><![CDATA[Stories by Phlo Engineering on Medium]]></description>
        <link>https://medium.com/@phloengineering?source=rss-96f09b902122------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*ZlLHYMHAa596gn3ZJGXvgw.png</url>
            <title>Stories by Phlo Engineering on Medium</title>
            <link>https://medium.com/@phloengineering?source=rss-96f09b902122------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 10:18:15 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@phloengineering/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Tech Talks: Using Nx to improve build times]]></title>
            <link>https://medium.com/@phloengineering/tech-talks-using-nx-to-improve-build-times-ffb89cc5e97c?source=rss-96f09b902122------2</link>
            <guid isPermaLink="false">https://medium.com/p/ffb89cc5e97c</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[continuous-integration]]></category>
            <category><![CDATA[agile]]></category>
            <category><![CDATA[startup]]></category>
            <dc:creator><![CDATA[Phlo Engineering]]></dc:creator>
            <pubDate>Tue, 09 Aug 2022 09:03:18 GMT</pubDate>
            <atom:updated>2022-08-09T09:03:18.412Z</atom:updated>
<content:encoded><![CDATA[<p><em>with our Principal Engineer </em><a href="https://www.linkedin.com/in/jamie-macdonald-23729210/"><em>Jamie MacDonald</em></a><em>.</em></p><p><em>As part of our regular Tech Talks series, Jamie describes utilising </em><a href="https://nx.dev/"><em>Nx</em></a><em> to improve our CI build and testing times.</em></p><h4>Rebase, wait, repeat</h4><p>As the engineering team at <a href="https://www.linkedin.com/company/wearephlo/">Phlo</a> expands, so too does the number of <a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests">Pull Requests</a> (PRs) we open. We love using PRs to allow our engineers an opportunity to reach out for feedback, for testing on a <a href="https://wearephlo.com/post/pull-request-environments">deployed environment</a>, or even for testing out our latest <a href="https://wearephlo.com/post/pull-request-android-apks">React Native changes</a>. An increasing number of PRs, coupled with a growing platform, has led to an increase in the amount of work our Continuous Integration (CI) pipelines have to complete. With builds regularly taking 15 minutes to complete, this can lead to a frustrating experience for developers, who are keen to get their changes merged in and to move on to the next task.</p><p>We wanted to improve our build and testing time without having to completely change the structure of our repository. All of our code lives within a monorepo, so we identified tools designed to support improvements in monorepo build performance. <a href="https://nx.dev/">Nx</a> and <a href="https://turborepo.org/">Turborepo</a> are two popular tools which offer a similar set of features. A quick assessment told us Nx was the safer bet to move forward with, as a more established tool with a strong ecosystem around it.</p><h4>What is Nx?</h4><p>Nx is a monorepo build tool that helps reduce build and test times by only building and testing code that has been affected by your current changes.</p><p>Nx achieves this by building a dependency tree of your monorepo and understanding how all of your packages relate to each other. Once Nx understands the dependencies throughout your monorepo, it can work out which packages a change to a single line of code could potentially impact. This means that instead of requiring a full rebuild on each change, it can rebuild only the parts that have been affected by the change.</p><p>To support its build and test strategy Nx uses computation caching. The caching works by storing each of the build and test output files within a cache folder. Nx can then use these cached artefacts to mimic the output of the build and test steps which don’t need to be rerun.</p><p>Nx takes caching a step further by allowing you to host a shared cache in the cloud (distributed caching). This means that when code has been built on one person’s machine it no longer needs to be built on another’s — assuming the same version of the code and the same machine configuration. This allows you to realise huge time savings on your CI server and among your team.</p><h4>Initial Configuration</h4><p>The issue that we wanted to focus on initially was bringing down our build time. We wanted this to be something we could get the benefit of both locally on each engineer’s machine and in our CI Pipelines.
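In practice, this boils down to asking Nx to build and test only what a change touches, which looks roughly like the commands below (a sketch of standard Nx CLI usage rather than our exact pipeline step; the base branch name is illustrative).</p><pre># build and test only the projects affected by the current changes
npx nx affected --target=build --base=origin/main --head=HEAD
npx nx affected --target=test --base=origin/main --head=HEAD</pre><p>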
We started by running the handy Nx command for <a href="https://nx.dev/getting-started/nx-setup#add-nx-to-an-existing-project">adding to an existing monorepo</a>. This involved simply running the command npx cra-to-nx, which added all the necessary dependencies to our package.json file. This command also created an nx.json file in the project root, which is used for declaring some configuration for Nx. See the screenshot below for a snippet of our config.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/740/1*ar1HWqbdpZuN55U-9k9fdQ.png" /><figcaption>Our initial Nx configuration used for caching our build tasks</figcaption></figure><p>Once we had the configuration file set up, we just needed to ensure each of the packages within our monorepo was configured correctly and could be picked up by Nx. To do this we had to add a project.json file to each of our packages. An example project.json file can be seen below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/576/1*ursVpDl9p34wZqGCtpB2jw.png" /><figcaption>An example project.json file, defining what build script to run and its output location</figcaption></figure><p>With a project.json file in place for each of our packages, that was our initial config complete.</p><h4>Improving build time</h4><p>With our config out of the way, our next step was running a build utilising Nx. As we wanted to run the build command for all our packages, we made use of the <a href="https://nx.dev/cli/run-many">nx run-many command</a>. For us, the command looked like yarn nx run-many --target=build --all. The initial run was used to build the cache, which took about the same amount of time as our existing build step. In the second build, however, the build step was almost immediate. Nx had detected no changes so simply used the cached build already present.</p><p>We experimented with this configuration to ensure everything was working correctly. We made a change in one of our packages with no dependents to ensure that Nx would only rebuild that single package. This was successful and meant that if we were making isolated changes in one package, it wouldn’t build the rest of our repo unnecessarily. Our build time locally went from 4 minutes to effectively instantaneous.</p><p>Our final test was ensuring that when the output folders were deleted, Nx would use the cached build files and generate the output folders from the cache. We noticed that while Nx was reporting a successful read from the cache, only the folders were created, and not their contents. After a bit of digging, we realised that the issue was simply a misconfigured output path in our project.json files. After correcting these we re-ran our tests and confirmed that we could entirely populate the build folders from this cache. This was ideal as it meant it would work both locally and on the VMs we use for our CI pipelines.</p><h4>Test time improvements</h4><p>With our build times now improved by caching, we wanted to extend this to also cut down the time to run our test suite. This was achieved in a similar way to our build time improvements. First, we had to ensure that each of the packages had an entry in their package.json file detailing what command to run for their tests. Once this was in place we just needed to extend the project.json files we created before to include an entry for the tests.
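As a rough text equivalent of the kind of file shown in the screenshots (the executor, script names and output paths here are illustrative, not our exact config), a project.json with cacheable build and test targets might look something like this:</p><pre>{
  "targets": {
    "build": {
      "executor": "nx:run-commands",
      "options": { "command": "yarn workspace patient-api build" },
      "outputs": ["packages/patient-api/dist"]
    },
    "test": {
      "executor": "nx:run-commands",
      "options": { "command": "yarn workspace patient-api test" },
      "outputs": ["packages/patient-api/test-results"]
    }
  }
}</pre><p>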
You can see an example below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/746/1*86xeOjOz2_C_RAj29oqrXw.png" /><figcaption>An example project.json file with an additional target added for test</figcaption></figure><p>After adding the test target to our project.json files, we wanted to be sure our outputs were pointing at the test results files so they could be picked up by the cache. We tried running a few tests and again saw the expected time improvements. After a successful run to build the cache, the next time we ran the tests they were completed almost instantaneously. We again tried to completely clear the outputs by deleting the test results files; when we ran the tests again, the results were successfully picked up from the cache and copied over. This meant we were finally ready to test things out in our CI pipelines.</p><h4>Improving CI Pipeline time</h4><p>Now that we had our build and test runs utilising the cache locally, we wanted to use this on our pipelines. The pipelines we use are Microsoft Hosted pipelines on Azure DevOps. This means that we can’t have a persisted cache between runs as each time the pipelines are run, a clean VM is used. Nx’s distributed caching was the answer to this problem.</p><p>As we already use Google Cloud Platform for hosting our infrastructure and also use Google Cloud Storage (GCS), we decided to try to utilise a plugin for Nx which allows you to use a GCS bucket to host your cache. The package we ended up using is <a href="https://www.npmjs.com/package/@mansagroup/nx-gcs-remote-cache">nx-gcs-remote-cache</a>, which allows us to simply point our Nx config at a custom runner, as you can see in the screenshot below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/738/1*m4FCvRLqmcT7PLE5_ZwYJA.png" /><figcaption>Using nx-gcs-remote-cache as our runner for Nx</figcaption></figure><p>The final steps were authenticating with GCS in our pipelines. We used a <a href="https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating">service account file</a> to accomplish this. We then specified which bucket we wanted to use for our cache, which is as simple as setting the environment variable NX_REMOTE_CACHE_BUCKET.</p><h4>Success</h4><p>With everything in place, we now wanted to test our changes in the CI Pipelines. We created a PR with the Nx changes and let the initial run complete so we could build the cache. As expected, this took about our normal ~15 minutes. We then kicked off the build and test phases again. With no changes and Nx fully utilising the cache, we found that our CI runs were now &lt;5 minutes. This is a huge time saving for us, meaning that instead of our previous process, where one PR could be built and merged within 15 minutes, we could now build and merge 3 PRs in the same time. This means that when engineers are putting up PRs they know that their PR will be merged in much sooner and they can instead focus on picking up the next exciting piece of work.</p><h4>Next Steps</h4><p>With our build and test times cut down, our next steps are trying to remove any further bottlenecks from CI pipelines.
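As a footnote for anyone wanting to try the same setup, the distributed caching described above amounts to a small tasks runner block in nx.json, something like the sketch below (the bucket itself is supplied through the NX_REMOTE_CACHE_BUCKET environment variable; treat the exact fields as illustrative rather than a copy of our file).</p><pre>{
  "tasksRunnerOptions": {
    "default": {
      "runner": "@mansagroup/nx-gcs-remote-cache",
      "options": {
        "cacheableOperations": ["build", "test"]
      }
    }
  }
}</pre><p>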
One of the most time-consuming phases is building and uploading our Docker images; a potential solution to this could be moving to <a href="https://github.com/GoogleContainerTools/distroless">distroless</a> and seeing if that cuts down on time.</p><p>Have you tried moving to distroless or found other ways of improving your CI pipeline times?</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ffb89cc5e97c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From Zero to Modern Data Stack]]></title>
            <link>https://medium.com/@phloengineering/from-zero-to-modern-data-stack-63a4964ca0dd?source=rss-96f09b902122------2</link>
            <guid isPermaLink="false">https://medium.com/p/63a4964ca0dd</guid>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[data-engineering]]></category>
            <category><![CDATA[startup]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Phlo Engineering]]></dc:creator>
            <pubDate>Thu, 16 Jun 2022 10:03:13 GMT</pubDate>
            <atom:updated>2022-06-16T10:03:13.309Z</atom:updated>
            <content:encoded><![CDATA[<p><em>with our Data Scientist </em><a href="https://www.linkedin.com/in/ignaciovaldelviraisla/"><em>Nacho Valdelvira</em></a><em>.</em></p><p><em>Nacho describes the evolution of Phlo’s data platform, from an early hand-rolled v1 to a scalable Modern Data Stack.</em></p><p>As a digital service, data is at the centre of our operations. We collect a range of data every day and this gives us opportunities to do something useful with it. We can understand our patients profiles &amp; behaviours more deeply, whilst also using it to predict stock demand and patient churn.</p><p>To take action on these opportunities we need a solid Data Architecture that will allow us to easily access any data generated by the business. Our Data Architecture should consist of three (apparently) simple requirements — data should be easy to query, reliable and easy to discover.</p><p>Let’s define what this means in practice:</p><ul><li><strong>Data is easy to query</strong> → We need a single source of truth, a place where data from disparate sources converges, so we know that we’ll find everything we need in a single place.</li><li><strong>Data is reliable</strong> → We need data testing to ensure we are using reliable data in our analysis and predictions.</li><li><strong>Data is easy to discover </strong>→ There should be no ambiguity. All tables should be self-explanatory. When this isn’t possible, we need a Data Catalog — an index page where all data sources and metrics are documented and properly defined. We need a translation from technical to business definitions to reduce the number of blockers when someone unfamiliar with the data sources wants to dig into the data. Additionally, it should be easy to interpret insights and create stories based on them, so we need a data visualisation tool.</li></ul><p>We have now defined our requirements, so which tools can meet our needs?</p><p><strong>The Modern Data Stack</strong></p><p>In the last few years, many new tools and technologies have emerged into the data ecosystem, allowing us to retrieve and extract value from our data more easily than ever before. As a result, a new stack of tools has emerged, the <strong>Modern Data Stack (MDS).</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EdRUbVqN8FIy-HTIXGTvpQ.png" /><figcaption>Our Modern Data Stack</figcaption></figure><p>The MDS (as defined by <a href="https://www.fivetran.com/blog/what-is-the-modern-data-stack">Fivetran</a>) is a suite of tools used for data ingestion. In order of how the data flows, these tools include:</p><ul><li>A fully managed ELT (Extract-Load-Transform) data pipeline</li><li>A cloud-based columnar warehouse or data lake as a destination</li><li>A data transformation tool</li><li>A business intelligence or data visualisation platform</li></ul><p>The MDS is hosted in the cloud and requires little technical configuration by the user. These characteristics promote end-user accessibility as well as scalability to quickly meet growing data needs without the costly and lengthy downtime associated with scaling local server instances.</p><p>Having done our research, we knew that the MDS was what we required. However, implementing the MDS takes time and resources, and the data tasks backlog doesn’t wait.</p><p><strong>Data Architecture v1</strong></p><p>We first started implementing a simplified v1 Data Architecture to keep the business moving. 
This consisted of:</p><ul><li>Version-controlled SQL queries running on a <a href="https://cloud.google.com/scheduler/docs/creating">GCP cronjob </a>to extract only the data we wanted without interfering with production operations.</li><li>Extracts which were enriched with <a href="https://cloud.google.com/bigquery/docs/scheduling-queries">Scheduled Queries</a> to get them ready for analysis and visualisation with Google Data Studio.</li><li>A data lake mounted using gcsfuse as a Linux file system on the same server that ran our SQL queries, so we could automatically extract data into our data lake each day without any fear that we’d exceed storage limits.</li><li>A consistently-structured “latest” daily dataset which we could then materialise automatically as tables in BigQuery to support operational reporting.</li></ul><p><strong>That was our first architecture, why did we do it like that?</strong></p><p>It allowed us to provide value quickly while buying us time to investigate the MDS. Once v1 was operational, we started looking for our new sets of tools and iterating over the Data Architecture, keeping a few principles in mind:</p><ul><li>The main purpose of the data stack is to provide business value. We’ll implement the tools we need to solve our problems and refine our tooling as new requirements emerge. While our v1 cronjob architecture was good enough, it wasn’t scalable and lacked most of the requirements that we defined at the start of this article.</li><li>We’ll use open source solutions and not get tied to any proprietary programming language.</li><li>We know that tools come and go, so we should be tool agnostic and make it easy to couple/decouple different parts of our architecture.</li><li>Keep it simple.</li></ul><p><strong>Our tool research journey</strong></p><p>Starting our tool research journey, a key priority was finding a Data Warehouse solution to act as our single source of truth. A place to bring together data from disparate sources.</p><p>We chose BigQuery as our Data Warehouse.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/1*SnDm5kb1zehu8OA5zyq6mw.png" /></figure><p>There are good alternatives like Snowflake, AWS Redshift or Azure SQL Data Warehouse + Synapse Analytics, but as an existing Google Cloud Platform user, it felt sensible to remain in the same context. Our experience with BigQuery is that it’s an easy-to-use and cost-effective tool, which is something important to consider in a start-up environment.</p><p><strong>Next up — Airbyte</strong></p><p>The Data Warehouse is our recipient, but we required a tool to ingest data from different sources into this recipient, and <a href="https://docs.airbyte.com/">Airbyte</a> offers what we need.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*G2pO_f-iT1hl0FvvATqn3w.png" /></figure><p>No-code tools are on the rise. In the end, the MDS is about empowering anyone to create a data stack without requiring a team of data engineers. Airbyte is on a mission to make data integration pipelines a commodity, connecting with most of the sources we required (Facebook Ads, Intercom, Google Ads, Google Analytics and more) with the click of a button. Airbyte continues to move quickly and is planning to add many more connectors in the future.</p><p>Additionally, Airbyte also offers Incremental Append functionality. As table size increases, we won’t be able to dump whole database tables into BigQuery every night without affecting our database’s performance. 
We want to add to the existing table only what has changed. We also want to refresh the data more frequently, so <a href="https://docs.airbyte.com/understanding-airbyte/cdc/">CDC</a> was a natural next step.</p><p>We trialled Fivetran and Matillion, though Airbyte stood out as it gave us everything we required.</p><p><strong>Next steps — data reliability and interpretation</strong></p><p>Great, we already have three entities in our architecture: Sources + Destination + tool that moves data from Sources to Destination. We now have all the raw data in our Data Warehouse, so with some knowledge of SQL we can join, transform and clean it to get whatever we need. This is a great first iteration and a huge improvement from the cronjob architecture. However, we are still lacking a data quality and discovery tool, so that’s our next step.</p><p>We need to know that our data is reliable and it has passed quality tests. We also want to be able to discover data and understand what ambiguous columns or metrics mean. And most importantly, we want to transform data and version control it.</p><p><a href="https://docs.getdbt.com/docs/introduction">DBT</a> can help with all of that!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/1*kiCK99RH-940QTxGRnQkRA.png" /></figure><p>It’s difficult to find an alternative tool that offers DBT’s capabilities in the current data ecosystem. DBT has been the dominant solution in data architecture and has completely revolutionised it.</p><p>DBT offers:</p><ul><li>Ability to version control data transformations (<a href="https://roundup.getdbt.com/p/data-modeling-for-collaboration?s=r#:~:text=Data%20modeling%20in,their%20analytics%20code.">have branches, commits and syntax</a>) and to create dependencies between tables. Let’s say that I want to show in a dashboard the weekly orders from patients with diabetes. We will need to join different tables to produce that — retrieving data from at least the orders, patients and medication tables. With DBT we will create this new table that will have a dependency with the other three, and we’ll be able to see all dependencies in a diagram. This way, if generating the table fails we can easily spot the root cause of the failure.</li><li><a href="https://docs.getdbt.com/docs/building-a-dbt-project/tests">Data testing</a> — For example, we can test if my orders table has no null ID fields every time I run the table. We can do more complex tests like checking that the number of rows of two tables is the same. This is ideal to deal with the data reliability requirement mentioned previously.</li><li><a href="https://docs.getdbt.com/docs/building-a-dbt-project/documentation">Data catalog</a> — We can document each column and table, all version-controlled, and generate a documentation website with a search engine to find what we’re looking for. How would I know what <em>ampp_id</em> means if it wasn’t for the data catalog? We always try to make names self-explanatory but sometimes that’s not possible!</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Ex3twF1UznuPyH0_GrKBzA.png" /><figcaption>Auto-generated dbt documentation website</figcaption></figure><p><strong>Syncing everything together</strong></p><p>We have a few pieces of the puzzle now — Sources + Destination + a tool that moves data from Sources to Destination + a tool that does data testing, transforms it and documents it. 
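To make the DBT piece a little more concrete, here is a heavily simplified sketch of a model and the tests and documentation that sit alongside it (the table and column names are illustrative, not our real schema):</p><pre>-- models/weekly_diabetes_orders.sql (a sketch of a model built from other models via ref)
select
  date_trunc(o.ordered_at, week) as order_week,
  count(*) as orders
from {{ ref('orders') }} o
join {{ ref('patients') }} p on p.patient_id = o.patient_id
join {{ ref('medication') }} m on m.medication_id = o.medication_id
where m.treats_diabetes
group by order_week

# models/schema.yml (tests and documentation, run with dbt test and surfaced by dbt docs)
version: 2
models:
  - name: orders
    description: "One row per patient order"
    columns:
      - name: order_id
        tests: [not_null, unique]
      - name: patient_id
        tests:
          - relationships:
              to: ref('patients')
              field: patient_id</pre><p>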
Now we just need to orchestrate all these tools.<strong> </strong>Ideally, we want to run transformations just after data is ingested, which would be more efficient than doing both actions independently.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/453/1*DLk22YT3mD7Z7q81YTVTag.png" /></figure><p>We will use <a href="https://airflow.apache.org/">Airflow</a> for this.</p><p>Airflow is a popular orchestrating tool used to automatically organise, execute and monitor data flows. Airflow allows us to create the basic dependency between Airbyte and DBT that we required.</p><p><a href="http://astronomer.io/">Astronomer.io</a> do a supported version of Airflow, while Dagster, Prefect and Luigi are also in the same space. We went with Airflow (or rather, Airphlo 🙂 ) as it had predictable pricing — the cost of the virtual machine each month.</p><p>Moving forward we recognise our need to run custom Airflow containers may mean turning to other solutions as part of our overall data architecture.</p><p><strong>Looking at the data in more detail</strong></p><p>Last but not least, we need to visualise the data.</p><p>We’re using two tools to meet this need — <a href="https://datastudio.google.com/">Google Data Studio</a> and <a href="https://www.metabase.com/">Metabase</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/410/1*tgV_GiOQTUHtiuEUVYTq9g.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/1*Ct3pQLoj5C66rxxb3K6BtQ.png" /></figure><p>Google Data Studio was an easy first choice as we use the Google Stack, so it’s easy to integrate and it’s also free. It offers enough functionality for our use cases, which is to build KPI (Key Performance Indicator) and metric tracking dashboards.</p><p>We’re also experimenting with Metabase with the idea of enabling self-service analytics. Metabase is a great tool for this purpose and also has a better UI (User Interface) than Google Data Studio.</p><p>Having a self-service tool allows anyone in the business to be an analyst and solve data questions without needing to create SQL queries or rely on the data team. This looks easy on paper but it requires a lot of background work to implement. DBT Data Catalog and a two-way movement of metadata (from DBT to dashboards and vice versa) helps with data discovery, but the complex task here is to translate our technical model into a rich domain model anyone in the business can understand.</p><p>Another approach is to accept that it’s difficult to solve every data question without knowing SQL, so we build dashboards that answer our common data questions. For more complex queries, we’ll rely on analysts. For example, specific questions like “How many patients had medication A combined with medication B in the last two months?” are difficult to solve without SQL knowledge, so the approach would be to assign this task to a “domain expert” analyst.</p><p>There is lots of thinking to do here so we don’t have a definitive answer on how to approach this!</p><p><strong>We’re almost done! 
(for now…)</strong></p><p>So, in summary, we have:</p><ul><li>A data warehouse, a single source of truth where we store data from disparate sources → <strong>BigQuery</strong></li><li>A tool that gets data from sources and puts them in our destination → <strong>Airbyte</strong></li><li>A tool that manages data transformation, provides testing functionality and data documentation → <strong>DBT</strong></li><li>A tool that orchestrates all of the above → <strong>Airflow</strong></li><li>A tool for data visualisation → <strong>Google Data Studio </strong>and <strong>Metabase</strong></li></ul><p>Our Data Architecture maturity has now reached an acceptable level for retrospective analysis, i.e. to understand what has happened. Next is paving the way to feed valuable data back into our products, and starting to look into the future with predictive models. This requires a higher level of maturity, and currently:</p><ul><li>We have no properly defined approach to update tooling versions, like Airbyte and the connectors it uses.</li><li>Staging and production are treated as independent environments which makes it easier for issues to appear.</li><li>We’ll likely have more issues that we haven’t encountered yet.</li></ul><p>Therefore, we need to improve on this and aim for a DevOps type of approach on our Data Architecture. We have a long way to go and lots of improvements to make, so we’ll continue to review our Data Architecture and update it to meet our needs.</p><p>As a final thought, we want to emphasise that building an effective data platform boils down to the principles mentioned at the beginning of this article. Our objective is to provide commercial business value by having data that is easy to query, reliable and easy to discover. Keeping that front of mind, it doesn’t necessarily matter which tools you use to achieve it.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=63a4964ca0dd" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Tech Talks: Mob Programming]]></title>
            <link>https://medium.com/@phloengineering/tech-talks-mob-programming-5a8ee10cab?source=rss-96f09b902122------2</link>
            <guid isPermaLink="false">https://medium.com/p/5a8ee10cab</guid>
            <category><![CDATA[agile]]></category>
            <category><![CDATA[startup]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[teamwork]]></category>
            <category><![CDATA[collaboration]]></category>
            <dc:creator><![CDATA[Phlo Engineering]]></dc:creator>
            <pubDate>Tue, 14 Jun 2022 12:04:13 GMT</pubDate>
            <atom:updated>2022-06-14T12:04:13.004Z</atom:updated>
            <content:encoded><![CDATA[<p><em>with our Software Engineer </em><a href="https://www.linkedin.com/in/colette-kinnaird-69a846188/"><em>Colette Kinnaird.</em></a></p><p><em>As part of our regular Tech Talks series, Colette shares some insights on how mob programming has lifted the team while working remotely.</em></p><p>In a team with so many outgoing, talkative and passionate developers, we<br>began to feel somewhat robbed by a remote working environment. Locked<br>into our tasks all day, unable to as easily collaborate or spark conversations, the team began to feel motivation taking a hit — that’s when we decided to experiment with Mob Programming, and here’s why we’ll never look back…</p><h3>What is mob programming?</h3><p><a href="https://www.agilealliance.org/resources/experience-reports/mob-programming-agile2014/">Mob Programming</a> is the development practice where the whole team <br>works on the same task, at the same time.</p><h3>How we do it</h3><p>For us, Mob Programming usually means getting together in a Slack <br>huddle and a member of the team sharing their screen as they take <br>control of the keyboard, or ‘drive’. The rest of the group guides the driver throughout the task. The ‘driver’ switches every time we feel<br>necessary, usually when there is a natural break in the session (for <br>example, we have completed an item on a ticket). Sometimes, we make use<br>of VS Code’s <a href="https://code.visualstudio.com/learn/collaboration/live-share">Live Share</a> feature to allow multiple drivers to code at the <br>same time.</p><p>We don’t Mob all the time. We take the best approach for the task at hand. Mobbing provides the most value when kicking off a ticket, or completing a particularly complex task. Other tasks are more suited to pairing or solo work, and it’s important for us that we get those experiences too. Taking on a task alone can at times provide for a more powerful learning experience.</p><h3>Why it works for us</h3><h4><strong>Our differences make us powerful</strong></h4><p>Like most teams, the knowledge and skills that each member brings along<br>are all incredibly different, and incredibly valuable. <br>We produce a much better level of code because we all see the value in <br>and share each other’s skills and opinions. Those of us who are most <br>passionate about user experience may notice something off in the UI that <br>those of us who are most passionate about data management and <br>infrastructure may not (and vice versa). <br>By mobbing, we also often produce much cleaner, readable code because <br>members of the mob can share experience and knowledge of design <br>patterns and best practices that others don’t have.</p><h4><strong>We learn from each other</strong></h4><p>Mobbing is a great way to pick up new knowledge and has been one of the<br>best learning experiences I’ve had as a developer. <br>Senior developers often share their thought processes whilst solving <br>problems, challenging the way other developers solve problems and <br>develop solutions, and giving junior developers a great learning opportunity. We even learn things as small as cool keyboard shortcuts <br>that we didn’t know about.</p><h4><strong>We move faster</strong></h4><p>It makes us incredibly productive; we can give each other instant <br>feedback without waiting for PR reviews. and we are often in total sync. 
<br>Everyone knows exactly how each element of our current project works, <br>so nothing gets blocked or put on hold because a member of the team has<br>other priorities, is unwell or on holiday. This also makes it much easier for <br>any member of the team to pick up subsequent tickets or share <br>knowledge of a project across other teams.</p><h4><strong>We have a great team</strong></h4><blockquote>During my years as a mob programmer, I have both loved<br>it and hated it. And I have concluded, it is not the Mob<br>Programming that is good or bad. It is all about the team. If<br>the team is healthy, they will love being mob programmers,<br>but a dysfunctional team will probably kill each other if they<br>are forced to mob.— Tobias Modig</blockquote><p>The key reason I truly believe mob programming works so well for us is <br>that we get along so well. We can discuss opinions and settle <br>disagreements with ease. <br>Working with people is fun! We feel it creates an awesome working <br>atmosphere and allows us to easily collaborate on issues with members of the <br>wider team such as design. We also have a great laugh, which is a huge <br>benefit!<br>Together, we reflect on every mistake and celebrate every achievement. <br>It makes every day working at Phlo something to look forward to.</p><h3>But don’t take it just from me…</h3><p>Here are the opinions of some other members of the patient experience <br>team on how they feel about mobbing:</p><blockquote>Mobbing is super helpful — especially as a new member of<br>the team. It helps to get to know the codebase and get to<br>grips with how the company works.— Noah Valuks</blockquote><blockquote>I find mobbing most useful when venturing into new<br>territory. It’s reassuring to know that you’re not left on your<br>own if you run into any problems. As well as this, it’s likely<br>that at least one member of the team has an idea for a task<br>that hasn’t been tried yet, and this can lead everyone to think about problems differently and lead to a solution that <br>may have taken longer. — Anthony Gangel</blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5a8ee10cab" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Tech Talks: Creating a Component Library]]></title>
            <link>https://medium.com/@phloengineering/tech-talks-creating-a-component-library-73e9e2f4d580?source=rss-96f09b902122------2</link>
            <guid isPermaLink="false">https://medium.com/p/73e9e2f4d580</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[storybook]]></category>
            <category><![CDATA[startup]]></category>
            <category><![CDATA[component-libraries]]></category>
            <category><![CDATA[design-systems]]></category>
            <dc:creator><![CDATA[Phlo Engineering]]></dc:creator>
            <pubDate>Mon, 06 Jun 2022 10:27:02 GMT</pubDate>
            <atom:updated>2022-06-06T16:33:33.134Z</atom:updated>
            <content:encoded><![CDATA[<p><em>with our Senior Frontend Engineer </em><a href="https://www.linkedin.com/in/michael-nield-26b6803a?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAAhJMXMBaAL7092V-oFNfruvUVpZxlNuZXE&amp;lipi=urn%3Ali%3Apage%3Ad_flagship3_search_srp_all%3BmrbnqLlvRamtQfW93ozG8Q%3D%3D"><em>Michael Nield</em></a><em>.</em></p><p><em>As part of our regular Tech Talks series, Michael describes the creation of our new Component Library.</em></p><h3>Building Phlo’s Component Library</h3><p>We take our users&#39; experience seriously at Phlo. We have a team of designers, researchers and engineers dedicated to making sure our patients have the best possible experience when using our applications, and it’s something we’re constantly looking to evolve and improve. But what about our developer experience (DX)?</p><p>Admittedly, when I first joined the company early in 2022 this was something that needed refining in relation to the creation of UI components. There was no defined pattern in our code for styling our web application, and making even the simplest of changes was a frustrating experience for our engineers. We had multiple component libraries, some using <a href="https://css-tricks.com/css-modules-part-1-need/">CSS Modules</a> coupled with random CSS files created to overwrite default styles. It was a bit of a mess.</p><h3>What is a component library?</h3><p>A component library consists of ready-made UI components that can be used as building blocks to create layouts. If it’s done correctly it will standardise development, reduce code duplication, improve collaboration between teams and drive scalability.</p><h3><strong>Design system</strong></h3><p>I wanted to take the pain out of creating new components for our engineers and the guesswork that was currently causing us all a bit of strain. As a frontend engineer, I understand the importance of design and luckily for me, we have an extremely passionate and talented team at Phlo. The team are always looking at ways to improve. Having sat with a few members from the design team and discussed what I wanted to achieve it was clear we needed some foundations to build from. We started to create a design system that held all the <a href="https://spectrum.adobe.com/page/design-tokens/">design tokens</a> (you can see some of our tokens in the screenshot below) that we would use to build our new component library. The design system holds the fundamental properties that will create our components. The key to having this foundation in place is consistency when we are building out our component library.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3BUgF458xELZ0eCN8ExFSA.png" /><figcaption>Some of our design tokens for defining breakpoints, spacing, fonts, font sizes, and colours.</figcaption></figure><h3><strong>Theme</strong></h3><p>Once we defined our design system, I then had to use that to build the components. The solution to that was to create a theme using the CSS-in-JS library <a href="https://styled-components.com/">Styled Components</a>. Styled components, much like CSS Modules give our components scoped styles meaning they won’t clash with other styles that we create. The theme holds our design tokens and the application is wrapped in a theme provider giving us access to the design tokens within the theme. We’ll use these tokens to define the base properties that make up the structure of our components. 
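As a rough sketch of the shape this takes (the token names and values here are illustrative, not our real theme), the theme is just an object that our styled components can read from:</p><pre>// theme.ts: a simplified sketch of a theme object holding design tokens
// (in a TypeScript project you would also declare these on styled-components' DefaultTheme)
export const theme = {
  colours: { primary: '#6C4AB6', text: '#1A1A2E' },
  space: { sm: '8px', md: '16px', lg: '24px' },
  fontSizes: { body: '1rem', h1: '2.5rem' },
};

// Button.ts: a styled component reading tokens from the theme supplied by ThemeProvider
import styled from 'styled-components';

export const Button = styled.button`
  padding: ${({ theme }) => theme.space.md};
  font-size: ${({ theme }) => theme.fontSizes.body};
  background: ${({ theme }) => theme.colours.primary};
`;</pre><p>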
These include font sizes, font weights, margin values, padding values, breakpoints, button variants, colours, and borders. We can reference these properties within our styled components like the example below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/482/1*2ZO3fPwqy_qg7cogwhM7OA.png" /><figcaption>Using our design tokens from the theme</figcaption></figure><h3>Storybook</h3><p>Rather than simply creating components in another folder in the monorepo, writing a cluster of documentation to support them and hoping it would help, I turned to a more robust solution. Having heard about <a href="https://storybook.js.org/">Storybook</a> and dabbled with it in the past, I felt it was clearly the best approach to take to solve our issue. For those unfamiliar with Storybook, this is how they describe it. <br> <br>“Storybook is an open source tool for building UI components and pages in isolation. It streamlines UI development, testing, and documentation.”<br> <br>Our web app is written in React with TypeScript, so I required the same environment to create my Storybook component library. After some digging, I found a tool called <a href="https://tsdx.io/">TSDX</a>, a CLI for setting up a TypeScript package with zero configuration, which included a template for React with Storybook. This was absolutely perfect for getting started. The end game was to create an npm package that we would install as a dependency in our monorepo, and TSDX comes with build scripts out of the box that allow us to achieve this without all the headaches that can come with creating a package.</p><p>The power of Storybook comes from the controls; below is an example of a Heading component that I’ve created for our library.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/442/1*BaDp1LCFUHf2IP10Uw_Aaw.png" /><figcaption>Heading component displayed within Storybook</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/438/1*jkXa47GyqE9Zl3itm1ofKg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/518/1*1qj3ItKH6-P7CZxupvdskA.png" /><figcaption>Heading component (left) and Heading component story for Storybook (right).</figcaption></figure><p>Controls allow the engineers to make changes to the component inside of Storybook. As you can see from the image above, it also ties in with our TypeScript interface, which declares which types our props accept, along with a description if they are limited, like the <strong>‘as’ </strong>prop from the table above.</p><p>This type of information is extremely useful when we want to build up our UI using these components.</p><p>Another really useful feature is the <strong>“Docs” </strong>panel. From here there is a<strong> “show code” </strong>button that displays the code needed to use the component with the current configuration that has been set up from the controls section on the “<strong>Canvas</strong>” panel. The engineer can simply copy and paste this code into their editor and off they go.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/646/1*XGw9HiKUTTUjX0iTaCA_yQ.png" /><figcaption>Code snippet generated automatically by Storybook</figcaption></figure><h3>What next?</h3><p>At this current time, we are using the component library on new features that we are building, with a plan to refactor the full application on a feature-by-feature basis.
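To give a flavour of what actually lives in the library, a cut-down component story looks something like this (our real Heading story declares more props and controls than this sketch shows):</p><pre>// Heading.stories.tsx: a simplified sketch of a CSF story
import React, { ComponentProps } from 'react';
import { Heading } from './Heading';

export default {
  title: 'Typography/Heading',
  component: Heading,
  argTypes: {
    as: { options: ['h1', 'h2', 'h3'], control: { type: 'select' } },
  },
};

const Template = (args: ComponentProps&lt;typeof Heading>) => &lt;Heading {...args} />;

// Storybook renders the component with these args and exposes them as editable controls
export const Default = Template.bind({});
Default.args = { as: 'h1', children: 'Manage your medication' };</pre><p>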
That refactor will take time, but the feedback we’re getting from the team so far has been really positive, and they can see why this approach will be worth it in the long run. Going forward, ideally, I’d like to give the engineers almost a ‘drag and drop’ experience when building user interfaces, with minimal configuration, allowing them more time to handle the data.</p><p>At the moment we don’t have everything documented in Storybook. Hopefully, in time we’ll have the library filled with every component we have in our design ecosystem.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=73e9e2f4d580" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Tech Talks: Creating a Microservice Generator]]></title>
            <link>https://medium.com/@phloengineering/tech-talks-creating-a-microservice-generator-d00a35637d75?source=rss-96f09b902122------2</link>
            <guid isPermaLink="false">https://medium.com/p/d00a35637d75</guid>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[startup]]></category>
            <category><![CDATA[yeoman]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Phlo Engineering]]></dc:creator>
            <pubDate>Mon, 30 May 2022 10:02:31 GMT</pubDate>
            <atom:updated>2022-05-30T10:02:31.492Z</atom:updated>
<content:encoded><![CDATA[<p><em>with our Principal Engineer </em><a href="https://www.linkedin.com/in/jamie-macdonald-23729210/"><em>Jamie MacDonald</em></a><em>.</em></p><p><em>As part of our regular Tech Talks series, Jamie describes creating a microservice generator using </em><a href="https://yeoman.io/"><em>Yeoman</em></a><em>.</em></p><h4>Avoiding Copy/Paste</h4><p>At <a href="https://www.linkedin.com/company/wearephlo/">Phlo</a> we’re constantly spinning up new microservices to support our scaling teams and platform. Previously, to get a microservice up and running, we had to carry out lots of error-prone and repetitive tasks. There was lots of copy-paste-amend and far too much YAML. We wanted to find a better way to achieve this, so we set out to do some research on the best solutions.</p><h4>Scaffolding</h4><p>After a bit of research, we decided to build a microservice generator using the popular scaffolding tool <a href="https://yeoman.io/">Yeoman</a>. The tool has good support and a big community following, which helped make our decision. To begin, we used the handy <a href="https://github.com/yeoman/generator-generator">generator-generator</a> provided by Yeoman to build our first generator.</p><p>The base generator had some template files and example commands to get a feel for how the most basic generators would work. We reviewed the included template files and figured out the templating syntax, then investigated the generator commands that were available. Now we had a good idea of how we would amend this to work for our microservices.</p><h4>Templates</h4><p>To build the microservice generator we looked at our existing microservices, extracted the common components and stripped them back to the essential parts.</p><p>We took these base files and added them to our templates folder within the microservice generator package. We updated the files to include template variables so we could inject parameters such as name and description without error-prone manual work.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/474/1*hIjefin2lvwbYIzlV0kbVg.png" /><figcaption>The templates we use for our microservices</figcaption></figure><h4>Build and Run</h4><p>The generator could now create the files we needed for our microservice, but we also wanted a way to run and build them without requiring any additional manual work. To do this, we modified the generator to add the new package to our root package.json file; we also added a script which could build the new microservice.</p><p>For convenience, we also wanted to be able to run our new microservice from within VS Code, so we added entries in both launch.json and tasks.json. This meant we could simply select our microservice from the run menu and, with the push of a button, our new microservice was built and up and running.</p><h4>Success</h4><p>Now we simply run yo microservice in our project and, after inputting a name and description, the new microservice is ready to go.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1004/1*03J_G4bNB-7Xdg7-4gd3Uw.png" /><figcaption>Our microservice generator in action</figcaption></figure><p>The new microservice generator is a significant time-saver. With a simple command and a couple of prompts, we have a microservice up and running.
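For anyone curious about what sits behind that command, a stripped-down Yeoman generator looks roughly like the sketch below (the prompts, template names and destination paths are illustrative rather than our exact generator).</p><pre>// generators/app/index.js: a minimal sketch of a Yeoman generator
const Generator = require('yeoman-generator');

module.exports = class extends Generator {
  async prompting() {
    this.answers = await this.prompt([
      { type: 'input', name: 'name', message: 'Microservice name?' },
      { type: 'input', name: 'description', message: 'Short description?' },
    ]);
  }

  writing() {
    // copy each template file, injecting the answers as template variables
    this.fs.copyTpl(
      this.templatePath('package.json'),
      this.destinationPath(`packages/${this.answers.name}/package.json`),
      this.answers
    );
  }
};</pre><p>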
This means engineers can focus on the development of exciting new features, instead of wasting their time with repetitive tasks.</p><p>This has reduced the setup time of a new microservice to under a minute.</p><h4>Next Steps</h4><p>With a growing number of services, we’re looking at how we can reduce our build times. We want our monorepo to be smarter and to only build and test packages when they have changed. Some potential solutions to this problem are <a href="https://nx.dev/">Nx</a> and <a href="https://turborepo.org/">Turborepo</a>.</p><p>The next step in our monorepo improvements will be optimising our build time, much like we’ve done here with microservice generation.</p><p>Have you tried any tools for speeding up building in your monorepo that you would recommend?</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d00a35637d75" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Tech Talks: Deploying a React Native App with Azure DevOps]]></title>
            <link>https://medium.com/@phloengineering/tech-talks-deploying-a-react-native-app-with-azure-devops-3816a144ab6e?source=rss-96f09b902122------2</link>
            <guid isPermaLink="false">https://medium.com/p/3816a144ab6e</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[azure-devops]]></category>
            <category><![CDATA[continuous-delivery]]></category>
            <category><![CDATA[startup]]></category>
            <dc:creator><![CDATA[Phlo Engineering]]></dc:creator>
            <pubDate>Mon, 23 May 2022 10:03:33 GMT</pubDate>
            <atom:updated>2022-05-23T10:03:33.374Z</atom:updated>
            <content:encoded><![CDATA[<p><em>with our Principal Engineer </em><a href="https://www.linkedin.com/in/jamie-macdonald-23729210/"><em>Jamie MacDonald</em></a><em>.</em></p><p><em>As part of our regular Tech Talks series, Jamie describes setting up deployment pipelines for our new React Native app.</em></p><h4>Extending our Azure DevOps Pipelines</h4><p>We have recently launched our <a href="https://reactnative.dev/">React Native</a> (RN) app into both the Google Play Store (Android) and the iOS App Store. To streamline our releases, and to ensure we have a safe and repeatable release process, we have extended our existing suite of Azure DevOps pipelines.</p><p>While the RN app is a single codebase, we needed to build individual pipelines to target the respective app stores. With the Android launch coming first, the Google Play Store was the obvious place to start.</p><h4>Android</h4><p>Before reaching a production-ready state and setting up the new pipelines we had a set process to build and deploy the Android app from our own machines. This manual process acted as the blueprint for building out our new pipelines.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/514/1*bcwLbClwps0SAN1BdpzGEA.png" /><figcaption>Our Android RN deployment pipeline</figcaption></figure><p>Our Android deployment pipeline can be split into four sections:</p><ul><li>Credentials: To sign the app and ensure it is associated with our Google Play Account, we download our Android Keystore file and create a <a href="https://fastlane.tools/">fastlane</a> credentials file which allows fastlane to upload our build to the Google Play Store.</li><li>Environment: To ensure our deployments are flexible we create a <a href="https://docs.gradle.org/current/userguide/build_environment.html#sec:gradle_configuration_properties">Gradle Properties</a> file and a <a href="https://www.npmjs.com/package/dotenv">.env</a> file, this allows us to point our app builds at a variety of environments and specify different credentials to allow us to use both testing and production accounts.</li><li>Preparation: Our build from a previous stage in our pipeline is unzipped, we then install Ruby so we can install our <a href="https://rubygems.org/">gems</a> which we use to generate the app bundle. We also send our source maps to <a href="https://sentry.io/">Sentry</a> so that we can accurately track any issues that occur when the build is released.</li><li>Deployment: With everything we need now in place the final step is to run our fastlane command and specify which of our <a href="https://docs.fastlane.tools/getting-started/android/release-deployment/">lanes</a> we want to use. The app gets bundled and sent to our specified <a href="https://developers.google.com/android-publisher/tracks">Google Play Store Track</a>.</li></ul><h4>iOS</h4><p>With the Android app able to be built and deployed to the Google Play Store, we moved on to deploying to the iOS App Store. 
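Before looking at iOS, it is worth showing the shape of the fastlane lane our Android deployment step calls into; the sketch below uses illustrative lane, track and path names rather than our exact Fastfile.</p><pre># fastlane/Fastfile: a simplified sketch, not our exact configuration
default_platform(:android)

platform :android do
  desc "Build the release bundle and push it to a Play Store track"
  lane :internal do
    gradle(task: "bundle", build_type: "Release")
    upload_to_play_store(
      track: "internal",
      aab: "android/app/build/outputs/bundle/release/app-release.aab"
    )
  end
end</pre><p>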
To deploy to the App Store, we make use of the built-in Xcode tasks available on Azure DevOps.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/514/1*sVERR9qpSp4-pR2dYMiM1Q.png" /><figcaption>Our iOS RN deployment pipeline</figcaption></figure><p>Deployment of our iOS app can also be split into the same four sections:</p><ul><li>Credentials: For deploying to the iOS App Store we need to have our <a href="https://developer.apple.com/support/certificates/">Signing Certificate</a> and <a href="https://developer.apple.com/documentation/appstoreconnectapi/profiles">Provisioning Profile</a>; we install these so they can be used to sign and build our app.</li><li>Environment: We again use a <a href="https://www.npmjs.com/package/dotenv">.env</a> file for our iOS build. In addition to this, we also create an entitlements file which is used to specify which URLs can <a href="https://developer.apple.com/ios/universal-links/">universal link</a> into the app, as well as specifying our Apple Pay details.</li><li>Preparation: Again we unzip our build and then perform a pod install so that all our <a href="https://cocoapods.org/">CocoaPod</a> dependencies are up to date.</li><li>Deployment: Now we are ready to archive our app. We use the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/build/xcode?view=azure-devops">Azure Xcode build task</a> to do this, and then use the <a href="https://marketplace.visualstudio.com/items?itemName=ms-vsclient.app-store">Apple App Store task</a> to publish our build to TestFlight.</li></ul><p>The final step in our iOS pipeline is sending a Slack message to a dedicated RN Testing channel, which lets us know the build is complete and ready to test!</p><h4>Impact</h4><p>After making these changes, we now have full confidence that we are always deploying the latest and greatest version of the app. It has moved us from a complicated multi-step manual process to simply needing to press one button. The use of .env files means we can build our app for any of our testing or production environments, and we also no longer need to have secure credentials saved on our local machines; these can all be stored safely in the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/library/secure-files?view=azure-devops">Azure Secure File Store</a>.</p><p>These improvements are a huge time saver and allow engineers to focus on solving the next interesting problem rather than waiting for builds and deployments to complete.</p><h4>Next Steps</h4><p>The current deployment process unfortunately still involves some manual work due to both app stores requiring unique build numbers for each deployment. Currently, this involves either manually updating the numbers before we deploy or running a script that updates them for us.</p><p>The next step in our deployment pipelines will be making this an entirely automated process.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3816a144ab6e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Phlo’s Tech Talks]]></title>
            <link>https://medium.com/@phloengineering/phlos-tech-talks-49030943551a?source=rss-96f09b902122------2</link>
            <guid isPermaLink="false">https://medium.com/p/49030943551a</guid>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[startup]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[scaleup]]></category>
            <dc:creator><![CDATA[Phlo Engineering]]></dc:creator>
            <pubDate>Mon, 16 May 2022 07:57:20 GMT</pubDate>
            <atom:updated>2022-05-16T07:57:20.603Z</atom:updated>
            <content:encoded><![CDATA[<p><em>with our Principal Engineer </em><a href="https://www.linkedin.com/in/bazwilliams/"><em>Barry Williams</em></a><em>.</em></p><p><em>Barry discusses Phlo’s fortnightly Tech Talks and why they’re a great way to sign off on a Friday afternoon.</em></p><p>Phlo’s product delivery is split into three different areas — Patient, Pharmacy and Partner - each with its own dedicated team of engineers. With Phlo’s hybrid working environment and autonomy enjoyed by the teams, cross-team communication wasn’t at its peak.​ We needed a new forum to bring our engineering team together to get to know each other, share ideas and have a bit of fun! And so, Tech Talks was born.</p><h4>What did we hope to achieve from Tech Talks?</h4><p>We hoped to:</p><ul><li>Grow the collective engineering excellence of the business.</li><li>Provide an opportunity for all our engineers to be heard, challenge the way we do things, and ultimately make positive change.</li><li>Facilitate discussions around infrastructure shared across our teams. We work in a monorepo and make use of the same CI/CD tools. We have lots in common.</li><li>Build strong social connections so teams naturally reach out to each other for support.</li><li>Step out of the echo chamber within our own teams. While we already have slack channels where engineers are free to share ideas and tech articles, the asynchronous nature can often mean ideas are overlooked or lost in the stream.</li><li>Bring back the water cooler chat. Often one idea can lead to other engineers riffing off each other leading to an explosion of creativity that we weren’t harnessing or nurturing. ​</li></ul><p>Ultimately we wanted a forum that was a:</p><ul><li>Platform to share ideas;</li><li>Democratic and safe environment;</li><li>Place for discussion.</li></ul><h4>Getting Started​</h4><p>Our Tech Talks were initially designed to be a series of two or three lightning talks on a Friday afternoon in an informal setting, scheduled to help wind down the working week. We asked for ideas for the first session, with the team full of suggestions.<br>​<br>We started with a session on Google Task Queues; followed by a suggested improvement to our GitHub PR process by using a tool called Mergify; and finally, a group discussion reviewing a draft of onboarding guides for new starters.</p><h4>How did it go?</h4><p>We expected ten-minute talks with five minutes of questions. But instead, we had thirty minutes of discussion and comments as we went through each topic. This wasn’t death by PowerPoint, this was an informal chat with screen sharing where it helped.<br>​<br>The team left the meeting motivated and ready to learn more about the topics discussed.<br>​<br>That was just the beginning… We’ve been running Tech Talks every month this year and the impact on the team has been beyond expectation. It’s a Zoom call that everyone <em>actually</em> looks forward to.</p><h4>What’s next?</h4><p>To date we’ve discussed building microservices templates, improving test coverage visibility, Playwright end to end tests, API integrations, React Native toolchains, Mob Programming, Pair Programming and lots of our team’s processes. We’re keen to keep going and expand out further to non-work-related tech, and also discuss the books and blog posts everyone has been reading lately.</p><p>​Come to think of it… It&#39;s clear many of our tech talks would make a great blog post in their own right. 
So we’re going to step out of our <em>Phlo echo chamber </em>and give you a glimpse into life at Phlo with our <strong>Tech Talks</strong> <strong>Series</strong>. Look out for our first post on <strong>Deploying a React Native App with Azure DevOps </strong>next week.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=49030943551a" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Hot Topics — May 2022]]></title>
            <link>https://medium.com/@phloengineering/hot-topics-may-2022-3c692dba5af5?source=rss-96f09b902122------2</link>
            <guid isPermaLink="false">https://medium.com/p/3c692dba5af5</guid>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[startup]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[product-management]]></category>
            <dc:creator><![CDATA[Phlo Engineering]]></dc:creator>
            <pubDate>Thu, 05 May 2022 12:16:09 GMT</pubDate>
            <atom:updated>2022-05-06T08:05:06.698Z</atom:updated>
            <content:encoded><![CDATA[<h3>Hot Topics — May 2022</h3><p><em>The engineering team at Phlo has a strong culture of continuous improvement and is always looking for ways to improve our developer experience.</em></p><p><em>Here are a few of the hot topics the team have been discussing this month.</em></p><h4>Snaplet</h4><figure><img alt="Snaplet logo" src="https://cdn-images-1.medium.com/max/165/1*IMkFX1qi4dszhHklKw2Ziw.png" /></figure><p><a href="https://www.snaplet.dev/">Snaplet</a> allows developers to capture copies of a production database, transform the data within the database to remove any Personally Identifiable Information (PII), and restore the database to a development environment. This allows developers to build and test against close-to-real-world data. While there are tools that offer similar functionality, Snaplet is focused on providing a tool with a great developer experience that fits easily into your stack.</p><p>Snaplet’s founder Peter gave some of the team a demo of Snaplet’s capabilities, with particular emphasis on the ability to run Snaplet in your own environment. Data transforms can be configured in a JSON format, and custom transformers can be easily built using plain old JavaScript. The simplicity of the configuration (which can be source-controlled) and of running a command-line tool makes Snaplet really stand out. Peter also described some exciting plans to infer types of data to provide improved TypeScript support when building data transforms.</p><p>The next version of Snaplet, which gives you the ability to run Snaplet locally, is coming shortly. We can’t wait to give it a try.</p><h4>Graphite</h4><p><a href="https://graphite.dev/">Graphite</a> aims to allow engineers to write and review smaller pull requests and ship faster.</p><p>Graphite is built on top of Git and works on the concept of “stacked changes”, where feature development can easily be broken down into multiple pull requests and reviewed in an incremental fashion. In this new model, what was previously a commit now naturally becomes a branch that can be reviewed. The team doesn’t need to wait for this branch to be merged before continuing to build “stacked changes” on top of it.</p>
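<p>In plain Git terms, a stack is just a chain of branches, each based on the one before it. Graphite’s CLI automates creating, restacking and submitting those branches, but a rough sketch of the underlying idea, using ordinary Git commands and made-up branch names, looks like this:</p><pre># Part 1 of a feature: branch from main, commit, open a pull request.
git switch -c feature-part-1 main
git commit -am "Add the data model"
git push -u origin feature-part-1        # PR 1: feature-part-1 into main

# Part 2 stacks on top of part 1 without waiting for it to merge.
git switch -c feature-part-2 feature-part-1
git commit -am "Add the API endpoint"
git push -u origin feature-part-2        # PR 2: feature-part-2 into feature-part-1

# When part 1 changes after review, part 2 is rebased ("restacked") onto it.
git rebase feature-part-1 feature-part-2</pre><p>Graphite keeps track of which branch depends on which, so this restacking and pull request bookkeeping doesn’t have to be done by hand.</p>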
<p>This all (hopefully) leads to smaller pull requests and a workflow that increases productivity for engineers 👍🏻.</p><p>Graphite’s pull request dashboard also offers an alternative way to manage pull requests, making it clear which ones need to be actioned and providing a faster reviewing experience.</p><p>We hope to find a non-invasive way to trial Graphite over the coming months.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FpP0AYz9ttC0%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DpP0AYz9ttC0&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FpP0AYz9ttC0%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/0132b20d9371dd629a7ecd0cb2d810d5/href">https://medium.com/media/0132b20d9371dd629a7ecd0cb2d810d5/href</a></iframe><h4>Athenian</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/861/1*shc2SWItEJQv9qSeYYL5uQ.png" /><figcaption>Athenian Dashboard</figcaption></figure><p>As our engineering team continues to grow, we want to ensure we maintain our high-performing culture and have a way to measure and adapt as we get to grips with new challenges.</p><p><a href="https://athenian.com/">Athenian</a> aims to provide end-to-end visibility of your software delivery process. It does this by plugging into GitHub, Jira and CI pipelines (via GitHub checks), bringing all the data together and providing key engineering metrics to help teams measure and improve their processes.</p><p>While at Phlo we have always been very self-aware of areas for improvement, having the data at our fingertips allows us to maintain our focus on the key problem areas. While we’re only scratching the surface during a trial of Athenian, we believe it can be a key part of our team’s ability to grow, and be central to our sprint and project retrospectives.</p><p>Outside of the product, we have found Athenian’s team to be hugely supportive, with the people side being a key part of their offering. We have already had multiple calls with their CEO and team, and have received lots of great advice. Athenian’s CEO Eiso also hosts a great podcast, <a href="https://www.developingleadership.co/">Developing Leadership</a>, and you can find lots of great content on the Athenian <a href="https://athenian.com/blog">blog</a>.</p><h4>And you?</h4><p>What has caught your attention this month? Are you going to give any of these tools a try?</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3c692dba5af5" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Auth0 vs Ory Kratos]]></title>
            <link>https://medium.com/@phloengineering/auth0-vs-ory-kratos-3a849a18e8e4?source=rss-96f09b902122------2</link>
            <guid isPermaLink="false">https://medium.com/p/3a849a18e8e4</guid>
            <category><![CDATA[security]]></category>
            <category><![CDATA[healthtech]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[startup]]></category>
            <dc:creator><![CDATA[Phlo Engineering]]></dc:creator>
            <pubDate>Tue, 03 May 2022 13:25:58 GMT</pubDate>
            <atom:updated>2022-05-03T16:40:09.711Z</atom:updated>
            <content:encoded><![CDATA[<p><em>with our Junior Software Engineer </em><a href="https://www.linkedin.com/in/barrettandpen"><em>Matt Barrett</em></a><em>.</em></p><p><em>As we continue to extend our range of products, Matt compares two potential authentication solutions — Auth0 and Ory Kratos.</em></p><h4>Building a Platform</h4><p>As we continue to build out our <a href="https://phloconnect.com/">Phlo Connect</a> platform and introduce new digital prescribing products, having a safe and secure way for our partners to confirm their identity and log into our products is a key focus.</p><p>While building an end-to-end <strong>user authentication</strong> solution in-house is possible, building upon an already established solution helps keep build and maintenance costs low and ensures we stay at the forefront of best practices.</p><h4>Selecting an Authentication Solution</h4><p>After an initial appraisal of the myriad solutions on offer, we decided to take a closer look at two products from opposite ends of the market: <a href="https://auth0.com/">Auth0</a> and <a href="https://www.ory.sh/docs/kratos">Ory Kratos</a>.</p><p><strong>Auth0</strong>, a fully hosted cloud solution, sells itself on its ease of implementation. The Quickstart guide quotes a 15-minute timeframe to get a login flow up and running on an existing React web app. This is possible as Auth0 provides authentication screens out of the box — there is no need to build your own login and registration screens. This simplicity does come with some drawbacks, however, such as limitations on the level of customisation and an ongoing cost that grows as your user base scales. Auth0 is managed via an online dashboard, through which you can tweak security settings, user flows and other configuration parameters.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/184/1*l4B53qItMcnYqQ7xVy3c2Q.png" /></figure><p><strong>Ory Kratos</strong>, on the other hand, is an open-source, API-driven solution to user authentication. It doesn’t include prebuilt authentication screens and needs to be self-hosted (unless using <a href="https://www.ory.sh/docs/concepts/project">Ory Cloud</a>, a cloud-hosted variation of Ory Kratos). While this means there is an overhead to setting Ory up, it also allows complete freedom to build your own authentication screens and fully control user journeys, with Ory only responsible for its highly specialised purpose.</p><h4>Essential Functionality</h4><p>The first thing we needed to confirm was that both products were able to meet the needs of our products, and so we defined <strong>7 key acceptance criteria</strong>:</p><p>· Customisable UI<br>· Configurable Password Requirements<br>· Session Management<br>· Email Verification<br>· Phone Verification/2FA<br>· Account Recovery<br>· Support for migrating existing logins</p><p>On the whole, we found that both Auth0 and Ory would be suitable for our purposes, either containing the functionality we needed as standard or allowing us to add it ourselves. There were a few caveats and differences separating the two in key areas though, and these became our decision points.</p><h4>Customisable UI</h4><p>With Auth0, the scope for customisation is limited.</p>
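<p>That limitation follows from how the integration works: the application hands the user off to Auth0’s hosted pages rather than rendering its own login form, which is also why the Quickstart is so quick. As a rough sketch of what the 15-minute setup boils down to, here is a minimal example using the @auth0/auth0-react SDK; the tenant domain and client ID are placeholders, and prop names vary between SDK versions:</p><pre>import React from "react";
import { Auth0Provider, useAuth0 } from "@auth0/auth0-react";

// Shows a login or logout button depending on the current session.
// The SDK redirects to Auth0's hosted login page and handles the callback.
function LoginButton() {
  const { isLoading, isAuthenticated, user, loginWithRedirect, logout } = useAuth0();

  if (isLoading) return &lt;p>Loading…&lt;/p>;
  if (isAuthenticated) {
    return &lt;button onClick={() => logout()}>Log out {user?.email}&lt;/button>;
  }
  return &lt;button onClick={() => loginWithRedirect()}>Log in&lt;/button>;
}

export function App() {
  return (
    &lt;Auth0Provider
      domain="your-tenant.eu.auth0.com" // placeholder tenant domain
      clientId="YOUR_CLIENT_ID"         // placeholder client ID
      redirectUri={window.location.origin}
    >
      &lt;LoginButton />
    &lt;/Auth0Provider>
  );
}</pre><p>Because everything the user sees during login is served by Auth0, how much of it can be changed comes down to the customisation options described next.</p>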
<p>The login box itself is a widget which you can customise using the dashboard — but the only options available to edit are the logo (which displays above the login fields) and the button colour.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/451/1*vGfvqeau5SdmdA6tIA9kRw.png" /></figure><p>The background colour option controls the colour of the page surrounding the login box. This is further customisable using the <a href="https://shopify.github.io/liquid/">Liquid template language</a>. You can read more about the specific ways in which Auth0 allows you to edit this page in their documentation <a href="https://auth0.com/docs/customize/universal-login-pages/universal-login-page-templates">here</a>. Ultimately, you are only able to use Liquid to adjust the content displayed around the Universal Login widgets, meaning the layout and components that make up the login form cannot be modified.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/451/1*mWkAchOT6d8yb50cAmCA-Q.png" /></figure><p>By contrast, Ory Kratos places no limitations on your UI. While you will need to build it from scratch, you have complete freedom to design your login page as you see fit. As long as your text fields/buttons are correctly connected to Kratos, their appearance is irrelevant to the functioning of the authentication process. As design and UX are core to our development philosophy at Phlo, this extra freedom was a major factor in our decision-making process.</p><h4>Configurable Password Requirements</h4><p>When settling on our acceptance criteria, we found the ability to configure our password policy to be essential, and this is another area in which Auth0 and Kratos differ.</p><p>Auth0 comes with 5 pre-defined password strengths, allowing you to select which level to enforce. The levels are as follows:</p><p>· <strong>None</strong> (default): at least 1 character of any type.<br>· <strong>Low</strong>: at least 6 characters.<br>· <strong>Fair</strong>: at least 8 characters including a lower-case letter, an upper-case letter, and a number.<br>· <strong>Good</strong>: at least 8 characters including at least 3 of the following 4 types of characters: a lower-case letter, an upper-case letter, a number, a special character.<br>· <strong>Excellent</strong>: at least 10 characters including at least 3 of the following 4 types of characters: a lower-case letter, an upper-case letter, a number, a special character. Not more than 2 identical characters in a row.</p><p>Ory Kratos, however, only has 3 parameters to configure when it comes to the password policy, two of which are only configurable in the sense that they can be turned on or off. They are: a minimum length, an identifier similarity check (which checks to see if the password is similar to the user identifier), and a ‘Have I Been Pwned’ check (which checks if the password has been found in the <a href="https://haveibeenpwned.com/">Have I Been Pwned</a> database).</p><p>We were initially concerned that this might not be enough flexibility, but Ory has published their reasoning and research into Password Policy Best Practices <a href="https://www.ory.sh/docs/kratos/concepts/security#password-policy">here</a>. Reading it led us to rethink the importance we had initially placed on stringent password requirements.</p><h4>Migrating Existing Accounts</h4><p>The third area in which the two products notably differed was the methods offered for migrating users across from an existing database.</p>
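<p>Kratos’ side of that story is simple enough to sketch up front: an existing account is imported by creating an identity through the Kratos admin API with its already-hashed password attached. The snippet below is illustrative only; the endpoint path, port and field names are from memory and vary between Kratos versions:</p><pre>// Import a single existing user into Ory Kratos by creating an identity
// that carries a pre-hashed password (BCrypt, Argon2 and PBKDF2 hashes are accepted).
// Endpoint path, port and field names are illustrative and vary between Kratos versions.
async function importUser(email: string, passwordHash: string): Promise&lt;void> {
  const identity = {
    schema_id: "default",
    traits: { email },
    credentials: {
      password: {
        config: {
          hashed_password: passwordHash, // e.g. a "$2a$12$..." value from the existing database
        },
      },
    },
  };

  const response = await fetch("http://localhost:4434/admin/identities", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(identity),
  });

  if (!response.ok) {
    throw new Error(`Failed to import ${email}: ${response.status}`);
  }
}</pre><p>Auth0 approaches the same problem rather differently, as we found when weighing up the two.</p>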
<p>When coming to a decision, we wanted to remain open to the possibility of using the chosen product for login across the whole Phlo platform — meaning migrating user accounts over from our existing solutions. Both Auth0 and Kratos can support this, although the process is simpler with Auth0.</p><p>Auth0 supports automatic migrations, sometimes known as trickle or lazy migration, which can be enabled after connecting your Auth0 dashboard to your existing database via their custom database connection interface. This will move users to Auth0 the first time they log in after integration has been set up, with no need to reset their password.</p><p>Migration in Ory is handled by feeding user data into the same endpoint used for account creation, as sketched above. This method supports passwords hashed using the PBKDF2, Argon2 or BCrypt algorithms.</p><p>Both of these methods will appear seamless to an end-user; however, the Kratos migration requires additional work on our end, as existing data will need to be reformatted into an appropriate payload and fed into the endpoint.</p><h4>Making a Decision</h4><p>After reviewing these factors as a team, we found the decision boiled down to the core principles of the two products, rather than the specifics (which both were able to handle).</p><p>Auth0 offered speed, simplicity, and support. It would be quick and easy to set up, and we would have access to their dedicated support team if we ran into any major issues. This came at a price, however, both literal (see below) and figurative, in the sacrificed design freedom and reliance on a third-party provider.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/451/1*m361xJIP5ic-zUADWtU7ig.png" /></figure><p>Ory Kratos offered the freedom to design our login flow exactly to our vision, using open-source software that we can host ourselves. While there is no dedicated support, the welcome we received on the <a href="https://slack.ory.sh/">Ory Slack</a> was exceptional. All our questions were answered within minutes, and we had a clear understanding of what the future of Ory looked like.</p><p>Despite necessitating additional upfront development time, Ory Kratos has won us over as a flexible and scalable solution with a strong supporting community.</p><p>We are currently working on an initial proof of concept using Ory Kratos and look forward to sharing our findings as we gain a deeper understanding of its capabilities.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3a849a18e8e4" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>