<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by atadataco.com on Medium]]></title>
        <description><![CDATA[Stories by atadataco.com on Medium]]></description>
        <link>https://medium.com/@atadataco?source=rss-5a1371b55f29------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*K31ZI9fEOznYtYkRajRYhQ.png</url>
            <title>Stories by atadataco.com on Medium</title>
            <link>https://medium.com/@atadataco?source=rss-5a1371b55f29------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 11 May 2026 16:53:12 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@atadataco/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Building a Modern Data Layer for a High-growth SaaS Company]]></title>
            <link>https://atadataco.medium.com/building-a-modern-data-layer-for-a-high-growth-saas-company-75b50aa62801?source=rss-5a1371b55f29------2</link>
            <guid isPermaLink="false">https://medium.com/p/75b50aa62801</guid>
            <dc:creator><![CDATA[atadataco.com]]></dc:creator>
            <pubDate>Tue, 09 Aug 2022 19:44:15 GMT</pubDate>
            <atom:updated>2022-08-09T19:44:15.622Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/740/0*BIQyjVdm4jZKxXkc" /><figcaption>Hedge against data chaos with a data layer from Atadataco.com. Source: xkcd.com.</figcaption></figure><blockquote><em>Problem Statement: Business Leaders can’t use customer data to make core business decisions.</em></blockquote><h3>Sketchy’s Data Problem…Situation. ⏳</h3><p><strong>Context: </strong>Sketchy.com builds training and educational products for medical school students.</p><p><strong>Problem: </strong>Team confidence in their data infrastructure and understanding of the customer’s product were low. Marketing used one dataset. Sales used another. Product, a third.</p><p><strong>Opportunity: </strong>Each team thought they could see the complete view of the customer. How can you trust your data if you have three definitions for gross revenue?</p><p>Sketchy’s CPO wanted to know:</p><ul><li>What’s the customer trying to accomplish?</li><li>How do the customers’ goals align with the company’s goals?</li><li>What customer actions drive trial account conversion?</li></ul><p>Desired Outcome: Mature the growing company’s data strategy and infrastructure so they scale with the business, which delivers more than just a better stack.</p><h3>Sketchy Starting State.</h3><p>I was brought in as an outside data architect to pick up the work from previous firms. The system below was fragmented, with data quality issues from Postgres, multiple duplicate schemas of product analytics data, and no central data model.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ECjhXIqePh1aATyq" /><figcaption>Sketchy Starting Stack</figcaption></figure><h3>Insight and stakeholders…Insight. 
💡</h3><p>The core opportunity to unlock product growth and trust in data…integrating these three views of data from the Marketing, Finance, and Product teams:</p><p>-&gt; Remove vendor data silos and replace them with unified data access.</p><p>-&gt; Deliver a single source of truth so all business stakeholders can collaborate on customer analytics.</p><p>-&gt; New platforms/vendors must be capable of efficient integration without requesting Eng support.</p><h3>Data maturity journey.</h3><p>I view building a data layer for a new client as following the maturity journey. It’s not a statement that company size determines which stack is needed. It’s a statement that you have to work through a Starter Stack before you can build a Growth Stack and so forth. My goals are to drive impact, take the time to architect a stack that will grow to meet the company’s needs, and avoid data pitfalls as the models and functions become more sophisticated.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/796/0*vWR9S73T2_a_fpFN" /><figcaption>Data Maturity</figcaption></figure><h3>Where Sketchy wanted to be.</h3><p><em>Project Goals = Starter Stack. 🥅</em></p><p>Sketchy wanted a solid foundation that would support the fast-growing company they already were. They wanted experimentation and eventually machine learning, but first they wanted a strong, stable foundation that could scale with them. Their primary tenets were:</p><ol><li>Self-service for all non-technical and business teams.</li><li>A single data modeling layer controlling all business metric definitions.</li><li>DataOps to add QA and validate that data at the source matches the BI dashboards.</li><li>A plan for hiring, reduced infrastructure costs, and a formalized contract between Engineering and Data.</li></ol><h3>My approach for Sketchy. 
🛬</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*vMn0Tow7VnKFct07" /><figcaption>Atadataco Build Process</figcaption></figure><p>Stage one: integrating existing product/web data with ad and revenue datasets.</p><p>Stage two: building a data warehouse and metrics system for sharing customer insights.</p><p>Stage three: validating all datasets and training the team in self-service.</p><p>Stage four: evangelizing best practices and defining a hiring plan.</p><h3>Project implementation stages.</h3><p>Stage one:</p><ul><li>Build LookML data relationships from the base of web events.</li><li>Build a from-scratch data model for the product, marketing, CS, and finance teams.</li></ul><p>Stage two:</p><ul><li>Build a central metrics layer for the business in Looker to unify metric definitions.</li><li>Move data modeling from LookML to dbt by building the defining SQL transformation layer.</li></ul><p>Stage three:</p><ul><li>QA and add dbt tests to every dataset.</li><li>Deliver product, marketing, and company dashboards.</li></ul><p>Stage four:</p><ul><li>Complete infrastructure rebuild plan.</li><li>Cost-saving initiatives.</li><li>Optimize all queries and switch to incremental builds.</li><li>Ownership strategy for infrastructure.</li><li>Move to Airbyte and Snowflake for more control of their new backend RDS infra.</li><li>One-year hiring and team development plan.</li></ul><h3>Project outcome. 🚀</h3><p>Sketchy reported that executives trusted their new data platform. They had cross-company customer analytics that worked.</p><p>Self-serve analytics worked for Sketchy. 
Their customer insights from the platform drove a six-figure revenue lift in the first year by correctly identifying and removing account sharing.</p><p>They had…</p><ol><li>Trust in data.</li><li>Custom ad hoc requests that now require only a single new dashboard or SQL file added to the data model.</li><li>Centralized reporting that drives better collaboration and quicker customer insights.</li><li>$100K saved in year one from analytics insights.</li></ol><h3>Next steps with Atadataco.</h3><p><em>Achieved: a complete rebuild of Sketchy’s analytics stack and data pipeline.</em></p><p><em>Next Steps: Sketchy opted for more control and ownership of their data stack.</em></p><p>First on the roadmap was Airbyte + Snowflake to replace Fivetran + BigQuery for more flexibility and cost savings.</p><p>The second was to implement end-to-end integration testing.</p><p>The third was a formal experimentation engine to drive faster growth.</p><p>Now they had a collaborative, scalable Starter Stack. Now they could build their Growth Stack with confidence.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*2brzTft26pP8D5ls" /><figcaption>Sketchy Complete Startup Stack</figcaption></figure><h3>Trusted by high-growth companies and founders like Sketchy. 🤝</h3><p>Our managed data layer and building methods work whether you are starting from scratch or growing into a bigger data stack. We scale to match your needs and your business goals. Let us collaborate with you in every stage of development.</p><p>Our specialty is the startup space. 
We have built complete, trusted analytics platforms for many verticals including AdTech, e-Commerce, Fashion, FinTech, MarTech, Medical, Recruiting, SaaS, Sales, and Security.</p><p>Schedule a time for us to review your business needs and recommend a managed data stack sized perfectly for your company:</p><p><a href="https://www.atadataco.com/contact">Atadataco | Contact</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=75b50aa62801" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[An Overview Of The Future Of Analytics Engineering Talent]]></title>
            <link>https://atadataco.medium.com/an-overview-of-the-future-of-analytics-engineering-talent-896b04e16a2f?source=rss-5a1371b55f29------2</link>
            <guid isPermaLink="false">https://medium.com/p/896b04e16a2f</guid>
            <category><![CDATA[analytics]]></category>
            <category><![CDATA[founders]]></category>
            <category><![CDATA[freelancing-tips]]></category>
            <category><![CDATA[analytics-engineering]]></category>
            <category><![CDATA[modern-data-platform]]></category>
            <dc:creator><![CDATA[atadataco.com]]></dc:creator>
            <pubDate>Tue, 17 May 2022 20:05:09 GMT</pubDate>
            <atom:updated>2022-05-17T20:06:44.730Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/988/1*DKal47GluDB7usYV0pGodw.png" /><figcaption>Data Happiness</figcaption></figure><p>Today’s entrepreneurial marketplace is a hotbed of competition for analytics professionals. Many young companies and upstarts are realizing the need to hire fresh analytics engineers from the pool of applicants. To stand out from the crowd, start-ups are actively engaging the talent marketplace with jobs that go beyond traditional job descriptions, offering unique opportunities for advancement in the company or interesting, specialized roles. There is a tremendous amount of demand to fill positions in analytics, and many openings are not well advertised. As the analytics marketplace continues to evolve, these organizations must improve their hiring and retain the best analytics professionals in the industry.</p><h3><strong>Startups Must Press Hard for More Experienced Talent 📥</strong></h3><p>Startup companies and relatively new organizations must compete with larger firms for top talent. The larger firms typically have existing rapport with analytics professionals and established relationships with consulting firms, agencies, and recruiters. They are also willing to invest heavily in technology and other assets to promote their own company image. The best advice for start-ups is to be aggressive in their quest to recruit experienced and talented analytics professionals.</p><h3>The Old Paradigm: Hire on Upwork ⏰</h3><p>There are many ways for start-ups to tap into the world of analytics professionals. One way is to build the team with only freelancers and a product manager for oversight. In this scenario, the start-up company can source the analytics engineers on a freelance basis and pay only for the people and hours that can make a significant impact on the business. 
Additionally, smaller analytics firms may be willing to provide on-the-job training and mentoring from a more senior engineer. And don’t underestimate the value a junior hire will receive from working for and with you as an executive. Your skillset and your ability to clearly convey the business problem will be invaluable to them in the future.</p><h3>Try This Instead as a Startup Founder</h3><p>Recruiting professionals who have the skillset and knowledge necessary to become effective leaders and managers can be challenging for many organizations. However, with the right support from a consulting firm, an organization can attract and hire the best analytics professionals without having to pay exorbitant full-time salaries. Today, many recruiting agencies, such as ours, offer analytics recruiting services. These services target smaller firms in the analytics industry. In addition to offering recruitment services, these agencies can also provide mentoring and on-the-job training. This type of collaboration can prove very helpful for small businesses looking to build their own team of analytics engineers.</p><p>In conclusion, the future of analytics is vast and promising. If this idea of building your team with remote talent appeals to you, reach out to us at <a href="mailto:hello@atadataco.com"><strong>hello@atadataco.com</strong></a>. We will treat you as more than just a vendor at <a href="http://Atadataco.com"><strong>Atadataco.com</strong></a>. Our vision is to deliver the Modern Data Stack (MDS) to startups around the world. 
We build using a proven design and delivery process to give you a best-in-class data stack that can deliver more than customer analytics: customer delight and action.</p><p>Originally published at <a href="https://atadataco.com/blog">atadataco.com/blog</a> on January 15th, 2022.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=896b04e16a2f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Guest Post: An Overview of Testing Options in dbt]]></title>
            <link>https://atadataco.medium.com/guest-post-an-overview-of-testing-options-in-dbt-cdf4fe0e081d?source=rss-5a1371b55f29------2</link>
            <guid isPermaLink="false">https://medium.com/p/cdf4fe0e081d</guid>
            <category><![CDATA[data-model]]></category>
            <category><![CDATA[analytics-engineering]]></category>
            <category><![CDATA[data-validation]]></category>
            <category><![CDATA[dbt]]></category>
            <category><![CDATA[data-testing]]></category>
            <dc:creator><![CDATA[atadataco.com]]></dc:creator>
            <pubDate>Sun, 24 Oct 2021 20:20:20 GMT</pubDate>
            <atom:updated>2021-10-24T20:20:20.810Z</atom:updated>
            <content:encoded><![CDATA[<p>By <a href="mailto:joshuasmartolufemi@gmail.com">joshuasmartolufemi@gmail.com</a></p><figure><img alt="DBT Testing" src="https://cdn-images-1.medium.com/max/1024/1*IqQUg9HusIIWYI4Cn6N82w.jpeg" /></figure><p>As data practitioners, we need to ensure that data is accurate in order to build trust in the analytics we deliver. There are many ways to identify data quality exceptions, but we need a scalable approach when working with large amounts of data. We need a simple approach where a data practitioner can quickly analyze large datasets and identify these exceptions. This is where dbt comes in.</p><p><a href="https://getdbt.com/"><strong>dbt</strong></a>, also known as data build tool, is a data transformation tool that leverages templated SQL to transform and test data. It was developed by dbt Labs, formerly known as Fishtown Analytics. Data build tool is part of the modern data stack and helps practitioners apply software development best practices to data pipelines. Some of these best practices include code modularity, version control, and continuous testing via its built-in data quality framework. In this article, we will focus on how data can be tested with dbt via built-in functionality and with additional extensions. In future articles, we will dive deeper into each of these areas.</p><h3>Standard dbt tests</h3><p>In dbt there are two categories of tests: Generic tests (formerly known as schema tests) and Bespoke tests (formerly referred to as data tests). The difference between the two is that Generic tests are reusable functions for which you do not have to write the SQL query, while Bespoke tests are custom tests you write when a generic test does not cover your use case. Regardless of the type of test, behind the scenes the process is the same: dbt compiles the code to SQL and executes it against your database. 
If any rows are returned by the query, that indicates a failure.</p><h3>Generic (Schema) tests</h3><p>dbt Core ships with four basic tests:</p><p><strong>unique</strong>: a test to verify that every value in a column (e.g. customer_id) is unique. This is useful for finding records that may inadvertently be duplicated in your data.</p><p><strong>not_null</strong>: a test to check that the values for a given column are always present. This can help you find cases where data in a column suddenly arrives without being populated.</p><p><strong>accepted_values</strong>: this test validates that every value within a column belongs to a defined set. For example, in a column called <strong>payment_status</strong>, there can be values like <strong>pending</strong>, <strong>failed</strong>, <strong>accepted</strong>, <strong>rejected</strong>, etc. The test verifies that each row within the column contains one of those payment statuses, and no other. This is useful to detect changes in the data, such as <strong>accepted</strong> being replaced with <strong>approved</strong>.</p><p><strong>relationships</strong>: these tests check referential integrity. This type of test is useful when you have related columns (e.g. product id) in two different tables. One table serves as the “parent” and the other is the “child” table. This is common when one table has a transaction and only lists a customer_id and the other table has the details for that customer. With this test we can verify that every row in the transaction table has a corresponding record in the dimension/details table. For example, if you have orders for customer_ids 1, 2, 3 we can validate that we have information about each of these customers.</p><p>Generic tests were enhanced in dbt version 0.20. Now a <strong>where</strong> clause can be added to a generic test to focus the test on a particular range of rows. 
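</p><p>For illustration, the four built-in tests above, plus the newer <strong>where</strong> config, can all be declared in a model’s YAML file. This is a minimal sketch; the model and column names here are hypothetical:</p><pre>version: 2
models:
  - name: payments                # hypothetical model name
    columns:
      - name: payment_id
        tests:
          - unique
          - not_null
      - name: payment_status
        tests:
          - accepted_values:
              values: ['pending', 'failed', 'accepted', 'rejected']
      - name: customer_id
        tests:
          - relationships:
              to: ref('customers')
              field: customer_id
              config:
                where: "created_at > '2021-01-01'"  # test only recent rows</pre><p>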
This can be useful on large tables by limiting the test to recent data. Another improvement is the ability to define severity based on the number of exceptions. This would allow a test to pass with a warning below a certain threshold and become a true error above that threshold.</p><h3>Bespoke (Data) tests</h3><p>Bespoke/custom tests allow you to create tests when the generic ones (or the ones in the packages discussed below) do not meet your needs. These tests are simple SQL queries that express assertions about your data. An example of this type of test is that sales for one product should be within +/- 10% of another product. The SQL simply needs to return the rows that do not meet that condition.</p><h3>Tests in dbt-utils package</h3><p>In addition to the generic tests that can be found within dbt Core, there are many more in the dbt ecosystem. These tests are found in dbt packages. Packages are libraries of reusable SQL code created by members of the dbt community. We will briefly go over some of the tests that can be found in these packages.</p><p>The <a href="https://hub.getdbt.com/dbt-labs/dbt_utils/latest/"><strong>dbt-utils</strong></a> package is a library that was created by dbt Labs. It contains special schema tests, SQL generators, and macros. Some of the tests in the <strong>dbt_utils</strong> package include:</p><p><strong>not_accepted_values</strong>: this test is the opposite of the <em>accepted_values</em> test and is used to check that specific values are NOT present in a particular range of rows.</p><p><strong>equal_rowcount</strong>: this test checks that two different tables have the same number of rows. This is a useful test that can ensure that a transformation step does not accidentally introduce additional rows in the target table.</p><p><strong>fewer_rows_than</strong>: this test is used to verify that a target table contains fewer rows than a source table. 
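</p><p>As a sketch of how these package tests are wired up, the two row-count tests might be applied at the model level like this (the model names are made up for illustration):</p><pre>version: 2
models:
  - name: fct_orders                # hypothetical transformed model
    tests:
      # transformation should not add or drop rows vs. its staging source
      - dbt_utils.equal_rowcount:
          compare_model: ref('stg_orders')
  - name: orders_daily              # hypothetical daily aggregate
    tests:
      # an aggregate should have fewer rows than the table it rolls up
      - dbt_utils.fewer_rows_than:
          compare_model: ref('fct_orders')</pre><p>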
For example, if you are aggregating a table, you expect that the target table will have fewer rows than the table you are aggregating. This test can help you validate this condition.</p><p>There are several additional tests available in the dbt-utils package which can be found in the <a href="https://hub.getdbt.com/dbt-labs/dbt_utils/latest/"><strong>documentation</strong></a>.</p><h3>Tests in dbt-expectations</h3><p>Another package that can accelerate your data testing is <a href="https://github.com/calogica/dbt-expectations"><strong>dbt-expectations</strong></a>. This package is a port of the awesome Python library <strong>Great Expectations</strong>. For those not familiar, Great Expectations is an open-source Python library that is used for automated testing. <strong>dbt-expectations</strong> is modeled after this library and was developed by <a href="https://calogica.com/"><strong>Calogica</strong></a> so dbt practitioners would have access to an additional set of pre-created Generic tests for dbt testing. Tests in <strong>dbt-expectations</strong> are divided into seven categories:</p><ul><li>Table shape</li><li>Missing values, unique values, and types</li><li>Sets and ranges</li><li>String matching</li><li>Aggregate functions</li><li>Multi-column</li><li>Distributional functions</li></ul><p>We will focus on the tests in each category in a future article.</p><h3>Conclusion</h3><p>Getting started with dbt testing is simple thanks to the predefined tests found within dbt Core and the additional tests found in dbt-utils and dbt-expectations. These can be used to ensure various aspects of data quality. In all, dbt Core has four tests, the <strong>dbt-utils</strong> package has 12 tests, and the dbt-expectations package has 58 tests, for a total of 74 tests we don’t need to write. 
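</p><p>To make the Bespoke route concrete, the product-sales assertion described earlier could live as a single SQL file in the tests/ directory; the test passes when the query returns no rows (table and column names are hypothetical):</p><pre>-- tests/assert_product_a_sales_within_10_pct.sql
-- Fails on any day where product A sales fall outside +/- 10% of product B
with daily as (
    select
        order_date,
        sum(case when product = 'A' then amount end) as a_sales,
        sum(case when product = 'B' then amount end) as b_sales
    from {{ ref('fct_sales') }}
    group by order_date
)
select *
from daily
where a_sales not between b_sales * 0.9 and b_sales * 1.1</pre><p>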
When we need to deviate from those tests, we can create our own using Bespoke tests, and if they are reusable, we can turn them into macros that we can share within our organization or with the dbt community. These tests can help you increase trust in your analytics by alerting you to error conditions before your users notice. To find out more about dbt and its capabilities, dbt Labs has a free course that can introduce you to the basics of dbt <a href="https://learn.getdbt.com/collections"><strong>here</strong></a>.</p><h3>References</h3><p><a href="https://discourse.getdbt.com/t/creating-an-error-threshold-for-schema-tests/966"><strong>https://discourse.getdbt.com/t/creating-an-error-threshold-for-schema-tests/966</strong></a></p><p><a href="https://docs.getdbt.com/docs/building-a-dbt-project/tests"><strong>https://docs.getdbt.com/docs/building-a-dbt-project/tests</strong></a></p><p><a href="https://docs.getdbt.com/docs/building-a-dbt-project/tests#bespoke-tests"><strong>https://docs.getdbt.com/docs/building-a-dbt-project/tests#bespoke-tests</strong></a></p><p><a href="https://docs.getdbt.com/faqs/custom-test-thresholds"><strong>https://docs.getdbt.com/faqs/custom-test-thresholds</strong></a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cdf4fe0e081d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Your First Analytics Engineering Job Should be in Consulting]]></title>
            <link>https://atadataco.medium.com/your-first-analytics-engineering-job-should-be-in-consulting-1d3c0cf1b91d?source=rss-5a1371b55f29------2</link>
            <guid isPermaLink="false">https://medium.com/p/1d3c0cf1b91d</guid>
            <category><![CDATA[analytics-engineering]]></category>
            <category><![CDATA[first-job]]></category>
            <category><![CDATA[analytics]]></category>
            <category><![CDATA[data-science-job]]></category>
            <category><![CDATA[hiring-for-startup]]></category>
            <dc:creator><![CDATA[atadataco.com]]></dc:creator>
            <pubDate>Sun, 24 Oct 2021 20:14:33 GMT</pubDate>
            <atom:updated>2022-05-17T19:38:50.285Z</atom:updated>
            <content:encoded><![CDATA[<p>Hiring managers are looking foremost for proof of basic Analytics Engineering skills in candidates applying for their first analytics engineering job. A portfolio with a data cleaning project and a data storytelling project will get you hired quicker than machine learning or competition projects alone.</p><figure><img alt="Study for first Data Job" src="https://cdn-images-1.medium.com/max/1024/1*l6I44Q6WLyGuf1CBC-BTOA.jpeg" /></figure><p><strong>STOP COMPETING IN KAGGLE COMPETITIONS</strong></p><p>Don’t get me wrong, Kaggle.com is great. But put yourself in the shoes of the hiring manager. Competitions do not show you can set up an environment or that you can handle any other file format besides a CSV of cleaned data. Do one or two competitions. Then focus on getting projects that are as close to the daily tasks of an analytics engineer as possible. I say this because your primary goal is to have 2–3 projects on your resume that demonstrate that you can tackle a problem from start to finish. Completing a ‘full-stack’ data project will always impress me more than demonstrating one’s ability to apply a model. I will give my opinion on the single best place to get this experience, but first, let’s talk about what is needed for a good resume.</p><p><strong>SHOW YOU CAN CLEAN DATA FIRST</strong></p><p>Hireability (or a good resume) comes down to proof of experience. I don’t need to see how you tried 10 overtrained models to handle this one training dataset. In fact, think about this in production. When would you ever expect to build a model that isn’t retrained or used beyond a single run? This single-run model development is more like a Proof-Of-Concept (POC) project than anything else, and it is prone to creating overfitted models that would never survive on real data. Everywhere I look, everyone says data cleaning is the biggest part of data science, so I know I’m not overstating it here. 
Ask someone farther along in the industry than you: “<em>What are the core Analytics Engineering skills I would need in my first 90 days at a startup?</em>”</p><p>Take the advice of experienced people and lay out the skills you would need for a project. For example, if I were trying to land a job at a SaaS B2C company, I would know they are likely to need a certain set of skills:</p><ol><li>Pulling together onboarding and marketing data and building retention funnels</li><li>Creating subscription or Life-Time-Value (LTV) analysis in cohort form</li><li>Communicating requirements for landing page tracking and a/b testing</li><li>Creating technical requirements from business stakeholders</li></ol><p>When I look at a resume and see that someone is just starting out in the field, I do not consider this a red flag against them. I immediately look at their school projects, internships, or alternate learning opportunities to see if they would be a good fit. I will always suggest two projects for the candidate applying for their first analytics engineering job: a data cleaning project and a data storytelling project.</p><p><strong>BEST OPTION: BUILD A PORTFOLIO WITH CONSULTING PROJECTS</strong></p><p>I’ve tried many types of projects and presentations of project summaries on resumes over the years. My experience has taught me that the best projects can be found in consulting jobs. They are often already divided into that perfect 1–6 week timeframe. Consulting projects seek to solve an actual business problem with an actual business mess (which helps a lot when writing about it on a resume). Additionally, they are partially constrained by the fact that the project manager had to spend some time thinking about the definition and steps of the project.</p><p>My advice is to try to find 2 to 3 consulting projects/externships while looking and applying for jobs. As you finish up each individual stage of the project, stop to write down a little summary. 
Think about what skills a hiring manager would be looking for and craft a couple of sentences that show you understood the business problem and know how to technically address it. By the time you have a couple of these summaries, you will have a complete project section for your resume and your new job. Reach out to tell me how it goes at <a href="mailto:hello@atadataco.com"><strong>hello@atadataco.com</strong></a>. And if you want to join our freelancer network to get your first job, visit us at <a href="http://atadataco.com">atadataco.com</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1d3c0cf1b91d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Startup Growth Kpis]]></title>
            <link>https://atadataco.medium.com/startup-growth-kpis-a0f1dcc0ad0b?source=rss-5a1371b55f29------2</link>
            <guid isPermaLink="false">https://medium.com/p/a0f1dcc0ad0b</guid>
            <category><![CDATA[analytics]]></category>
            <category><![CDATA[consulting-strategy]]></category>
            <category><![CDATA[growth-marketing]]></category>
            <category><![CDATA[product-kpis]]></category>
            <dc:creator><![CDATA[atadataco.com]]></dc:creator>
            <pubDate>Sun, 06 Sep 2020 17:23:14 GMT</pubDate>
            <atom:updated>2020-09-06T17:47:42.759Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/563/1*p3jIiF3rhF1hGFYK_IkS7Q@2x.jpeg" /></figure><p>The term “North Star metric” is used by many companies: Amplitude, Mixpanel, and Lean Analytics, to name a few. The concept is simple: pick a North Star metric that’s most predictive of a company’s long-term success. I want you to think of it as your company’s leading indicator of revenue: something that creates customer value and is measurable and actionable.</p><p><strong>Why is a North Star metric important?</strong></p><p>Let’s first look at how others have implemented it for their own customer analytics platforms.</p><p><strong><em>Mixpanel</em></strong></p><p>“Teams use North Star <a href="https://mixpanel.com/topics/lean-product-development-metrics/">metrics</a> to get everyone in a company focused on one goal. Ninety percent of the world’s data has been created <a href="https://www.mediapost.com/publications/article/291358/90-of-todays-data-created-in-two-years.html">in just the past few years</a> and the profusion of analytical possibilities allow every department, every team, and even every contributor to chase their own metrics. If each team defines goals differently, they can work against each other and duplicate effort. When startup investor <a href="https://earthsky.org/brightest-stars/polaris-the-present-day-north-star">Sean Ellis</a> coined the term “North Star metric,” he intended it to reduce administration, simplify meetings, and align teams around the singular goal of growth. The term North Star metric — drawn from the common name for Polaris, the star that lies directly above the Earth’s Northern pole — is mostly rhetorical. Companies with complex business models can have multiple North Stars, and any given North Star metric is composed of sub-metrics anyway. Any company that literally foreswore all metrics in favor of just one, such as recurring revenue, would almost certainly fail. 
The North Star metric is, simply, an exercise in simplifying the overall company strategy into terms all can remember, understand, and apply. North Star metrics are not to be confused with the acronym OMTM, or one metric that matters, a term <a href="http://leananalyticsbook.com/one-metric-that-matters/">popularized by the authors of Lean Analytics</a>. There’s a subtle yet meaningful difference. OMTM is intended to mean “one metric that matters right now” and is a leadership tactic for fixing a short-term problem, whereas a North Star metric is intended as a long-term guide. Though, just like the real North Star, it too is impermanent. When the Egyptians built the pyramids, the Earth had a different North Star — <a href="https://earthsky.org/brightest-stars/polaris-the-present-day-north-star">Thuban</a> — but it’s crept out of alignment, just as Polaris will in time. Companies should feel equally free to reevaluate their North Star metrics to make sure they still point the right direction, and amend them when they prove flawed.” (mixpanel.com/topics/north-star-metric/)</p><p><strong><em>Amplitude</em></strong></p><p>“A north star metric is the key measure of success for the product team in a company. It defines the relationship between the customer problems that the product team is trying to solve and the revenue that the business aims to generate by doing so.</p><p>This serves three critical purposes in any company:</p><p>It gives your organization clarity and alignment on what the product team needs to be optimizing for and what can be traded off.</p><p>It communicates the product organizations’ impact and progress to the rest of the company — resulting in more support and acceleration of strategic product initiatives.</p><p>Most importantly, it holds the product accountable to an outcome.</p><p>In most companies, product teams are measured by how much they ship, not on the impact they have on the business. 
Without an impact driven culture in product, you can’t influence the destiny of your business. Without a north star, you can’t have a <a href="https://amplitude.com/blog/2017/10/04/thrive-product-led-era">product-led company</a>.” (https://amplitude.com/blog/2018/03/21/product-north-star-metric)</p><p><strong>Examples of North Star metrics </strong>(suggestions from Mixpanel)</p><p>E-commerce</p><ul><li>Number of weekly customers completing their first order</li><li>Value of daily purchases</li><li>Lifetime value</li></ul><p>Consumer tech</p><ul><li>Number of daily active users (DAU)</li><li>Number of messages sent per day (habit metric)</li><li>Retention</li></ul><p>B2B SaaS</p><ul><li>Number of trial accounts with over 3 users in their first week</li><li>Percentage retention</li></ul><p>Media</p><ul><li>Signups and retention</li><li>Number of daily active visitors</li><li>Total read time</li><li>Total watch time</li></ul><p>Fintech</p><ul><li>Total assets under management</li><li>Number of daily active users</li></ul><p><strong>Deciding What’s Right for You</strong></p><p>To find their North Star metric, companies must decide what is truly essential to the business. Companies are complex and succeed and fail for lots of reasons. But what are the pillars to the business that are, as an architect might say, load-bearing? That if they alone failed, would ruin the company? For many teams, that’s <a href="https://mixpanel.com/blog/2018/02/01/product-teams-need-ux-research/">making customers happy</a>, <a href="https://mixpanel.com/topics/how-to-develop-and-measure-a-user-adoption-strategy/">generating profit</a>, and <a href="https://mixpanel.com/blog/behavioral-analytics-guide/">measuring progress</a> toward those goals. A metric that simply makes money without satisfying customers will fail in the long run, as will a company that satisfies customers without being profitable. 
And a metric that doesn’t measure progress in a way that allows teams to act on its insights and change their behaviors isn’t useful. A North Star metric must reflect all three factors, tailored to each business. To find your North Star metric:</p><ol><li>Ask: what is essential to the business’s functioning? Prioritize a list.</li><li>Ask: what <a href="https://mixpanel.com/topics/important-user-engagement-metrics-apps/">KPIs and metrics</a> measure the top few key factors?</li><li>Ask: what metric encapsulates all of the above?</li><li>Build a metric hierarchy, with the North Star metric at the top of the pyramid.</li></ol><p>Like a seed, North Star metrics need fertile ground to grow. Companies that select a North Star need the right culture and infrastructure. Without cross-silo relationships and a willingness to prioritize the company’s good above the team’s good, some employees may reject the North Star metric, especially if they must change their behavior significantly, or if, like many sales teams, their compensation structure presents a conflict of interest. Companies also need <a href="https://mixpanel.com/topics/user-analytics-overview/">the right analytics tools</a> to measure progress toward their North Star metric and sub-metrics. Without user-friendly analytics that teams can access at a whim, companies can’t tell whether they’re succeeding and can’t course-correct. Most teams find <a href="https://mixpanel.com/topics/user-analytics-overview/">user analytics</a> vital to measuring their North Star metric. User analytics provide user-level insights that most analytics platforms — especially free ones — don’t capture.
User analytics:</p><ul><li><a href="https://mixpanel.com/topics/introduction-unified-user-profiles/">Track individual users across platforms</a></li><li>Generate reports and dashboards</li><li><a href="https://mixpanel.com/data-science/">Use machine learning to detect anomalies in the data</a></li><li>Offer a user-friendly interface so the whole team can access insights</li></ul><p>North Star metrics can be an effective strategy for aligning all teams around a singular goal, provided they’re not taken too literally, are supported by a flexible culture, and are measured with analytics tools that help teams tell whether the metric is still guiding the way.</p><p>For my own work in the fintech space, my north star metric was a habit metric. Lots of companies have them, and they are usually retention metrics. Pinterest, for example, drives weekly repins to build usage, and that retention metric leads to the habit metric of pinning on 4 of the first 28 days. I picked a similar time frame for my fintech north star, since users must wait for their card to arrive. For me, 28 days worked best: my key performance metric became 3 transactions in the first 28 days, with the number of days to first funding as the leading indicator.
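</p><p>To make the metric definitions above concrete, here is a minimal Python sketch of both the key performance metric and its leading indicator; the event records and all names in it are hypothetical, not drawn from any real schema:</p><pre>
```python
from typing import Optional

# Hypothetical event records: (user_id, event, days_since_signup).
# The layout and values are illustrative only.
EVENTS = [
    ("u1", "funded", 2), ("u1", "purchase", 3),
    ("u1", "purchase", 10), ("u1", "purchase", 27),
    ("u2", "funded", 20), ("u2", "purchase", 25),
]

def days_to_first_funding(events, user_id) -> Optional[int]:
    """Leading indicator: days from signup to the user's first funding event."""
    days = [d for (u, e, d) in events if u == user_id and e == "funded"]
    return min(days) if days else None

def hit_habit_metric(events, user_id, window=28, threshold=3) -> bool:
    """Key performance metric: at least `threshold` transactions
    within the first `window` days after signup."""
    n = sum(1 for (u, e, d) in events
            if u == user_id and e == "purchase" and d < window)
    return n >= threshold

print(days_to_first_funding(EVENTS, "u1"))  # 2
print(hit_habit_metric(EVENTS, "u1"))       # True: 3 purchases within 28 days
print(hit_habit_metric(EVENTS, "u2"))       # False: only 1 purchase
```
</pre><p>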
This way, if I can get a user to fund their account more quickly, I can unlock their ability to purchase and demonstrate the value of my fintech platform.</p><p><strong>Example Diagrams</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/539/1*tLjK0cjDNimCKzDRlqR5cQ@2x.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/544/1*nQ9-xXceTHWwRRjMkn_Tog@2x.jpeg" /></figure><p><strong>Contact Us</strong></p><p>If you would like help defining the north star metric for your startup, schedule time with our <a href="https://www.upwork.com/o/profiles/users/~01aa6846782e36bd02/?s=1110580752008335360">Analytics Consulting Offering</a> to review your retention metrics and pick the one that best defines your success and growth!</p><p>North Star image courtesy of: <a href="https://unsplash.com/@_louisreed">https://unsplash.com/@_louisreed</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a0f1dcc0ad0b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[YOUR DATA SCIENCE TEAM MUST BE BUILDING]]></title>
            <link>https://atadataco.medium.com/your-data-science-team-must-be-building-6ff1c9590d6?source=rss-5a1371b55f29------2</link>
            <guid isPermaLink="false">https://medium.com/p/6ff1c9590d6</guid>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[consulting]]></category>
            <category><![CDATA[analytics]]></category>
            <category><![CDATA[analytics-platforms]]></category>
            <category><![CDATA[analysis]]></category>
            <dc:creator><![CDATA[atadataco.com]]></dc:creator>
            <pubDate>Sat, 05 Sep 2020 21:11:40 GMT</pubDate>
            <atom:updated>2020-09-05T21:25:53.171Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>YOUR DATA SCIENCE TEAM MUST BE BUILDING</strong></p><p>“Why did the Facebook/Google/etc install conversion tank on Friday?” Have you ever been asked this question as a data team? I have, and so has everyone I know in San Francisco data science. This question asks for historical context on an abstract user-behavior problem. You need to make it your team’s goal that this question is already answered automatically and presented to the business stakeholder before the end of the day on Friday. I’m not suggesting bringing in machine learning to anticipate every possible question. No, I’m suggesting your data team should build self-service analytics and data products into the core functionality and actions of the company.</p><p>One thing separates a successful data org from one that constantly builds dashboards and reactively fixes reporting on old data: the successful org builds data products. That is not to say I never build dashboards or occasionally put out fires and deliver low-impact, last-minute answers to critical business concerns that tomorrow won’t be important or remembered. I’m talking about building data best practices, design, and core infrastructure. You need to get your org to the level on the data science maturity curve where you are building products that can be consumed by the rest of the company instead of quick-fix dashboards and analytics. To put it one more way before moving on to what this looks like: focus on building predictive tools and data insights as opposed to infrastructure that only reports explanations of what happened last week.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/295/1*oL4b7Z2JlqxTAS6Rf4AkzA@2x.jpeg" /></figure><p>Courtesy of Gartner.</p><p><strong>What are Data Products?</strong></p><p>“Data products” refers more to how your team builds insights, models, and infrastructure projects than to who sees the result.
Data products usually serve other internal teams as well-designed, well-executed products rather than one-off ad hoc analytics. Building them means:</p><ul><li>Documenting them to allow for future repeatability and versioning in some common metadata store like GitHub.</li><li>Building a unified data model with defined datasets, a single source of data, and summary tables.</li><li>Setting up flexible, self-serve BI tools and follow-up data training so most questions are answered with existing dashboards.</li></ul><p>Founding Analytics can jump-start your first analytics platform build and set up a powerful analytics database to answer the most pressing business questions!</p><p>Start here or connect by email: <a href="https://www.upwork.com/o/profiles/users/~01aa6846782e36bd02/">https://www.upwork.com/o/profiles/users/~01aa6846782e36bd02/</a></p><p>Inspired by: <a href="https://dataform.co/blog/great-data-teams-build-products">https://dataform.co/blog/great-data-teams-build-products</a></p><p>#datascienceconsulting #analytics #consulting #sql #snowflakedb</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6ff1c9590d6" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>