How Jyve Leveraged Datacoral’s Batch Compute to Better Understand its Marketplace

Sean Guntvedt
Published in Datacoral
Jun 16, 2020 · 5 min read

Jyve: Company Background

Jyve serves the grocery and CPG industry by connecting brands and retailers with a pool of skilled labor professionals, called Jyvers, who are trained to complete jobs like restocking shelves and auditing goods for expiry. As a gig-economy platform, Jyve understands that success lies in making both its partners and its Jyvers happy, and that data is an invaluable tool for gaining insight into its marketplace dynamics. Focusing on the needs of its partners and Jyvers is a mission unique to Jyve, so when Jyve's tech organization began growing its data team, it made sense to find a solution that would let the company concentrate its resources on these Jyve-specific challenges.

Early in our data journey at Jyve, we decided that Datacoral's Data Engineering Platform would give data scientists like myself, as well as our data analysts, the ability to quickly set up data pipelines without having to worry about the plumbing itself. We leveraged our domain-specific knowledge to build pipelines that drove reporting from various SaaS services, such as Zendesk and Fountain, as well as from Jyve's own production database. Our team achieved this without getting involved in the intricacies of managing the company's data infrastructure stack.

Challenge

As with all marketplaces, understanding liquidity, the efficiency with which a marketplace matches buyers (Jyve's partnered brands and retailers) and sellers (Jyve's contractors, known as Jyvers), is essential to a successful platform, and that understanding must be continuously refined as more information becomes available. With mobile data newly warehoused via Segment, Jyve set out to deepen its knowledge of its marketplace dynamics. More specifically, the goal was to determine the geographical density of Jyvers relative to the stores listed, so that Jyve could better serve stores that were historically understaffed and alert nearby Jyvers as jobs opened up. This project required a pipeline dynamic enough to react to changes in user behavior over time, yet able to distill potentially millions of data points for a single user into one value, so that downstream teams could use the result without performing complex geo-calculations every time. Without mobile data, reporting on user behavior relied on self-reported addresses, which broke down for users looking for work away from their permanent mailing address, such as college students.

Solution

Python Transformations in Jyve’s Data Pipeline

Because aggregating millions of geo-points is too complex to handle in a SQL environment alone, Datacoral recommended its Batch Compute feature, which lets users create and run a Python user-defined function (UDF). The feature works as follows (a minimal sketch of the contract appears after the list):

  • Jyve’s data science to provide a SQL query to fetch data from Redshift
  • This would be converted into a Pandas DataFrame and serve as the input to a Python UDF provided by Jyve
  • The UDF would return another DataFrame that would be written as a table in Redshift
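Datacoral's documentation defines the exact interface; purely as a sketch of the contract just described, where the entry-point name and signature are assumptions for illustration rather than Datacoral's actual API:

```python
import pandas as pd

# Hypothetical entry point -- the real name and signature come from
# Datacoral's Batch Compute docs. The contract: the SQL query's
# result arrives as a pandas DataFrame, and whatever DataFrame the
# function returns is written back to Redshift as a table.
def transform(input_df: pd.DataFrame) -> pd.DataFrame:
    output_df = input_df.copy()  # arbitrary pandas/Python logic here
    return output_df
```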

Datacoral provides data infrastructure essentials so that its customers can focus on doing what they do best: applying domain-specific knowledge in creating customer-specific solutions.

With Datacoral taking care of the data infrastructure (running the UDFs in a Python container), Jyve could focus on writing the input SQL script that encodes its business logic, such as removing user geo-points recorded while users were working a job, and on applying geospatial indexing techniques to group coordinates in its Python script. Better yet, deploying this pipeline on Jyve's side was as simple as writing the appropriate SQL and Python scripts (examples below) and pushing the files with a few Datacoral CLI commands, a self-serve process with clear instructions in the documentation. Because Batch Compute materialized views are compatible with existing Datacoral DAGs, querying the distilled user location data alongside production data unlocked a myriad of opportunities for downstream users to perform complex geolocation analysis without wrangling the upstream mobile data, such as correlating the density of Jyvers within a kilometer radius of a store with increased spend (a sketch of such a query follows) or improving the existing notification ranking model.
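For instance, once the UDF's output table (user_location, listed below) carries one representative geospatial cell per Jyver, a density query becomes a simple join. A minimal sketch, assuming both user_location and store carry an h3_cell column at the same resolution; these column names are illustrative, not Jyve's actual schema:

```sql
-- Hypothetical density query: count the distinct Jyvers whose
-- distilled home cell matches each store's cell. Cell equality
-- approximates "nearby" at a roughly kilometer-scale resolution.
SELECT s.id AS store_id,
       COUNT(DISTINCT ul.user_id) AS nearby_jyvers
FROM store s
LEFT JOIN user_location ul
       ON ul.h3_cell = s.h3_cell
GROUP BY s.id
ORDER BY nearby_jyvers DESC;
```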

  • Input Redshift table: app_location (Segment to Redshift)
  • Input Redshift table: store (synced from Postgres by Datacoral)
  • Input SQL query joining the tables above (sketched below)
  • Input Python script for grouping geo-locations: udf.py (sketched below)
  • Output Redshift table: user_location
Figure: Coordinates can be bucketed by hexagonal geospatial indexes to determine density in an area, such as the locations of cars in a city.
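As an illustration of the input query, here is one way the join and the "remove geo-points while on the job" logic might look; every table and column beyond app_location and store is an assumption, and the bounding-box anti-join is only a crude stand-in for Jyve's actual business logic:

```sql
-- Illustrative input query (schema details are assumptions).
-- Pulls recent Jyver geo-points, dropping pings recorded at or
-- near a store as a rough proxy for "user was working a job".
SELECT al.user_id,
       al.latitude,
       al.longitude
FROM app_location al
LEFT JOIN store s
       ON ABS(al.latitude  - s.latitude)  < 0.001
      AND ABS(al.longitude - s.longitude) < 0.001
WHERE s.id IS NULL
  AND al.recorded_at >= DATEADD(day, -90, GETDATE());
```

And here is a sketch of what udf.py could look like using hexagonal geospatial indexing, as pictured above. The entry-point name, the H3 resolution, and the column names are assumptions:

```python
# udf.py -- illustrative sketch, not Jyve's actual script.
# Uses the h3 library's v3 API (h3.geo_to_h3 was renamed
# latlng_to_cell in h3 v4).
import h3
import pandas as pd

H3_RESOLUTION = 7  # hexagons with ~1.2 km edges; real value unknown


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Distill many geo-points per user into a single value:
    the hexagonal cell where that user most often appears."""
    df = df.dropna(subset=["latitude", "longitude"]).copy()

    # Bucket every ping into a hexagonal cell.
    df["h3_cell"] = [
        h3.geo_to_h3(lat, lng, H3_RESOLUTION)
        for lat, lng in zip(df["latitude"], df["longitude"])
    ]

    # Keep one row per user: their modal (most-visited) cell.
    modal = (
        df.groupby(["user_id", "h3_cell"])
        .size()
        .reset_index(name="ping_count")
        .sort_values("ping_count", ascending=False)
        .drop_duplicates(subset="user_id")
    )
    return modal[["user_id", "h3_cell", "ping_count"]]
```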

Result

With the Batch Compute stack running successfully, Jyve now has a clearer lens into how its Jyvers interact with its platform. Not only does Jyve benefit from being able to accurately quantify Jyver density in its marketplace, but so do its Jyvers, who can now be notified more appropriately when jobs become available in their area, or recommended certifications for the brands whose nearby jobs most often require them. This, in turn, helps Jyve's partnered customers (brands and retailers), as the platform can recommend optimal areas to expand into, where additional jobs can be met with the capacity to fill them. Datacoral's Batch Compute transformations have supported Jyve in becoming a more data-informed marketplace that elevates its Jyvers and clients by letting both sides tap into nearby, available opportunities.

About Datacoral

Datacoral is a Data Engineering Platform that allows users to quickly build data pipelines in the cloud. Customers can build observable data pipelines with no-code/low-code connectors to read any data source and write to any data lake or warehouse, SQL to specify multi-level transformations in multiple data lakes and data warehouses, and connectors to publish data to applications. Datacoral offers unprecedented security and scalability through its cloud-prem software delivery model powered by serverless microservices.

Reach out to us at hello@datacoral.co or https://datacoral.com/request-info to learn more about our Batch Compute transformations and the overall data engineering platform.
