Engineering at Acast

Published in Acast Tech Blog · Apr 24, 2020

By: Daniel Grefberg, Thomas Darr, and Aysin Oruz

Acast offers the most sophisticated tools in the podcast industry to all content creators, large and small. We’re aiming to be pioneers in the podcast space by gathering, analyzing, and utilizing data at scale, but that means we face a wide variety of technical challenges. To overcome them, we use a collection of different technologies.

We’ve been focusing on growing our engineering operations to enable autonomous, cross-functional, domain-driven teams that can handle problems within their areas of expertise and create solutions that put us ahead of the rest of the podcasting industry. During this growth, we also pay attention to diversity and to bringing in skills that will keep Acast ahead of the competition.

We’ve written about our tech stack before, but this is a fast-moving industry, so we thought it might be time for an update.

Autonomy in teams

We believe in team autonomy, guided by best practices and a set of general principles we’ve agreed on collectively as an organization. This means each team is encouraged to experiment with different technologies and methodologies, to ensure we’re using the most appropriate tools for the task at hand. While decisions might be questioned from time to time, no one will ever be stopped from doing things their own way if they can present a compelling argument. Rules, after all, are made to be broken.

What follows, then, is a description of our general principles.

Tech

Acast tech today is primarily Docker containers running Node.js, managed by ECS. Services talk to each other via their APIs or via messaging (SNS and SQS), and use Postgres, Elasticsearch, and/or Redis for persistence. Let’s dive a little deeper.
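To make the messaging part concrete, here’s a minimal sketch of one service publishing a domain event to SNS and another consuming it from an SQS queue subscribed to that topic. The topic, queue, and event names are hypothetical, not taken from our actual systems:

```typescript
// Minimal sketch: SNS fan-out to an SQS queue, using the AWS SDK v2.
import { SNS, SQS } from 'aws-sdk';

const sns = new SNS({ region: 'eu-west-1' });
const sqs = new SQS({ region: 'eu-west-1' });

// Producer side: publish an "episode published" event.
export async function publishEpisodePublished(episodeId: string) {
  await sns
    .publish({
      TopicArn: process.env.EPISODE_TOPIC_ARN!,
      Message: JSON.stringify({ episodeId }),
    })
    .promise();
}

// Consumer side: long-poll the queue, handle each message, then delete it.
export async function pollOnce() {
  const { Messages = [] } = await sqs
    .receiveMessage({
      QueueUrl: process.env.EPISODE_QUEUE_URL!,
      WaitTimeSeconds: 20, // long polling
    })
    .promise();

  for (const message of Messages) {
    // SNS wraps the payload in an envelope when fanning out to SQS.
    const { episodeId } = JSON.parse(JSON.parse(message.Body!).Message);
    console.log('episode published:', episodeId);
    await sqs
      .deleteMessage({
        QueueUrl: process.env.EPISODE_QUEUE_URL!,
        ReceiptHandle: message.ReceiptHandle!,
      })
      .promise();
  }
}
```

Decoupling producers from consumers this way means a slow or failing consumer never blocks the service that emitted the event.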

In days gone by, our backend code at Acast was written in C#. We aim to follow the Boy Scout Rule — “always leave the codebase cleaner than you found it” — but some of that legacy code remains in use today.

The majority of our software is written in JavaScript, but TypeScript, which offers a better development experience, is becoming increasingly common. Its type annotations provide better autocompletion and serve as a form of documentation, making code easier to read, write, and reason about. Furthermore, static type checking helps to prevent an entire class of runtime errors that would otherwise require dynamic type checking and exhaustive automated testing.
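As a toy illustration, a type annotation catches a misspelled field at compile time, where plain JavaScript would silently produce NaN at runtime:

```typescript
interface Episode {
  id: string;
  durationSeconds: number;
}

function totalDuration(episodes: Episode[]): number {
  return episodes.reduce((sum, ep) => sum + ep.durationSeconds, 0);
}

totalDuration([{ id: 'a1', durationSeconds: 1800 }]); // OK: 1800
// totalDuration([{ id: 'a1', duration: 1800 }]);     // compile-time error
```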

Although teams can try whatever framework they like, our current application frontends are all written in React. This helps when it comes to cross-team knowledge sharing and code reuse, while still leaving a lot of room for team-specific solutions. A good example of this is Decibel, our in-house component library that every team contributes to with the help of our designers. Decibel enables each team to develop its UIs faster, focusing on what’s important without neglecting our design guidelines and accessibility requirements.
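Decibel itself is internal, so the following is purely illustrative (the package name and component props are invented, not Decibel’s real API), but consuming a shared component library looks roughly like this:

```typescript
import React from 'react';
import { Button, Stack } from '@acast/decibel'; // hypothetical package name

export function PublishControls({ onPublish }: { onPublish: () => void }) {
  return (
    <Stack spacing="small">
      {/* Shared components carry our design and accessibility defaults. */}
      <Button variant="primary" onClick={onPublish}>
        Publish episode
      </Button>
    </Stack>
  );
}
```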

Jest is used as the test runner in nearly all projects, both backend and frontend, but React component testing varies slightly between projects. Many teams are currently transitioning from unit testing with Enzyme to testing full component renders with React Testing Library. For end-to-end and browser testing, many teams use Cypress for its helpful client and easy setup.
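A typical full-render test with Jest and React Testing Library looks something like this, using a toy component defined inline:

```typescript
import React, { useState } from 'react';
import { render, screen, fireEvent } from '@testing-library/react';

function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>
  );
}

test('increments the count on click', () => {
  render(<Counter />);
  fireEvent.click(screen.getByRole('button'));
  // getByText throws if nothing matches, so this doubles as the assertion.
  expect(screen.getByText('Clicked 1 times')).toBeTruthy();
});
```

The point of this style is that tests interact with what the user sees, not with component internals, so refactors rarely break them.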

Besides JavaScript, our data pipeline makes extensive use of Scala and Python because they’re more suitable for data analysis — and some teams have started to experiment with Rust for systems that have strict performance requirements.

We also believe in automating the boring stuff. For example, we use Renovate to keep dependencies up to date, which prevents us from getting stuck with several breaking updates at once. We use CircleCI, integrated with GitHub, so that when a pull request is merged, the code is automatically linted, tested, built into a Docker image, and deployed to the test environment.
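A condensed sketch of a pipeline of that shape follows; the job names, Docker images, and deploy script below are simplified placeholders rather than our actual configuration:

```yaml
# Lint and test on every push; build an image and deploy on merge.
version: 2.1
jobs:
  lint-and-test:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run: npm ci
      - run: npm run lint
      - run: npm test
  build-and-deploy:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - setup_remote_docker
      - run: docker build -t my-service:$CIRCLE_SHA1 .
      - run: ./scripts/deploy-to-test.sh # placeholder deploy script
workflows:
  main:
    jobs:
      - lint-and-test
      - build-and-deploy:
          requires:
            - lint-and-test
          filters:
            branches:
              only: master
```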

Infrastructure is managed as code using the AWS CDK, which gives us change management for free and means that everything is automatic, consistent, and immutable. No more clicking around the AWS console, and no more “it works on my machine”.
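As a minimal sketch (using v1-style CDK imports; names, sizing, and the image are illustrative), a single high-level construct can expand into a Fargate service, task definition, and load balancer:

```typescript
import * as cdk from '@aws-cdk/core';
import * as ecs from '@aws-cdk/aws-ecs';
import * as ecsPatterns from '@aws-cdk/aws-ecs-patterns';

class ApiStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    const cluster = new ecs.Cluster(this, 'Cluster');

    // One construct generates the service, task definition, and ALB.
    new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'Api', {
      cluster,
      cpu: 256,
      memoryLimitMiB: 512,
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry('my-org/my-api:latest'),
      },
    });
  }
}

const app = new cdk.App();
new ApiStack(app, 'ApiStack');
```

Because the stack is just TypeScript, it goes through the same review, testing, and CI flow as application code.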

Previously, we wrote about persistence with Cosmos and MongoDB. Nowadays, we use a variety of database technologies. For example: relational data is stored in Postgres; Redis is used as a cache where necessary; Elasticsearch provides scalable and performant search for our consumer-facing products; and DynamoDB is used as a flexible yet performant “database as a service”. We also use Athena to query and analyze large datasets.
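As an example of that “database as a service” style, reading and writing DynamoDB through the DocumentClient takes only a few lines; the table and key names here are hypothetical:

```typescript
import { DynamoDB } from 'aws-sdk';

const db = new DynamoDB.DocumentClient({ region: 'eu-west-1' });

export async function saveShow(show: { id: string; title: string }) {
  await db.put({ TableName: 'shows', Item: show }).promise();
}

export async function getShow(id: string) {
  const { Item } = await db.get({ TableName: 'shows', Key: { id } }).promise();
  return Item; // undefined if the show doesn't exist
}
```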

And you might be thinking, “how do you monitor your systems, log requests, and create alerts when something isn’t performing?” We use Kibana as a log aggregation and analysis tool and CloudWatch for metrics. Also, all our APIs are connected to Pingdom for uptime and response time monitoring, and to Sentry to track and triage customer-facing errors.
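Hooking a Node service into Sentry, for instance, is only a few lines; a minimal sketch, with the DSN read from the environment:

```typescript
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
});

// Stand-in for real application logic that might fail.
function riskyOperation() {
  throw new Error('something customer-facing went wrong');
}

try {
  riskyOperation();
} catch (err) {
  Sentry.captureException(err); // lands in Sentry for triage
  throw err;
}
```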

All of these are integrated with PagerDuty for on-call alerts, with each team running its own rotation.

Wow, that’s a lot of things. We hope you’ve enjoyed this really high-level overview of all the different technologies we use on a day-to-day basis. If you’d like to know more, you can check us out on StackShare, and if any of this seems particularly exciting to you and you’d like to work with us, here’s a link to our open positions: https://jobs.lever.co/acast/.
