How we built Finimize Markets — Part 1

Mark Carrington
Finimize Engineering
3 min read · Sep 27, 2022

In this first part of the blog, we’ll run through our technical design process, before explaining the solution in part 2.

As part of our longer-term aims at Finimize, we're looking to improve the investment journey for our users. Discovering investment ideas is the first step in the process, and it's something we do really well with the content we produce. The aim with 'Markets' is to help you build confidence in those ideas, so you feel ready to make investment decisions.

Extending the journey (Discovery > Research)

Investment process

Before the Markets beta we released an earlier version to a subset of users to test the concept. But this MVP version wasn’t designed to scale.

We built this feature within our Django application, which polled the data it needed directly from an external vendor. We used write-through caching to help reduce average latency, but with this initial implementation we had lots of cache misses, leading to very slow requests. We did look at cache warming, but it wasn't possible to warm everything given the high number of stocks supported, and given that we shared a cache with the backend application. We were aware of the limitations of this approach from the beginning, but decided it made sense for the MVP.

Designing Markets V2 for scale

At the beginning of 2022 a bunch of Finimize engineers got together in an Architecture Jam session to discuss what this next version could look like. We discussed everything from compute options, network infrastructure, data transfer protocols, workflow orchestration tools, database design and even improvements we’d need to make to our pricing graph. This session was partly remote, so we used a Miro whiteboard which you can see below.

Architecture Jam Whiteboard

By the end of the session we had broad agreement on the approach and had reached a decision on every open point, all within a couple of hours.

The new approach

Our early approach, in which we made live calls to an external data vendor, led to problems with latency. Whenever we had to call our data vendor for pricing data or fundamental metrics, we would often observe requests that took 1–2 seconds, which is clearly unacceptable for an interactive feature. So we decided to move towards a model where we fetch as much data as we can offline, and serve it via a DB/cache to keep our application's latency low.

Summary

We’ve talked about building out this new feature as an MVP, testing the concept, then thinking about how we build for scale. We also touched on using System Design Jams to build consensus around an approach and make decisions quickly. Continue reading part II where we’ll delve into what the new solution looks like.

Keen on partnering with Finimize? Finimize is a financial insights platform that supports the most engaged investor community in the world.

We've helped partners from fintech disruptors to traditional financial institutions with growth and engagement: https://www.finimize.com/wp/partners/

👉🏻 Get in touch https://business.finimize.com
