InReach Ventures — The Tech Begins

Roberto told the world what we were up to at InReach, and for the past month I’ve been doing what no VC partner should ever do: hiding in the corner, coding.

It may seem like I have the wrong job, but our statement of intent is to tackle VC by employing software engineers, not associates.


Before building anything we wanted to have some principles for how we develop the platform. Not an architectural vision or a software development process, more a few simple guidelines.

  1. Do everything to avoid running servers. If you find yourself ssh’ing into a box, work out how not to do it again.
  2. Use and contribute to open-source software.
  3. Always prefer to use X-as-a-service.
  4. Running locally, on a server, or scaling out to your n’th host should look the same.
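
Principle 4 can be sketched in a twelve-factor style: every environment-specific value comes from an environment variable with a local-development default, so your laptop, your first server, and your n’th host all run identical code. This is a minimal sketch with illustrative names, not InReach’s actual configuration.

```python
import os

# Illustrative names only: drive every environment-specific value from
# environment variables, falling back to local-development defaults.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost/dev")
QUEUE_URL = os.environ.get("QUEUE_URL", "memory://local")
WORKER_COUNT = int(os.environ.get("WORKER_COUNT", "1"))

def describe_environment() -> str:
    """Same code path everywhere; only the injected values differ."""
    return f"{WORKER_COUNT} worker(s) -> {DATABASE_URL}"
```

Locally the defaults apply; on a deployed host the platform injects the real values, so there are no host-specific branches in the code.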

These might seem obvious, but they warrant over-stating. We want to focus our development efforts on bringing core proprietary value to InReach, not on reinventing wheels. If this means paying more to have someone build and run something we could have done in-house, we consider their expertise totally worth paying for.


But we’re just starting out. We haven’t hired anyone yet so the development team is, well, me. I haven’t even had a chance to build a ‘hiring’ page yet!

What we have been doing is gluing together a platform for our own use, iterating on top of different data sources and technologies to change the way we invest in companies.

The interesting result of combining our principles with our limited user base is just how sophisticated a data-mining system you can develop while staying within the free tiers of the Internet’s various PaaS / SaaS providers.

Just as we’d advise our portfolio companies to do, we want to iterate on our core product and not try to scale prematurely. I wanted to mention a few techniques I’ve been using to keep us in the free-tier while giving us the ability to scale up when we’re ready.

Keep the Graphs Flat

Depending on the hosting provider, you tend to get a little server and the equivalent across a variety of data-stores. The small instance can do a lot as long as you’re able to use its resources efficiently and, crucially, keep the behaviour constant. Users cause peaks, and peaks cost money. You have to scale to your outliers, so avoid having outliers.

Programming languages with good (or just any) concurrency abstractions help here but the tools I’ve found the most useful are:

  1. Queues — off-loading work to be completed when possible
  2. Rate-Limiters — something like Guava’s RateLimiter. As long as it’s shared across processes, it’ll do the trick.
  3. Lambdas — Event-driven architecture is really interesting. Being able to move even more work out of the request-response model helps smooth out workloads. It reminds me of BottledWater.
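
The rate-limiter in point 2 boils down to a token bucket. Here is a minimal in-process sketch in Python, loosely analogous to (not a port of) Guava’s RateLimiter; the rate is illustrative, and a real shared limiter would also need a lock or an external store:

```python
import time

class RateLimiter:
    """A minimal token-bucket limiter. acquire() blocks until the next
    permit is free, enforcing a steady-state rate of permits_per_second."""

    def __init__(self, permits_per_second: float):
        self.interval = 1.0 / permits_per_second
        self.next_free = time.monotonic()

    def acquire(self) -> float:
        """Block until a permit is available; return seconds spent waiting."""
        now = time.monotonic()
        wait = max(0.0, self.next_free - now)
        # Reserve the next slot before sleeping, so callers queue up
        # behind one another rather than bursting through together.
        self.next_free = max(now, self.next_free) + self.interval
        if wait > 0:
            time.sleep(wait)
        return wait

limiter = RateLimiter(permits_per_second=50.0)  # e.g. cap data-store writes
for _ in range(3):
    limiter.acquire()  # spaces the calls ~20ms apart
```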

In particular, using a rate-limiter to normalise access to data-stores means you encourage yourself to avoid burst-y usage patterns. This will lead to partitioning workloads, which lend themselves to task queues — and so the virtuous cycle continues.
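
That cycle can be sketched with nothing but the standard library: burst-y incoming work lands on a queue, and a single worker drains it at a steady pace instead of hammering the data-store. The `save` callback and the interval here are placeholders, not our actual pipeline:

```python
import queue
import time

# Burst-y producers put work on the queue; the data-store never sees the burst.
tasks: "queue.Queue[dict]" = queue.Queue()

def drain(save, min_interval: float = 0.001) -> int:
    """Drain the queue, spacing each save by min_interval seconds."""
    done = 0
    while not tasks.empty():
        item = tasks.get()
        save(item)              # the paced data-store call
        done += 1
        time.sleep(min_interval)
    return done

for i in range(5):              # a burst of incoming work...
    tasks.put({"company_id": i})

processed = drain(save=lambda item: None)  # ...smoothed out on the way down
```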

Future-Proof Yourself

Auto-Scaling. Resilience. Microservices.

These are all other people’s problems. For now I just want to reduce the amount of time I spend on OPS and make sure my users (both of them) can use the system most of the time. Frankly I consider having to use Puppet / Chef a bit extreme at the moment (actually, I always have).

What’s impressed me while building our platform is that the path of least resistance to shipping to a single instance is the same as to your n’th. Using a hosted continuous-deployment service and then uploading to a tool like ElasticBeanstalk means that scaling is in the order of pressing a button.

Oh, but you can keep your microservices 😜