MVP: The Art of Cutting Corners

Magic Sandbox · Jan 26, 2019

Hard to believe Steve approved of the Logo engraving.

Eight months ago we got accepted into the EF incubator program in Berlin.

The goal of the Entrepreneur First (EF) program is to give individuals the platform and support to build a large and influential company. The program acts like a large pressure cooker designed to force two individuals to connect and decide whether they're willing to spend the next seven or so years working together. While that process is interesting, it's not the point of this series.

The point of this series is to describe how one evaluates the decisions made while going through EF's incinerator, and, after that, how to put the support and resources gained in that process to use.

The premise is that if you can convince EF your team has an idea worth pursuing, you receive initial funding (~$100K), gain brand support and ultimately get access to their investor network after Demo Day.

In the end, as with any investor, or with any “sales” process, it all comes down to building your case. Your case can be approached from multiple (infinite?) dimensions, but from all this chaos one has to choose a path.

Magic Sandbox

The Magic Sandbox project (and, soon thereafter, company) was formed.

My co-founder and I had bonded over our frustration with the current state of education. We had experienced first-hand that current learning media do not equip you with the hands-on experience and skills that run the "real world". More importantly, we noticed that it is very hard, if not impossible, to prove one's competence in a field in an objective manner. Hence Magic Sandbox.

Without getting into the philosophy of the project, we'd like to share the approaches we took while trying to achieve our goals and discuss some of the hard tradeoffs we were forced to make, both technical and circumstantial.

The ultimate goal of Magic Sandbox is to rethink the way people learn. Not because we think we know better, but because we have recognized that the current state of technology allows us to do so. The combination of falling server costs, modern orchestration tools and new browser technologies means we can now deploy the whole Uber stack, purely for training purposes, for cents per hour. On top of that we have created a visual experience that guides you through the challenges, and all of it shuts down once you're done.

To start off, we chose to focus on a technology called Kubernetes and to become the main online resource for learning it.

How we approached building the product depended on what we were trying to optimize for at each stage, and that in turn depended on the stage of our EF and fundraising process.

Our journey can be separated into three parts:

1. Optimizing for EF’s selection process
2. Optimizing for EF Demo Day
3. Optimizing for fundraising

1. Getting through EF’s selection process

Getting through EF’s selection process requires you to convince at least one of five EF partners to vouch for you (not unlike a Partner leading a deal at a VC).
Convincing them to do so requires a strong story backed up by facts. Since we were claiming we would reinvent the CS degree but didn't have the pedigree to prove our competence, we decided to focus on our strength: execution. We knew we had the design, hustle and engineering skills to gain a bit of traction that would speak for us.

The process started with building a simple landing page explaining what our platform "does", with a Call To Action button teasing users with Early Access. Clicking the CTA opened a Typeform asking for "a few more data points" before granting access. Once the form was submitted, there was a polite Thank You screen, but no platform or product whatsoever.

The initial landing page was… rough

Obviously we weren’t feeling good about the whole experience, but as we were solely optimizing for gaining data & insights to support our case, it was a necessary evil.

In parallel, we had begun building the actual platform. We had a vague idea of how it would look and function, but the following month was an intense period of building, iterating and exploring what was possible. The core objective was to ship an MVP that proved our case and our ability to execute.

Magic Sandbox UX evolution

One challenge was that even the MVP was a sizeable project in its own right. We couldn't just prepare a small demo: the platform required extensive server infrastructure to support our users (basically one cluster/machine per user), as well as a complicated, dynamic frontend. When we pushed the Alpha version live it consisted of around 10,000 lines of code across the frontend, the API server (mostly dealing with infrastructure scheduling) and various config files, all built in around a month.

Most of the tension in our decision making came from the hard EF deadline that was approaching, so all of our efforts had to be "front facing". We focused on the looks and functions of the system while keeping the backend as simple as possible. (For example, for a long time our supervisor process was a plain tmux session.)

For our server pool we used Digital Ocean's infrastructure. Kubernetes clusters were provided by running minikube, with a kubectl proxy exposing the k8s control plane and a small tmuxinator-managed server providing the websocket to the console. The API server responsible for scheduling was a Python script (mis-)using Digital Ocean's tagging system: we tagged machines with timestamps and various labels to store metadata. This was somewhat unreliable, at times leaving us with 100+ machines that weren't properly scheduled for termination, so a lot of manual checks were needed to keep the system working.
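
To make the tag trick concrete: the scheduler itself was a Python script whose exact tag scheme we haven't reproduced here, but the idea of (mis-)using tag names as a tiny metadata store looks roughly like the sketch below. It's written in Go (the language the backend later moved to) to keep the examples in this post consistent, and the msb-expires-<unix> format and helper names are invented for illustration.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// expiryTag encodes a droplet's termination deadline into a tag name,
// so the tag itself doubles as a tiny metadata store.
func expiryTag(expiresAt time.Time) string {
	return fmt.Sprintf("msb-expires-%d", expiresAt.Unix())
}

// parseExpiry recovers the deadline from a tag, returning false for tags
// that don't follow the msb-expires-<unix> convention.
func parseExpiry(tag string) (time.Time, bool) {
	const prefix = "msb-expires-"
	if !strings.HasPrefix(tag, prefix) {
		return time.Time{}, false
	}
	secs, err := strconv.ParseInt(strings.TrimPrefix(tag, prefix), 10, 64)
	if err != nil {
		return time.Time{}, false
	}
	return time.Unix(secs, 0), true
}

// expiredTags returns the tags whose deadline has passed; a real reaper
// would then ask the Digital Ocean API to delete the droplets behind them.
func expiredTags(tags []string, now time.Time) []string {
	var out []string
	for _, t := range tags {
		if deadline, ok := parseExpiry(t); ok && now.After(deadline) {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	tags := []string{
		expiryTag(time.Now().Add(-time.Hour)), // already overdue
		expiryTag(time.Now().Add(time.Hour)),  // still running
		"msb-alpha",                           // unrelated label
	}
	fmt.Println("to terminate:", expiredTags(tags, time.Now()))
}
```

The failure mode described above falls straight out of this design: if the reaper doesn't run, or a tag is malformed, nothing terminates the machine, which is how we ended up with 100+ stray droplets.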

Another good example of how patched together the system really was: we didn't have a proper server provisioning script. There was just one master machine where things were set up manually, and snapshots of that machine were used to create new ones.

Our Digital Ocean images

At this point, we were trying to offload as much work as possible onto external services: Auth0 for authentication, Zapier and Google Sheets for data collection and various triggers, GA, Hotjar and MixPanel for analytics, Slack for alerts and basic monitoring, Mailchimp for email gathering, etc.
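
As a flavour of how thin this glue was, "Slack for alerts" really can be a single HTTP call to an incoming webhook. The sketch below is not our actual alerting code; the webhook URL and message are placeholders.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// notifySlack posts a message to a Slack incoming webhook, which is all
// the "monitoring" a very early system really needs.
func notifySlack(webhookURL, text string) error {
	payload, err := json.Marshal(map[string]string{"text": text})
	if err != nil {
		return err
	}
	resp, err := http.Post(webhookURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("slack webhook returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Placeholder URL; use your own incoming webhook from the Slack app settings.
	_ = notifySlack("https://hooks.slack.com/services/XXX/YYY/ZZZ",
		"alert: 100+ droplets not scheduled for termination")
}
```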

One of the best decisions we made at this point was creating a public Slack channel where our users could talk to us directly. It has become a great hub in which we interact with them not merely as "users" but as friendly, supportive people who understand the process we're going through and have been nothing but amazing.

Discussing one of the early dashboard designs at the EF work space in Berlin.

On the frontend, things were looking better, but still far from the level we had in mind. Nevertheless, a hard cutoff and feature freeze had to happen. A basic Kubernetes lesson was created, we integrated a questionnaire to collect more data, and we started emailing people and bothering them on Slack to try us out. Despite all the hiccups, we managed a fairly good user-experience score of 8.5 with just a few hundred people trying us out.

Alpha release customer feedback was not bad (notice our logo back then)

At this stage our landing-page setup, along with some "shameless hustling", started to pay off: we hit 1,600 signups by the EF deadline, and this became one of the main metrics in our case for getting buy-in from EF.

With all this effort we got through the EF selection process, secured our initial funding and were off preparing for the next milestone:

2. Optimizing for EF Demo Day

The moment all Founders enter the auditorium for the first time, a few hours before Demo Day kicks off.

Demo Day is a big event where one founder (usually the CEO) goes on stage and is given three minutes to excite hundreds of investors with the company's vision.

Our strategy for this was the same as before: build out the product as much as possible, present it to the world and hope for a good response to use as the cornerstone of our case to investors. While we now had some funding (which is supposed to last 8 to 10 months with two co-founders drawing a minimum salary), our resources were as limited as before, but the stakes were higher now. And we had one month to prepare.

As we continued developing the platform, an interesting pattern emerged: it took more and more time to get to the "important", front-facing features. Since we would soon start a pre-sale to seek further market validation, new background systems had to be built, from payment processing to user management, license generation, a lesson editor, improved server scheduling and so on. Unfortunately that trend has continued to this day, and has arguably gotten worse.

On the technical side, we made a few changes to the backend: the Python scripts were rewritten in Go, which greatly simplified the system and improved its reliability. The previous approach of using Digital Ocean tags stayed, but Go's buffered channels and goroutines now allowed the system to breathe and respond to changes in a much more elegant way.
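
We haven't reproduced the scheduler here, but its general shape, producers pushing work into a buffered channel and a handful of goroutines draining it so that slow Digital Ocean API calls never block the rest of the system, looks roughly like this sketch (the task type and worker count are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// task represents a unit of scheduling work, e.g. provisioning or
// terminating a droplet for a given user session.
type task struct {
	kind    string // "provision" or "terminate"
	droplet string
}

// worker drains the queue; in the real system each task would call the
// Digital Ocean API and update the tag-encoded metadata.
func worker(id int, queue <-chan task, wg *sync.WaitGroup) {
	defer wg.Done()
	for t := range queue {
		fmt.Printf("worker %d: %s %s\n", id, t.kind, t.droplet)
		time.Sleep(100 * time.Millisecond) // stand-in for a slow API call
	}
}

func main() {
	// The buffer lets producers (HTTP handlers, reconciliation loops)
	// enqueue work without waiting for a worker to become free.
	queue := make(chan task, 64)

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go worker(i, queue, &wg)
	}

	queue <- task{kind: "provision", droplet: "user-42"}
	queue <- task{kind: "terminate", droplet: "user-17"}
	close(queue)
	wg.Wait()
}
```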

Another unexpected change happened because of our issues with the traefik.io reverse proxy. Up until this point, when a new server was created, its routing information was written to etcd, which traefik then read. For some reason traefik couldn't handle the constant config changes and would simply freeze and refuse to accept new connections. This was a big problem: the process didn't crash (which would at least have let us restart it), it just kept running while frozen, which meant constant manual checks. One late night I started digging into Traefik's source code and realized that most of the actual reverse-proxy functionality is covered by Go's standard httputil.ReverseProxy, so we wrote a small reverse proxy around that library and had no hiccups after that.
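
For the curious, the core of such a replacement really is small. Here is a minimal sketch, not our production code: a mutex-guarded routing table and an httputil.ReverseProxy whose Director picks the backend by hostname. In our setup the table would be refreshed from the per-server routing entries in etcd rather than hard-coded.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"sync"
)

// routeTable maps incoming hostnames to backend addresses. In the real
// system it would be populated from the per-user entries stored in etcd.
type routeTable struct {
	mu     sync.RWMutex
	routes map[string]string
}

func (rt *routeTable) set(host, backend string) {
	rt.mu.Lock()
	defer rt.mu.Unlock()
	rt.routes[host] = backend
}

func (rt *routeTable) lookup(host string) (string, bool) {
	rt.mu.RLock()
	defer rt.mu.RUnlock()
	backend, ok := rt.routes[host]
	return backend, ok
}

func main() {
	rt := &routeTable{routes: map[string]string{
		// Hypothetical mapping: user subdomain -> droplet address.
		"user-42.example.com": "10.0.0.42:8080",
	}}

	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			if backend, ok := rt.lookup(req.Host); ok {
				req.URL.Scheme = "http"
				req.URL.Host = backend
			}
			// Unknown hosts are left untouched and fail with the proxy's
			// default error response.
		},
	}

	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```

Because the process owns nothing but a map and the standard-library proxy, a routing change is just a map update; there is no long-lived state left around to wedge the way Traefik did for us.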

With these systems in place, a minor touch-up of the frontend and a slightly improved lesson, we came to our "go big or go broke" moment:

The Hacker News launch

The Hacker News ‘Hug of Death’

The Hacker News launch went amazingly well and horribly wrong at the same time. Within 2 minutes of the launch we reached the number 1 spot, with thousands of people coming to our platform. Our Slack 'new user' channel exploded until we ran out of free Zaps, and the rest of the system imploded within minutes as we couldn't support such an influx of users.

Most of our users were welcomed by a blank dashboard and were unable to experience Magic Sandbox. Unfortunately, this was still the best-case scenario for us. We were very conscious of the terrible experience most users were getting, but it was one more necessary evil, as we were fully optimizing for Demo Day and the investor meetings that would follow.

EF co-founder Alice Bentinck introducing Magic Sandbox, the 200th EF company to pitch.

The exposure from the launch provided us with lots of new opportunities, some sales, good metrics to show off on Demo Day, and most of all confidence that we were onto something of value.

Right after the event, a hectic process of talking to dozens of investors began, with several calls and meetings per day, every day, for a few weeks. In our case this happened across two cities: Berlin and London.

3. Optimizing for fundraising (and not dying)

Pitch. Eat. Fly. Sleep. This was the 3rd flight of the week meeting investors between Berlin < > London.

Currently, as a young company with big ambitions, we are forced to resolve a series of chicken-and-egg problems. To get funding you need strong metrics to support your case; to achieve those metrics you need to build features and infrastructure that support large-scale sales; to build those you need engineering power, which in turn requires funding.

For better or worse, we've decided to allocate our resources to projects that might not be instantly noticeable from the outside, but are focused on the mid and long term.

The things we focused on:

  1. Switching user and lesson management to Firebase
    This finally gives us a central source of truth in the system, an information hub around which the other systems are built. Up until this point we had been depending on various disjointed systems held together by Zapier, which became a hard blocker when talking to larger enterprise customers, as the whole setup seemed very unprofessional (a small sketch of what this can look like follows after this list).
  2. New content editor
    Up until this point, lesson creation had been a very manual process, requiring actual code to be written to insert pieces of content into the platform. The new editor should let us explore several avenues, as we're turning MSB into an engine that can be fed by various sources.
  3. Complete new backend
    We finally decided to splurge and hired an amazing contributor to the team, who quickly understood what we're trying to build and helped us create a new server pool based on Kubernetes. In essence, it's a large cluster running virtual machines, which in turn run smaller Kubernetes clusters. Not only will this let us use various Kubernetes mechanisms to improve the reliability of our systems, the VMs-as-Pods approach should also serve as a flexible foundation for future technologies.
    The project required overcoming several technical difficulties, which will be part of another series, but an overview of how we run VMs inside a GKE cluster will be published shortly.
    This project should allow us to support much larger workloads and on-premise deployments, and much more.
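
As a taste of point 1 above, here is a minimal sketch of what pushing user state into Firebase can look like from Go. It assumes the Cloud Firestore flavour of Firebase and a service-account key file; the collection, document and field names are invented for illustration and this is not our actual data model.

```go
package main

import (
	"context"
	"log"
	"time"

	firebase "firebase.google.com/go"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// Initialise the Firebase Admin SDK with a service-account key.
	app, err := firebase.NewApp(ctx, nil,
		option.WithCredentialsFile("service-account.json"))
	if err != nil {
		log.Fatal(err)
	}
	store, err := app.Firestore(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer store.Close()

	// One document per user becomes the single source of truth that the
	// scheduler, lesson engine and billing can all read from.
	_, err = store.Collection("users").Doc("user-42").Set(ctx, map[string]interface{}{
		"email":         "someone@example.com",
		"currentLesson": "k8s-intro",
		"clusterExpiry": time.Now().Add(2 * time.Hour),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```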

At this stage we’re getting to a point where we can live with our infrastructure support, and can finally return to more front-facing features — features that add visible value to our users. These include new content/lessons, new visualization and introspection views, team management screens and more.

Eight months in and it's slowly getting there.

Conclusion

The last months have been a large exercise in resource allocation and prioritization. The tensions come from choosing between pleasing potential investors and pleasing your users; between following your gut feeling and trying to make decisions data-driven; between justifying large-scale projects and the several quick wins that could be shipped in the same time.

My co-founder and I are both perfectionists and often find it difficult to ship a release even when it's at MVP+ level. On the one hand, every release has been a difficult process; on the other, this attitude has forced us to put in a few more late nights and add a bit more polish. The best part of the process is the understanding we get from our users: while we expected them to be less forgiving of each delay, their reactions have been the complete opposite.

To conclude, the lessons we (think we) have learned from this process are:

  1. Start small?
    We may have bitten off more than we could chew by going into the world and saying we will teach you Kubernetes and eventually rebuild the CS degree for life. This is always difficult to judge, as we don't know what the alternative would have looked like, but one could argue that a smaller initial target would have given us more focus.
  2. Reuse external services as much as possible in the beginning
    Zapier, Auth0, Mailchimp, Stripe, Google Sheets etc. have all been an integral part of making this project work. This approach is definitely a trade-off and an accrual of technical debt, but it is worth it for the hours it frees up, which can then be spent on more important features.
  3. There’s a lot more to a SaaS project than the main functionality
    As your project starts to interact with users as part of a larger context, more and more adjacent processes appear that need to be automated. The small issues that required manual intervention in the beginning quickly escalate into real project stoppers and sources of frustration. This can easily consume 50% of your time.
  4. No databases
    This point could be considered controversial. While we're big fans of properly utilizing databases (and we intend to offer a deep dive into a database like Postgres very soon), formal data structures can solidify a young project too much and encourage the system to grow around them. Not having a proper database forces you to simply dump information for later and take a more creative approach to solving problems. It dictates a more stateless and modular design in the very early days (a tiny sketch of what this can look like follows after this list).
  5. Early on, trust your gut
    While we’re a fan of the data driven, lean startup methodology to drive decision making, we believe this process becomes much more valuable in later stages of the project. Early on, while the project isn’t actually near the vision you’re trying to accomplish, a strong gut feeling should not be disregarded. You have spent much more time thinking about the problem and your solution than your users, a lot of the time they aren’t aware of what else is possible, and are basing their decisions (with the best of intentions, of course) on the current state of the project.

We hope you found some nuggets of value in this post. If you’d like to learn more about Magic Sandbox or join our mission to rethink the way people learn, feel free to ping us on Slack :)
