The Mad Science of Product Creation

Our first release? A hypothesis.

Derek DeHart
DACA Time
4 min read · May 22, 2017

--

We’ve been working on the technology side of our product for about a month. After that one month of nights and weekends spent planning, developing, testing, and tweaking, we’re releasing the very first iteration of DACA Time into the world.

And it’s blissfully incomplete.

Our Release as a Hypothesis

I’ve already written a bit about our initial validation strategy, and we’re ready to start our grand experiment and begin iterating over our results. Our hypothesis is intentionally simple, lending itself to a very lean initial release:

DACA Time can reduce the need for legal assistance when completing DACA application forms.

That’s it! Like I said: simple.

You want to know the funny part? I’m almost certain that our hypothesis is wrong. I’m banking on it. And that’s exactly how it should be.

Of course, I don’t mean to imply that we’re willfully pushing valueless software to our customers. We’re still a long way off from what I would call a public launch. Rather, we’re going to test our hypothesis through a closed beta, change what variables we can control, and then test again until we get it right. If we were misguided enough to believe we’d nailed it the first time, we’d have a single “big bang” release, hope for the best, and then run the risk of jeopardizing the livelihood of the very folks we’re trying to help.

By releasing early, we get fast feedback about what works and what doesn’t in ways that we can control, well before the point at which we have people dependent upon our accuracy.

Setting Up Our Experiment

Do not be too timid and squeamish about your actions. All life is an experiment. The more experiments you make the better. What if they are a little coarse and you may get your coat soiled or torn? What if you do fail, and get fairly rolled in the dirt once or twice? Up again, you shall never be so afraid of a tumble.

— Ralph Waldo Emerson

Before we started product development in earnest, there was a vast panoply of possibilities ahead of us. We had so many ideas, so much we could have started building. How is one expected to choose?

Fortunately, testing our hypothesis quickly became our North Star as far as prioritization was concerned, so we could narrow the field of perfectly viable ideas to just those things that would allow us to get the right validation at the right time.

So, what do we need to test that our software can guide an applicant through the forms to successful completion? Surprisingly little:

  • Architecture/infrastructure to host the application
  • An initial, vetted set of survey questions for iteration and refinement
  • A lightweight interface so that the application process is minimally workable
  • Storage for responses

Wire those things together, and — voilà! — we have our MVP.
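To make the "storage for responses" piece concrete, here is a minimal sketch of what capturing survey answers might look like. Everything here is hypothetical: the question keys, the `Question` type, and the JSON-lines store are illustrative stand-ins, not DACA Time's actual schema (the real, vetted question set comes from our partners in immigration law).

```python
import json
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Question:
    key: str      # machine-readable answer key
    prompt: str   # text shown to the applicant


# Hypothetical sample questions -- placeholders, not the vetted set.
QUESTIONS = [
    Question("arrival_year", "What year did you arrive in the U.S.?"),
    Question("education", "What is your current education status?"),
]


def save_responses(responses: dict, store: Path) -> None:
    """Append one applicant's answers to a simple JSON-lines store."""
    unknown = set(responses) - {q.key for q in QUESTIONS}
    if unknown:
        raise ValueError(f"Unexpected answer keys: {unknown}")
    with store.open("a", encoding="utf-8") as f:
        f.write(json.dumps(responses) + "\n")
```

The point of keeping storage this simple is that every answer is preserved verbatim, so we can later fill out the official forms by hand and compare them against an attorney's output.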

A real screenshot of DACA Time on mobile

But don’t you…like…need to fill out the forms or something?

I wrestled with this one, too, and here’s the thing: not really.

You see, we’ve already proven that we can build the technology to fill out the form; there’s not a whole lot of uncertainty there. At the core of the hypothesis we need to test is whether we can fill out the form correctly via a simple online interface.

Allow me a brief interlude: An acquaintance of mine tells the story of how he and his colleagues wanted to test on-demand local deliveries of their company’s physical products. Building the interface to accept orders was inexpensive (in relative terms), but creating and maintaining the supply-chain logistics to get a product to someone’s door that very same day was a challenge. Should they spend all of that time and money if the concept didn’t pan out?

Nope.

Instead, they took the orders and delivered them from their warehouse themselves.

In much the same way, we don’t need to spend time at this very moment to build the technology to fill out all of the DACA application forms completely. Instead, since we’re capturing all of the information for the forms, we can fill them out by hand for now and compare them with the forms that otherwise would have been produced as a part of an attorney’s current process.

What matters is that we can measure along the way how well our software performs against the status quo: Did the survey questions require elaboration? Did they completely capture the needed information? Are there any gotchas in our current interface that prevent someone from completing the process?

Those are the questions we need answered to test our hypothesis at this stage, not “Can we fill out the forms?”

(It’s probably worthwhile to note here that we don’t yet have all of our security in place, so we do not capture any personally identifiable information. Applicants likely don’t struggle with the name and address portions of the form, anyway — perhaps another assumption to be validated!)

So what now?

What comes next is really just a matter of hustle and execution. We have some valuable partners in immigration law, and we’re working with them to iterate, validate, and iterate some more.

In the meantime, we have a hefty roadmap leading to a public launch that requires some planning along with — no doubt — additional experiments along the way.

Have connections in immigration law who might want to be a part of changing people’s lives? Let us know, and we’ll see how we can best work them into our validation plan!

Otherwise, we’d love your feedback and questions, either here or on Facebook!
