We Failed and Learned: Lessons from Our First Project

EnAccess · Published in EnAccess Blog · 10 min read · Sep 16, 2019

In my last post, I introduced the results of the first project we funded, Devergy’s Survey Toolkit. We’ve heard from a few companies that are putting it to work, and we’re excited about the feedback so far. (Note: If you’ve downloaded the toolkit and have feedback for us, let us know!)

While we’re happy that the toolkit is saving companies time and frustration, the feedback is especially sweet. That’s not just because this was the result of our pilot/proof of concept, but also because the toolkit was born out of a larger project that had gone south and had to be restructured.

In this post, I describe the initial project, what happened, how we worked it out with Devergy, and how we (EnAccess) absorbed lessons from our pilot.

Context

Although we first started thinking about EnAccess in mid-2017, we consider ourselves a new organization. Like many of our early-stage peers, EnAccess is run by fractional FTEs on a lean budget. And like other organizations in a proof-of-concept phase, we made nailing our first project a priority. We’re learning by doing while clarifying our strategy, testing our processes, and of course, fundraising. Validating the core thinking behind EnAccess through a successful pilot would help all of this tremendously.

Fortunately, we’ve been in this industry long enough to anticipate that whatever we did would likely not go to plan. We were trying out a wholly new model of funding and sharing innovation — i.e. a contracted “innovation services” model applied to projects that are otherwise grant- or equity-funded — which would require stress-testing. And since this was the first time that EnAccess was going into the wild, we were prepared to observe and learn from mistakes. This means that in addition to getting the pilot done, we prepared to ponder more abstract questions, like:

  • how the project was unfolding in reality vs. expectations;
  • how effective communications were along the project cycle;
  • what process or capability changes we would need to make as an organization;
  • …and so on.

Because of all the potential fuzziness, we needed a friendly pilot partner. The organization would need to withstand mistakes, provide useful feedback, and of course, be fun to work with. For these reasons, we kicked off our portfolio with Devergy. My co-founder at EnAccess is the CEO and a co-founder of Devergy, and I’ve collaborated with the Devergy team on and off over the years. This was a perfect environment for us to get a project going quickly, while also stress-testing the EnAccess model and process.

The Original Project: Communicating Energy Consumption to Mini-Grid Customers

The original project proposed by Devergy was a marketing toolkit meant to help mini-grid companies communicate energy consumption to customers.

The initial idea for this project came from a business problem that Devergy had observed within its customer base for years, and that I had observed while assessing mini-grids in India, Nepal, Zambia, and elsewhere in Tanzania. The challenge that we aimed to tackle was a particular type of miscommunication that can arise between customers and operators, related to: a) what level of service customers had pre-purchased in the form of a pre-paid energy bundle or credit; and b) what level of service the operator actually delivered against that bundle.

(Note: In a pre-paid model for mini-grids, customers can pre-purchase energy the same way they pre-purchase mobile credit. Energy “bundles” may be defined on the basis of the amount of energy provided, the amount of time that a bundle is active at a predetermined level of service, or a combination of the two.)

An example illustrates where a communications breakdown between mini-grid operators and mini-grid customers might occur. Let’s say a customer pre-purchases an illustrative “Gold” pre-paid energy bundle one week, and that the bundle lasts for about four days. When pre-purchasing the same bundle the following week, the customer would likely expect to receive about the same duration of service. However, if during the second week the customer uses their stereo longer and louder, and also plugs in a brand-new fan, the bundle might expire in two days instead of four. If the customer is not aware of how their increased energy consumption affects the duration of service delivered by the bundle, they will be confused: they might approach a sales agent, call customer support, or complain to neighbors. Broader confusion can set in, and more customers may begin questioning the integrity of the operator.
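For readers who want to see the arithmetic behind this example, here’s a minimal sketch in Python. The bundle size, appliance wattages, and hours of use are all hypothetical numbers invented for this post:

```python
# Back-of-the-envelope sketch: how appliance usage changes how long
# a pre-paid energy bundle lasts. All numbers are hypothetical.

BUNDLE_WH = 400  # energy in the illustrative "Gold" bundle, in watt-hours

def bundle_duration_days(appliances):
    """appliances: list of (power_in_watts, hours_per_day) tuples."""
    daily_wh = sum(watts * hours for watts, hours in appliances)
    return BUNDLE_WH / daily_wh

# Week 1: two LED lights and a stereo, used modestly (~100 Wh/day)
week1 = [(5, 4), (5, 4), (15, 4)]
# Week 2: the stereo runs longer and louder, plus a brand-new fan (~200 Wh/day)
week2 = [(5, 4), (5, 4), (20, 5), (30, 2)]

print(f"Week 1: bundle lasts ~{bundle_duration_days(week1):.1f} days")  # ~4.0
print(f"Week 2: bundle lasts ~{bundle_duration_days(week2):.1f} days")  # ~2.0
```

Nothing about the bundle itself changed between the two weeks; only the load behind the meter did, and that gap between the customer’s expectation and the delivered duration is exactly where the confusion begins.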

From the perspective of the operator, this kind of misunderstanding presents a serious business risk. Without clarity in the interface between pre-purchase and delivery, trust can erode. And as countless project experiences in development and in energy access can tell us: once trust is lost in a community, it can be hard to win back.

In this context, our research question became: how can mini-grid operators effectively communicate pre-paid energy to their customers? We aimed to design a simple approach for explaining energy consumption, an otherwise abstract and technical concept, to mini-grid customers.

Some readers might be asking: Is this really an issue? Aren’t pre-paid models already figured out, as is clear from the use of pre-paid mobile credit, or the scaling of Pay-As-You-Go Solar Home Systems (PAYGO SHS)? Our thinking on this was that, while pre-paid is a familiar payment model generally, the mini-grid context provides a tricky use case.

To explain this, let’s compare the use of pre-paid in PAYGO SHS vs. the use of pre-paid in mini-grids. PAYGO SHS customers pay distributors a down payment, and receive an SHS kit that’s contained in a box. (Note: Loose generalizations on SHS configurations here, for the sake of discussion.) The SHS kit will likely include a panel, a battery, and perhaps a few appliances: LED lights, a phone charger, a torch, maybe a radio, and so on. With a physical system that’s placed directly in a customer’s home, customers develop a straightforward understanding of the level of energy service that they can expect to receive from the kit.

Understanding and monitoring energy service levels in a pre-paid mini-grid context can be a bit more taxing for customers. Whereas our hypothetical PAYGO SHS customer received a defined kit with defined appliances, mini-grid customers receive power outlets and a few other electrical installations. Any number and type of appliances might be plugged in and drawing electricity at a given time. These appliances are often sourced from different brands, manufactured at varying levels of quality, and operated at varying intensities for varying periods of time. This high degree of load variability is what makes the mental accounting of pre-paid energy consumption challenging for customers.

This R&D project seemed like an interesting problem for us to unpack. We imagined the end product being a combination of visual aids (e.g. a series of light bulbs, an icon of a jug of water, stickers to be placed on appliances) and SMS reminders that mini-grid operators could rely on to help customers accurately budget pre-paid bundles. The R&D project was scoped to define the toolkit, test and iterate it in the field, validate its performance, and package up both the toolkit and test data for broader consumption.
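To make the SMS half of that vision concrete, here is a rough sketch of the kind of low-balance reminder an operator’s billing system might generate. The thresholds, message wording, and function name are all invented for illustration; this is not an actual Devergy feature:

```python
# Hypothetical sketch of the SMS-reminder idea: nudge customers before
# a pre-paid bundle runs out. Thresholds and wording are invented.
from typing import Optional

def reminder_sms(remaining_wh: float, bundle_wh: float) -> Optional[str]:
    """Return a reminder message when the bundle is running low, else None."""
    fraction_left = remaining_wh / bundle_wh
    if fraction_left <= 0.10:
        return "Your energy bundle is almost empty. Top up soon to stay connected."
    if fraction_left <= 0.25:
        return "Your energy bundle is below 25%. Heavy appliance use will drain it faster."
    return None  # plenty of energy left; no reminder needed yet

print(reminder_sms(remaining_wh=80, bundle_wh=400))   # 20% left -> low-balance warning
print(reminder_sms(remaining_wh=300, bundle_wh=400))  # 75% left -> None
```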

Satisfied that we had landed on a useful plan, we laid out the project phases in our R&D Agreement, which looked something like the below:

1. Exploration and Problem Definition

2. Innovation Project Planning

3. Prototype Development

4. First Prototype Implementation

5. Evaluation of Prototype

6. Prototype Iteration: Up to 2 Additional Cycles

7. Wrap-up and Content Delivery

To bound the project, we included a budget and time frame that would allow one prototype to be developed and tested (i.e. Phases 3 and 4), then iterated two more times (i.e. Phase 6 above). In Phase 7, a final package — including test data and R&D insights — would be published as an open source toolkit.

What Went Wrong

A few months into the project, neither Devergy nor EnAccess was thrilled with it. The Devergy team was finding unexpected responses in their surveying work for the initial prototype. Time and budget continued to tick away, and it became clear that the project had gone off track.

Around this time, the project manager at Devergy peeked into the company’s Customer Relationship Management (CRM) tool to more deeply understand what might be going on. (Note: Like other smart mini-grid companies, Devergy runs cloud-connected tech, so there’s plenty of data on transactions, generation, and consumption to dig into.)

The results of this digging? We had been solving a somewhat virtual problem. While there was a contingent of customers that complained loudly and often about pre-paid energy bundles expiring prematurely, this wasn’t the case for the majority of customers. In fact, rather than burning through bundles prematurely, most customers were letting bundles expire with unused energy remaining on them (!). This points to any number of other reasons for customer dissatisfaction.
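For a sense of what that data check might look like, here’s a minimal sketch using pandas. The table and its column names are invented for illustration; Devergy’s actual CRM schema isn’t public:

```python
import pandas as pd

# Hypothetical export of expired pre-paid bundle records from a CRM.
# Column names and values are invented for illustration.
bundles = pd.DataFrame({
    "customer_id":  [1, 1, 2, 3, 3, 4],
    "purchased_wh": [400, 400, 250, 400, 400, 250],
    "consumed_wh":  [400, 395, 180, 400, 120, 200],
})

# A bundle whose energy was (almost) fully consumed may have "expired
# prematurely"; one with meaningful energy left simply timed out.
bundles["unused_wh"] = bundles["purchased_wh"] - bundles["consumed_wh"]
fully_consumed = bundles["unused_wh"] <= 0.02 * bundles["purchased_wh"]

print(f"Share fully consumed before expiry: {fully_consumed.mean():.0%}")
print(f"Share expiring with unused energy:  {(~fully_consumed).mean():.0%}")
```

Even a query along these lines would have surfaced the pattern early: the loudest complaints weren’t representative of the customer base.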

This project was the first time that Devergy had looked into their CRM data with this specific business query. Only after doing so did they (and we) learn that a small but vocal minority of customers had created a false narrative. Combined with a blind spot in data analysis, this left us with a flawed context for our R&D work.

Looking back, both the EnAccess and Devergy teams made a simple error that put this project at risk from the get-go. In a rush to start, we had agreed to skip the most critical phase of any research or innovation activity: defining and validating the problem. We were each so convinced that this problem existed at large that, despite our initial project plan, we kicked off at Phase 2. Of course, the appropriate way to start would have been a deep dive into the CRM data, continuing on from there.

Working it out

When the project manager reported that we were solving a problem that didn’t exist (or at least, not in a major way at Devergy), we realized we needed to course-correct. At EnAccess, we had conflicting thoughts. There wasn’t a lot of time or budget left in the project: Do we end it early and write an internal failure report, or do we use the remaining budget to work something out?

Ending the project early sounded unfair; by agreeing to bypass Phase 1, we were as much a part of the error as the Devergy team. Moreover, the option of writing an internal-only project failure report wasn’t appealing, as it wouldn’t create value for the sector. Considering the core EnAccess model — i.e. transforming philanthropic funding into actionable content and value for the ecosystem — we decided to restructure the project. Fortunately, the Devergy team was on the same page, and proposed the Survey Toolkit as a way forward.

Lessons from Our First Project

#1. When beginning an R&D activity, don’t let narratives or biases lead you to skip the basics: check the data. When we talked to a few close friends and colleagues about this experience, this story was all too recognizable. With a million fires to put out on any given day, it’s easy to let an existing and compelling narrative guide R&D activities, and to forget to check the data.

If EnAccess is in the business of creating sector solutions, we need to be committed to validating the problems and foundations for each R&D activity that we fund. In future projects, we’ll be spending much more time with partners and potential adopters to validate the R&D problems and/or opportunities that our funds address. We’re broadening the range of stakeholders we receive input from to ensure that a shared ecosystem interest exists for any R&D activity that we’re funding, and we’re expanding our board to provide additional perspectives on the proposals that we receive.

#2. As EnAccess, we were conflating two different types of sector-building R&D activity. This project helped us sharpen our thinking. EnAccess funds two types of R&D projects: “Building Block” and “Exploration” projects. This categorization wasn’t something we started with; it only crystallized after we reflected on our proof of concept.

When we debriefed with the Devergy team, we realized that we had set ourselves and Devergy an impossible task. On one hand, we were exploring the potential to create a solution (i.e. the customer communications toolkit) to a specific business problem. The solution wasn’t “off-the-shelf” and still needed to be created and validated. On the other hand, we were expecting that the project would result in a market-ready solution that could be shared with other companies.

In retrospect, maintaining these two objectives the way that we had doesn’t make sense. Only once a toolkit is validated can it be presented for broader adoption. In the case of this project, that would mean extending Phases 5 and 6 considerably longer than we had originally anticipated, and likely, evaluating the toolkit’s performance in the context of another mini-grid company (i.e. one with a different business model and technology offering than Devergy).

Reflecting on this allowed us to more clearly articulate the two very different types of R&D projects that we support, and how we need to differentiate between them in our contracting and portfolio management processes. “Building Block” projects are clean: there is a clear, well-defined problem that’s shared by many organizations, and a clear, well-defined solution that can be open sourced to address that problem. Another way to say this is that we select Building Block projects to fund based on the outputs that they create.

“Exploration” projects are more open-ended, uncertain innovation activities. There is likely an interesting and well-thought-out opportunity that’s worth exploring, but we aren’t precisely sure how the results of the project will be absorbed by or incorporated into the sector. We select Exploration projects based on the initial thinking that underpins the project.

In Closing

The first half of our first project failed, but we were lucky to work with a flexible team that helped us restructure, debrief on the experience, and improve our processes. We’ve made a few changes and applied them in the projects we’ve funded since.

While admitting to a failure like this feels strange, we’re ok with it. As I’ve alluded to previously, one of our objectives at EnAccess is to enable a more realistic and nuanced conversation about failure and innovation on the road to SDG 7. We’re happy to share when we’ve made mistakes and how we’re learning — especially when this can create value for other organizations. We’d be thrilled if other energy access organizations (particularly funders and investors) were to do the same.


EnAccess supports open source solutions for the energy access industry. Save yourself time — visit enaccess.org to see if a tool you need is already published.