When evidence doesn’t drive decision-making

Daniel Stein
Published in IDinsight Blog
May 6, 2019
An agrodealer demonstrating the PICS bag to curious farmers. ©IDinsight/Amy Chen

In 2016, IDinsight conducted an evaluation examining how different media messages generated interest in Purdue Improved Crop Storage (PICS) bags among smallholder farmers in western Kenya. PICS bags are hermetically sealed crop-storage bags that reduce post-harvest crop losses.

Each farmer was asked the highest price they were willing to pay for the bag using a common experimental-economics approach to measuring willingness to pay, known as the Becker-DeGroot-Marschak (BDM) method: after a farmer states their bid, a sale price is drawn at random, and the farmer buys at that price only if their bid meets or exceeds it, which makes truthfully stating one's maximum willingness to pay the best strategy. IDinsight used these bids to estimate the demand curve for the PICS bag, which in turn was used to determine profit-maximizing pricing.
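The mechanics can be sketched in a few lines. This is a minimal illustration of the BDM purchase rule and of turning stated bids into an empirical demand curve; the bid values are made up for illustration and are not the study's data.

```python
def bdm_purchase(stated_bid, price_draw):
    """BDM rule: the farmer buys at the randomly drawn price whenever
    their stated bid is at least that price, so truthfully stating
    one's maximum willingness to pay is the optimal strategy."""
    return stated_bid >= price_draw

def empirical_demand(bids, prices):
    """Share of farmers who would buy at each candidate price --
    the empirical demand curve implied by the stated WTP bids."""
    n = len(bids)
    return {p: sum(b >= p for b in bids) / n for p in prices}

# Illustrative WTP bids in Kenyan shillings (made up, not study data).
bids = [120, 150, 180, 200, 200, 220, 250, 300]
curve = empirical_demand(bids, [100, 150, 200, 250, 300])
# e.g. curve[200] -> 0.625: five of the eight farmers bid at least 200
```

Reading off how the purchase share falls as the candidate price rises is exactly what lets the evaluator trace a demand curve from one round of bids.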

IDinsight found that demand for the bag was highly price-elastic: a price reduction would increase the quantity demanded more than proportionally, and so increase revenue for the company. IDinsight's key recommendation to the agricultural company selling the bags was that it could increase profits by reducing the price of the PICS bag.
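To make the pricing logic concrete, here is a minimal sketch of how a demand curve and a unit cost pin down a profit-maximizing price. The linear demand function, unit cost, and price grid below are hypothetical numbers chosen for illustration, not the study's estimates.

```python
def profit_per_customer(price, unit_cost, demand):
    """Expected profit per potential buyer: the margin on a sale
    times the probability of purchase at this price."""
    return (price - unit_cost) * demand(price)

def demand(p):
    """Hypothetical linear demand: purchase probability falls from
    1.0 at a price of 0 to 0.0 at 400 KES (illustrative only)."""
    return max(0.0, 1 - p / 400)

unit_cost = 100  # assumed cost per bag, in KES

# Search a grid of candidate prices for the profit maximizer.
best_price = max(range(100, 401, 10),
                 key=lambda p: profit_per_customer(p, unit_cost, demand))
# With these numbers best_price is 250: when demand is elastic enough,
# cutting price raises sales volume by more than the lost margin.
```

The evaluation's version of this exercise used the demand curve estimated from the BDM bids rather than an assumed functional form, but the logic of trading margin against volume is the same.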

However, shifting market dynamics in Kenya following the project's completion made this recommendation less relevant. Other demand-generation projects that created awareness about hermetic storage throughout the country increased demand for PICS bags, and soon sales became constrained by supply rather than demand. The agricultural company therefore decided to invest in increasing supply, and did not lower the price.

So, what can we take away from this? One view is that the situation was entirely out of our control, and that we can't hold ourselves accountable for unforeseen market dynamics.

I wouldn’t let us off the hook so easily.

If we are honest with ourselves, it's unclear whether the company ever really intended to lower prices. Lowering prices is likely seen as a risky strategy, and our experiment may not have been enough to sway them, even if demand had remained low. While the company told us that they really wanted evidence on how to increase demand, no high-level officials were pushing for the project, and we had no influential internal champion when we released the results. And since the evaluation was funded by an external donor (the Bill & Melinda Gates Foundation), one could argue that the company didn't have any skin in the game.

What would we do differently next time?

Well, it would be easy to say that we should have just made sure we had the right internal champion, and a stronger commitment to react to the results. But given the reality of the fast-moving market, that wasn’t realistic.

Instead, we could have decided simply not to engage on this project because of the high risk that the evidence would not be used. We were aware of the risks, and they generated a lot of internal debate over whether the project was worth engaging on. I would argue, however, that it was still worth taking on, even with the risk. It's impossible to know with certainty at the outset how evidence will be used, so choosing to engage on a project always entails a similar risk.

In this case, we had the opportunity to conduct this exercise at very low cost by leveraging another field operation happening at the same time. That is why we decided to go forward. Faced with the same circumstances, I think we would green-light the project again.

There is also the possibility that this work will influence other companies in the sector. We have since presented the results at conferences, and the journal Food Policy recently published an academic paper based on the work. Our findings on the price elasticity of PICS bags provide a useful case study for measuring demand for new agricultural technologies in the developing world, and may yet influence pricing strategies in other markets.

Daniel Stein

Chief Economist at IDinsight. Passion for generating and using research to drive better policy.