Lessons Learned from Conducting Tech Research in the ‘Silicon Savannah’

iHub_Research · Published in iResearch Tech · Jan 7, 2016

This post is part of the iQuarterly publication by iHub Research, a series of reflections from the team on our work and on technology and society. Angela Okune, a former Research Lead at iHub and a coordinator for the OCSDNet network, reflects on lessons learnt.

As the research arm of iHub — Nairobi’s leading innovation space for the technology community — we pride ourselves on surfacing information that improves decision making by technology stakeholders. Four years in, we’ve learned plenty about both the opportunities for and the barriers to generating truly “impactful” research.

The type of research projects we take on has evolved over the years. Initially, we grew our research portfolio by offering Non-Governmental Organisations (NGOs) and Small & Medium-Sized Enterprises (SMEs) monitoring and evaluation of Information and Communication Technologies for Development (ICTD) projects, which entailed observing the implementation, uptake, and sustainability of various ICT projects around the continent. Such projects ranged from examining the potential of a mobile-to-mobile mesh-casting network in Nigeria to studying the uptake of open data applications in Kenya. The following are reflections on how and why our research focus has evolved. We hope our experience will inform researchers interested in working in the ICTD space about potential limitations to be aware of before taking on a new research project.

Challenge: You can’t easily study what isn’t implemented.

Our experience with Monitoring and Evaluation (M&E) projects showed that if the implementation of a project was not completed, or not done well, it was difficult to study. As the M&E research partner, we are expected to document and study the impact of a technology product, but when the tech innovation and/or its implementation is not fully executed, the research team’s ability to conduct the commissioned research work is cut off. The researcher is put in an awkward position: you don’t want to report that the technology is a failure simply because it was implemented badly, but if the tech is not implemented, there is nothing for you to conclusively study!

We conducted an evaluation of a particular technology platform where, unfortunately, the target audience did not use the technology because of technical challenges with the platform and challenges with the implementation itself (delays in hardware procurement, and rules and regulations that limited use). Despite a sound research framework and design, our findings were inconclusive: we depended entirely on the external technology and, because we were not the implementers, we could not solve the implementation problems that cropped up along the way. As a result, the research was less useful than it could have been had implementation been done well.

So we’ve learned: if you keep waiting for others to innovate before you study them, then as a researcher you will always be playing catch-up, dependent on those “doing” the innovations. This is especially tough to stomach when you are a team of technology-savvy researchers stuck waiting on others to execute.

Challenge: You can’t force a client to action your recommendations.

A key aim of iHub Research is to generate actionable research that anchors the plethora of observations and opinions about the Kenyan tech ecosystem. However, we realized that as a research consultancy group, we had no real mechanism to ensure clients used our recommendations and feedback. Despite working closely with clients to tailor the research questions to their needs, meeting with them to review the findings, and confirming they understood and were happy with the recommendations provided, in many cases we were dismayed to find that nothing was done after the research work was completed. This was usually due to the client’s lack of capacity or resources for follow-up, or sometimes simply because the follow-up work was not prioritized. Thus, in spite of our best efforts to generate actionable research, short of executing the recommendations ourselves, we did not know how to close the “last mile” gap between research and practice.

To summarize, two of the biggest obstacles experienced while running tech M&E studies for clients revolved around:

1) Dependency on external technology and innovations

When studying tech, you run risks because you are not the one innovating. Your research is entirely contingent on others (usually the tech company) innovating and implementing. This means that as a tech researcher, you often end up waiting for other people to innovate. In the worst case, we’ve seen other tech researchers (especially from the Global North) so hungry for a technology product to study that they overwhelm a budding initiative, or even “over-research” certain populations, amplifying their efforts out of proportion.

2) Lack of follow-up at the completion of the study

After research work is completed, what happens to it? When the researcher has no say in whether or not the recommendations are implemented, recommendations can be made, but there is no way to ensure they are used for anything.

From these challenges, we came to the realization that for our work to be effective and actionable, there needs to be a close and intertwined relationship between “doing” and “researching.” iHub Research has therefore tweaked its approach to tech research. Where possible, we now take a more hands-on role, moving from researchers observing from the corner of the room to researchers participating in the development and running of the projects. We’ve realized it’s through the doing that we gain research insights, and through the research insights that we know what to do.

An example of this approach is our ongoing Umati project, which monitors online dangerous speech. We initially tried different content monitoring technologies, but when none suited our needs, we decided to build our own and study the process as we built it. We realized that the technology output itself was just a small part of the research process, and that the iterative development process was just as important, if not more so. By taking a more action-oriented approach to research, we are not just putting out recommendations in the hope that someone else will pick them up; we are actually using the recommendations ourselves to catalyze our own iterative feedback loops (doing → learning → doing → learning). We have been able to do this type of “social science meets tech” research thanks to longer, usually grant-funded projects that offer greater flexibility in timelines and research questions. While this type of support is more challenging to find, through such partnerships we have been able to embark on more exploratory quests to satisfy our curiosity about what is going on in our tech ecosystem.

Waza Experience, an ongoing edtech research project looking at technology pedagogy for children in Kenya, is another example that has worked because we control the entire process. Not only do we implement the education sessions, we then study and test different approaches to teaching technology to youth. We take the learnings from the research and apply them back into the program as we iteratively test the curriculum and the execution of the program.

Finally, our Builders-in-Residence (BIR) soft incubation program is an example of being hands-on researchers. The program targets young, enterprising engineers who have a product they want to scale. By working with these engineers to support them with research, user experience, and hardware design skills, we get to closely observe hardware entrepreneurship and the hurdles it faces. This gives us primary data for our own research on hardware entrepreneurship in Kenya.

The question this raises, of course, is how far can we take this approach, especially given that we are a research firm and a not-for-profit company? Although we are lucky to have a diverse team with different strengths (including running hardware engineering camps), our core area of expertise is research. By extending beyond “pure” research work and delving into execution, we risk overextending our researchers’ scope, mandate, and abilities. We have tackled this by keeping our ‘action research’ projects focused on key areas and by reducing the overall number of such projects that we take on.

We (and many others in the research world) continue to debate the “impact” question. What does real-world impact for research really mean and how is it measured? In the case of iHub Research, we’ve found that our impact is based on the level to which our work informs and is informed by technology developments. From this realization, we have therefore changed our methods and overall approach to research projects in order to better amplify the research’s impact. We’d love to hear what impact you feel your research work has had and how you have measured it. (Find and engage us on Twitter: @ihubresearch).
