Our evaluation of Groupe URD’s open-source Sigmah platform is published
2 minute read
We’re pleased to share our evaluation of the Sigmah platform: open-source project management software for humanitarians built by Groupe URD. This evaluation was the first to pilot the Evaluation Criteria we developed for our Monitoring and Evaluation Framework, and the first that we know of to evaluate a software-focused aid or development project as if it were any other social change project.
Sigmah was created in 2008, in response to requests from Francophone humanitarian agencies for software built around their overlapping needs and use cases, and was funded by Agence Française de Développement (AFD). The project was at a critical phase in its life, and needed an assessment of its outcomes, strengths and weaknesses to enable its Steering Cooperative and Groupe URD to make strategic decisions about its future. The evaluation sought to review the technical basis of the software; the team and business model around it; the support and sales approach; and its governance.
We were excited to undertake the project, bringing to it not only our work on how to evaluate technology in social change work, but also our experience building and maintaining free, open-source software for the humanitarian sector through the FrontlineSMS project.
The evaluation showed significant achievements: with a small team and budget, Groupe URD had built a powerful platform with a solid code base; provided excellent support to their users while continuing to bring on new clients; and generated a new business model in response to their changing environment. However, it also showed that they needed to improve project management processes, streamline their governance, and refocus their target audience and, thereby, their business model. Groupe URD have transparently shared their response to the evaluation, and their plans for next steps, alongside the full evaluation report.
More than anything, the evaluation findings underscored the acute challenges facing open-source platforms in the aid and development space. Frequently undertaken by organizations whose core business is not software development, such projects are dogged by unrealistic and conflicting stakeholder expectations, and by chronic under-investment in a funding environment which prefers new and ‘innovative’ projects to maintaining and incrementally improving existing platforms. There is a real tension between the quality of user experience and enterprise-level capacity that organizations demand from software platforms, and the low-cost expectations attached to the open-source label. More on this in a forthcoming joint post reflecting on the recent MERLTech panel.
Evaluating such a project using our adapted OECD-DAC criteria was an interesting exercise, and will help as I refine and finalize our Monitoring and Evaluation Framework in the coming months, thanks to support from the Digital Impact Alliance. The frame of the Criteria ensured a rounded look at the project using an external logic, which I think made it a more powerful exercise, even though not all of the Criteria were relevant. We will share those resources on the SIMLab and DIAL websites as they become available.
Originally published at simlab.org on October 11, 2017.