Why stakeholders ‘lie’ during Requirements Elicitation — The Rashomon Effect
I have had a modestly successful career as a Business Analyst, but there was one failed project that I was never able to fully understand. Until recently. This story is about that case.
My client had been on a legacy CMS and wanted to move to a new one. During the requirements elicitation stage, I interviewed the team members who worked on the system to capture how they did things, what they liked about the old CMS, and what improvements they wanted after the move. It was a fairly routine process that I had run on previous projects, and I thought I had done a reasonably good job of capturing their requirements.
The requirements soon went into development, and because it was Agile, we held client demos after every sprint, which also went fairly well. As expected, the clients suggested a few changes, but they were cosmetic rather than functional and were absorbed quickly. Everything was thoroughly tested against the requirements, and after a few months the project went live. We all got appreciation mails and went our separate ways to different projects.
This is when the project started failing.

Though we had introduced automation for some of the steps, there was no increase in efficiency. Some tasks in fact took much longer than before. The number of manual errors increased, and the stakeholders blamed the new system for being over-complicated in some aspects and over-simplified in others. Soon enough, the client launched another project to further customise the new system.
It has been a few years since, but this case continued to bother me, because when I checked my notes, I had captured every bottleneck the stakeholders highlighted, automated every process they said could be automated, and steered clear of every process they insisted had to be done manually.
The system worked perfectly based on what I heard from the stakeholders, and yet it was a failure.
The only logical explanation for me was that the stakeholders had lied during the elicitation process. But that obviously didn’t make sense, because why would they?
It remained a mystery to me till I stumbled upon Rashomon, a classic Japanese film from 1950.

The movie is about an incident narrated by four different people: the outline remains the same, but the interpretations are wildly different. A samurai is found dead in the woods, his wife is understood to have been violated, and a bandit has admitted to killing him. However, when each of them is questioned (including the dead samurai, through a medium), their accounts differ so as to make themselves look ‘honourable’, often at the cost of others.
The movie does not resolve which version of events is ‘true’. A woodcutter at the end gives a supposedly unbiased, objective account, as he was hidden in the trees when it happened, but even he is later shown to have ulterior motives, so his version may be biased too. The essence of the movie is that we often ‘lie’, to others and to ourselves, to present a better image of ourselves. I put ‘lie’ in quotes because these are not deliberate attempts to mislead; rather, they are the unconscious biases we bring to any narrative that concerns us.
This movie made me re-visit the case, and I think I now have a better theory for what happened.
When I interviewed the stakeholders about the process bottlenecks, they over-emphasised the factors they could not control and under-emphasised their own shortcomings.
For example, slow system performance and a complicated navigation menu were highlighted as two major challenges, but the fact that certain documents required manual sign-off from five separate stakeholders was glossed over. Similarly, while the CMS Search features were given detailed requirements, the stakeholders did not mention that the underlying problem with the existing Search was that they followed no standard document-naming convention. So even though the new Search feature worked perfectly with respect to the requirements, in practice the results remained muddled.
None of the above were deliberate attempts to lie or mislead. Rather, the stakeholders were unconsciously biased: they genuinely believed that the real issues were systemic rather than with their own processes. It was my failure as a BA that I could not filter those biases out of the requirements I delivered.
With the benefit of hindsight, I can identify four things I might have done differently:
- Engage multiple teams and perspectives — During this project, I spoke almost exclusively to members of a single team. With only one perspective on the CMS, I severely limited my analysis. In hindsight, engaging other teams would have added more perspectives to the requirements I captured.
- Put myself in my stakeholders’ shoes — While interviews are an effective starting point for understanding requirements, they can be massively improved through shadowing. Shadowing lets you see not just the requirements the stakeholders talk about, but the actual challenges and opportunities they face in a typical day and take for granted.
- Ask challenging questions — This takes a fair amount of skill and tact, but it is critical for overcoming unconscious biases. In my case, I did not ask the team why they had so many reviews, or what value each review added. I simply assumed they had their reasons and did not bother getting into the details. Not asking this question, and others like it, was, I believe, directly responsible for the project’s failure.
- Hunt for as many data points as I could — Numbers are less likely to ‘lie’ than people. Of course, numbers carry their own biases, but acquiring enough data points goes a long way towards mitigating them. In my case, though we had basic data points such as the number of files to be migrated and page-load times, we lacked a few critical ones: the average, best-case, and worst-case turn-around time for each document’s review, the number of search results for standard queries, and so on.
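To make the last point concrete, here is a minimal sketch of the kind of summary I wish we had produced. The numbers are entirely invented for illustration; in a real project they would come from the CMS audit log or workflow timestamps. A large gap between the average and worst-case figures is exactly the sort of signal that would have exposed the five-person sign-off bottleneck early.

```python
# Hypothetical sketch: summarising document-review turn-around times.
# The durations below are invented for illustration only.
from statistics import mean

# Turn-around time (in hours) for each document's review cycle
review_hours = [4.5, 72.0, 8.0, 120.0, 6.5, 96.0, 5.0]

best = min(review_hours)       # fastest review
worst = max(review_hours)      # slowest review
average = mean(review_hours)   # typical review

print(f"best: {best}h, worst: {worst}h, average: {average:.1f}h")
```

Even this crude summary shifts the conversation from opinion ("the system is slow") to evidence ("most reviews finish in hours, but the worst take days — why?").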
Ironically, the Rashomon effect means that this account of my project and my learnings may carry its own inherent biases, but I hope it will help me, and other BAs reading this, become better at our jobs.
