Online Engagement: the challenges of participation, scope creep and learning

Jenna Robson
EGOV503 e-engagement 2019
Jan 24, 2020

Online engagement has the potential to draw numerous insights from the public about their perceptions of a problem and what we should be doing about it. But it takes significant effort to ensure its effectiveness. This blog will look into three interconnected challenges that may prevent successful outcomes: how do we ensure that the audience sees value in participating, that the discussion remains in scope, and that we truly learn from what they are saying? I will begin with an overview of each challenge, and then explore some ideas and preventative strategies.

Challenge 1: does the audience see value in participating?

Before anything can happen, the target audience needs to be convinced that it is worth their while to participate in the online engagement, and to continue doing so throughout the engagement window. There are many things that might demotivate them. They may distrust the process, convinced that their words and reflections will not be heard. They may not have clear visibility of the purpose and desired outcomes of the online engagement initiative. They may be unhappy with the proposed approach for feeding their inputs into decision-making. Or they may have complete trust in public officials, or be time poor, and so not feel the need to engage on an online platform.

To overcome these potential roadblocks, one strategy would be to offer the audience a plain-English overview of the why and the how, and to test it with an initial group before rolling out the entire online engagement initiative. This should be tested not just in terms of whether it is understood, but also whether it helps create a sense of purpose amongst the audience. It could be supplemented with positive language that recognises the importance of gathering both majority and minority views, even from those who would not be directly impacted by any future decisions. Lastly, it is very important to be clear on how their inputs will be collated, analysed and fed back to them, so they too can go on the learning journey.

Challenge 2: is the discussion remaining in scope?

So we’ve managed to convince the audience to participate. The next challenge is ensuring that things don’t go off topic, or perhaps even get personal. With each online engagement initiative there will be a general idea about what the audience needs to discuss, but because complex problems are often networked with other problems, it’s easy for those to come into the discussion as well. It might even be that some of the problems raised have already been largely resolved, just not to everyone’s knowledge. Issues will almost certainly arise if the discussion doesn’t remain in scope, as the relevant comments will then need to be drawn out of the noise. Likewise, if things do get personal, there can be less pragmatism about the problems or solutions.

One strategy for keeping discussion relevant is the use of moderators. We’ve already articulated our purpose and desired outcomes, so it’s easy enough to monitor discussions and ensure that they continue to contribute. This may mean some interference, but in a positive manner that praises good dialogue and redirects when appropriate. Feedback is also important here, as with the previous challenge: the more the audience can see how their input is building on itself, the better they will understand what the scope really is. Lastly, it might pay to run a trial discussion and see what topics come up. Any that are out of scope could be included in the introductory information, under ‘what this online engagement is not here to answer’.

Challenge 3: are we learning from what the audience is saying?

The audience is participating and staying on topic, but how do we ensure that we, as officials, advisors and decision-makers, are gaining knowledge and understanding of what matters to the public? The larger the audience, the more information there is to analyse and synthesise, but full automation can lose some of the deeper meaning carried in the commentary. Minority views may also get lost in the noise. And because we are talking about written dialogue, there are no facial expressions to add context to what people are saying. Here, the challenge is to effectively derive a ‘common ground’ so that progress can then be made.

At the design phase, one approach would be to set up some key questions, not for guiding dialogue, but for framing up its outputs. Some analytics could be applied to organise commentary into these frames, which could then be shared with project team members to derive insights from. Secondly, the platform used should support emojis, to give some indication of the feelings behind certain statements. We should also test with a subset of people whether what we have captured reflects the general discussion and direction, to ensure that we haven’t introduced bias. Lastly, from an agency perspective, the outputs of the dialogue need to be widely shared, so that everyone, whatever their role, can gain a greater appreciation of the challenges we face and the public’s thoughts about them.
